By: Lakshya Mahajan
Shown above: a diagram of what parallel computing looks like
Image source: https://www.teldat.com/blog/parallel-computing-bit-instruction-task-level-parallelism-multicore-computers/
Introduction:
Ever wondered how computers can process so much data? Well, it is not just one computer doing all the grunt work; there are several of them working together.
Shown above: what many parallel computing facilities look like
Image source: https://towardsdatascience.com/modern-parallel-and-distributed-python-a-quick-tutorial-on-ray-99f8d70369b8
What is parallel computing?
Parallel computing, put simply, is when multiple processors or computers carry out calculations at the same time. This is done by taking a large problem and splitting it into smaller tasks, which are then handed to individual processors to work on. The partial results are then put together to form a solution to the original problem.
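As a rough illustration of this split-process-combine pattern, here is a minimal Python sketch using the standard multiprocessing module (the data and the choice of four workers are arbitrary) that sums a large list by handing one chunk to each worker process:

    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker solves one small piece of the larger problem.
        return sum(chunk)

    if __name__ == "__main__":
        numbers = list(range(1_000_000))
        # Split the large problem into four smaller tasks.
        chunks = [numbers[i::4] for i in range(4)]
        with Pool(processes=4) as pool:
            # The four partial sums are computed at the same time.
            partials = pool.map(partial_sum, chunks)
        # Put the results together to get the final answer.
        print(sum(partials))  # 499999500000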
What are the different types of parallel computing?
There are four main types of parallel computing: instruction-level parallelism, task parallelism, bit-level parallelism, and superword-level parallelism. In instruction-level parallelism, the processor, instead of running every instruction strictly one after another, reorders and overlaps instructions in such a way that the end result does not change. In task parallelism, a problem is broken down into sub-tasks, which are allocated to separate processors for simultaneous execution. In bit-level parallelism, the processor's word size, meaning the number of bits it can operate on in a single instruction, is increased to speed up calculations; for example, a calculation on 16-bit numbers may take twice as many instructions on an 8-bit processor as on a 16-bit processor. Superword-level parallelism is a form of vectorization that packs independent operations found in straight-line code, often produced by unrolling loops, into single SIMD instructions.
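To make the bit-level example concrete, here is a hypothetical Python sketch of what an 8-bit processor has to do to add two 16-bit numbers: two add instructions (low bytes first, then high bytes plus the carry), where a 16-bit processor would need only one.

    def add16_on_8bit(a, b):
        # An 8-bit processor only handles one byte per instruction,
        # so a 16-bit addition must be split into two steps.
        a_lo, a_hi = a & 0xFF, (a >> 8) & 0xFF
        b_lo, b_hi = b & 0xFF, (b >> 8) & 0xFF
        lo = a_lo + b_lo                    # step 1: add the low bytes
        carry = lo >> 8                     # remember any carry out
        hi = (a_hi + b_hi + carry) & 0xFF   # step 2: add the high bytes
        return (hi << 8) | (lo & 0xFF)

    print(add16_on_8bit(300, 500))  # 800, but it took two additions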
Where is parallel computing used?
Across many different disciplines, parallel computing plays an increasingly important role in processing data. It is used for different purposes in fields such as medicine, environmental conservation, 3D animation, and research. In the field of medicine, for example, it is being used for medical imaging.
What is the relationship between parallel computing and fault tolerance?
Parallel computing also serves the purpose of creating fault-tolerant systems. This means that if one computer in a parallel computing system goes offline, the other computers take up the operation that was intended for it, providing redundancy to the system.
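As a simple sketch of that idea (the process_chunk task and the retry logic here are made up for illustration), the Python snippet below farms chunks of work out to a pool of processes and resubmits a chunk if the worker handling it fails:

    from concurrent.futures import ProcessPoolExecutor

    def process_chunk(chunk):
        # Stand-in for real work; in practice a worker could fail
        # because of a hardware fault or a crashed machine.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000))
        chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
        with ProcessPoolExecutor() as pool:
            futures = {pool.submit(process_chunk, c): c for c in chunks}
            results = []
            for future, chunk in futures.items():
                try:
                    results.append(future.result())
                except Exception:
                    # The original worker failed, so another worker
                    # takes up the operation that was intended for it.
                    results.append(pool.submit(process_chunk, chunk).result())
        print(sum(results))  # 499500

A real parallel system, such as a cluster scheduler, detects failed machines and reassigns their work automatically; this sketch only mimics that behavior at the level of Python exceptions.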
Works Cited
Almasi, George S., and Allan Gottlieb. Highly Parallel Computing. Benjamin/Cummings, 1989.
Larsen, Samuel, and Saman Amarasinghe. “Exploiting Superword Level Parallelism with Multimedia Instruction Sets.” PLDI 2000, http://groups.csail.mit.edu/cag/slp/SLP-PLDI-2000.pdf. Accessed 21 March 2022.
Fox, Pamela. “Redundancy and fault tolerance (article).” Khan Academy, https://www.khanacademy.org/computing/computers-and-internet/xcae6f4a7ff015e7d:the-internet/xcae6f4a7ff015e7d:routing-with-redundancy/a/redundancy-fault-tolerance. Accessed 21 March 2022.
Gossett, Stephen. “9 Parallel Processing Examples You Should Know.” Built In, 6 November 2019, https://builtin.com/hardware/parallel-processing-example. Accessed 21 March 2022.