Hardware architecture (parallel computing)
This post discusses parallel computing and the hardware architecture behind it. There are two broad types of computing, but we focus on parallel computing here. Before diving in, a few terms are worth defining.
- Era of computing – The two fundamental and dominant models of computing are sequential and parallel. The sequential computing era began in the 1940s and the parallel (and distributed) computing era followed it within a decade.
- Computing – What, then, is computing? Computing is any goal-oriented activity requiring, benefiting from, or creating computers. It includes designing, developing, and building hardware and software systems; designing a mathematical sequence of steps known as an algorithm; and processing, structuring, and managing various kinds of information.
- Types of computing – There are two types of computing:
- Parallel computing
- Distributed computing
Parallel computing – What is parallel processing? Processing multiple tasks simultaneously on multiple processors is called parallel processing. A parallel program consists of multiple active processes (tasks) that simultaneously solve a given problem. With these definitions in place, we can look more closely at the hardware architecture of parallel computing.

Hardware architecture of parallel computing – The hardware architecture of parallel computing is divided into the following categories:

1. Single-instruction, single-data (SISD) systems
2. Single-instruction, multiple-data (SIMD) systems
3. Multiple-instruction, single-data (MISD) systems
4. Multiple-instruction, multiple-data (MIMD) systems

This classification is known as Flynn’s taxonomy; refer to it to learn more about the hardware architecture of parallel computing.

Hardware – Computer hardware is the collection of physical parts of a computer system. This includes the computer case, monitor, keyboard, and mouse, as well as all the parts inside the case, such as the hard disk drive, motherboard, and video card. Computer hardware is what you can physically touch.
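Flynn’s categories describe hardware, but the first, second, and fourth can be loosely illustrated as software patterns. The sketch below is purely illustrative (Python processes, not actual processor data paths); the function names are our own, not from any standard library.

```python
# Illustrative sketch of Flynn's categories as task patterns, not real hardware.
from concurrent.futures import ProcessPoolExecutor

def square(x):          # one instruction stream
    return x * x

def cube(x):            # a different instruction stream
    return x ** 3

data = [1, 2, 3, 4]     # a stream of data elements

# SISD: one instruction stream over one data stream (a plain sequential loop).
sisd = [square(x) for x in data]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # SIMD-like: the SAME operation applied to many data elements at once.
        simd = list(pool.map(square, data))
        # MIMD-like: DIFFERENT operations running concurrently on their own data.
        mimd = [pool.submit(square, 10), pool.submit(cube, 10)]
        mimd = [f.result() for f in mimd]
    print(sisd, simd, mimd)
```

MISD is omitted because it has few practical software analogues; it is usually cited only for fault-tolerant designs that run several instruction streams over the same data.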
Advantages of parallel computing –

- Speedup: Parallel computing can significantly reduce the time it takes to solve a complex problem by breaking it down into smaller parts that can be solved simultaneously.
- Scalability: Parallel computing architectures can easily scale up to handle larger datasets or more complex computations.
- Fault tolerance: Parallel computing can be fault-tolerant, which means that if one processor or node fails, the others can continue to work.
- Cost-effectiveness: By using multiple processors or nodes, parallel computing can be more cost-effective than using a single high-performance processor.
- Increased performance: Parallel computing can increase the performance of certain types of computations, such as simulations, data analysis, and machine learning.
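The speedup advantage can be measured directly. Below is a minimal sketch using Python’s standard `multiprocessing.Pool` on a CPU-bound task; the workload size and worker count are arbitrary choices, and actual timings and speedup depend entirely on the machine.

```python
# A minimal speedup measurement: the same work done serially vs. in parallel.
import time
from multiprocessing import Pool

def busy(n):
    # CPU-bound work: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000] * 8          # eight independent tasks

    t0 = time.perf_counter()
    serial = [busy(n) for n in inputs]        # one task after another
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(processes=4) as pool:           # four worker processes
        parallel = pool.map(busy, inputs)     # tasks run simultaneously
    t_parallel = time.perf_counter() - t0

    assert serial == parallel                 # same answers either way
    print(f"serial {t_serial:.2f}s, parallel {t_parallel:.2f}s")
```

Note that the ideal 4x speedup is rarely reached: process startup and communication overhead (discussed below) eat into the gain, especially for small tasks.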
Disadvantages of parallel computing –

- Complexity: Parallel computing architectures can be complex and difficult to program, which can require specialized skills and knowledge.
- Communication overhead: Communication between processors or nodes can be slow, which can limit the performance gains of parallel computing.
- Synchronization issues: When multiple processors or nodes work together, synchronization issues can arise, which can lead to errors or reduced performance.
- Limited scalability: Some parallel computing architectures may have limited scalability, which can limit their usefulness for certain types of computations.
- Hardware limitations: Parallel computing architectures may require specialized hardware, which can be expensive and difficult to obtain.
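The synchronization issue above is concrete: when several workers do a read-modify-write on shared state, their steps can interleave and updates can be lost. A minimal sketch of the standard fix, using Python threads and a `threading.Lock` to serialize the critical section:

```python
# Synchronization sketch: a lock makes concurrent updates to shared state safe.
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:               # critical section: one thread at a time
            counter += 1         # read-modify-write on shared state

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                   # with the lock, always 4 * 100_000 = 400000
```

The trade-off is the "reduced performance" the list mentions: while one thread holds the lock, the others wait, so over-broad critical sections can erase the benefit of parallelism.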