It is named after computer scientist Gene Amdahl (a computer architect at IBM and later the Amdahl Corporation), and was presented at the AFIPS Spring Joint Computer Conference in 1967. It is also known as Amdahl's argument. Amdahl's law gives the theoretical speedup in latency of the execution of a task, at a fixed workload, that can be expected of a system whose resources are improved. In other words, it is a formula for finding the maximum improvement possible by improving only one particular part of a system. It is often used in parallel computing to predict the theoretical speedup when using multiple processors.
Speedup is defined as the ratio of the performance for the entire task using the enhancement to the performance for the entire task without the enhancement. Equivalently, speedup is the ratio of the execution time for the entire task without the enhancement to the execution time for the entire task using the enhancement.

If Pe is the performance for the entire task using the enhancement when possible, Pw is the performance for the entire task without the enhancement, Ew is the execution time for the entire task without the enhancement, and Ee is the execution time for the entire task using the enhancement when possible, then:

Speedup = Pe / Pw
Speedup = Ew / Ee
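The execution-time form of the definition can be sketched as a one-line helper (a minimal illustration; the function name and example times are made up here):

```python
def speedup_from_times(time_without: float, time_with: float) -> float:
    """Speedup = Ew / Ee: execution time without the enhancement
    divided by execution time with the enhancement."""
    return time_without / time_with

# Example: a task that took 10 s now finishes in 4 s with the enhancement.
print(speedup_from_times(10.0, 4.0))  # → 2.5
```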
Amdahl's law uses two factors to find the speedup from some enhancement:

- Fraction enhanced (f') — the fraction of the original execution time that can take advantage of the enhancement. It is always less than 1.
- Speedup enhanced (S') — how much faster the enhanced portion runs in the enhanced mode. It is always greater than 1.
The overall speedup is the ratio of the old execution time to the new execution time. Let the overall speedup be S, the old execution time be T, and the new execution time be T'. Let t be the execution time taken by portion A (the part that will be enhanced), t' the execution time taken by portion A after enhancement, and tn the execution time taken by the portion that won't be enhanced. Let the fraction enhanced be f' and the speedup enhanced be S'.

Then:

T = t + tn
T' = t' + tn, where t' = t / S'
f' = t / T, so t = f'T and tn = (1 - f')T

Substituting these into S = T / T':

S = T / ((1 - f')T + f'T / S') = 1 / ((1 - f') + f'/S')

This is Amdahl's law: even as S' grows without bound, the overall speedup is limited to 1 / (1 - f'), i.e., by the portion of the task that cannot be enhanced.
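The final formula can be sketched directly (a minimal example; the function name and the sample values f' = 0.6, S' = 3 are chosen here for illustration):

```python
def amdahl_speedup(fraction_enhanced: float, speedup_enhanced: float) -> float:
    """Overall speedup S = 1 / ((1 - f') + f'/S')."""
    return 1.0 / ((1.0 - fraction_enhanced)
                  + fraction_enhanced / speedup_enhanced)

# 60% of the task can be made 3x faster:
print(amdahl_speedup(0.6, 3.0))   # 1 / (0.4 + 0.2) ≈ 1.667

# Even with an enormous S', the speedup approaches 1 / (1 - f') = 2.5:
print(amdahl_speedup(0.6, 1e9))
```

Note how the second call shows the law's key consequence: the unenhanced 40% of the task caps the overall speedup at 2.5 no matter how fast the enhanced portion becomes.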