
I am trying to understand the following slide (slide image not shown here).

The definition is somewhat unclear to me. Sources like Wikipedia say that Amdahl's law measures the speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. To me, speedup is basically how much faster one task runs compared to another, but here it seems to be used in a different way. Can you clarify, in simpler terms, what Amdahl's law measures and what speedup really is?

Yilmaz
TheMathNoob
  • Speedup is how much faster the same program would run on system B compared to system A, if B has some additional feature or some different behavior than A. – Leeor Oct 02 '16 at 10:57
  • Ok, but can you explain the idea behind Amdahl's law? – TheMathNoob Oct 03 '16 at 22:38

3 Answers


The definition of speedup here is:

Speedup = Baseline Running Time / New Running Time

This means that if the baseline running time is BRT and the parallelizable portion of it is P, then:

BRT = (1 - P) * BRT + P * BRT

Now if a speedup of S was obtained on the P portion of the running time, then the new improved running time (IRT) is:

IRT = (1 - P) * BRT + P * (BRT / S)
    = (1 - P) * BRT + (P / S) * BRT
    = ((1 - P) + (P / S)) * BRT

Therefore:

BRT / IRT = 1 / ((1 - P) + (P / S))

This is the overall speedup. This is Amdahl's law.
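
To make the derivation concrete, here is a minimal Python sketch (my own illustration, not part of the original answer; the function name is just for this example) that evaluates the overall speedup 1 / ((1 - P) + P / S):

```python
# Minimal sketch of Amdahl's law as derived above (illustration only).
# P is the fraction of the baseline running time that is improved,
# S is the speedup obtained on that portion.

def overall_speedup(P: float, S: float) -> float:
    """Return BRT / IRT = 1 / ((1 - P) + P / S)."""
    return 1.0 / ((1.0 - P) + P / S)

# Example: 60% of the running time is sped up by a factor of 3.
print(overall_speedup(P=0.6, S=3.0))  # ~1.67x overall, even though S = 3
```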

To me, speedup is basically how much faster one task runs compared to another.

Yes, speedup can be defined in different ways. This can be a little confusing.

Hadi Brais

Amdahl's Law gives the theoretical maximum speedup, which is almost never achieved in practice. The formula is easy to understand once you know what the different parts mean: the speedup is based on the amount of code that can be parallelized.

Okay, so the formula is Speedup = 1 / ((1 - f) + f/p), where:

  • 1 represents the whole program,
  • 1 - f is the serial fraction (the part that can't be parallelized),
  • f is the fraction of the code that can be parallelized,
  • p is the number of processors.

So, if we have 10 processors and 40% of the code can be parallelized, the formula gives speedup = 1 / ((1 - 0.4) + 0.4/10) = 1 / 0.64 ≈ 1.56.

I'm not a professional and you might want to check this, but if I remember correctly, this is how it should work :)
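
For what it's worth, here is a quick Python check of that example (my own sketch, not part of the original answer): it evaluates the formula for f = 0.4 and p = 10, and also shows the cap of 1 / (1 - f) that the speedup approaches no matter how many processors you add.

```python
# Quick check of the example above (illustration only).
# f = parallelizable fraction, p = number of processors.

def amdahl(f: float, p: int) -> float:
    return 1.0 / ((1.0 - f) + f / p)

print(amdahl(0.4, 10))         # 1 / (0.6 + 0.04) = 1.5625
print(1.0 / (1.0 - 0.4))       # ~1.67: the theoretical maximum for f = 0.4
print(amdahl(0.4, 1_000_000))  # already very close to that maximum
```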

Peter Cordes
  • Mostly a good answer, but note that Amdahl's law assumes perfect scaling for the parallel portions of the workload, so it's purely theoretical and unrealistic for most problems. – Peter Cordes May 14 '19 at 18:14

No system is truly parallel. A program might start out parallel, then execute serially, then go parallel again in each different workflow. In general, we have to take locks, coordinate threads, and synchronize code, so there will be serial portions within a parallel process. During those serial portions, the threads/processes that are executing end up queueing. Amdahl's law tells you how much that serial portion affects the performance (throughput) curve. As you can see in the image:

If it were a perfectly parallel system, the curve would be perfectly linear. If there is a serial portion within the process, it doesn't matter whether it is 5 percent or 10 percent: the curve will flatten out after a given point. Amdahl's law tells you how soon the curve is going to flatten out. Once it has flattened, adding more processors no longer increases throughput.

(graph of speedup versus number of processors, referenced above, not shown here)

The formula on the slide says that the amount of speedup a program will see from using more parallel cores depends on how much of the program is serial.
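
To illustrate that flattening, here is a small Python sketch (my own addition, not from the original answer) that prints the Amdahl speedup for 5% and 10% serial fractions as the core count grows; both curves level off near 1 / serial_fraction, i.e. about 20x and 10x respectively.

```python
# Illustration of how the speedup curve flattens out (illustration only).
# serial = serial fraction of the program, cores = number of cores.

def speedup(serial: float, cores: int) -> float:
    return 1.0 / (serial + (1.0 - serial) / cores)

for serial in (0.05, 0.10):
    row = [f"{speedup(serial, c):6.2f}" for c in (1, 2, 4, 8, 16, 64, 1024)]
    print(f"serial={serial:.0%}:", " ".join(row))

# Expected output (approximately):
# serial=5%:    1.00   1.90   3.48   5.93   9.14  15.42  19.64
# serial=10%:   1.00   1.82   3.08   4.71   6.40   8.77   9.91
```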

Yilmaz