52

I've been asked to measure the performance of a Fortran program that solves differential equations on a multi-CPU system. My employer insists that I measure FLOP/s (floating-point operations per second) and compare the results against benchmarks (LINPACK), but I am not convinced that's the way to go, simply because no one can explain to me what a FLOP is.

I did some research on what exactly a FLOP is, and I got some pretty contradictory answers. One of the most popular answers was '1 FLOP = an addition and a multiplication operation'. Is that true? If so, again, physically, what exactly does that mean?

Whatever method I end up using, it has to be scalable. Some versions of the code solve systems with multi-million unknowns and take days to execute.

What would be some other, effective, ways of measuring performance in my case (a summary of my case being 'Fortran code that does a whole lot of arithmetic calculations over and over again for days on several hundred CPUs')?

osgx
  • 90,338
  • 53
  • 357
  • 513
caglarozdag
  • 699
  • 1
  • 8
  • 13

9 Answers

56

It's a pretty decent measure of performance, as long as you understand exactly what it measures.

FLOPS is, as the name implies, FLoating point OPerations per Second, but exactly what constitutes a FLOP might vary by CPU. (Some CPUs can perform addition and multiplication as one operation, others can't, for example.) That means that as a performance measure it is fairly close to the hardware, which means that 1) you have to know your hardware to compute the ideal FLOPS on the given architecture, and 2) you have to know your algorithm and implementation to figure out how many floating-point ops it actually consists of.

In any case, it's a useful tool for examining how well you utilize the CPU. If you know the CPU's theoretical peak performance in FLOPS, you can work out how efficiently you use the CPU's floating point units, which are often among the hardest to utilize efficiently. A program which runs at 30% of the FLOPS the CPU is capable of has room for optimization. One which runs at 70% is probably not going to get much more efficient unless you change the basic algorithm. For math-heavy algorithms like yours, that is pretty much the standard way to measure performance. You could simply measure how long the program takes to run, but that varies wildly depending on the CPU. But if your program has 50% CPU utilization (relative to the peak FLOPS count), that is a somewhat more constant value (it'll still vary between radically different CPU architectures, but it's a lot more consistent than execution time).
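
As a concrete illustration, the arithmetic is just: peak = cores x clock x FLOPs per cycle, and utilization = measured / peak. Every number in this sketch is a placeholder assumption, not something from your question, so substitute your own hardware figures and your own measured rate:

    program fp_utilization
      implicit none
      double precision :: cores, clock_hz, flops_per_cycle, peak, achieved

      cores           = 8.0d0    ! cores used by the run (assumed)
      clock_hz        = 2.5d9    ! sustained clock in Hz (assumed)
      flops_per_cycle = 4.0d0    ! e.g. one 2-wide multiply + add per cycle (assumed)

      peak     = cores * clock_hz * flops_per_cycle   ! theoretical peak FLOP/s
      achieved = 1.6d10                               ! measured FLOP/s (assumed)

      print '(a,es10.3)', 'Theoretical peak FLOP/s: ', peak
      print '(a,f5.1,a)', 'FP-unit utilization:     ', 100.0d0*achieved/peak, ' %'
    end program fp_utilization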

But knowing that "My CPU is capable of X GFLOPS, and I'm only actually achieving a throughput of, say, 20% of that" is very valuable information in high-performance software. It means that something other than the floating point ops is holding you back, and preventing the FP units from working efficiently. And since the FP units constitute the bulk of the work, that means your software has a problem.

It's easy to measure "My program runs in X minutes", and if you feel that is unacceptable then sure, you can go "I wonder if I can chop 30% off that", but you don't know if that is possible unless you work out exactly how much work is being done, and exactly what the CPU is capable of at peak. How much time do you want to spend optimizing this, if you don't even know whether the CPU is fundamentally capable of running any more instructions per second?

It's very easy to prevent the CPU's FP unit from being utilized efficiently, by having too many dependencies between FP ops, or by having too many branches or similar preventing efficient scheduling. And if that is what is holding your implementation back, you need to know that. You need to know that "I'm not getting the FP throughput that should be possible, so clearly other parts of my code are preventing FP instructions from being available when the CPU is ready to issue one".
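
Here's an invented little Fortran sketch of one such dependency problem (not from any particular code): a single running sum serializes every addition, while splitting it into independent partial sums gives the CPU independent work to overlap:

    ! Sketch only: assumes n is a multiple of 4 to keep remainder handling out.
    subroutine sum_two_ways(a, n, s_serial, s_split)
      implicit none
      integer, intent(in) :: n
      double precision, intent(in)  :: a(n)
      double precision, intent(out) :: s_serial, s_split
      double precision :: p1, p2, p3, p4
      integer :: i

      s_serial = 0.0d0
      do i = 1, n                   ! every add waits on the previous one
         s_serial = s_serial + a(i)
      end do

      p1 = 0.0d0; p2 = 0.0d0; p3 = 0.0d0; p4 = 0.0d0
      do i = 1, n, 4                ! four independent chains can overlap
         p1 = p1 + a(i)
         p2 = p2 + a(i+1)
         p3 = p3 + a(i+2)
         p4 = p4 + a(i+3)
      end do
      s_split = p1 + p2 + p3 + p4
    end subroutine sum_two_ways

A good optimizing compiler will often do this re-association for you when you allow it to, but the same kind of serial chain shows up in less obvious forms in real solvers, and a FLOPS measurement is what tells you it's happening.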

Why do you need other ways to measure performance? What's wrong with just working out the FLOPS count as your boss asked you to? ;)

jalf
  • 243,077
  • 51
  • 345
  • 550
  • Thanks for the help! I didn't know CPU architectures had set FLOP rate values. – caglarozdag Nov 30 '08 at 19:05
  • Well, they don't. What they do have is clear limits on how long each type of instruction takes, and how often one can be issued (and how many of them can run in parallel). From that, you can derive the theoretical peak FLOPS. – jalf Nov 30 '08 at 19:14
  • And per jalf's last comment, keep in mind that getting peak FLOPS out of a CPU is very hard, because it implies that your computation is structured in such a way as to get full FP issue every cycle. This is not always the case with real applications, but it does give you an idea of how much potential performance improvement you can get if your code is compute-limited. If you want more on how FLOPS are measured for supercomputers, look at the way Top500.org does it: http://www.top500.org/faq. They explain their benchmarks on that page. – Todd Gamblin Jun 12 '09 at 18:15
  • 1
    @caglarozdag: as @tgamblin says, you can generally forget about reaching peak FLOPS on any CPU. It just won't happen. In most applications I've seen, you should pat yourself on the back if you reach 50% of the CPU's theoretical peak FLOPS. Maybe you'll be able to hit 70% but then you really should consider yourself lucky. But if you're running at 10% (which will be the case surprisingly often), there's probably some room for improvement. – jalf Jun 01 '10 at 22:47
  • Side-but-related question: Why is performance measured in FLOPs per Second (FLOPS) and why is number of Integer operations per second not taken into consideration? – Shailen Sep 07 '14 at 21:11
  • @shailenTJ because FLOPS originated with the scientific computing community, and those people really couldn't care less about integer performance. They want floating-point number crunching, and that number is effectively all that matters then. Historically, integer instructions have been much, much cheaper than floating-point ops, so the efficiency with regards to floating point ops would almost always dominate overall performance. – jalf Sep 08 '14 at 14:40
  • There are other measures of performance, though. MIPS (Millions of Instructions per Second) is another one that's more integer-centric. Basically, people measure performance in the way that matters to *them*. And FLOPS is interesting to people who run a lot of floating point code. :) – jalf Sep 08 '14 at 14:42
  • These days, memory is very often the limiting factor in efficiency. Even if your data is hot in cache, if you don't do enough with it for every time you load it into registers, you're going to bottleneck on store and/or load operations without saturating the SIMD FMA units. (e.g. on Haswell and later, 2 of every 4 uops have to be FMA, and indexed addressing modes can't stay micro-fused with an FMA; they split into a separate uop). – Peter Cordes Oct 08 '20 at 00:46
  • So computational intensity is an ever-increasing problem with CPUs bulking up on FP execution resources. e.g. a simple dot product bottlenecks on 2 loads per FMA, so even with no cache misses it would only manage 50% efficiency. You may need to combine more work into each pass over your data, especially if you only cache-block for L2 cache, not L1d, so as not to bottleneck on L2 cache bandwidth. AVX512 makes this "worse": even more execution resources to keep fed with data needs twice as much memory bandwidth. (Or twice the computational intensity for the same bandwidth.) – Peter Cordes Oct 08 '20 at 00:48
  • I guess my real point is that more and more algorithms simply can't expect to approach 100% FLOP/s on modern CPUs. As well as it being harder and harder to get there (requiring more careful cache-blocking) for ones that can. So a new CPU that offers more FLOP/s might do so in a way that your algorithm can't exploit, because on your current system it already bottlenecks on front-end instruction throughput, or loads or stores. (or memory bandwidth or cache misses). – Peter Cordes Oct 08 '20 at 01:56
  • But sure, for seeing how well your current program makes use of your current CPU, it's not bad as an upper bound, if you can't find any algorithmic savings. – Peter Cordes Oct 08 '20 at 02:08
30

I'd just like to add a couple of finer points:

  • division is special. Since most processors can do an addition, comparison, or multiplication in a single cycle, those are all counted as one flop. But division always takes longer. How much longer depends on the processor, but there's a sort of de facto standard in the HPC community to count one division as 4 flops (see the small arithmetic sketch after this list).

  • If a processor has a fused multiply-add instruction that does a multiplication and an addition in a single instruction -- generally A += B * C -- that counts as 2 operations.

  • Always be careful in distinguishing between single-precision flops and double-precision flops. A processor that is capable of so many single-precision gigaflops may only be capable of a small fraction of that many double-precision gigaflops. The AMD Athlon and Phenom processors can generally do half as many double-precision flops as single precision. The ATI Firestream processors can generally do 1/5th as many double-precision flops as single precision. If someone is trying to sell you a processor or a software package and they just quote flops without saying which, you should call them on it.

  • The terms megaflop, gigaflop, teraflop, etc. are in common use. These refer to factors of 1000, not 1024. E.g., 1 megaflop = 1,000,000 flop/sec not 1,048,576. Just as with disk drive sizes, there is some confusion on this.
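
As a tiny arithmetic illustration of the weights in the first two bullets (the per-iteration operation counts below are made up for the example):

    program weighted_flop_count
      implicit none
      ! hypothetical counts for one pass through a loop body
      integer, parameter :: n_add = 3, n_mul = 5, n_div = 1, n_fma = 2
      double precision   :: flops_per_iteration

      ! add/multiply/compare = 1, divide = 4 (common HPC convention), FMA = 2
      flops_per_iteration = dble(n_add + n_mul) + 4.0d0*n_div + 2.0d0*n_fma
      print '(a,f6.1)', 'Weighted FLOPs per iteration: ', flops_per_iteration
    end program weighted_flop_count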

Die in Sente
  • 9,546
  • 3
  • 35
  • 41
  • 2
    Do you have a citation for division being special / counted as 4 flops? Would love to read more in detail. – mprat Jan 23 '17 at 15:13
  • 2
    @mprat: The cost ratio between division and multiplication depends on *how* you use it. "4 flops" sounds pretty bogus for modern CPUs (since ~2015). An occasional x86-64 AVX `vdivps` that's not on the critical path for latency costs about the same in terms of execution throughput resources as a `vmulps`. OoO exec can hide the latency, and it's only 1 uop for the front (on some CPUs), so as long as you don't need to divide more frequently than the not-fully-pipelined divider can handle. [Floating point division vs floating point multiplication](https://stackoverflow.com/a/45899202) – Peter Cordes Oct 08 '20 at 02:05
13

Old question with old, if popular, answers that are not exactly great, IMO.

A “FLOP” is a floating-point math operation. “FLOPS” can mean either of two things:

  • The simple plural of “FLOP” (i.e. “operation X takes 50 FLOPs”)
  • The rate of FLOPs in the first sense (i.e. floating-point math operations per second)

Where it is not clear from context, the two are often disambiguated by writing the former as “FLOPs” and the latter as “FLOP/s”.

FLOPs are so-called to distinguish them from other kinds of CPU operations, such as integer math operations, logical operations, bitwise operations, memory operations, and branching operations, which have different costs (read “take different lengths of time”) associated with them.

The practice of “FLOP counting” dates back to the very early days of scientific computing, when FLOPs were, relatively speaking, extremely expensive, taking many CPU cycles each. An 80387 math coprocessor, for example, took something like 300 cycles for a single multiplication. This was at a time before pipelining and before the gulf between CPU clock speeds and memory speeds had really opened up: memory operations took just a cycle or two, and branching (“decision making”) was similarly cheap. Back then, if you could eliminate a single FLOP in favor of a dozen memory accesses, you made a gain. If you could eliminate a single FLOP in favor of a dozen branches, you made a gain. So, in the past, it made sense to count FLOPs and not worry much about memory references and branches, because FLOPs were individually so expensive relative to other kinds of operation that they strongly dominated execution time.

More recently, the situation has reversed. FLOPs have become very cheap — any modern Intel core can perform about two FLOPs per cycle (although division remains relatively expensive) — and memory accesses and branches are comparatively much more expensive: an L1 cache hit costs maybe 3 or 4 cycles, a fetch from main memory costs 150–200 cycles. Given this inversion, it is no longer the case that eliminating a FLOP in favor of a memory access will result in a gain; in fact, that's unlikely. Similarly, it is often cheaper to “just do” a FLOP, even if it's redundant, than to decide whether to do it or not. This is pretty much the complete opposite of the situation 25 years ago.

Unfortunately, the practice of blind FLOP-counting as an absolute metric of algorithmic merit has persisted well past its sell-by date. Modern scientific computing is much more about memory bandwidth management — trying to keep the execution units that do the FLOPs constantly fed with data — than it is about reducing the number of FLOPs. The reference to LINPACK (which was essentially obsoleted by LAPACK 20 years ago) leads me to suspect that your employer is probably of a very old school that hasn't internalized the fact that establishing performance expectations is not just a matter of FLOP counting any more. A solver that does twice as many FLOPs could still be twenty times faster than another if it has a much more favorable memory access pattern and data layout.

The upshot of all this is that performance assessment of computationally intensive software has become a lot more complex than it used to be. The fact that FLOPs have become cheap is hugely complicated by the massive variability in the costs of memory operations and branches. When it comes to assessing algorithms, simple FLOP counting simply doesn't inform overall performance expectations any more.

Perhaps a better way of thinking about performance expectations and assessment is provided by the so-called roofline model, which is far from perfect, but has the advantage of making you think about the trade-off between floating-point and memory bandwidth issues at the same time, providing a more informative and insightful “2D picture” that enables the comparison of performance measurements and performance expectations.

It's worth a look.
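
Roughly, the roofline bound says attainable FLOP/s = min(peak FLOP/s, arithmetic intensity x memory bandwidth). A toy sketch, with every hardware number an assumption rather than a measurement:

    program roofline_sketch
      implicit none
      double precision :: peak_flops, bandwidth, intensity, attainable

      peak_flops = 2.0d11        ! peak FLOP/s of the node (assumed)
      bandwidth  = 5.0d10        ! sustained memory bandwidth, bytes/s (assumed)

      ! e.g. a dot product: 2 FLOPs (multiply + add) per 16 bytes loaded
      intensity  = 2.0d0 / 16.0d0                       ! FLOPs per byte
      attainable = min(peak_flops, intensity*bandwidth)

      print '(a,f6.3)',   'Arithmetic intensity (FLOP/byte): ', intensity
      print '(a,es10.3)', 'Roofline bound on FLOP/s:         ', attainable
    end program roofline_sketch

In this made-up case the bound works out to about 3% of peak: the kernel is memory-bound, and no amount of FLOP-shaving will help until the data movement changes.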

Emmet
  • 6,192
  • 26
  • 39
  • 1
    Overall good answer; one nit-pick: *a L1 cache hit costs maybe 3 or 4 cycles* - That's the approximate *latency* (more like 5 or 6 cycles for SIMD FP loads on Haswell/Skylake), but it's pipelined at 2 loads per clock throughput. Out-of-order exec can easily hide L1d hit latency in most code (where the array indexing doesn't depend on a previous FP calculation), that's why we have OoO exec. – Peter Cordes Oct 08 '20 at 02:03
4

"compare the results with benchmarks" and do what?

FLOPS means you need

1) FLOPs per some unit of work.

2) time for that unit of work.

Let's say you have some input file that does 1,000 iterations through some loop. The loop is a handy unit of work. It gets executed 1,000 times. It takes an hour.

The loop has some adds and multiplies and a few divides and a square root. You can count adds, multiplies and divides. You can count this in the source, looking for +, * and /. You can find the assembler-language output from the compiler, and count them there, too. You may get different numbers. Which one is right? Ask your boss.

You can count the square roots, but you don't know what it really does in terms of multiplies and adds. So, you'll have to do something like benchmark multiply vs. square root to get a sense of how long a square root takes.

Now you know the FLOPs in your loop. And you know the time to run it 1,000 times. So you know FLOPs per second.
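
In Fortran, that measurement might look roughly like this; the iteration count, the hand-counted FLOPs per iteration, and the work routine are all stand-ins for your real loop:

    program measure_flop_rate
      implicit none
      integer, parameter :: n_iterations = 1000
      double precision, parameter :: flops_per_iteration = 1.0d6  ! your hand count
      integer :: i, t0, t1, rate
      double precision :: seconds, x

      x = 1.0d0
      call system_clock(t0, rate)
      do i = 1, n_iterations
         call unit_of_work(x)             ! stand-in for the real loop body
      end do
      call system_clock(t1)

      seconds = dble(t1 - t0) / dble(rate)
      print '(a,es10.3)', 'Achieved FLOP/s: ', n_iterations*flops_per_iteration/seconds

    contains
      subroutine unit_of_work(y)
        double precision, intent(inout) :: y
        y = sqrt(y + 1.0d0)               ! placeholder arithmetic
      end subroutine unit_of_work
    end program measure_flop_rate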

Then you look at LINPACK and find you're slower. Now what? Your program isn't LINPACK, and it's slower than LINPACK. Odds are really good that your code will be slower. Unless your code was written and optimized over the same number of years as LINPACK, you'll be slower.

Here's the other part. Your processor has some defined FLOPS rating against various benchmarks. Your algorithm is not one of those benchmarks, so you fall short of the benchmarks. Is this bad? Or is this the obvious consequence of not being a benchmark?

What's the actionable outcome going to be?

Measurement against some benchmark code base is only going to tell you that your algorithm isn't the benchmark algorithm. It's a foregone conclusion that you'll be different; usually slower.

Obviously, the result of measuring against LINPACK will be (a) you're different and therefore (b) you need to optimize.

Measurement is only really valuable when done against yourself. Not some hypothetical instruction mix, but your own instruction mix. Measure your own performance. Make a change. See if your performance -- compared with yourself -- gets better or worse.

FLOPS don't matter. What matters is time per unit of work. You'll never match the design parameters of your hardware because you're not running the benchmark that your hardware designers expected.

LINPACK doesn't matter. What matters is your code base and the changes you're making to change performance.

S.Lott
  • 384,516
  • 81
  • 508
  • 779
  • I believe LINPACK solves a matrix system (a somewhat arbitrary problem the developers of LINPACK chose at the time) and it's a coincidence that our algorithm has essentially the same purpose. I think my boss realizes it's going to be slower than LINPACK, and just wants to see how much slower. – caglarozdag Nov 30 '08 at 20:32
  • FLOPS is very useful for one specific purpose: That of measuring utilization of the CPU's FP unit. And since that is generally fairly hard to utilize efficiently, FLOPS is useful info for HPC stuff. It tells you your program's structure prevents efficient utilization of the most vital CPU resource. – jalf Nov 30 '08 at 20:39
1

A FLOPS is, as you said, a floating-point operation per second. As an example, if you take exactly one second for an operation (such as adding, subtracting, multiplying or dividing two values and returning the result), your performance is simply 1 FLOPS. A recent CPU will easily achieve several GigaFLOPS, i.e. several billion floating-point operations per second.

Sören Kuklau
  • 19,454
  • 7
  • 52
  • 86
1

I would just try to make it go as fast as possible, and that requires finding out where it is spending time, especially if there are function calls that could be avoided.

I do this by the simple method of just interrupting it a few times while it is running, and seeing what it is doing. Here are the kinds of things I find:

  • Much of the time it is in the process of computing the derivative and/or Jacobian. Much of this time can go into math function calls such as exp(), log(), and sqrt(). Often these are repeated with identical arguments and can be memo-ized; a crude sketch of such a cache follows this list. (Massive speedup.)

  • Much of the time is spent calculating derivatives too many times because the integration tolerances are tighter than necessary. (Faster)

  • If an implicit integration algorithm (such as DLSODE Gear) is being used because the equations are thought to be stiff, chances are they are not, and something like Runge-Kutta could be used. (DVERK). (Faster still)

  • Possibly a matrix-exponent algorithm could be used if the model is linear (DGPADM). This is a big win both for performance and precision, and is immune to stiffness. (Way faster)

  • Higher up the call-stack, it could be that the same integrations are being performed repeatedly with slightly different parameters, so as to determine a forward or central-difference gradient of the solution with respect to those parameters. If the differential equations are themselves differentiable, it may be possible to get those gradients analytically, or by augmenting the equations with sensitivity equations. This is not only much faster, but much more precise, which can speed things up still higher up the stack.
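
For what it's worth, the memo-ization mentioned in the first bullet can be as crude as caching the last argument/result pair per call site. This is an invented sketch, not code from any of the solvers named above, and the saved state makes it non-thread-safe as written (which matters on several hundred CPUs):

    module memo_exp_mod
      implicit none
      private
      public :: memo_exp
      double precision, save :: last_x = huge(1.0d0)   ! sentinel, unlikely as a real argument
      double precision, save :: last_y = 0.0d0
    contains
      function memo_exp(x) result(y)
        double precision, intent(in) :: x
        double precision :: y
        if (x /= last_x) then     ! recompute only when the argument changes
           last_x = x
           last_y = exp(x)
        end if
        y = last_y
      end function memo_exp
    end module memo_exp_mod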

You can look at each level of the stack as an opportunity to find things to optimize, and the speedups will compound. Then when you go to multi-CPU, assuming it is parallelizable, that should provide its own multiplicative factor.

So back to FLOPs. You could try to maximize FLOPs per second, but it can also be much more useful to minimize FLOPs per run, by optimizing at all levels of the stack. In any case, just measuring them tells you almost nothing.

Tejus Prasad
  • 6,322
  • 7
  • 47
  • 75
Mike Dunlavey
  • 40,059
  • 14
  • 91
  • 135
0

Your employer is right.
The only way to measure effectiveness of your Fortran program (or of any other program, btw) is to test it against standard benchmarks, if they exist.

And, about FLOPS, it stands for "floating point operations per second" - see the definition on Wikipedia.

friol
  • 6,996
  • 4
  • 44
  • 81
0

I don't think measuring FLOPS will be very useful.

The number of FLOPS achieved will tell you how busy your algorithm is keeping the CPU, but won't tell you how well your algorithm itself is performing.

You may find two different algorithms which cause the processor to perform the same number of FLOPS but one provides you with the desired result in half the time.

I think you'd be better off looking at a much 'higher level' statistic such as the number of differential equations solved per unit of time (that is, after all, the purpose of your algorithm).

On the other hand, measuring the number of FLOPS achieved may help you to improve your algorithm as it will tell you how busy you are keeping the CPU.

Chris Roberts
  • 18,622
  • 12
  • 60
  • 67
  • FLOPS shows you how far your current implementation is from an optimal implementation *of the same algorithm*. Yes, if you know of a more efficient algorithm, you should use that, but hopefully you're already using the best known algorithm, and then FLOPS matters in optimizing that. Yes, it's useful – jalf Nov 30 '08 at 19:16
0

How to Measure T-FLOPS

"(# of parallel GPU processing cores multiplied by peak clock speed in MHz multiplied by two) divided by 1,000,000

The number two in the formula stems from the fact that some GPU instructions can deliver two operations per cycle, and since teraFLOP is a measure of a GPU's maximum graphical potential, we use that metric.

Let's see how we can use that formula to calculate the teraFLOPS in the Xbox One. The system's integrated graphics has 768 parallel processing cores. The GPU's peak clock speed is 853MHz. When we multiply 768 by 853 and then again by two, and then divide that number by 1,000,000, we get 1.31 teraFLOPS."

https://www.gamespot.com/gallery/console-gpu-power-compared-ranking-systems-by-flop/2900-1334/
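
Plugging the quoted Xbox One numbers back into that formula as a quick sanity check:

    program tflops_check
      implicit none
      double precision :: cores, clock_mhz, tflops

      cores     = 768.0d0                          ! parallel GPU cores (from the quote)
      clock_mhz = 853.0d0                          ! peak clock in MHz (from the quote)
      tflops    = cores*clock_mhz*2.0d0 / 1.0d6    ! two ops per cycle per core
      print '(a,f5.2)', 'teraFLOPS: ', tflops      ! prints roughly 1.31
    end program tflops_check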


Price comparison of GPUs from 2016: "These are theoretical performance figures, which we understand to generally be between somewhat optimistic and ten times too high. So this data suggests real prices of around $0.03-$0.3/GFLOPS. We collected both single and double precision figures, but the cheapest were similar."

https://aiimpacts.org/current-flops-prices/

m1m1k
  • 1,375
  • 13
  • 14