4

I have a program that takes too much time, so I want to optimize my code a bit.

I have used the double type for every variable so far. If I change to be of type float, will any performance benefits occur?

Pascal Cuoq
taha
  • I'd expect roughly none, unless your compiler autovectorizes it. Time it and see. – user2357112 Jul 29 '14 at 01:36
  • I imagine this may be somewhat platform-dependent, and that your hardware may play a role here. – David Frye Jul 29 '14 at 01:37
  • 2
    Given the naive nature of the question, I'd say it's quite likely that there are some simple optimizations that will make a real difference in your code's performance and that changing the type isn't one of them. – David Schwartz Jul 29 '14 at 01:39
  • The hardware platform, and the amount of time that your application actually spends doing floating point operations will have a lot to do with any performance gain you'll see. Your question provides insufficient info to hazard a guess. – user2338215 Jul 29 '14 at 01:39
  • 1
    Stupid question from someone who never deals with much more than application-scope performance: isn't `double` a native type to the processor, so shouldn't it *theoretically* be faster CPU-wise? Again, sorry, that might be a stupid question. I just remember seeing that somewhere and it seems relevant, albeit undetectable or outweighed by other factors. – Matthew Haugen Jul 29 '14 at 01:44
  • 1
    @MatthewHaugen: Many common CPUs have hardware support for both IEEE single and double floats. (This includes x86, PowerPC, ARM...) – Dietrich Epp Jul 29 '14 at 01:44
  • You need to profile your code to find the bottleneck. I doubt this is it. – OMGtechy Jul 29 '14 at 02:44
  • First, I think you should profile the program before optimizing, so as to find out where most of the time is spent. There are various bottlenecks where much time can be spent: the data structures, memory, the algorithm, data dependencies, or loops. To pinpoint these hot spots you have to use a profiler. If it's `double` that takes too much CPU time then you could optimize it, but I doubt it's the data type. You might also consider writing a parallel program. – Juniar Jul 29 '14 at 05:31
  • One thing changing `double` to `float` will definitely not change, even if `double` is emulated in software, is the time complexity of your program. Removing the tag. – Pascal Cuoq Jul 29 '14 at 07:17

4 Answers

8

It is impossible to answer this question with any certainty: it will depend on your code and your hardware. The change will have many possible effects:

  • Memory usage will be reduced.
  • Cache misses will be fewer.
  • CPU instructions will take fewer cycles.
  • The compiler may autovectorize, or autovectorize differently.
  • Numerical algorithms in your application may no longer converge correctly.

The only way to tell the actual performance difference is to test it yourself. Sounds like a simple search & replace job.
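
For what it's worth, a crude micro-benchmark along the following lines is one way to test it on your own machine (this is just an illustrative sketch; the array size and the arithmetic are placeholders you would replace with something resembling your real workload):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Crude micro-benchmark: a sum of products over a large array.
// Run it once with T = double and once with T = float and compare.
template <typename T>
double time_sum(std::size_t n) {
    std::vector<T> a(n, static_cast<T>(1.0001));
    std::vector<T> b(n, static_cast<T>(0.9999));
    T acc = 0;

    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        acc += a[i] * b[i];
    auto stop = std::chrono::steady_clock::now();

    std::printf("  (result %g)\n", static_cast<double>(acc)); // keeps the loop from being optimized away
    return std::chrono::duration<double>(stop - start).count();
}

int main() {
    const std::size_t n = 50'000'000;  // big enough to spill out of the caches
    std::printf("double: %.3f s\n", time_sum<double>(n));
    std::printf("float:  %.3f s\n", time_sum<float>(n));
}
```

Compile it with the same optimization flags you use for the real program, otherwise the comparison is meaningless.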

Dietrich Epp
  • Whether any CPU instructions take fewer cycles is platform dependent. – Patricia Shanahan Jul 29 '14 at 01:41
  • @PatriciaShanahan: That's listed under "possible" effects. Some platforms also have floats and doubles that are the same size, so the memory usage and cache misses are also platform dependent -- not to mention that some hardware has no cache! – Dietrich Epp Jul 29 '14 at 01:42
  • @PatriciaShanahan, although most basic operations take the same time, I think `SQRTSS` has always been faster than `SQRTSD`. – Z boson Jul 29 '14 at 09:13
3

Most likely, you will only see noticeable improvements if your code works on a very large block of memory. If you are doing double operations on an array of millions of values, switching to float roughly halves the memory traffic. (I'm assuming you are on a standard architecture where float is 32 bits and double is 64 bits.)

In terms of reducing load on the CPU, I wouldn't expect to see a significant change. Maybe a small difference for some operations, but probably a few percent at best.
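
To put rough numbers on the memory side, here is a minimal sketch (assuming the usual 32-bit float / 64-bit double; the array length is arbitrary):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 10'000'000;  // "millions of values"
    std::vector<float>  f(n);
    std::vector<double> d(n);

    // The float array is half the size, so one streaming pass over it moves
    // half as many bytes through main memory and the caches.
    std::printf("float  array: %zu MiB\n", n * sizeof(float)  / (1024 * 1024));
    std::printf("double array: %zu MiB\n", n * sizeof(double) / (1024 * 1024));
}
```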

StilesCrisis
2

Modern processors execute most FP operations in about the same amount of time for double-precision operands as for single-precision. The only significant speed differences for going down to single-precision are:

  • Smaller size, potentially leading to better cache utilization (fewer misses). This is not a significant concern for most algorithms.
  • More slots in SIMD (4 versus 2 for SSE without AVX). Obviously only a concern if you're SIMDizing your code.
  • Faster division, square roots, and transcendentals. This difference can be significant in some extreme inner loops, but in general your FP ops won't be a big chunk of your total runtime.

Overall, it just isn't likely to be a significant win, except for niche cases. And if you're not familiar with the nature of floating point imprecision and how to reduce it, it's probably best to stick to double-precision and the increased wiggle room it offers you.
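
To make the SIMD point above concrete, here is a small SSE/SSE2 sketch (the function names are mine, and the loop remainders are left unhandled for brevity): the same 128-bit register holds four floats but only two doubles, so the single-precision loop covers twice as many elements per instruction.

```cpp
#include <cstddef>
#include <emmintrin.h>  // SSE2: __m128d, _mm_add_pd (2 doubles per register)
#include <xmmintrin.h>  // SSE:  __m128,  _mm_add_ps (4 floats per register)

void add_floats(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i + 4 <= n; i += 4) {      // 4 elements per iteration
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}

void add_doubles(const double* a, const double* b, double* out, std::size_t n) {
    for (std::size_t i = 0; i + 2 <= n; i += 2) {      // 2 elements per iteration
        __m128d va = _mm_loadu_pd(a + i);
        __m128d vb = _mm_loadu_pd(b + i);
        _mm_storeu_pd(out + i, _mm_add_pd(va, vb));
    }
}
```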

Sneftel
0

You shouldn't switch to float for better performance; you should choose the type based on the precision your computation actually needs.
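
As a quick illustration of the precision point (a toy example added here, not part of the original answer): accumulating a small increment millions of times drifts visibly in single precision, while double precision stays much closer to the expected value.

```cpp
#include <cstdio>

int main() {
    float  fsum = 0.0f;
    double dsum = 0.0;
    for (int i = 0; i < 10'000'000; ++i) {
        fsum += 0.1f;  // rounding error accumulates quickly once the sum grows large
        dsum += 0.1;
    }
    std::printf("float  sum: %.4f\n", fsum);  // visibly far from 1000000
    std::printf("double sum: %.4f\n", dsum);  // very close to 1000000
}
```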

Related Reference

Devarsh Desai