
I have always assumed that dividing a `double` by an integer leads to faster code, because the compiler will select better microcode to compute it:

```cpp
double a;
double b = a/3.0;
double c = a/3; // will compute faster than b
```

For a single operation it does not matter, but for repetitive operations it can make a difference. Is my assumption always correct, or is it compiler- or CPU-dependent?

The same question applies to multiplication; i.e., will `3 * a` be faster than `3.0 * a`?

kstn
    If the second operand is constant, there will be no difference. – HolyBlackCat Aug 15 '21 at 09:08
  • As far as I can tell, there is no mixing of types internally in the arithmetic/floating point unit. At the machine level both operands are of the same type... converted previously if needed. `a` and `3.0` have the same type (`double`)... `a` and `3` require a (implicit) conversion (by the compiler, not at runtime). – pmg Aug 15 '21 at 09:13
  • Read this first :) Floating points are handled in hardware (unless you have a small processor) https://stackoverflow.com/questions/4584637/double-or-float-which-is-faster – Pepijn Kramer Aug 15 '21 at 09:13
  • Is there a difference between C and C++? If so, the question should clarify which language is meant. If not, this should be addressed in an answer. – mkrieger1 Aug 15 '21 at 09:15
  • @mkrieger1 I mean mostly for C++. But I think it applies the same for C, because the compilers' arithmetic is the same. – kstn Aug 15 '21 at 09:20
  • Compilers do not generate "_microcode_" - on some CISC architectures individual _machine instructions_ are implemented in microcode - the microcode is intrinsic to the processor. There is no machine instruction to divide a `double` by an `int` to produce a `double` result - there is no likely performance benefit and the need would be niche at best, the compiler will generate FPU instructions on platforms with an FPU. Even with software floating point (no FPU), there is probably no significant advantage in having a specific `double`/`int` operator overload. – Clifford Aug 15 '21 at 16:47

2 Answers


Your assumption is not correct, because both of your divide operations will be performed with two `double` operands. In the second case, `c = a/3`, the integer literal `3` will be converted to a `double` value by the compiler before any code is generated.

From this Draft C++ Standard:

8.3 Usual arithmetic conversions          [expr.arith.conv]

1    Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:

(1.3) – Otherwise, if either operand is double, the other shall be converted to double.


Note that, in this Draft C11 Standard, §6.3.1.8 (Usual arithmetic conversions) has equivalent (indeed, near-identical) text.

Adrian Mole
  • Note that, for a target architecture where there is an optimization to be had using code that divides a `double` by an `int`, then any decent compiler will use such code for a literal like `3.0`, just as it would for `3` - it will surely spot that the fractional part is zero. – Adrian Mole Aug 15 '21 at 13:42

There is no difference. The integer operand is implicitly converted to a `double` at compile time, so the two expressions are effectively identical.

eerorika