I am writing a mathematical function to compute the volume of a cube in order to get the total weight of an object, where the weight per square inch is 0.28361111111111109 (stored as a constant double). This is obviously a very specific and precise number.

Back to the question though: is it any more difficult to compute an equation with this number as opposed to a rounded version such as 0.28461, or is there basically no difference in the way a computer handles the computation?

Precision is ideal, but not at the cost of performance.

This applies for me specifically to the .NET Framework, but it possibly applies to programming more generally.


For what it's worth, the code in particular is:

public double Weight
{
    get
    {
        double weight_per_square_inch = 0.28361111111111109;

        double volume = this.Length * this.Thickness * this.Width;

        return (volume * weight_per_square_inch) * this.Quantity;
    }
}
Bigbob556677
  • There shouldn't be a difference when using one of the built-in floating point types, but if you move to arbitrary-precision libraries, ones that support arbitrarily *large* precision, the performance will depend on the precision you choose to operate with. – Lasse V. Karlsen Jun 21 '19 at 14:43
  • "Precision is ideal, but not at the cost of performance." I suggest **measuring** whether this affects performance, e.g. by computing a million times (see the sketch after these comments). I doubt there is any difference, but anyway... – MakePeaceGreatAgain Jun 21 '19 at 14:43
  • More information about type speed (not suggesting as a dupe): https://stackoverflow.com/q/329613/1043380 – gunr2171 Jun 21 '19 at 14:43
  • But if you're truly interested in finding the answer to such questions yourself, I would suggest you scour Stack Overflow for other performance-related questions with examples of using BenchmarkDotNet, as this would allow you to construct tests yourself. – Lasse V. Karlsen Jun 21 '19 at 14:44
  • @gunr2171 If I understand this question correctly it is not about `double` vs. `decimal`, but about rounded `double` vs. non-rounded (which is an impossible thing though). – MakePeaceGreatAgain Jun 21 '19 at 14:44
  • @LasseVågsætherKarlsen so you're saying it probably treats *all* variables of a built-in type the same? – Bigbob556677 Jun 21 '19 at 14:45
  • double and float are handled by the CPU, and as far as I know, the speed at which the CPU performs supported mathematical operations on these does not depend on the precision, at least not in a way that will impact your program. That is, there could very well be a difference between float and double, but as far as I know, operations using the double 0.1 are done at the same speed as operations using the double 0.11111111111111111111 – Lasse V. Karlsen Jun 21 '19 at 14:45
  • A `double` is a `double` is a `double`. The only thing you should really care about is the loss in precision when multiplying three doubles. I doubt a few nanoseconds (if at all) justify that inaccuracy. – MakePeaceGreatAgain Jun 21 '19 at 14:54
  • For operations significantly more complicated than multiplication, you can indeed find that certain floating-point operations are faster than others if "convenient" constants are involved. But `0.28461` is not a convenient constant, and multiplication is not an operation with a variable time (on modern machines, at least; way back when in the days of 12-bit machines it was a different story). When you start talking bignums/arbitrary precision arithmetic, it's still a different story. – Jeroen Mostert Jun 21 '19 at 15:12
  • The type "double" tells the compiler that you want to work with 64 bit floating point numbers. Your source code may truncate the decimal representation of the number, but under the hood it's always going to be a 64 bit number. A modern CPU can handle multiple 64 bit floating point operations per clock cycle, and billions of cycles per second, so it's unlikely that speed will be an issue. – Steve Todd Jun 21 '19 at 15:13
  • Just to add my thoughts, I did a quick and dirty benchmark (probably not accurate): **multiplying and dividing** `single` and `double` types took around 220ms to do 100000000 operations, for numbers with either 5 or 10 significant digits. For `decimal` types, multiplying 5 digits took around 15 times longer and 10 digits took around 18 times longer, at around 3000ms and 3800ms. Dividing decimals... WOW. 20800ms and 29000ms! – David Wilson Jun 22 '19 at 13:41
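
Along the lines of the measurement suggested in these comments, here is a rough, unscientific sketch using `Stopwatch` (the constants and iteration count are arbitrary, results will vary by machine, and BenchmarkDotNet would give more trustworthy numbers):

using System;
using System.Diagnostics;

class PrecisionTimingSketch
{
    static void Main()
    {
        const int iterations = 100000000;
        const double precise = 0.28361111111111109; // the full constant from the question
        const double rounded = 0.28461;             // the rounded version

        double sum1 = 0, sum2 = 0;
        decimal sum3 = 0;

        // Multiply with the full-precision double constant.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            sum1 += i * precise;
        sw.Stop();
        Console.WriteLine($"double, full constant:    {sw.ElapsedMilliseconds} ms (sum {sum1})");

        // Multiply with the rounded double constant.
        sw.Restart();
        for (int i = 0; i < iterations; i++)
            sum2 += i * rounded;
        sw.Stop();
        Console.WriteLine($"double, rounded constant: {sw.ElapsedMilliseconds} ms (sum {sum2})");

        // Same loop with decimal, for comparison.
        const decimal preciseDecimal = 0.28361111111111109m;
        sw.Restart();
        for (int i = 0; i < iterations; i++)
            sum3 += i * preciseDecimal;
        sw.Stop();
        Console.WriteLine($"decimal, full constant:   {sw.ElapsedMilliseconds} ms (sum {sum3})");

        // The sums are printed so the loops cannot be discarded as dead code.
    }
}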

2 Answers

A double won't be slower if you give it more precision, but it probably won't hold 17 decimal digits of precision properly. You'll want to look at another type like Decimal, which is slower but supports the high precision that you need.
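
As a quick illustration of that point (a minimal sketch, not part of the original answer), you can push the same literal through both types and print it back:

using System;

class DoubleVsDecimalDigits
{
    static void Main()
    {
        double asDouble   = 0.28361111111111109;
        decimal asDecimal = 0.28361111111111109m;

        // "G17" prints enough digits to round-trip the double exactly,
        // i.e. the closest value a double can actually store (~15-17 significant digits).
        Console.WriteLine(asDouble.ToString("G17"));

        // decimal stores the decimal digits as written (up to 28-29 significant digits).
        Console.WriteLine(asDecimal);
    }
}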

the_lotus

If you need precision then you can change to the Decimal type. I have seen big differences in the numbers when calculating for millions of items in our application. Decimal values do have an influence when you do calculations on huge volumes.
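
A minimal sketch of the kind of accumulated difference described here (hypothetical figures, assuming a repeated value such as 0.1 that has no exact binary representation):

using System;

class AccumulationSketch
{
    static void Main()
    {
        const int items = 10000000; // ten million items

        double doubleTotal = 0;
        decimal decimalTotal = 0;

        // 0.1 cannot be represented exactly in binary, so rounding error
        // accumulates in the double total; the decimal total stays exact.
        for (int i = 0; i < items; i++)
        {
            doubleTotal += 0.1;
            decimalTotal += 0.1m;
        }

        Console.WriteLine(doubleTotal.ToString("G17")); // not exactly 1000000 due to accumulated rounding
        Console.WriteLine(decimalTotal);                // exactly 1000000.0
    }
}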

But if you need performance, then double will be the best fit.

It's up to you to decide which type to choose based on your business requirements.