
I would be interested to know whether the .NET compiler can detect a simple division by 2 and transform it into a multiplication by 0.5.

I just thought about it; it would probably be a small place where I could improve my game code.

(Actually I wouldn't worry about that normally, but it's something I could easily run into in the future when I write game code.)

Does `myFloat/2` actually get treated the same way as `myFloat/2f`?

SwissCoder
  • Is `x` an integer, float, double, what? – Servy Feb 09 '12 at 18:04
  • Here's a very similar question: http://stackoverflow.com/questions/5053810/c-xna-multiplication-faster-than-division – DanTheMan Feb 09 '12 at 18:04
  • Use a profiler. Most likely, you won't gain any improvements by replacing x/2 with x*0.5f (or the other way round), but in a completely different area. – dtb Feb 09 '12 at 18:04
  • Not `>> 1` for integers? – HABO Feb 09 '12 at 18:05
  • You have two horses. You wish to know which is faster. So you ask strangers on the internet? I would be more inclined to *race the horses*. If you want to know which coding technique is faster, **try them both and then you'll know**. – Eric Lippert Feb 09 '12 at 18:08
  • You also may want to test if the answers are the same for the given type of `x`. – Austin Salonen Feb 09 '12 at 18:21
  • To follow up on @EricLippert's comment, I suspect that you can save yourself a little time by checking the jitted code first, and you'll see that the code is the same. In his analogy, you can probably satisfy yourself that you're racing the same horse against itself without actually running the race. In any event, dividing a float by two is as simple as decrementing the exponent, so it must be a blindingly fast operation. – phoog Feb 09 '12 at 19:13
  • I think I could not measure any speed difference on my lightning-fast CPU, so I really don't understand why you suggest that. I wanted to know about the rules that apply when dividing by an int compared to multiplying by a float. DanTheMan posted a nice link to a similar question that helped me, and the given answers help too. @phoog: can you tell me how I can inspect the code after it has been generated from the sources? If I use ILSpy, I only get the CLR code, not assembler or anything like that. – SwissCoder Feb 10 '12 at 11:54
  • Throw an exception, run outside the debugger, attach when it's thrown. Then look at the disassembly window. Also, this question has already been asked, and answered in fact by me - the answer is that no, it does not make the optimization, writing it as *0.5f is in fact faster. – harold Feb 10 '12 at 13:50
  • Some integer divisions are optimized to a double-width multiply of which the upper half is used, with some adjustments. This is slightly slower (but not much) than the floating point multiply on some processors and slightly faster on others. – harold Feb 10 '12 at 14:09
  • thanks a lot harold. I'm sorry that I didn't find your question – SwissCoder Feb 10 '12 at 14:23

2 Answers


Yes. There is no difference in performance. In fact, the floating-point hardware can't divide a float by an int, so the compiler will have to convert any int to a float anyway before performing the arithmetic.

Edit: this is, of course, assuming that x is a float. Otherwise, never mind.

Mr Lister
  • Thank you. It's sad to see almost any kind of question being locked down so quickly. I knew the reaction would be like that, but it's not actually a very noobish question, I think. I completely forgot about the binary stuff and how easy it should be to divide by a factor of 2. – SwissCoder Feb 10 '12 at 11:46

Oftentimes division by two will be changed into a bit shift. A left shift multiplies by two; a right shift divides by two. Bit shifting is much cheaper for a processor than multiplication or division. (This applies to integers; it doesn't help with the float case in the question.)

Servy