9

Wikipedia, the one true source of knowledge, states:

On most older microprocessors, bitwise operations are slightly faster than addition and subtraction operations and usually significantly faster than multiplication and division operations. On modern architectures, this is not the case: bitwise operations are generally the same speed as addition (though still faster than multiplication).

Is there a practical reason to learn bitwise operation hacks, or is it now just something you learn for theory and curiosity?

CamelCamelCamel
  • 5,200
  • 8
  • 61
  • 93
  • 1
    wikipedia "true source of knowledge"? it seems a little bit exagerated... – ShinTakezou Jul 02 '11 at 21:20
  • 12
    @ShinTakezou: it's called sarcasm. – geekosaur Jul 02 '11 at 21:21
  • 1
I can't get clues about the fact it's sarcasm except by the fact that I think the claim is exaggerated, and since I know people who think wikipedia is really a great great source of knowledge and that it's always right, and since I don't know OP's real opinion and PoV about it, I have to take the claim seriously. Glad to know that it is sarcasm. – ShinTakezou Jul 02 '11 at 21:26
  • 1
In this case our one true source of knowledge is even perfectly correct for modern CPUs. See [here](http://www.agner.org/optimize/instruction_tables.pdf) for the instruction tables. The reciprocal throughput for ADD/SUB and AND/OR/XOR is pretty much identical ;) – Voo Jul 02 '11 at 22:49
  • How dare you dishonor the name of the Wikimedia Foundation! [For a GNU dawn!](http://xkcd.com/225/) – Mateen Ulhaq Jul 03 '11 at 02:32
  • [Real world use cases of bitwise operators](https://stackoverflow.com/q/2096916/995714) – phuclv May 30 '22 at 03:40

9 Answers

17

Bitwise operations are worth studying because they have many applications. It is not their main use to substitute arithmetic operations. Cryptography, computer graphics, hash functions, compression algorithms, and network protocols are just some examples where bitwise operations are extremely useful.

The lines you quoted from the Wikipedia article only comment on the speed of bitwise operations. Unfortunately, the article fails to give any good examples of their applications.

Mackie Messer
  • 7,118
  • 3
  • 35
  • 40
12

Bitwise operations are still useful. For instance, they can be used to pack several "flags" into a single variable, saving on the number of variables you would otherwise need to track various conditions. Concerning performance of arithmetic operations, it is better to let the compiler do the optimization (unless you are some sort of guru).

Asclepius
  • 57,944
  • 17
  • 167
  • 143
Cratylus
  • 52,998
  • 69
  • 209
  • 339
This is indispensable for stuff like protocols or hardware configuration, as well as a potential source of much bigger performance improvements than the operation cycle counts would imply, since it can dramatically reduce the memory bandwidth requirements of code. – Per Knytt Jul 02 '11 at 21:32
  • 2
    "it is better to leave the compiler do the optimization (unless you are some sort of guru)" - Or writing the compiler, or working with a crap compiler... Learning all the hacks may not be essential, but learning how to use bitwise operations should be considered as important as learning how to use '+' and ',' correctly. – James Greenhalgh Jul 03 '11 at 00:24
  • 1
    @James Not sure about that. As an earlier post here (sadly deleted now) showed it's just way too easy to get that stuff wrong. Also bitstuff is mostly useful as a performance optimization for math-heavy stuff that should be in some nice library. For space optimizations (and there are some pitfalls in that) you can use bitfields which hide that stuff nicely again. So basically you'd only need to do this to write some low level protocol stuff because C leaves the order of bitfields once again undefined.. – Voo Jul 03 '11 at 02:02
7

They're useful for getting to understand how binary "works"; otherwise, no. In fact, I'd say that even if the bitwise hacks are faster on a given architecture, it's the compiler's job to make use of that fact — not yours. Write what you mean.

geekosaur
  • 59,309
  • 11
  • 123
  • 114
  • +1 for "it's the compiler's job." That said, sometimes what you mean is to | together a bunch of bit masks, then & the result with a value to see if various flags are set. – Sherm Pendley Jul 02 '11 at 21:29
  • @Sherm: Sure — but then you don't "mean" addition or subtraction, so the bitwise operations are semantically correct (and again, it's the compiler's job to figure out the best way to do it). – geekosaur Jul 02 '11 at 21:31
  • 1
    code is not always compiled, depending on the platform and language, so if performance is still a concern, bitwise operations can be handy to know and use – ricosrealm Jul 02 '11 at 22:37
I wouldn't put it beyond modern interpreters to optimize some trivial bit tricks, and even if not: tight math loops are rarely run in interpreted languages, so either way the result will not be especially great (a factor of 50+ or so between a numpy and a regular Python implementation?) – Voo Jul 02 '11 at 22:51
2

The only case where it makes sense to use them is if you're actually using your numbers as bitvectors. For instance, if you're modeling some sort of hardware and the variables represent registers.

If you want to perform arithmetic, use the arithmetic operators.

Nathan Fellman
  • 122,701
  • 101
  • 260
  • 319
1

Funny nobody saw fit to mention the ctype[] array in C/C++ - also implemented in Java. This concept is extremely useful in language processing, especially when using different alphabets, or when parsing a sentence.

ctype[] is an array of 256 short integers, and in each integer there are bits representing different character types. For example, ctype['A'] - ctype['Z'] have bits set to show they are upper-case letters of the alphabet; ctype['0'] - ctype['9'] have bits set to show they are numeric. To see if a character x is alphanumeric, you can write something like 'if (ctype[x] & (UC | LC | NUM))', which is somewhat faster and much more elegant than writing 'if ('A' <= x && x <= 'Z' || ...)'.

Once you start thinking bitwise, you find lots of places to use it. For instance, I had two text buffers. I wrote one to the other, replacing all occurrences of FINDstring with REPLACEstring as I went. Then for the next find-replace pair, I simply switched the buffer indices, so I was always writing from buffer[in] to buffer[out]. 'in' started as 0, 'out' as 1. After completing a copy I simply wrote 'in ^= 1; out ^= 1;'. And after handling all the replacements I just wrote buffer[out] to disk, not needing to know what 'out' was at that time.

If you think this is low-level, consider that certain mental errors such as deja-vu and its twin jamais-vu are caused by cerebral bit errors!

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
frank
  • 11
  • 1
1

Depends what your problem is. If you are controlling hardware you need ways to set single bits within an integer.

Buy an OGD1 PCI board (open graphics card) and talk to it using libpci. http://en.wikipedia.org/wiki/Open_Graphics_Project

whoplisp
  • 2,508
  • 16
  • 19
1

It is true that in most cases when you multiply an integer by a constant that happens to be a power of two, the compiler optimises it to a bit-shift. However, when the shift amount is also a variable, the compiler cannot deduce it, unless you explicitly use the shift operation.

CygnusX1
  • 20,968
  • 5
  • 65
  • 109
  • True. On the other hand some processors multiply faster than they can perform a shift by a variable amount of bits. Just write what is readable. Chances are that the critical 10% of your code are somewhere else anyway... – Mackie Messer Jul 02 '11 at 23:17
0

Of course (to me) the answer is yes: there can be practical reasons to learn them. The fact that nowadays, e.g., an add instruction on typical processors is as fast as an or/xor or an and just means exactly that: an add is as fast as, say, an or on those processors.

The improvement in speed of instructions like add, divide, and so on just means that on those processors you can now use them and worry less about the performance impact; but it is as true now as in the past that you usually won't replace every add with bitwise operations just to implement an add. That is, it may depend on which hacks: some hacks must now be considered educational rather than practical; others still have their practical applications.

ShinTakezou
  • 9,432
  • 1
  • 29
  • 39
0

Working with IPv4 addresses frequently requires bit operations to discover whether a peer's address is within a routable network or must be forwarded to a gateway, or whether the peer is part of a network allowed or denied by firewall rules. Bit operations are also required to discover the broadcast address of a network.

Working with IPv6 addresses requires the same fundamental bit-level operations, but because they are so long, I'm not sure how they are implemented. I'd wager money that they are still implemented using the bit operators on pieces of the data, sized appropriately for the architecture.

sarnold
  • 102,305
  • 22
  • 181
  • 238