1

Going to sleep tonight I was wondering: if a bool in C++, for example, is set to false, does that mean that all of its (8 or 16) bits are set to 0? (It seems to be so.)

A zero bit, as far as I know, means no current flowing through some transistor, so would a false bool therefore waste less energy than a true one on a battery-powered device?

If so, would it be better, for example, to set default boolean (or maybe even other) function parameters to false?

Instead of:

void DrawImage(int x, int y, bool cached = true);

Do

void DrawImage(int x, int y, bool not_cached = false);
Peter Cordes
  • 328,167
  • 45
  • 605
  • 847
Ngdgvcb
  • 155
  • 6
  • As Peter says, it doesn't work that way. But there is a principle that you can save power if the total *number* of zero and one bits is conserved. So instead of zeroing a bit, swap it with a bit elsewhere that is known to be zero. I want to say it's called "low entropy architecture" or something like that, but can't find a reference right now. – Nate Eldredge Aug 31 '22 at 14:31
  • @NateEldredge, I will Google it – Ngdgvcb Sep 01 '22 at 06:35

2 Answers

1

A zero bit, as far as I know, means no current flowing through some transistor, so would a false bool therefore waste less energy than a true one on a battery-powered device?

No, not for that reason. CMOS logic has no current flowing in either static state, only in the transition between states (to charge / discharge the parasitic capacitance, and any shoot-through current that flows as the pull-up and pull-down transistors both partially conduct for a moment). Apart from leakage current, of course, which is somewhat significant at lower clock speeds.

CMOS is more or less symmetric, except for differences between N-channel and P-channel MOSFETs, so 1 isn't different from 0 in terms of voltage states and how transistors let charge flow.

You'd be right for the output of one gate in some other logic families like TTL (bipolar transistors with pull-up resistors), where a transistor would pull current to ground through a pull-up resistor or not. But only for one gate; usually logic involves multiple inversions, because an amplifier naturally inverts (in CMOS or TTL or RTL).

Also only for 1 bit out of 64 in the register used for arg-passing. The CPU's pipeline state, and the out-of-order execution machinery, take vastly more transistors (and gates) than just the actual architectural state (register values) and data being operated on. So the state of 1 bit is pretty negligible.

The large number of tiny transistors in a CPU is why CPUs have used CMOS logic for decades, otherwise those static currents through pull-up resistors in RTL or TTL would melt them.

Even with CMOS, power density has been a problem since the early 2000s (the "power wall" for frequency scaling, as described in Modern Microprocessors A 90-Minute Guide! which is pretty essential reading if you want to know more about CPU design considerations). In CMOS, it takes higher voltages to switch faster (about linearly), and the energy in a capacitor scales with V^2. And current only flows in CMOS when a gate switches from 0 to 1 or vice versa, and the rate of that happening is some factor of the CPU clock. So running at the minimum voltage for a given frequency, power scales with about f^3.
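
To make that scaling concrete, here is a minimal sketch of the usual first-order dynamic-power model (P ≈ alpha * C * V^2 * f, with the minimum stable voltage taken as roughly proportional to f). The function and constants are illustrative only, not from the answer above:

#include <cstdio>

// Toy model of CMOS dynamic power: P = alpha * C * V^2 * f.
// Assume the minimum stable voltage scales roughly linearly with frequency.
double relative_dynamic_power(double freq_scale) {
    double v_scale = freq_scale;               // V ~ f (rough first-order assumption)
    return v_scale * v_scale * freq_scale;     // P ~ V^2 * f  =>  ~ f^3
}

int main() {
    std::printf("2x clock -> ~%.0fx dynamic power\n", relative_dynamic_power(2.0)); // ~8x
}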


Other factors that could make creating a 0 cheaper

On x86, xor edi, edi is a cheaper instruction than mov edi, 1. On Sandybridge-family CPUs, it doesn't even need an execution unit in the back-end, so that's definitely some transistors that didn't need to be switching. It's also a smaller instruction (2 bytes vs. 5, or 3 for mov dil, 1 to save code size at the cost of partial-register performance penalties). So passing a 0 can perhaps improve performance, letting the same number of instructions finish sooner, letting the CPU get back to sleep sooner (race to sleep). Or not: there might easily be no effect, or different code alignment of later instructions might happen to be better with the longer instruction.
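
For example, at a hypothetical call site like the one in the question (with the x86-64 System V calling convention the bool is the third integer argument, so it lands in EDX rather than EDI, but the idea is the same):

DrawImage(x, y, false);   // a compiler can emit:  xor edx, edx   (2 bytes, no back-end execution unit on SnB-family)
DrawImage(x, y, true);    // typically:            mov edx, 1     (5 bytes)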

Most other ISAs don't have as much difference between zeroing a register vs. setting it to 1. And even on x86, this is not generally a very valuable optimization.

But still, if you have a choice for one value to be special, 0 is a good choice, especially for non-bool integers, since it's slightly more efficient to test for 0 vs. non-0 than to compare against any other number. (So for example if you're using plain int, x != 0 is cheaper than x == 1. With a bool, a compiler can already just test for non-zero if you write b == true.)
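
As a small sketch of that last point (the function names are made up; the comments show what GCC/Clang typically emit for x86-64 at -O2):

bool is_zero(int x) { return x == 0; }   // test edi, edi ; sete al
bool is_one (int x) { return x == 1; }   // cmp  edi, 1   ; sete al  (comparison needs an immediate)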

Peter Cordes
  • 328,167
  • 45
  • 605
  • 847
-1

While this would probably technically save energy, the amount saved is negligible, as it is only a single bit being set to true or false, and the lengths you would have to go to to make it worthwhile are unreasonable considering the tradeoff.

Even then, making the reader jump through extra mental hoops and making your code harder to read by having to think twice about a bool is a bad idea. Interesting to think about, though. For instance, see the hypothetical call sites below.
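
With the inverted default from the question, a call site makes the reader parse a double negative:

DrawImage(x, y, /*cached=*/true);        // reads directly
DrawImage(x, y, /*not_cached=*/false);   // "not not cached": an extra mental hop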

Chris
  • 180
  • 1
  • 14
  • Well, there can be a lot of variables, and they can be accessed many times. Of course, I (and you too, as I can see) do not know exactly how much energy it costs, but it could still be useful – Ngdgvcb Aug 30 '22 at 07:05
  • As I said, it won't make a difference in the cases you described, where you're just setting individual bools to false instead of true; they're just one bit, after all. This would maybe be different in scenarios where you have millions of bools, but I probably still wouldn't bother. As I said, it's generally a bad idea to make your code harder to read... – Chris Aug 30 '22 at 07:24
  • Agreed that it wouldn't be worth doing even if it did save a bit of energy, this is the wrong answer to the CPU-architecture / electrical part of the question. It *wouldn't* save any energy at all, not for the reason proposed. – Peter Cordes Aug 30 '22 at 11:22