I know a power of 2 can be implemented using the << operator. What about a power of 10, like 10^5? Is there any way faster than pow(10,5) in C++? It is a pretty straightforward computation by hand, but it seems less easy for computers due to the binary representation of the numbers... Let us assume I am only interested in integer powers, 10^n, where n is an integer.
- Use a lookup table? – user541686 Sep 02 '13 at 22:28
- A lookup table is only useful if you are raising to the power of integer values or decimal values that can be scaled to an integer. If it can be an arbitrary float, you are stuck with `pow` or an equivalent library. – paddy Sep 02 '13 at 22:32
- If it's always 10 on the one side, and an integer on the other side, you could write your own table and be done with it. Can't get quicker than that - it's literally one memory read and one or two simple operations to index into the table. – Mats Petersson Sep 02 '13 at 22:39
- @paddy: The OP is only talking about integer powers, or `<<` wouldn't have come up. – Mad Physicist Nov 21 '20 at 21:01
13 Answers
Something like this:
int quick_pow10(int n)
{
    static int pow10[10] = {
        1, 10, 100, 1000, 10000,
        100000, 1000000, 10000000, 100000000, 1000000000
    };

    return pow10[n];
}
Obviously, you can do the same thing for `long long`.
This should be several times faster than any competing method. However, it is quite limited if you have lots of bases (although the number of values goes down quite dramatically with larger bases), so if there isn't a huge number of combinations, it's still doable.
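For reference, here is a minimal sketch of what the `long long` variant might look like, with a bounds check added as the comments below suggest (the name quick_pow10_ll and the assert are my additions, not part of the original answer):
#include <cassert>

// 10^0 .. 10^18 all fit in a signed 64-bit integer; 10^19 would overflow.
long long quick_pow10_ll(int n)
{
    static const long long pow10[19] = {
        1LL, 10LL, 100LL, 1000LL, 10000LL,
        100000LL, 1000000LL, 10000000LL, 100000000LL, 1000000000LL,
        10000000000LL, 100000000000LL, 1000000000000LL,
        10000000000000LL, 100000000000000LL, 1000000000000000LL,
        10000000000000000LL, 100000000000000000LL, 1000000000000000000LL
    };

    assert(n >= 0 && n < 19);   // keep the exponent inside the table
    return pow10[n];
}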
As a comparison:
#include <iostream>
#include <cstdlib>
#include <cmath>

static int quick_pow10(int n)
{
    static int pow10[10] = {
        1, 10, 100, 1000, 10000,
        100000, 1000000, 10000000, 100000000, 1000000000
    };

    return pow10[n];
}

static int integer_pow(int x, int n)
{
    int r = 1;
    while (n--)
        r *= x;
    return r;
}

static int opt_int_pow(int n)
{
    int r = 1;
    const int x = 10;
    while (n)
    {
        if (n & 1)
        {
            r *= x;
            n--;
        }
        else
        {
            r *= x * x;
            n -= 2;
        }
    }
    return r;
}

int main(int argc, char **argv)
{
    long long sum = 0;
    int n = strtol(argv[1], 0, 0);
    const long outer_loops = 1000000000;

    if (argv[2][0] == 'a')
    {
        for(long i = 0; i < outer_loops / n; i++)
        {
            for(int j = 1; j < n+1; j++)
            {
                sum += quick_pow10(n);
            }
        }
    }
    if (argv[2][0] == 'b')
    {
        for(long i = 0; i < outer_loops / n; i++)
        {
            for(int j = 1; j < n+1; j++)
            {
                sum += integer_pow(10,n);
            }
        }
    }
    if (argv[2][0] == 'c')
    {
        for(long i = 0; i < outer_loops / n; i++)
        {
            for(int j = 1; j < n+1; j++)
            {
                sum += opt_int_pow(n);
            }
        }
    }

    std::cout << "sum=" << sum << std::endl;
    return 0;
}
Compiled with g++ 4.6.3, using -Wall -O2 -std=c++0x, this gives the following results:
$ g++ -Wall -O2 -std=c++0x pow.cpp
$ time ./a.out 8 a
sum=100000000000000000
real 0m0.124s
user 0m0.119s
sys 0m0.004s
$ time ./a.out 8 b
sum=100000000000000000
real 0m7.502s
user 0m7.482s
sys 0m0.003s
$ time ./a.out 8 c
sum=100000000000000000
real 0m6.098s
user 0m6.077s
sys 0m0.002s
(I did have an option for using pow as well, but it took 1m22.56s when I first tried it, so I removed it when I decided to add the optimised loop variant.)

- You might be able to initialize your data with a loop. The compiler might be able to optimize that by computing the values before runtime? – FreelanceConsultant Sep 02 '13 at 23:13
- @EdwardBird: For this purpose, `static` is faster than a loop, for sure, since it's probably initialized at compile time, or at least only once. A loop will initialize every time. – Mats Petersson Sep 02 '13 at 23:15
- @MatsPetersson I read something recently about compiler optimizations which compute values at compile time... For example, if you have `x = x * 10 * 5` some compilers will change that to `x = x * 50`. Would a compiler not detect that the loop initializes some values and therefore compute them when compiling, so any executable program wouldn't have to? – FreelanceConsultant Sep 02 '13 at 23:19
- I have just confirmed that g++ at least just makes a global table that is initialized at build time. – Mats Petersson Sep 02 '13 at 23:38
- Nice, optimal answer, but I think you really should add a bounds check on `n`. – cmaster - reinstate monica Sep 03 '13 at 16:05
- Should there be an `n--;` after `r *= x;` in `opt_int_pow()`? Also note that this only works for a positive power. – Toby Oct 31 '14 at 16:50
- Just used this solution for [this question](https://stackoverflow.com/a/61049428/1593077). Thank you! – einpoklum Apr 05 '20 at 22:33
There are certainly ways to compute integral powers of 10 faster than using `std::pow()`! The first realization is that `pow(x, n)` can be implemented in O(log n) time. The next realization is that `pow(x, 10)` is the same as `(x << 3) * (x << 1)`. Of course, the compiler knows the latter, i.e., when you are multiplying an integer by the integer constant 10, the compiler will do whatever is fastest to multiply by 10. Based on these two rules it is easy to create fast computations, even if x is a big integer type.
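As the comments below note, the shift expression as written actually computes 16x^2; the usual decimal trick is `10*x == (x << 3) + (x << 1)`. For the O(log n) part, here is a minimal sketch of exponentiation by squaring (my illustration, not the answerer's code, and without overflow handling):
// Exponentiation by squaring: O(log n) multiplications.
// Assumes n >= 0; overflow checking is left out for brevity.
long long ipow(long long x, unsigned n)
{
    long long result = 1;
    while (n != 0) {
        if (n & 1)          // current low bit set: fold the base into the result
            result *= x;
        x *= x;             // square the base for the next bit
        n >>= 1;
    }
    return result;
}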
In case you are interested in games like this:
- A generic O(log n) version of power is discussed in Elements of Programming.
- Lots of interesting "tricks" with integers are discussed in Hacker's Delight.

- For some bigint type, probably. I don't think the question is about that, though. For fixed-size integer types, a lookup table makes it an O(1) operation. (Edit: that's not really fair, though. For fixed-size integer types, n has an upper limit, and if n has an upper limit, even something like O(2^n) is equivalent to O(1) then.) For floating point types, pow is already normally implemented as an O(1) operation. – Sep 02 '13 at 22:38
- @H2CO3 Yeah, you're right. O(1) will be faster than O(n) for large enough n, but that isn't saying much when n cannot be more than 20 or so. – Sep 02 '13 at 22:42
- Yes, software can calculate pow10() in O(log n) ***sequential*** time, but hardware is inherently **parallel**. It doesn't have to calculate things sequentially (never mind the fact that it *could* do things sequentially just like software, if it wanted to). So the time complexity point is irrelevant. You're comparing apples and oranges in your sentence about time complexity. – user541686 Sep 02 '13 at 23:24
- @Mehrdad: Sure, for fixed-size integers you don't even need a parallel version because you can use a look-up table. However, once the size of your integers isn't bounded (other than by the size of total available memory), the parallel algorithm the hardware can use won't help you much to get below O(log n). – Dietmar Kühl Sep 02 '13 at 23:30
- @DietmarKühl: On the one hand you're talking about *humongous* numbers when you refer to O(log n), but on the other hand you're using them to talk about the performance of `std::pow`, which is obviously designed for small numbers that can be implemented in hardware. Are you talking about small numbers or large numbers? The fact that exponentiation is O(log n) has nothing to do with the performance of `std::pow`... – user541686 Sep 02 '13 at 23:36
- I thought std::pow() is O(log n) time, isn't it? The bit-operation thought is quite interesting and very hard to come up with by myself :) – szli Sep 03 '13 at 01:28
- @szli: The problem is that `std::pow` is technically O(1) because its inputs are O(1), a.k.a. finite. And when you get to variable-length integers, just reading them becomes O(log N). That complicates things. – MSalters Sep 03 '13 at 07:00
- (x<<3)*(x<<1) = 8x*2x = 16x^2... this answer is pretty problematic. Probably meant exponentiation by repeated squaring, e.g. x^(1<<3)*x^(1<<1), but obviously this type of mistake is unacceptable. – Gregory Morse Feb 21 '21 at 12:27
- @GregoryMorse I'm betting they meant `(x<<3)+(x<<1)` which is `8x+2x=10x`. – sampathsris Dec 03 '21 at 04:16
A solution for any base using template metaprogramming:
template<int E, int N>
struct pow {
    enum { value = E * pow<E, N - 1>::value };
};

template <int E>
struct pow<E, 0> {
    enum { value = 1 };
};
Then it can be used to generate a lookup table that can be used at runtime:
template<int E>
long long quick_pow(unsigned int n) {
    static long long lookupTable[] = {
        pow<E, 0>::value, pow<E, 1>::value, pow<E, 2>::value,
        pow<E, 3>::value, pow<E, 4>::value, pow<E, 5>::value,
        pow<E, 6>::value, pow<E, 7>::value, pow<E, 8>::value,
        pow<E, 9>::value
    };
    return lookupTable[n];
}
This must be compiled with the correct compiler flags in order to detect possible overflows.
Usage example:
for(unsigned int n = 0; n < 10; ++n) {
    std::cout << quick_pow<10>(n) << std::endl;
}

- @cmaster Ok. I tried to improve my answer... I will delete my answer if it is still not useful or incorrect. – Vincent Sep 04 '13 at 14:11
An integer power function (which doesn't involve floating-point conversions and computations) may very well be faster than `pow()`:
int integer_pow(int x, int n)
{
    int r = 1;
    while (n--)
        r *= x;
    return r;
}
Edit: benchmarked - the naive integer exponentiation method seems to outperform the floating-point one by about a factor of two:
h2co3-macbook:~ h2co3$ cat quirk.c
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <errno.h>
#include <string.h>
#include <math.h>
int integer_pow(int x, int n)
{
    int r = 1;
    while (n--)
        r *= x;
    return r;
}

int main(int argc, char *argv[])
{
    int x = 0;

    for (int i = 0; i < 100000000; i++) {
        x += powerfunc(i, 5);
    }

    printf("x = %d\n", x);
    return 0;
}
h2co3-macbook:~ h2co3$ clang -Wall -o quirk quirk.c -Dpowerfunc=integer_pow
h2co3-macbook:~ h2co3$ time ./quirk
x = -1945812992
real 0m1.169s
user 0m1.164s
sys 0m0.003s
h2co3-macbook:~ h2co3$ clang -Wall -o quirk quirk.c -Dpowerfunc=pow
h2co3-macbook:~ h2co3$ time ./quirk
x = -2147483648
real 0m2.898s
user 0m2.891s
sys 0m0.004s
h2co3-macbook:~ h2co3$
- @Mehrdad A few years ago. Not recently, though. Who knows what improvements floating-point hardware (and software) have got since then. – Sep 02 '13 at 22:30
- There are much faster ways to do integer exponentiation. See, e.g., [Exponentiation by squaring](http://en.wikipedia.org/wiki/Exponentiation_by_squaring) or [Addition-chain exponentiation](http://en.wikipedia.org/wiki/Addition-chain_exponentiation). – Ted Hopp Sep 02 '13 at 22:31
- Not sure, because of integrated floating-point cores (SSE-like) that could do this really well in float. – dzada Sep 02 '13 at 22:31
- Well, I have **just** tried it, and the naive integer power method seems twice as fast as the floating-point method. – Sep 02 '13 at 22:36
- @H2CO3: Surely you want to give it some -O? And why are the results different? – Mats Petersson Sep 02 '13 at 22:43
- @Mehrdad You are not going to like the `-O2` version. Now that's 0.015 sec versus 2.83 sec. A 20-fold difference. – Sep 02 '13 at 22:47
- @MatsPetersson OK, I gave it some `-O` (see my comment above). I suspect the results are different because of the floating-point rounding and truncation errors. – Sep 02 '13 at 22:50
- @H2CO3: Your test is still wrong (though that doesn't mean you're wrong about it being faster): you're converting between integers and floating-point numbers a lot within the loop in one test, but not the other. You need to make sure the function is the **only** thing you're testing. – user541686 Sep 02 '13 at 22:53
- Given the usual implementation of `double`, `pow(i,5)` loses precision at `i >= 1553`. – aschepler Sep 02 '13 at 22:53
- I think it's a little unfair to use a constant `n` in the loop too. – Mats Petersson Sep 02 '13 at 22:55
- @Mehrdad I'm not sure whether it's "wrong". When you use `pow()` to obtain some integer raised to some other integer power, the compiler **has** to perform the integer <-> floating-point conversion. – Sep 02 '13 at 22:55
- @aschepler In my example, `i` runs until a hundred million, so that seems reasonable. – Sep 02 '13 at 22:56
- @MatsPetersson With a call to `n = strtoull(argv[1], NULL, 0)`, all integers changed to `unsigned long`, the optimization level set to `-O2`, and `powerfunc()` called with `(i, n)`, the result is 0.3 and 3 seconds (the integer function is still faster). – Sep 02 '13 at 23:00
- @H2CO3: `x += powerfunc(...)` casts the `double` to an `int` when `powerfunc` is `pow`. – user541686 Sep 02 '13 at 23:01
- @Mehrdad: I suspect a conversion from `double` to `int` is worth 10x the time of `powerfunc`, and in fact, the OP's question is indeed about integer `pow(10, n)` - which will definitely result in a `double` cast to `int` in some way or another. – Mats Petersson Sep 02 '13 at 23:06
- @Mehrdad That's not a cast, that's an implicit conversion (not like it would matter). But still, if I omit that, I get a ~10% performance boost. [Here is some proof](http://i.imgur.com/1bkOPiw.png). I did everything you wanted, and the naive integer power implementation is still 9 times faster than the floating-point one. – Sep 02 '13 at 23:09
- @H2CO3: A power of 5 is pretty small. When I try a power of 10 on Visual C++, I still see `integer_pow` being faster -- but it's only a difference of 178 ms against 153 ms, which is pretty darn small. [I don't know why GCC is so much slower](http://ideone.com/SrFIUO) like this, but it's definitely not a universal thing for `pow` to be so much slower than integer exponentiation. – user541686 Sep 02 '13 at 23:19
- @H2CO3: We're picking on your answer because it's the one we're most skeptical about. I'm not asking for benchmarks on lookup tables because I'm already pretty sure they're faster. Don't take it personally! – user541686 Sep 02 '13 at 23:20
- @Mehrdad What's interesting, despite that `pow()` with floating-point numbers ought to be `O(1)`, it doesn't stay at 3 seconds when I compile the benchmark I gave you a link to and call it with `n = 50`. Now the integer version runs in 3 seconds (10x the exponent, 10x the running time, that's expected) -- but the floating-point version ran **in 15 seconds instead of 2.83.** What gives? – Sep 02 '13 at 23:23
- @H2CO3: Why do you think it's supposed to be O(1) with floating-point numbers? – user541686 Sep 02 '13 at 23:26
- @Mehrdad AFAIK floating-point approximations of functions like `pow()`, `log()` and trigonometric functions are implemented using some sort of series expansion or the like. Those mostly use multiplication and division, which somebody just mentioned can be `O(1)`. But still, this is less important than the fact that **for me,** the integer variant is still faster. I'm not saying that it **absolutely has to** be faster for you too. – Sep 02 '13 at 23:30
- @H2CO3: I don't think a fixed number of terms in a series expansion can guarantee accuracy for all combinations of possible inputs, but I'm not sure. In any case, I think you need to relax a bit... no one's trying to attack you. – user541686 Sep 02 '13 at 23:37
- @H2CO3: Didn't I already? I thought I already said this above: *"When I try a power of 10 on Visual C++, I still see `integer_pow` being faster..."* – user541686 Sep 02 '13 at 23:40
- I just ran my code vs. the code posted here. Mine is MUCH faster... ;) – Mats Petersson Sep 02 '13 at 23:59
- @MatsPetersson I'm not surprised, actually :) My claim was not that "hey all, I've got an uber-smart O(0) algorithm that even buys you beer, because I am God!"... All I *dared* to mention humbly was that in my experience integer operations are almost always faster than floating-point ones. Apparently, I shouldn't have done so. – Sep 03 '13 at 00:10
- Yes, I agree. Floating-point math in general is slower than integer math, and even iterating will make it quite a bit faster. – Mats Petersson Sep 03 '13 at 00:13
A version with no multiplication and no table:
// N x 10^n
int Npow10(int N, int n){
    N <<= n;                    // N * 2^n
    while(n--) N += N << 2;     // multiply by 5, n times: N * 2^n * 5^n = N * 10^n
    return N;
}

- This is neat but only works on unsigned numbers - the function signature ought to convey that. – Pete May 05 '22 at 08:43
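One possible way to reflect that in the signature, as a sketch (the name Npow10u is hypothetical; it is the same shift-and-add idea, just on unsigned types so the left shifts are well defined):
// N x 10^n, for unsigned values only
unsigned long long Npow10u(unsigned long long N, unsigned n){
    N <<= n;                    // N * 2^n
    while(n--) N += N << 2;     // N * 5, applied n times
    return N;
}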
Here is a stab at it:
// specialize if you have a bignum integer like type you want to work with:
template<typename T> struct is_integer_like:std::is_integral<T> {};
template<typename T> struct make_unsigned_like:std::make_unsigned<T> {};

template<typename T, typename U>
T powT( T base, U exponent ) {
    static_assert( is_integer_like<U>::value, "exponent must be integer-like" );
    static_assert( std::is_same< U, typename make_unsigned_like<U>::type >::value, "exponent must be unsigned" );

    T retval = 1;
    T& multiplicand = base;
    if (exponent) {
        while (true) {
            // branch prediction will be awful here, you may have to micro-optimize:
            retval *= (exponent&1)?multiplicand:1;
            // or /2, whatever -- `>>1` is probably faster, esp for bignums:
            exponent = exponent>>1;
            if (!exponent)
                break;
            multiplicand *= multiplicand;
        }
    }
    return retval;
}
What is going on above is a few things.
First, so that BigNum support is cheap, it is templatized. Out of the box, it supports any base type that supports `*= own_type` and that either can be implicitly converted to `int`, or that `int` can be implicitly converted to (if both are true, problems will occur), and you need to specialize some templates to indicate that the exponent type involved is both unsigned and integer-like.
In this case, integer-like and unsigned means that it supports `&1` returning `bool` and `>>1` returning something it can be constructed from, and that it eventually (after repeated `>>1`s) reaches a point where evaluating it in a `bool` context returns `false`. I used traits classes to express the restriction, because naive use with a value like `-1` would compile and (on some platforms) loop forever, while (on others) it would not.
Execution time for this algorithm, assuming multiplication is O(1), is O(lg(exponent)), where lg(exponent) is the number of times it takes to `>>1` the exponent before it evaluates as `false` in a boolean context. For traditional integer types, this would be the binary log of the exponent's value: so no more than 32.
I also eliminated all branches within the loop (or, made it obvious to existing compilers that no branch is needed, more precisely), with just the control branch (which is true uniformly until it is false once). Possibly eliminating even that branch might be worth it for high bases and low exponents...
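For illustration, a hypothetical call site, assuming the powT template above is in scope (note that the exponent argument must be an unsigned type, or the static_assert fires):
#include <cstdint>
#include <iostream>

int main() {
    // 10^9 fits in a 64-bit integer; the exponent is passed as unsigned.
    std::int64_t value = powT<std::int64_t>(10, 9u);
    std::cout << value << "\n";   // prints 1000000000
    return 0;
}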

- Why do you needlessly pollute your code with these hideous templates?!? It hurts. I almost downvoted you for it... – cmaster - reinstate monica Sep 03 '13 at 16:10
- @cmaster Because doing this code mainly makes sense if you are using bignums or other types that are not simple integers. If your base is an integer > 1, then using 64 bit ints the naive "multiply self exponent times" is going to be a loop of no more than length 64 before it overflows, which will be faster than my code above. The above code handles bases that are floating point, bignum, rationals, or anything else: and outside of those cases, the above code *is not worth running*. In the integer base case, it won't be *much* slower in the worst case, and it may be faster in common cases. – Yakk - Adam Nevraumont Sep 03 '13 at 17:00
Now, with `constexpr`, you can do it like so:
constexpr int pow10(int n) {
    int result = 1;
    for (int i = 1; i <= n; ++i)
        result *= 10;
    return result;
}

int main () {
    int i = pow10(5);
}
`i` will be calculated at compile time. ASM generated for x86-64 gcc 9.2:
main:
        push    rbp
        mov     rbp, rsp
        mov     DWORD PTR [rbp-4], 100000
        mov     eax, 0
        pop     rbp
        ret

- `int i` could be calculated at compile time, but it also might not be, since `i` is not declared as constexpr. You gave an example with no guarantee of compile-time computation. – Vladislav Kogan Dec 24 '22 at 05:29
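For what it's worth, a small variation that does guarantee compile-time evaluation is to make the result itself constexpr (a sketch; the loop-based pow10 above needs C++14 or later to be usable in a constant expression):
int main() {
    constexpr int i = pow10(5);   // must be evaluated at compile time
    static_assert(i == 100000, "pow10(5) is a constant expression");
    return 0;
}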
You can use a lookup table, which will be by far the fastest.
You can also consider using this:
template <typename T>
T expt(T p, unsigned q)
{
    T r(1);

    while (q != 0) {
        if (q % 2 == 1) {    // q is odd
            r *= p;
            q--;
        }
        p *= p;
        q /= 2;
    }

    return r;
}
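This is exponentiation by squaring, so it uses O(log q) multiplications. A hypothetical usage, assuming the expt template above is in scope (the 10LL literal keeps the accumulation in long long, so 10^18 still fits):
#include <iostream>

int main() {
    std::cout << expt(10LL, 9)  << "\n";   // 1000000000
    std::cout << expt(10LL, 18) << "\n";   // 1000000000000000000, still fits in long long
    return 0;
}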

- @Mehrdad: ah, yes, it looks like the OP may only be interested in integer powers. (I was considering the general case, where a large table would screw cache performance...) – Oliver Charlesworth Sep 02 '13 at 22:32
- @OliCharlesworth: Exponentiation by squaring is also a good option. Correct me if I am wrong!!! – Rahul Tripathi Sep 02 '13 at 22:35
This function will calculate x^y much faster than pow, in the case of integer values.
int pot(int x, int y){
    int solution = 1;
    while(y){
        if(y & 1)           // low bit of the exponent is set
            solution *= x;
        x *= x;             // square the base
        y >>= 1;
    }
    return solution;
}

A generic table builder based on constexpr functions. The floating-point part requires C++20 and gcc, but the non-floating-point part works with C++17. If you change the "auto" type parameter to "long" you can use C++14. Not properly tested.
#include <cstdio>
#include <cassert>
#include <cmath>

// Precomputes x^N
// Inspired by https://stackoverflow.com/a/34465458
template<auto x, unsigned char N, typename AccumulatorType>
struct PowTable {
    constexpr PowTable() : mTable() {
        AccumulatorType p{ 1 };
        for (unsigned char i = 0; i < N; ++i) {
            p *= x;
            mTable[i] = p;
        }
    }
    AccumulatorType operator[](unsigned char n) const {
        assert(n < N);
        return mTable[n];
    }
    AccumulatorType mTable[N];
};

long pow10(unsigned char n) {
    static constexpr PowTable<10l, 10, long> powTable;
    return powTable[n-1];
}

double powe(unsigned char n) {
    static constexpr PowTable<2.71828182845904523536, 10, double> powTable;
    return powTable[n-1];
}

int main() {
    printf("10^3=%ld\n", pow10(3));
    printf("e^2=%f", powe(2));
    assert(pow10(3) == 1000);
    assert(powe(2) - 7.389056 < 0.001);
}

Based on Mats Petersson's approach, but with compile-time generation of the cache.
#include <iostream>
#include <limits>
#include <array>
#include <cstdint>
#include <cstdlib>

// digits
template <typename T>
constexpr T digits(T number) {
    return number == 0 ? 0
                       : 1 + digits<T>(number / 10);
}

// pow
// https://stackoverflow.com/questions/24656212/why-does-gcc-complain-error-type-intt-of-template-argument-0-depends-on-a
// unfortunately we can't write `template <typename T, T N>` because of partial specialization `PowerOfTen<T, 1>`
template <typename T, uintmax_t N>
struct PowerOfTen {
    enum { value = 10 * PowerOfTen<T, N - 1>::value };
};

template <typename T>
struct PowerOfTen<T, 1> {
    enum { value = 1 };
};

// sequence
template<typename T, T...>
struct pow10_sequence { };

template<typename T, T From, T N, T... Is>
struct make_pow10_sequence_from
    : make_pow10_sequence_from<T, From, N - 1, N - 1, Is...> {
    //
};

template<typename T, T From, T... Is>
struct make_pow10_sequence_from<T, From, From, Is...>
    : pow10_sequence<T, Is...> {
    //
};

// base10list
template <typename T, T N, T... Is>
constexpr std::array<T, N> base10list(pow10_sequence<T, Is...>) {
    return {{ PowerOfTen<T, Is>::value... }};
}

template <typename T, T N>
constexpr std::array<T, N> base10list() {
    return base10list<T, N>(make_pow10_sequence_from<T, 1, N+1>());
}

template <typename T>
constexpr std::array<T, digits(std::numeric_limits<T>::max())> base10list() {
    return base10list<T, digits(std::numeric_limits<T>::max())>();
};

// main pow function
template <typename T>
static T template_quick_pow10(T n) {
    static auto values = base10list<T>();
    return values[n];
}

// client code
int main(int argc, char **argv) {
    long long sum = 0;
    int n = strtol(argv[1], 0, 0);
    const long outer_loops = 1000000000;

    if (argv[2][0] == 't') {
        for(long i = 0; i < outer_loops / n; i++) {
            for(int j = 1; j < n+1; j++) {
                sum += template_quick_pow10(n);
            }
        }
    }

    std::cout << "sum=" << sum << std::endl;
    return 0;
}
The code does not contain quick_pow10, integer_pow and opt_int_pow, for better readability, but the tests were done with them present.
Compiled with gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5), using -Wall -O2 -std=c++0x, it gives the following results:
$ g++ -Wall -O2 -std=c++0x main.cpp
$ time ./a.out 8 a
sum=100000000000000000
real 0m0.438s
user 0m0.432s
sys 0m0.008s
$ time ./a.out 8 b
sum=100000000000000000
real 0m8.783s
user 0m8.777s
sys 0m0.004s
$ time ./a.out 8 c
sum=100000000000000000
real 0m6.708s
user 0m6.700s
sys 0m0.004s
$ time ./a.out 8 t
sum=100000000000000000
real 0m0.439s
user 0m0.436s
sys 0m0.000s
If you want to calculate, e.g., 10^5, then you can:
#include <iostream>
using namespace std;

int main() {
    cout << (int)1e5 << endl; // will print 100000
    cout << (int)1e3 << endl; // will print 1000
    return 0;
}
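One caveat worth adding (my note, not part of the original answer): 1e5 is a double literal, so this relies on the conversion back to an integer being exact. That holds for powers of ten at least up to 10^15, which is comfortably below 2^53, but the destination integer type still has to be wide enough:
#include <iostream>

int main() {
    // Fine: 1e15 is exact in a double and fits in long long.
    std::cout << (long long)1e15 << std::endl;  // 1000000000000000

    // Not fine: 1e10 does not fit in a 32-bit int, so (int)1e10 is undefined behaviour.
    // std::cout << (int)1e10 << std::endl;
    return 0;
}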

`result *= 10` can also be written as `result = (result << 3) + (result << 1)`:
constexpr int pow10(int n) {
    int result = 1;
    for (int i = 0; i < n; i++) {
        result = (result << 3) + (result << 1);
    }
    return result;
}

- Is that actually faster on any common processor/compiler combination? It's certainly not very readable. – Konrad Jan 15 '21 at 16:34
- Please trust the compiler with these kinds of optimizations. It does a better job than you think. – Björn Sundin Apr 15 '21 at 12:49