
I got back into programming as a hobby and started with one of my favourites: calculating primes. Naturally, I wanted bigger numbers, and Boost.Multiprecision seemed like a good fit, so I installed it. My program now accepts bigger numbers. So far so good. But it's really slow compared to using a standard int64_t.

To make it less arbitrary: testing an 18-digit int64_t number that is prime (100000000000000003) takes about 1 second. With mp::int128_t, the same test takes roughly 45 seconds.

I'm partially aware of some issues with the code (using different operators for the same thing, probably bad formatting, the duplicate declaration of n, ...).

Here is my code:

#include <iostream>
#include <boost/multiprecision/cpp_int.hpp>

namespace mp = boost::multiprecision;


bool isPrime(mp::int128_t n);
mp::int128_t n{ 0 }, y{ 0 };

int main()
{
    std::cin >> y;
    std::cin >> n;
    std::cout << "\n";

    for (y; y <= n; y++)
    {
        if (isPrime(y) == true)
            std::cout << "\033[1;7;32m" << y << "\033[0m ";
        else
        {
            std::cout << y << " ";
        }
    } 

    std::cout << "\n";
}
bool isPrime(mp::int128_t n)
{
    if (n == 2 || n == 3)
        return true;
    
    if (n <= 1 or n % 2 == 0 or n % 3 == 0)
        return false;
    
    for (mp::int128_t i = 5; i * i <= n; i += 6)
    {
        if (n % i == 0 or n % (i + 2) == 0)
            return false;
    }

    return true;
}
386
    Did you turn on optimizations when you compiled? – JohnFilleau Jan 19 '22 at 17:14
    [Where should I prefer pass-by-reference or pass-by-value?](https://stackoverflow.com/q/4986341/2027196) – JohnFilleau Jan 19 '22 at 17:18
    Compiling with Visual Studio, `int64_t` takes 1 second and `mp::int128_t` takes 3 seconds; definitely sounds like you haven't enabled optimisations – Alan Birtles Jan 19 '22 at 17:25
  • No, I did not. Just running in debug mode. I'll try that. Hard to imagine that will make much of a difference, though. @AlanBirtles Did you compile the code yourself and get 1 sec to 3 sec? That would still be pretty bad, but it would be a start. I'll try after dinner. Thanks so far. – 386 Jan 19 '22 at 17:33
    3 times slower isn't that surprising. Using an integer type natively supported by your CPU is going to be much faster than one implemented in software (presumably as two 64-bit numbers) – Alan Birtles Jan 19 '22 at 17:39
    Never profile code without turning on optimizations (or at least don't draw conclusions from it). Having said that, a 3x slowdown for 128-bit numbers on a 64-bit platform (I'm assuming that's what you have) seems fair. – Adriel Jr Jan 19 '22 at 17:41
  • @AlanBirtles Wow, ok, I didn't even think about that. Makes an awful lot of sense. A factor of 3 seems very reasonable then. So I turned on optimization and got about 3 sec. It seems I can't give any of you guys an "upvote"... don't know why. My question is answered, though. – 386 Jan 19 '22 at 18:28
  • @AdrielJr Will do. – 386 Jan 19 '22 at 18:31
  • A quick local test (GCC 11.2 on an old Haswell with `-O3`) gives me ~777ms for `int64_t` and ~950ms for an `mp::int128_t`, which is actually better than I expected. – Useless Jan 19 '22 at 18:32
  • I don't really know what -O3 means, but your result seems (without measuring mine) far better than what I experience. Did you just wrap a timer around the isPrime call? And I guess I'm supposed to close this... still waiting a bit to maybe figure out how to give you guys internet cookies. – 386 Jan 19 '22 at 18:37
  • @386 you can post your own answer (like the one you just deleted, but containing what was actually your solution). Then you can mark that as the solution. – sehe Jan 19 '22 at 22:55
  • Unoptimized code does lots of "silly things", and has lots of things to aid debugging, so it's inherently much slower. Possible duplicate: [Why is this code running over 100 times slower in Debug mode than Release?](https://stackoverflow.com/q/36514709/995714) – phuclv Jan 20 '22 at 04:16

1 Answer


Enabling optimization such as -O2 or -O3 will make both calculations (whether using int64_t or mp::int128_t) run faster. Without it, the compiler defaults to -O0, which is meant mostly for debugging, where optimizations would get in the way.

-O3 enables more advanced optimizations, usually at the expense of compile time and code size. Some suggest these extra optimizations may be less safe or less well tested.

Also, as @JohnFilleau suggested, you can pass n by reference in isPrime(mp::int128_t n). This is usually faster for non-primitive data types. You should also add the const qualifier, since the function does not modify n.

Adriel Jr
    There will also be tradeoffs for your bignum representation. As a generalization, for crunching _really, really, extra super huge_ numbers, you will want to configure and use the GMP backend. For everything else, cpp_int is plenty fast. Just make sure to profile on your data if time is an issue. – Dúthomhas Jan 20 '22 at 02:40
    -O3 does *not* affect numeric precision or any safety checks, i.e. the code compiled with -O3 has the same behavior as the one compiled with -O2 or -O0. What -O3 does is enable more optimizations that are (a) more time consuming during compilation, (b) possibly more expensive in terms of code size and (c) may or *may not* improve runtime performance. Also, usually newer optimizations are first added to -O3 for gaining field experience, before moving to -O2. – Andrey Semashev Jan 20 '22 at 08:57
    [Here](https://gcc.gnu.org/onlinedocs/gcc-11.2.0/gcc/Optimize-Options.html#index-O) is the description of various optimization modes. – Andrey Semashev Jan 20 '22 at 09:00
  • @AndreySemashev Ok, so according to your source, fast-math is not enabled by -O3. This contradicts another source I read... I'll have to do more research. – Adriel Jr Jan 20 '22 at 15:09
    @AdrielJr This depends on the compiler (i.e. gcc never uses fast-math at -O3, but the Intel compiler uses it by default, I think). – paleonix Feb 04 '22 at 15:44