
note: in instantiation of function template specialization 'std::__1::__gcd' requested here
    while (__gcd(n, k) <= 1) n++;

The above note was displayed along with the error shown earlier. I know there are many other ways to compute the GCD, but I am confused about why it is not working with __gcd().

I am using a MacBook (macOS Big Sur) with:

    Apple clang version 12.0.0 (clang-1200.0.32.29)
#include <bits/stdc++.h>
#include <cmath>
using namespace std;

int getSum(int n)
{
    int sum;
    for (sum = 0; n > 0; sum += n % 10, n /= 10)
        ;
    return sum;
}
int main() {
    int t, n;
    cin >> t;
    while (t--) {
        cin >> n;
        int k = getSum(n);
        while (__gcd(n, k) <= 1) n++;  // the reported error points here
        cout << n << endl;
    }
}

here, getSum(125) = 1+2+5 = 8

INPUT :

3
11
31
75

OUTPUT:

12
32
75

EXPECTED OUTPUT:

12
33
75
yash

1 Answer


As the static_assert suggests, your __gcd() implementation requires unsigned types for its arguments (i.e. the algorithm operates on non-negative numbers only), so replacing int with unsigned int should help - or you could replace

#include <bits/stdc++.h>  // never include this
#include <cmath>

with

#include <iostream>
#include <numeric>  // std::gcd

and use the GCD function that is included in the standard library instead:

while (std::gcd(n, k) <= 1) n++;
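
For reference, here is a minimal sketch of the complete program with that replacement applied; it is just the question's code with std::gcd from <numeric> swapped in. Note that std::gcd requires C++17, so compile with -std=c++17 or later.

#include <iostream>
#include <numeric>  // std::gcd (C++17)
using namespace std;

// sum of the decimal digits, e.g. getSum(125) = 1 + 2 + 5 = 8
int getSum(int n)
{
    int sum;
    for (sum = 0; n > 0; sum += n % 10, n /= 10)
        ;
    return sum;
}

int main() {
    int t, n;
    cin >> t;
    while (t--) {
        cin >> n;
        int k = getSum(n);                 // digit sum of the original n
        while (std::gcd(n, k) <= 1) n++;   // std::gcd accepts signed ints, so no static_assert fires
        cout << n << endl;
    }
}

This only fixes the compilation error, not the 32-vs-33 discrepancy discussed in the comments below: for the input 31 the digit sum k is 4 and stays 4 while n is incremented, so the loop stops at 32 because gcd(32, 4) = 4 > 1, which is why both __gcd and std::gcd print 32.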
Dmitry
  • thank you, it helped me, but now in the second test case, i.e. 31, I am receiving 32 but the correct answer is 33 – yash Mar 31 '21 at 15:32
  • @trev if `33` is expected you have some bug in your algorithm or in the implementation of your algorithm. I'd be surprised if it was related to the GCD function as such. – Ted Lyngmo Mar 31 '21 at 15:33
  • If the input that we are providing is positive (unsigned), then why should we care about what the data type is? Please explain. – yash Mar 31 '21 at 15:33
  • @Ted-Lyngmo "I'd be surprised if it was related to the GCD function as such" why? – yash Mar 31 '21 at 15:35
  • Re: _data type_: Because the GCD function you called is an internal (`bits`) implementation that _requires_ unsigned types. That's what the `static_assert` asserts. Why I would be surprised? Because the GCD function has never made an error that I've seen. On my platform, both the internal `__gcd` and the standard `std::gcd` give `32`. – Ted Lyngmo Mar 31 '21 at 15:35
  • @Ted-Lyngmo okay, thank you. – yash Mar 31 '21 at 15:39
  • @Dmitry why should we never use the bits/stdc++ library? – yash Mar 31 '21 at 15:42
  • [Why should I **not** `#include <bits/stdc++.h>`?](https://stackoverflow.com/Questions/31816095/Why-Should-I-Not-Include-Bits-Stdc-H) – Ted Lyngmo Mar 31 '21 at 15:50
  • @Ted-Lyngmo The above code works perfectly on a Windows laptop; is there any problem with macOS? – yash Mar 31 '21 at 16:05
  • @trev Did you read the link? The code is not portable. It may not even work if you try to compile it with a newer or older version of the same compiler because you are using an _internal_ header file. It does not need to provide a stable API. It's also not available on all platforms. As you also noticed. My version of that internal function did _not_ have the `static_assert` problem that yours had. Non-portable. – Ted Lyngmo Mar 31 '21 at 16:20
  • @Ted-Lyngmo If the input that we are providing is positive (unsigned), then why should we care about what the data type is? Please explain. – yash Apr 01 '21 at 03:57
  • I think you need to read what I've already written. I've already answered that question above (see "Re: _data type_"). – Ted Lyngmo Apr 01 '21 at 05:07