The standard double-precision floating-point format can only represent integers exactly up to 2^53 (9007199254740992, 16 digits); from that point on there are gaps between representable whole numbers, and these gaps get bigger as the numbers get bigger.
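You can see that cutoff directly from a Python prompt; this is just a sanity check, nothing to do with primality yet:

    # Above 2**53, consecutive integers collapse onto the same double:
    print(float(2**53) == float(2**53 + 1))   # True  -> 2**53 + 1 has no exact double
    print(float(2**53 - 1) == float(2**53))   # False -> still exact below 2**53
    print(len(str(2**53)))                    # 16 digits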
64-bit builds of Python use 64-bit integers natively; on other platforms you could use NumPy's int64. That doesn't get you anywhere near 200 digits, but it gets you far enough beyond the 32-bit integer range for naive code to get painfully slow.
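For reference, these are the fixed-width ranges involved (assuming NumPy is installed; the numbers themselves are the point here):

    import numpy as np

    print(np.iinfo(np.int32).max)   #          2147483647 == 2**31 - 1
    print(np.iinfo(np.int64).max)   # 9223372036854775807 == 2**63 - 1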
For example, trial division needs to consider only up to pi(sqrt(2^32)) = 6542 potential small prime divisors when working with integers up to 2^32, or sqrt(2^32)/2 - 1 = 32767 odd candidate divisors (3, 5, ..., 65535) plus the number 2, or somewhere between these two extremes when using a wheel based on more primes than just 2. Near 2^64, the number of small prime divisors to be tested is already pi(sqrt(2^64)) = pi(2^32) = 203,280,221...
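A minimal sketch of what that naive approach looks like in Python (the function names are mine): sieve the primes up to sqrt(limit) once, then reuse them for trial division.

    def small_primes(limit):
        # Sieve of Eratosthenes: all primes <= limit.
        sieve = bytearray([1]) * (limit + 1)
        sieve[0] = sieve[1] = 0
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = bytearray(len(range(p * p, limit + 1, p)))
        return [i for i, flag in enumerate(sieve) if flag]

    def is_prime_trial(n, primes):
        # Correct for 2 <= n <= limit**2 when primes = small_primes(limit).
        for p in primes:
            if p * p > n:
                return True
            if n % p == 0:
                return n == p
        return True

    primes_16bit = small_primes(2 ** 16)                 # covers all n <= 2**32
    print(len(primes_16bit))                             # 6542
    print(is_prime_trial(4294967291, primes_16bit))      # True: largest prime < 2**32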
Deterministic primality testing beyond 2^32 is the domain of tests like Miller-Rabin or Baillie-PSW. Miller-Rabin, normally a probabilistic test, is deterministic up to certain thresholds (2^64 and somewhat beyond) when used with carefully selected sets of bases; see The best known SPRP base sets. Baillie-PSW has no known counterexamples at all and has been verified to give the correct answer for every number up to at least 2^64.
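As a sketch (relying only on Python's arbitrary-precision int and the three-argument pow): a Miller-Rabin test that is deterministic for 64-bit inputs. The first twelve primes as bases are one published set known to cover everything below 2^64; smaller special-purpose sets exist, see the link above.

    def is_prime_u64(n):
        # Deterministic Miller-Rabin, valid for 0 <= n < 2**64 (sketch).
        if n < 2:
            return False
        small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
        for p in small:
            if n % p == 0:
                return n == p
        d, s = n - 1, 0                  # write n - 1 = d * 2**s with d odd
        while d % 2 == 0:
            d //= 2
            s += 1
        for a in small:                  # base set sufficient for n < 2**64
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False             # a witnesses compositeness
        return True

    print(is_prime_u64(2**61 - 1))       # True: 2**61 - 1 is a Mersenne prime
    print(is_prime_u64(2**62 - 1))       # False: divisible by 3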
This means that beyond 2^64 you need to use a big-integer type of some sort, and you have to make do with probabilistic algorithms for primality testing (giving you 'industrial-grade' primes rather than proven ones), or plan on putting in considerably more work to actually prove the primality of a 200-digit number... A 200-digit number has a 100-digit square root (roughly 330 bits), so even computing all the potential small prime divisors needed to fuel a naive trial division test is no longer feasible.
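For what it's worth, an 'industrial-grade' test is just the same Miller-Rabin core with randomly chosen bases instead of a fixed set; this sketch assumes 40 rounds and leans on Python's built-in big integers, so a 200-digit candidate is no problem:

    import random

    def is_probable_prime(n, rounds=40):
        # Miller-Rabin with random bases: a composite n survives each round
        # with probability at most 1/4, so the error is below 4**-rounds.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False
        return True

    n = 10**199 + 1                      # search for a 200-digit probable prime
    while not is_probable_prime(n):
        n += 2
    print(n)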