e.g.:
a = 10 ^ 12, b = 93 ^ 7
result = b % a
then what is the time complexity of the '%' operator in terms of big-O notation, and how is it computed?
Big-O notation is probably the wrong way to think about the time complexity of the '%' operator. Fundamentally, big-O notation measures the rate at which some quantity grows as the input gets larger and larger. However, the built-in '%' operator only works on primitive integer types, and those types max out at a certain size (for example, 64 bits). As a result, measuring complexity by asking "how does the cost of '%' scale as a function of input size?" might not be the right way of quantifying performance. If you do want to quantify things this way, the answer would be "O(1)," since there's some maximum amount of time required to compute the modulus of two integers.
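For the concrete values in your question, both operands fit comfortably in a 64-bit integer, so the modulus is a single fixed-width machine operation no matter which in-range values you pass. A minimal C sketch (the expected result in the comment is just the arithmetic worked out by hand):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int64_t a = 1000000000000LL;   /* 10^12 */
    int64_t b = 60170087060757LL;  /* 93^7 */

    /* One fixed-width hardware operation, hence O(1): the operand
       width is capped at 64 bits regardless of the values stored. */
    int64_t result = b % a;

    printf("%lld\n", (long long)result); /* prints 170087060757 */
    return 0;
}
```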
There are two other questions you might want to look into. The first is: how long does a modulus take compared with other operations like addition and subtraction? That answer varies from platform to platform, but a modulus is usually much slower than an addition or a subtraction. The second is: what's the big-O cost of modding two integers when those integers can be arbitrarily large? That's never going to be worse than the cost of doing a division, a multiplication, and a subtraction, since you could always compute a mod b as a - b * (a / b), and the cost of doing so is usually at most a logarithmic factor slower than a multiplication alone. A sketch of that identity is below.
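As a sanity check of that identity, here's a minimal C sketch using fixed-width integers (for truly arbitrarily large integers you'd swap in a bignum library, which isn't shown here):

```c
#include <assert.h>
#include <stdint.h>

/* Computes a mod b using only division, multiplication, and
   subtraction: the identity a mod b == a - b * (a / b).
   In C99, '%' is defined so that (a / b) * b + a % b == a,
   so this matches the built-in operator exactly. */
static int64_t mod_via_div(int64_t a, int64_t b) {
    return a - b * (a / b);
}

int main(void) {
    assert(mod_via_div(60170087060757LL, 1000000000000LL) == 170087060757LL);
    assert(mod_via_div(7, 3) == 7 % 3);
    return 0;
}
```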
Hope this helps!