In edge cases like this, the formal definition of "input size" used by theorists does not agree with the practical definition most programmers use when thinking about actual code.
The formal definition of "input size" is the number of bits (or sometimes machine words) required to represent the input. In most cases this is proportional to, say, the length of an array or the size of a dictionary, so the two definitions agree. Formally, your counting method's time complexity is O(2^n), where n is the number of bits required to represent the input number. (Usually you would write lowercase n for the number of bits and uppercase N for the actual numerical value, for readability; uppercase N is "bigger".) The formal definition works this way so that terms like "polynomial time" and "NP" have exact meanings that make sense for a wide variety of algorithm inputs.
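To illustrate, here's a minimal Python sketch. Your actual counting code isn't reproduced in this answer, so `count_up_to` below is just an assumed stand-in that does O(1) work per number counted; under the formal definition, each extra bit of input roughly doubles the number of iterations:

```python
def count_up_to(value):
    """Stand-in for the counting method: O(1) work per number counted."""
    total = 0
    for _ in range(value):
        total += 1  # one unit of work per iteration
    return total

# Formal input size: n = number of bits needed to represent the input.
# An n-bit input can be as large as 2**n - 1, so the loop may run
# about 2**n times -- exponential in n.
for n in range(1, 11):
    largest_n_bit_value = 2**n - 1
    iterations = count_up_to(largest_n_bit_value)
    print(f"n = {n:2d} bits -> up to {iterations:4d} iterations")
```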
The intuitive, practical definition is that you can measure "input size" by whatever variable or quantity matters for your application. Often, the numerical value of an integer matters to your application while the number of bits required to represent it does not. In that case your counting method takes O(N) time, where N is the value of the integer.
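To make that concrete, here's the same made-up stand-in loop again (not your actual code): the loop body runs exactly once per number counted, so the work grows linearly with the value N.

```python
def count_up_to(value):
    """Stand-in for the counting method: O(1) work per number counted."""
    total = 0
    for _ in range(value):
        total += 1
    return total

# Practical input size: N = the numerical value itself.
# The loop body runs exactly N times, so doubling N doubles the work.
for N in (1_000, 2_000, 4_000, 8_000):
    print(f"N = {N:5d} -> {count_up_to(N):5d} iterations")
```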
Most programmers (e.g. on Stack Overflow) talk about time complexity using this practical definition, simply because it's easier and more useful for real programming. So in your case, O(N) isn't a time complexity according to the formal definition, but if you want to know the time complexity in order to estimate how long the code will take to run, or to compare it with another algorithm to see which should be faster, then the formal definition won't matter to you.
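For example, if all you want is a rough runtime estimate, the practical O(N) view is what lets you extrapolate from a few small measurements (again a sketch using the same made-up `count_up_to`; absolute timings will vary by machine):

```python
import time

def count_up_to(value):
    """Stand-in for the counting method: O(1) work per number counted."""
    total = 0
    for _ in range(value):
        total += 1
    return total

# Time the loop at a few values of N. Because the work is O(N) in the
# practical sense, doubling N should roughly double the wall-clock time,
# which lets you extrapolate to larger inputs.
for N in (1_000_000, 2_000_000, 4_000_000):
    start = time.perf_counter()
    count_up_to(N)
    elapsed = time.perf_counter() - start
    print(f"N = {N:>9,} -> {elapsed:.3f} s")
```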