These loops each run in time O(log n). You can make an even tighter statement for the first one and say that it runs in time O(b), where b is the number of bits set in the number. It happens to be the case that b = O(log n) because the number n requires O(log n) bits to write out.
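For concreteness, here's a sketch of what I'm assuming those two loops look like (your actual code may differ): the first clears the lowest set bit each pass, so it runs b times; the second shifts the number down one bit per pass, so it runs O(log n) times.

```c
#include <stdio.h>

/* Assumed form of the "first" loop: clears the lowest set bit each
 * iteration, so it runs exactly b times, where b is the number of
 * set bits in n. */
unsigned popcount_kernighan(unsigned n) {
    unsigned count = 0;
    while (n != 0) {
        n &= n - 1;   /* drop the lowest set bit */
        count++;
    }
    return count;
}

/* Assumed form of the "second" loop: inspects bits one at a time by
 * shifting, so it runs once per bit of n, i.e. O(log n) iterations. */
unsigned popcount_shift(unsigned n) {
    unsigned count = 0;
    while (n != 0) {
        count += n & 1u;
        n >>= 1;
    }
    return count;
}

int main(void) {
    printf("%u %u\n", popcount_kernighan(0xF0u), popcount_shift(0xF0u));
    return 0;
}
```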
It's always a little bit dicey to talk about runtimes of algorithms that process numbers one bit at a time because you have two things in tension with one another. One is that, in a mathematical sense, the number of bits in a number grows logarithmically with the magnitude of the number. That is, the number of bits in the number n is O(log n). On the other hand, on actual machines there's a fixed limit to the number of bits in a number, which makes it seem like the runtime "ought" to be a constant, since you can't have unboundedly large values of n.
One mental model I find helpful here is to think about what it would really mean for n to grow unboundedly. If you have a 45-bit number, I'm assuming you're either
1. working on a 64-bit system, or
2. working on a 32-bit system and using multiple 32-bit integers to assemble your 45-bit number.
If we make assumption (1), then we're implicitly saying that as our numbers get bigger and bigger, we're moving to machines with larger and larger word sizes. (This can be formalized as the transdichotomous machine model.) If we make assumption (2), then we're assuming that our numbers don't necessarily fit into a machine word, in which case the number of bits isn't a constant.
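As a sketch of assumption (2), here's a hypothetical multi-word representation where the value is spread across an array of 32-bit limbs (e.g., a 45-bit number needs two of them); the bit-processing loop now walks all the limbs, so its work grows with the number of bits rather than being capped at one machine word:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical multi-limb representation: the value occupies
 * `num_limbs` 32-bit words. Total work is proportional to the total
 * number of bits, which grows with the magnitude of the value rather
 * than being a constant. */
unsigned popcount_multiword(const uint32_t *limbs, size_t num_limbs) {
    unsigned count = 0;
    for (size_t i = 0; i < num_limbs; i++) {
        uint32_t word = limbs[i];
        while (word != 0) {
            word &= word - 1;  /* clear lowest set bit of this limb */
            count++;
        }
    }
    return count;
}

int main(void) {
    /* A 45-bit value: 32 set bits in the low limb plus 13 in the high limb. */
    uint32_t value[2] = { 0xFFFFFFFFu, 0x1FFFu };
    printf("%u\n", popcount_multiword(value, 2));  /* prints 45 */
    return 0;
}
```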
One last way to reason about this is to introduce another parameter w for the machine word size and do the analysis in terms of w. For example, we can say that the runtime of both algorithms is O(w). That eliminates the dependency on n, though an O(w) bound can overestimate the work actually done (the first loop, for instance, stops after b iterations, which may be far fewer than w).
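To make the O(w) phrasing concrete, here's a sketch in which an assumed word size w appears explicitly as the loop bound; the loop does the same amount of work for every input, which is exactly what the O(w) bound captures:

```c
#include <stdio.h>

#define W 32  /* assumed machine word size in bits */

/* Runs exactly W iterations regardless of the value of n, so its
 * runtime is Theta(w): no dependency on n, but it can do more work
 * than necessary when n has few set bits. */
unsigned popcount_fixed_width(unsigned n) {
    unsigned count = 0;
    for (int i = 0; i < W; i++) {
        count += (n >> i) & 1u;
    }
    return count;
}

int main(void) {
    printf("%u\n", popcount_fixed_width(0xF0u));  /* prints 4 */
    return 0;
}
```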
Hope this helps!