Big O notation describes the worst case: how does the running time of a function grow as its input grows?
If you have Func2 with b = 1, six instructions run: assignment, comparison, assignment, assignment, comparison, return. When b = 2, nine instructions run. We can tell right away that the number of instructions is 3 + (3*n). When talking about how long the function will take, we ignore constants, both those added to our variable and those multiplied by it, because ultimately, when b = 1000, the variable term is the overriding factor in how long the function takes to run. And because the time increases linearly with respect to b, the function is said to run in O(n) time. (Big O notation conventionally uses 'n' for its variable.)
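Here's a minimal sketch of the shape Func2 could have; the original listing isn't reproduced here, so the exact code and names are my assumption, but the instruction count works out to the same 3 + (3*n):

```c
int Func2(int b)        /* assumed reconstruction, not the original code */
{
    int t;              /* declaration only, no work yet        */
    int i = 0;          /* assignment: runs once                */
    while (i < b)       /* comparison: runs b + 1 times         */
    {
        t = i * 2;      /* assignment: runs b times             */
        i = i + 1;      /* assignment: runs b times             */
    }
    return i;           /* return: runs once                    */
}
```

For b = 1 that's assignment, comparison, assignment, assignment, comparison, return, the six instructions above; for b = 2 it's nine, and in general 3 + (3*b).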
The first function grows even more slowly with respect to n: it grows logarithmically, only adding more computation each time n passes a power of 2 (more computation when n = 2, 4, 8, 16, 32, etc.), so it's O(log n) time.
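A sketch of that logarithmic shape (my own toy example, not the original first function): the loop counter doubles each pass, so the body runs one extra time each time n crosses a power of 2.

```c
int Func1(int n)                        /* hypothetical name */
{
    int count = 0;
    for (int i = 1; i < n; i = i * 2)   /* i = 1, 2, 4, 8, ...      */
    {
        count = count + 1;              /* runs about log2(n) times */
    }
    return count;                       /* O(log n) */
}
```

count comes out to 1 when n = 2, 2 when n = 4, 3 when n = 8, and so on: one extra iteration per power of 2.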
(Assuming pow(2,n) here is meant as n squared; note that in most languages pow(2,n) actually computes 2 to the power of n, and n squared would be pow(n,2).)
The third function runs the loop once when n = 1 and 4 times when n = 2, so we can see that we're talking about n^2 iterations through the loop. That's quadratic growth (a power of n, not a constant raised to the n), so it's O(n^2) time.
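Something like this nested-loop shape (again an assumed reconstruction, not the original third function) produces exactly that count: 1 iteration when n = 1, 4 when n = 2, and n*n in general.

```c
int Func3(int n)                      /* hypothetical name */
{
    int count = 0;
    for (int i = 0; i < n; i++)       /* outer loop: n passes      */
    {
        for (int j = 0; j < n; j++)   /* inner loop: n passes each */
        {
            count = count + 1;        /* runs n * n times          */
        }
    }
    return count;                     /* O(n^2) */
}
```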
The way to analyze things for Big O notation is to count how many instructions run for n = 1, 2, 3, 4, ..., make that a function of n, and identify whether it's constant (doesn't change based on n), linear (directly proportional to n), logarithmic (changes based on log(n)), polynomial (changes based on a power of n, like the n^2 above), exponential (changes based on a constant raised to the power n, like 2^n), or otherwise. (There's also an n log n type, common in sorting algorithms.)
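To make the classes concrete, here are toy loops (my own examples, none from the original post) whose bodies run a logarithmic, linear, n log n, and 2^n number of times, plus a main that prints the counts so you can watch the growth:

```c
#include <stdio.h>

int logarithmic(int n)                  /* O(log n) */
{
    int c = 0;
    for (int i = 1; i < n; i *= 2) c++;
    return c;
}

int linear(int n)                       /* O(n) */
{
    int c = 0;
    for (int i = 0; i < n; i++) c++;
    return c;
}

int linearithmic(int n)                 /* O(n log n) */
{
    int c = 0;
    for (int i = 0; i < n; i++)
        for (int j = 1; j < n; j *= 2) c++;
    return c;
}

int exponential(int n)                  /* O(2^n) */
{
    int c = 0;
    for (int i = 0; i < (1 << n); i++) c++;
    return c;
}

int main(void)
{
    /* Print how many times each body ran as n doubles. */
    for (int n = 1; n <= 16; n *= 2)
        printf("n=%2d  log=%d  lin=%d  nlogn=%d  exp=%d\n",
               n, logarithmic(n), linear(n), linearithmic(n), exponential(n));
    return 0;
}
```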