The worst case is at least linear in m + n (hence exponential in the size of the input): choose m > 1 and n = 1, and the algorithm correctly determines GCD(m, n) = 1 in exactly m - 1 iterations, since each iteration subtracts n = 1 from m. This means the worst case is Omega(m + n) = Omega(2^k), where k is the number of bits in the encoding of the input.
The worst case is at most linear in m + n (hence exponential in the size of the input): at each step either m or n is reduced by at least 1 (assuming m, n > 0), so the loop can iterate at most m + n times (in fact it never iterates quite that many times, but it certainly iterates no more). This means the worst case is O(m + n) = O(2^k), where k is again the number of bits in the encoding of the input.
Thus, the worst case is exactly linear in m + n, that is, exponential in the size k of the input: it is Theta(m + n) = Theta(2^k).
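For concreteness, here is a minimal sketch of the subtraction-based algorithm being analyzed, instrumented with an iteration counter (the function name and the counter are illustrative additions, not part of the original):

```python
def gcd_subtract(m, n):
    """Subtraction-based GCD; returns (gcd, number of loop iterations)."""
    assert m > 0 and n > 0
    iterations = 0
    while m != n:
        if m > n:
            m -= n  # each iteration shrinks m + n by at least 1
        else:
            n -= m
        iterations += 1
    return m, iterations

# The lower-bound instance above: n = 1 forces exactly m - 1 iterations.
print(gcd_subtract(1000, 1))  # (1, 999)
```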
When an algorithm that takes numbers as input has a running time that is polynomial in the values of those inputs but exponential in the size of their encoding (e.g., in binary), the algorithm is said to run in pseudopolynomial time.
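As a quick illustration (reusing the gcd_subtract sketch above; the specific inputs are chosen just for demonstration), adding one bit to the worst-case input roughly doubles the iteration count, which is the pseudopolynomial blow-up in action:

```python
# Worst-case inputs of growing bit-length: m = 2^k - 1 (a k-bit number), n = 1.
# The iteration count, m - 1 = 2^k - 2, roughly doubles with each extra bit.
for k in range(2, 10):
    m = 2**k - 1
    _, iters = gcd_subtract(m, 1)
    print(f"k = {k} bits -> {iters} iterations")
```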