Adding to the other answer, you can frame the total run-time of the algorithm, T(n), as the total number of inner-loop iterations, summed over every value of i from 1 to n. When i = 1 the inner loop runs log(1) times, when i = 2 it runs log(2) times, and so on. The total run-time is therefore the sum:

T(n) = log(1) + log(2) + log(3) + ... + log(n)
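For concreteness, here's a minimal Python sketch of the kind of loop structure this analysis assumes. The original algorithm isn't shown in the question, so the inner loop here (j doubling up to i) is just a hypothetical example of an inner loop that runs about log2(i) times:

```python
def count_inner_iterations(n):
    """Count T(n): total inner-loop iterations for a hypothetical
    algorithm whose inner loop runs about log2(i) times for each i."""
    count = 0
    for i in range(1, n + 1):
        j = 1
        while j < i:   # j doubles each pass, so this runs ~log2(i) times
            count += 1
            j *= 2
    return count
```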
A sum of logs is a log of a product:
T(n) = log(1*2*3*...*n)
    <= log(n^n)        (since n! <= n^n)
     = n*log(n)        (since log(a^b) = b*log(a))
T(n) <= n*log(n)
So the run-time is no worse than n*log(n): it's O(n log n).
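As a quick numerical sanity check of that bound (a sketch reusing the hypothetical count_inner_iterations above):

```python
import math

for n in [10, 100, 1000, 10000]:
    t = count_inner_iterations(n)
    upper = n * math.log2(n)
    print(f"n={n:>6}: T(n)={t:>7}, n*log2(n)={upper:>9.0f}, T(n) <= bound? {t <= upper}")
```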
You could do a little more work to show that the O(n log n) bound is tight, but this usually isn't necessary. (If you show that T(n) is bounded above *and* below by constant multiples of n*log(n), you've shown a tight bound; that is, you've found the exact complexity class, typically written with "Big Theta" = Θ notation.)
T(n) = log(1*2*3*...*n)
    >= log((n/2)*(n/2 + 1)*(n/2 + 2)*...*n)   (throwing away the first half of the product)
    >= log((n/2)^(n/2))                       (reducing every remaining term to n/2)
     = (n/2)*log(n/2)
     = (n/2)*(log(n) - 1)                     (taking logs base 2, so log(n/2) = log(n) - 1)
T(n) >= (n/2)*(log(n) - 1), which is Θ(n*log(n))
So, since the algorithm's run-time is bounded above and below by constant multiples of n*log(n), its complexity class is exactly Θ(n log n):
(n/2)*(log(n) - 1) <= T(n) <= n*log(n) implies T(n) = Θ(n log n)
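The same kind of check shows the counted total really is sandwiched between the two bounds (again assuming the hypothetical count_inner_iterations from earlier):

```python
import math

for n in [100, 1000, 10000]:
    t = count_inner_iterations(n)
    lower = (n / 2) * (math.log2(n) - 1)
    upper = n * math.log2(n)
    print(f"n={n:>6}: {lower:>8.0f} <= T(n)={t:>7} <= {upper:>9.0f}? {lower <= t <= upper}")
```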