You're moving into a complex topic here ;-) At university, you spend ages on the theory behind O-notation. I always tended to boil it down to the following simplification:
An algorithm that does not contain any loops (for example: Write Text to Console, Get Input From User, Write Result To Console) is O(1), no matter how many steps. The "time" it takes to execute the algorithm is constant (this is what O(1) means), as it does not depend on any data.
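A tiny sketch of that idea (the function and values are made up for illustration): the body runs a fixed number of steps no matter what, so it is O(1).

```python
def constant_steps(x):
    # Three fixed operations -- still O(1), because the step count
    # does not grow with the size of any input.
    a = x + 1
    b = a * 2
    return b - 3
```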
An algorithm that iterates through a list of items one by one has complexity O(n) (n being the number of items in the list). If it iterates two times through the list in consecutive loops, it is still O(n), as the time to execute the algorithm still depends on the number of items only.
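As a hypothetical example of two consecutive passes: the function below walks the list twice, which is O(n) + O(n) = O(2n), and constant factors are dropped, so it stays O(n).

```python
def total_and_max(items):
    # First pass: sum all items -- O(n).
    total = 0
    for x in items:
        total += x
    # Second, consecutive pass: find the maximum -- still O(n) overall,
    # because 2n grows linearly in n just like n does.
    largest = items[0]
    for x in items:
        if x > largest:
            largest = x
    return total, largest
```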
An algorithm with two nested loops, where the inner loop somehow depends on the outer loop, is O(n^2); more generally, x nested dependent loops put you in the O(n^x) class.
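A made-up example of such nested loops: counting all pairs in a list. The inner loop's range depends on the outer index, giving roughly n*(n-1)/2 iterations, which is O(n^2).

```python
def count_pairs(items):
    # Inner loop starts where the outer loop is, so the total work is
    # (n-1) + (n-2) + ... + 1 = n*(n-1)/2 iterations: O(n^2).
    pairs = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            pairs += 1
    return pairs
```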
A binary search algorithm on a sorted array is in the O(log(n)) class, as the number of remaining items is halved in every step.
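Here is a standard binary search sketch to show the halving: every iteration discards half of the remaining range, so at most about log2(n) + 1 iterations run.

```python
def binary_search(sorted_items, target):
    # Each pass halves the search range [lo, hi], so the loop runs
    # at most ~log2(n) + 1 times: O(log n).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1              # not found
```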
The above may not be very precise, but this is how I tried to remember some of the important classes. Determining the complexity just by looking at code is not always easy.
For further and more detailed (and more correct) reading, the question David Heffernan linked to in his comment seems quite suitable.