Dominant light sources are usually above the scene, whether it is night or day, and whether or not there are people in it. By combining highlight detection with edge detection, you can identify the likely locations of the scene's light sources and judge which way is up.
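As a rough sketch of the idea (not any particular product's method): find the brightest pixels, take their centroid, and treat the direction from the image centre toward that centroid as a hint for "up". The threshold and the four-way snap below are arbitrary choices for illustration.

```python
import numpy as np

def estimate_up_direction(gray):
    """Toy heuristic: highlights tend to sit toward the top of a
    correctly oriented photo, so the vector from the image centre to
    the centroid of the brightest pixels hints at which way is up."""
    h, w = gray.shape
    thresh = 0.95 * gray.max()            # keep only near-maximum pixels
    ys, xs = np.nonzero(gray >= thresh)
    cy, cx = ys.mean(), xs.mean()         # centroid of the highlights
    dy, dx = cy - h / 2, cx - w / 2       # centre -> centroid vector
    # snap to the nearest of the four 90-degree orientations
    if abs(dy) >= abs(dx):
        return "up" if dy < 0 else "down" # image rows grow downward
    return "left" if dx < 0 else "right"

# toy image: a bright patch near the top edge
img = np.zeros((100, 100))
img[5:10, 45:55] = 255.0
print(estimate_up_direction(img))  # → up
```

A real detector would of course fuse this with other cues (faces, sky colour, edges) rather than trust highlights alone.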
EDIT: Great question - I just spent 5 minutes on Google Scholar and failed to even find the correct problem domain.
EDIT: Got it. It's called 'image orientation detection' -- not too obscure a title.
EDIT: A quick review suggests that there are two major approaches:
- Combined classifiers - train lots of different classifiers and combine the results, a classic 'throw everything you've got at it' shotgun approach. Here, most of the innovative contribution of the papers appears to be in designing new ways to combine the different classifiers.
- Specific features - pick a specific feature (or a small set of them) and use it to detect orientation. Some examples are facial recognition plus edge detection, and local binary pattern overlap (a relative method: it only works between two images of the same subject).
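The first approach boils down to some fusion rule over per-classifier orientation votes. A plain majority vote is the simplest possible rule and is only meant to show the shape of the problem; the papers' contributions are more sophisticated fusion schemes.

```python
from collections import Counter

# the four candidate orientations, as degrees of rotation needed
ORIENTATIONS = (0, 90, 180, 270)

def combine(predictions):
    """Naive fusion rule: each classifier casts one orientation vote,
    and the most common vote wins."""
    best, _count = Counter(predictions).most_common(1)[0]
    return best

# five hypothetical classifiers, three of which agree on 0 degrees
print(combine([0, 0, 90, 0, 180]))  # → 0
```

Weighted votes (by per-classifier confidence) or a learned meta-classifier over the vote vector would be the next steps up from this.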
Anyway, it is certainly an interesting field, and there seem to be more patents than papers, which makes it even more interesting. I did not find anything that explicates the Picasa method, but I did find this:
S. Baluja (from Google) has published papers in this area. From this, one might conclude that the methods therein are indicative of what Google uses.