If, in "determine the encoding of a unicode", "unicode" is the Python data type, then you cannot do it: "encoding" refers to the original byte pattern that represented the string when it was input (say, read from a file, a database, you name it). By the time it becomes a Python `unicode` object (an internal representation), the string has either been decoded behind the scenes or has raised a decoding exception because a byte sequence did not match the system encoding.
Shadyabhi's answer refers to the (common) case in which you are reading bytes from a file (which you could very well be stuffing into a `str` - not a Python `unicode` string) and need to guess the encoding in which they were saved. Strictly speaking, you cannot have a "latin1 unicode Python string": a unicode Python string has no encoding. Encoding may be defined as the process that translates characters to byte patterns, and decoding as the inverse process; a decoded string therefore has no encoding, though it can be encoded in several ways for storage or external representation purposes.
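To make the encode/decode distinction concrete, here is a minimal sketch in Python 3 syntax (where `bytes` plays the role of Python 2's `str`, and `str` plays the role of `unicode`):

```python
# -*- coding: utf-8 -*-
# In Python 3, bytes is the encoded form; str is the decoded, abstract text.
raw = "è".encode("utf-8")        # encode: character -> byte pattern
print(raw)                       # b'\xc3\xa8': two bytes under UTF-8

text = raw.decode("utf-8")       # decode: byte pattern -> character
print(text)                      # è: a str with no attached encoding

# The same decoded text can be re-encoded into a different byte pattern.
print(text.encode("latin-1"))    # b'\xe8': one byte under latin-1
```

The point is that `text` itself carries no encoding; only the two `bytes` objects do.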
For instance on my machine:
In [35]: sys.stdin.encoding
Out[35]: 'UTF-8'
In [36]: a='è'.decode('UTF-8')
In [37]: b='è'.decode('latin-1')
In [38]: a
Out[38]: u'\xe8'
In [39]: b
Out[39]: u'\xc3\xa8'
In [41]: sys.stdout.encoding
Out[41]: 'UTF-8'
In [42]: print b #it's garbage
Ã¨
In [43]: print a #it's OK
è
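The session above is Python 2; the same mistake can be reproduced in Python 3, where the mechanics are easier to see because bytes and text are distinct types:

```python
raw = "è".encode("utf-8")       # what a UTF-8 terminal actually sends: b'\xc3\xa8'

wrong = raw.decode("latin-1")   # misinterpret the UTF-8 bytes as latin-1
right = raw.decode("utf-8")     # decode with the correct encoding

print(wrong)   # Ã¨ (mojibake: each UTF-8 byte became its own latin-1 character)
print(right)   # è
```

The garbage arises because the two UTF-8 bytes `\xc3\xa8` are each valid latin-1 characters on their own, so the wrong decode "succeeds" silently.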
Which means that, in your example, `latin1_unicode` will contain garbage if the default encoding happens to be UTF-8, UTF-16, or anything other than latin1.
So what you (may) want to do is:

1. Ascertain the encoding of your data source - perhaps using one of Shadyabhi's methods.
2. Decode the data according to (1) and save it in Python unicode strings.
3. Encode it using the original encoding (if that's what serves your needs) or some other encoding of your choosing.
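The three steps above can be sketched like this (Python 3 syntax; step 1 is stubbed out, since detection would typically use a third-party library such as chardet, and the function name `transcode` is just an illustrative choice, not part of any standard API):

```python
def transcode(raw_bytes, source_encoding, target_encoding="utf-8"):
    # Step 2: decode the bytes from the source encoding into abstract text.
    text = raw_bytes.decode(source_encoding)
    # Step 3: re-encode in whatever encoding serves your needs.
    return text.encode(target_encoding)

# Step 1 (ascertaining the encoding) is assumed done here: we pretend we
# already know the source bytes are latin-1.
latin1_bytes = b"caf\xe9"                      # 'café' stored as latin-1
utf8_bytes = transcode(latin1_bytes, "latin-1")
print(utf8_bytes)                              # b'caf\xc3\xa9'
```

If step 1 guesses wrong, step 2 either raises `UnicodeDecodeError` or silently produces mojibake, which is exactly the failure shown in the session above.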