These are two convenience functions that reflect how each library tends to do things: the first "condenses" the information by mapping it to integers, while the second "expands" it across dimensions, which can make access more convenient.
sklearn.preprocessing.LabelEncoder
simply transforms data, from whatever domain, so that its domain is 0, ..., k - 1, where k is the number of classes.
So, for example
["paris", "paris", "tokyo", "amsterdam"]
becomes (the classes are sorted before codes are assigned, so "amsterdam" maps to 0)
[1, 1, 2, 0]
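A minimal sketch of that mapping (assuming scikit-learn is installed; note that LabelEncoder sorts the classes before assigning codes):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
codes = le.fit_transform(["paris", "paris", "tokyo", "amsterdam"])

print(le.classes_)                  # ['amsterdam' 'paris' 'tokyo']
print(codes)                        # [1 1 2 0]
print(le.inverse_transform(codes))  # recovers the original city names
```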
pandas.get_dummies
also takes a Series with elements from some domain, but expands it into a DataFrame with one column per distinct value; each row holds a 1 in the column matching its original entry and 0 everywhere else. So, for example, the same
["paris", "paris", "tokyo", "amsterdam"]
would become a DataFrame with columns
["amsterdam", "paris", "tokyo"]
(again in sorted order), whose "paris" column would be the series
[1, 1, 0, 0]
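A sketch of the same example (recent pandas versions return boolean columns, so the cast to int below recovers the 0/1 values shown above):

```python
import pandas as pd

cities = pd.Series(["paris", "paris", "tokyo", "amsterdam"])
dummies = pd.get_dummies(cities).astype(int)

print(dummies)
#    amsterdam  paris  tokyo
# 0          0      1      0
# 1          0      1      0
# 2          0      0      1
# 3          1      0      0

print(dummies["paris"].tolist())  # [1, 1, 0, 0]
```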
The main advantage of the first method is that it conserves space: one integer per row instead of one column per class. Conversely, encoding categories as integers might give the impression (to you, or to some machine learning algorithm) that the order means something. Is "amsterdam" closer to "tokyo" than to "paris" just because of the integer codes they happened to receive? Probably not. The second representation is explicit about that: every pair of categories is treated symmetrically.
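A small sketch of that pitfall, assuming we compare the encodings by Euclidean distance: the integer codes make "amsterdam" look closer to "paris" than to "tokyo", while the one-hot rows keep every pair of distinct cities equidistant:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder

cities = ["paris", "paris", "tokyo", "amsterdam"]

codes = LabelEncoder().fit_transform(cities)            # [1 1 2 0]
onehot = pd.get_dummies(pd.Series(cities)).astype(float)

# Integer codes imply |amsterdam - paris| = 1 but |amsterdam - tokyo| = 2
print(abs(codes[3] - codes[0]), abs(codes[3] - codes[2]))  # 1 2

# One-hot rows: every pair of distinct cities is sqrt(2) apart
dist = lambda i, j: np.linalg.norm(onehot.iloc[i] - onehot.iloc[j])
print(dist(3, 0), dist(3, 2))  # 1.414... 1.414...
```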