If the strings you have obtained are the result of web site scraping, it appears that the site you got them from has an incorrect encoding setting.
It is fairly common for sites to specify charset=utf-8
and then have the site's content actually be in some other character set (windows-1252
in particular), or vice versa. There is no simple, universal workaround for this phenomenon (also known as mojibake).
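To see the phenomenon concretely, here is a minimal sketch (the string is just illustrative) of what happens when UTF-8 bytes are misread as windows-1252:

```python
text = "sautéed"
utf8_bytes = text.encode("utf-8")       # b'saut\xc3\xa9ed'
# Decoding those UTF-8 bytes with the wrong charset produces mojibake:
mojibake = utf8_bytes.decode("windows-1252")
print(mojibake)  # sautÃ©ed
```

Each multi-byte UTF-8 sequence turns into two or more spurious characters, which is the telltale signature of this class of bug.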
You might want to try different scraping libraries -- most have some tactic for identifying and coping with this scenario, but their success rates vary from site to site. If you are using BeautifulSoup, you might want to experiment with different parameters to the chardet
back end.
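If you want to attempt detection yourself, the third-party chardet library (which BeautifulSoup's UnicodeDammit helper can also draw on) exposes a detect function. A sketch, assuming chardet is installed:

```python
import chardet  # third-party; pip install chardet

# Illustrative sample: bytes actually encoded as windows-1252
raw = "Chicken and sautéed potatoes".encode("windows-1252")
guess = chardet.detect(raw)
print(guess)
# Detection is statistical; on short inputs it may report a related
# encoding such as ISO-8859-1 rather than Windows-1252, with a
# confidence score you can inspect before trusting it.
```

Treat the result as a hint, not a guarantee -- this is exactly why different libraries succeed on different sites.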
Of course, if you only care about correctly scraping a single site, you can hard-code an override for the site's claimed character encoding.
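For example, a hard-coded override might look like this sketch (the bytes stand in for content scraped from a hypothetical site that claims charset=utf-8 but actually serves windows-1252):

```python
# Bytes as served by the site; \xe9 is 'é' in windows-1252
raw = b"Chicken and saut\xe9ed potatoes"

# Trusting the site's claimed encoding fails outright here:
try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    pass  # \xe9 is not valid UTF-8 in this position

# Hard-coded override for this one site:
text = raw.decode("windows-1252")
print(text)  # Chicken and sautéed potatoes
```

Note that the failure mode can also be silent: if the bytes happen to be decodable under the wrong charset, you get mojibake instead of an exception, which is why an explicit per-site override is sometimes the only reliable fix.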
Your question as such doesn't make much sense; it's not really clear what you are trying to accomplish. u'Chicken and sauted potatoes'
is no more correct, and only marginally less unappealing, than u'Chicken and sautÃ©ed potatoes'
(and in some ways it is more unappealing, because you can no longer tell that there was an attempt to make it right, although it wasn't competently executed).
If you get an encoding error because you are feeding Unicode to a file handle with an ASCII encoding, the correct solution for that is to specify an encoding other than ASCII (commonly, UTF-8) when opening the file for writing.
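A sketch of that fix (the filename is illustrative; a temporary directory is used only to keep the example self-contained):

```python
import os
import tempfile

text = "Chicken and sautéed potatoes"
path = os.path.join(tempfile.mkdtemp(), "menu.txt")

# Pass an explicit encoding instead of relying on the platform
# default, which may be ASCII and raise UnicodeEncodeError:
with open(path, "w", encoding="utf-8") as f:
    f.write(text)

# Reading it back with the same encoding round-trips cleanly:
with open(path, encoding="utf-8") as f:
    print(f.read())  # Chicken and sautéed potatoes
```

The same `encoding=` parameter applies when reading, so be consistent on both ends of the pipeline.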