I'm trying to retrieve some tweets with snscrape, but the generated JSON file is encoded as 'cp1252'. I couldn't find anything in the documentation about requesting the JSON file in a different encoding, so, if that isn't possible, how can I convert a fairly large text file from cp1252 to UTF-8? I've seen plenty of questions of this kind, but they all explain how to print the correct text rather than how to store it in a file.
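To be concrete, something along these lines is the kind of re-encoding I have in mind (the file names are just placeholders, and I'm not sure this is even the right approach for my case):

# Sketch of a straight cp1252 -> UTF-8 re-encoding, done line by line so a big
# file doesn't have to be loaded into memory all at once. File names are placeholders.
with open('tweets_cp1252.json', 'r', encoding='cp1252') as src, \
     open('tweets_utf8.json', 'w', encoding='utf-8') as dst:
    for line in src:
        dst.write(line)  # decoded on read (cp1252), re-encoded on write (utf-8)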
This question is not a duplicate of this one, as I'm not trying to do it from the command line but from Python.
EDIT: I'll try to explain the situation better: I'm retrieving tweets, but they happen to contain Unicode characters. This is an example of a sentence I'd like to decode:
La mia vita \u00e8 fantastica

I detected the encoding of the file this sentence is stored in, and it is 'cp1252'. I'm no longer sure whether this really is a 'cp1252' file containing Unicode characters (is that even possible?), but I've had no luck converting that "\u00e8" into my "è".
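In case it helps, this is the kind of conversion I can't get right. The "\u00e8" sits in the file as a literal backslash-u escape, and my understanding is that something like the snippet below should turn it into the real character; what I don't understand is how this interacts with the cp1252 encoding of the whole file:

import json

raw = r"La mia vita \u00e8 fantastica"  # exactly as the text appears in my file

# If the text is a JSON string value, json.loads should resolve the \u00e8 escape
print(json.loads('"' + raw + '"'))                   # La mia vita è fantastica

# For a plain string, the 'unicode_escape' codec does the same thing
print(raw.encode('ascii').decode('unicode_escape'))  # La mia vita è fantastica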
After the first comment, here's what I tried:
file = open(file_name_input, encoding='cp1252')
file_output = open(file_name_output, 'w')  # no encoding given, so this uses the platform default
for line in file:
    # line is already a decoded str here, so this encode/decode round trip changes nothing
    file_output.write(line.encode('utf-8').decode())
file.close()
file_output.close()