I am doing some CSV read/write operations on a huge flat file. According to the source, the contents of the file are UTF-8 encoded, but while trying to write to a .csv file I get the following error:
Traceback (most recent call last):
File " basic.py", line 12, in <module>
F.write(q)
File "C:\Program Files\Python36\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\x9a' in position 19: character maps to <undefined>
Quite possibly the file contains some multicultural symbols, since it holds data representing global information.
But as it's huge, these can't be fixed manually one by one. So is there a way I can fix these errors, make the code ignore them, or ideally standardize these characters? After writing them, the CSV file will be uploaded to a SQL Server database.
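To make the intent concrete, here is a minimal sketch of the two behaviours I am after (the file names and sample data are invented). My understanding is that the traceback points at Windows defaulting to cp1252 when no encoding is passed to `open()`, so either writing real UTF-8 or transliterating down to ASCII should avoid the `charmap` error:

```python
import csv
import unicodedata

# Invented sample rows containing non-ASCII characters.
rows = [["city", "note"], ["Košice", "naïve café"]]

# Option 1: write real UTF-8 so no character needs to be dropped.
# (errors="replace" would instead substitute unencodable characters.)
with open("out_utf8.csv", "w", encoding="utf-8", newline="") as f:
    csv.writer(f).writerows(rows)

# Option 2: "standardize" by stripping accents down to plain ASCII.
# NFKD decomposition splits base letters from combining marks, and
# encode(..., "ignore") then drops the marks.
def to_ascii(text):
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

with open("out_ascii.csv", "w", encoding="ascii", newline="") as f:
    csv.writer(f).writerows([[to_ascii(cell) for cell in row] for row in rows])
```

With option 2, "Košice" becomes "Kosice" and "naïve café" becomes "naive cafe", which may be safer if the target SQL Server column is not nvarchar.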