First time posting, any advice is welcome :)
I have quite a lot of data in DBF format that I want to export to Excel. I did this with smaller files before and it went great. Now I want to do the same with a DBF that would write several hundred thousand rows in Excel, but I'm not sure how to handle this error:
from dbfread import DBF
import pandas as pd
import time

start = time.time()

name_file = "MyData.dbf"
file = open(name_file, encoding="utf-8")  # note: DBF() opens the file itself, so this handle goes unused

l = []           # collected records
rango = 1000     # how many records to read
count = 0
for limit, i in zip(range(rango), DBF(name_file)):  # stop after rango records
    count += 1
    l.append(i)
file.close()

df = pd.DataFrame(l)

end = time.time()
lapse = str(round(end - start, 2))
print(f"{rango} iterations were processed in {lapse} seconds, total counts: {count}")
The first 718 iterations go fine: everything works smoothly and I can export the data to Excel.
But if I set "rango" to 719 or more, I get this error:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 33: character maps to <undefined>
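In case it helps, my understanding is that the 'charmap' part means the bytes are being decoded with a single-byte codec such as cp1252 (that dbfread falls back to this codec is my assumption), and byte 0x90 simply has no character assigned there. The same error can be reproduced in isolation:

```python
# Reproduce the decode failure on its own: 0x90 is unmapped in cp1252,
# which is implemented by Python's "charmap" codec.
try:
    b"\x90".decode("cp1252")
except UnicodeDecodeError as exc:
    print(exc)  # 'charmap' codec can't decode byte 0x90 in position 0: character maps to <undefined>
```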
I'm looking for a way to skip the bad record so the loop can just continue appending the next iteration's data to the list. But the error happens at this very line:
for limit, i in zip(range(rango), DBF(name_file)):
I'm new to coding and have researched error handling, but I'm not finding anything specific to this. Most things I tried make the loop start over again from scratch. I also tried changing the encoding="utf-8", but nothing seems to work.
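For reference, here is a minimal sketch of why I think my try/except attempts don't skip anything (a toy generator standing in for the DBF record stream, since I assume DBF() iterates like one): once a generator raises an exception, it is closed, so the loop can't resume at the next record.

```python
# Toy stand-in for the record stream: the third record "fails to decode",
# mimicking the UnicodeDecodeError (assumption: DBF() behaves like a generator).
def records():
    yield {"id": 1}
    yield {"id": 2}
    raise UnicodeDecodeError("charmap", b"\x90", 0, 1, "character maps to <undefined>")
    yield {"id": 3}  # never reached

rows = []
gen = records()
while True:
    try:
        rows.append(next(gen))
    except UnicodeDecodeError:
        continue  # hoping to skip the bad record and keep going...
    except StopIteration:
        break

# Only 2 rows arrive: after raising, the generator is exhausted,
# so "skipping" the error just ends the iteration early.
print(len(rows))  # → 2
```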