Loss of precision and accuracy when saving pandas DataFrames as CSV files
I have an issue when saving my pandas DataFrames as CSV files: the values lose their actual exponents and end up as completely different numbers. When I print the values being saved in my terminal, they appear as:
5.991147508039374e-14
0.00010245463935037415
0.12370028677066161
3.441345557219637e-10
1.0513129941379549e-11
0.00012962879921246142
0.0012641841232159863
5.5991400151355556e-11
1.607061744727683e-20
0.6436355784804926
But when I open the resulting CSV file in Excel, the corresponding rows contain the following numbers:
5.99E+01
0.000102
0.1237
3.44E+05
1.05E+05
0.00013
0.001264
5.60E+05
1.61E-05
0.643636
Clearly, these numbers are completely different from the actual data being saved. This is catastrophic for my analysis because the original data are p-values: values that were significant end up stored as not significant, which affects the interpretation of my results.
I tried the regular
df_data.to_csv(path)
call to save the data, and I also tried specifying the float format and delimiter as follows:
df_data.to_csv(path, float_format='%.15f', sep=',')
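For reference, here is a minimal, self-contained version of what I am running; the DataFrame contents and the output filename (p_values.csv) are placeholders standing in for my real data:

    import pandas as pd

    # Placeholder data with the same kind of very small p-values I am saving
    df_data = pd.DataFrame({
        "p_value": [
            5.991147508039374e-14,
            3.441345557219637e-10,
            1.607061744727683e-20,
            0.6436355784804926,
        ]
    })

    # Save with an explicit float format and comma delimiter, as described above
    df_data.to_csv("p_values.csv", float_format="%.15f", sep=",", index=False)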
I also ran into this issue when I copied a single number, say 5.991147508039374e-14, directly into an Excel cell: it was converted to 5.99E+01.
I have checked the related questions below on similar CSV issues and tried the recommended solutions, to no avail: "pandas to_csv: suppress scientific notation in csv file when writing pandas to csv" and "Prevent trailing zero with pandas \"to_csv\"".
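In case it helps, this is how I check what actually ends up in the file without going through Excel (again, the filename is only an example):

    # Print the raw text of the CSV so the stored values can be inspected directly
    with open("p_values.csv") as f:
        print(f.read())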