I am working on a Django project where I need to save data from multiple pandas DataFrames into Django models. I use MySQL as the database backend.
One of the DataFrames, df, has exactly the same structure as the Django model/database table where I want to store it, so the SQL table is a one-to-one copy of df. df is also the largest DataFrame I have.
I was wondering whether it would make sense to simply use df.to_sql instead of saving the data through the Django model, which requires iterating over the rows of df. I already tried it and it seems to work fine (and fast). However, I am not sure whether this is good practice from Django's perspective, since it bypasses the ORM entirely while I save the rest of the data through models.
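To make the comparison concrete, here is a minimal sketch of the two approaches I am weighing. The model Record, the app myapp, the table name myapp_record, and the connection credentials are all placeholders for my actual setup:

```python
import pandas as pd
from sqlalchemy import create_engine

from myapp.models import Record  # placeholder app/model names

# Small stand-in for my real (much larger) DataFrame.
df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})

# Approach 1: through the Django ORM. Even with bulk_create I still
# have to build one model instance per row of df.
Record.objects.bulk_create(
    Record(**row) for row in df.to_dict(orient="records")
)

# Approach 2: straight into the table with pandas/SQLAlchemy,
# bypassing the ORM. Credentials and table name are placeholders;
# "myapp_record" follows Django's default <app>_<model> naming.
engine = create_engine("mysql+pymysql://user:password@localhost/mydb")
df.to_sql("myapp_record", con=engine, if_exists="append", index=False)
```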
I would really appreciate any help or suggestions!