In my application I fetch some data from my server and, after deserializing it (from JSON to objects), I put these objects into my database. Until today my approach was:
for (int i = 0; i < receivedJsonArray.length; i++) {
    Bean bean = new Bean();
    // matching some parameters
    // ...
    dao.putObject(bean);
}
where my dao.putObject(Bean) looked like this (I omitted a few things, like the try-catch block, as they are not really relevant):
public void putObject(Bean bean) {
    database.beginTransaction();
    ContentValues values = new ContentValues();
    values.put("something", bean.getSomething());
    // ... some mapping
    database.insert("table", null, values);
    database.setTransactionSuccessful();
    database.endTransaction();
}
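(For completeness, since the transaction handling is the part I'm asking about: as far as I know, the omitted try/catch should really be a try/finally around the transaction, so it is always closed even when an insert throws. A minimal sketch:)

public void putObject(Bean bean) {
    database.beginTransaction();
    try {
        ContentValues values = new ContentValues();
        values.put("something", bean.getSomething());
        // ... some mapping
        database.insert("table", null, values);
        // mark the transaction successful so endTransaction() commits
        database.setTransactionSuccessful();
    } finally {
        // without setTransactionSuccessful() this rolls back
        database.endTransaction();
    }
}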
As you can see, every time I deserialize a single object I begin and end a new transaction. However, I feel this approach uses only a fraction of the memory the second approach would:
Bean[] beans = new Bean[receivedJsonArray.length];
for (int i = 0; i < receivedJsonArray.length; i++) {
    Bean bean = new Bean();
    // matching some parameters
    // ...
    beans[i] = bean;
}
dao.putObject(beans);
Now dao.putObject(Bean...) looks like this:
public void putObject(Bean... beans) {
    database.beginTransaction();
    for (int i = 0; i < beans.length; i++) {
        ContentValues values = new ContentValues();
        values.put("something", beans[i].getSomething());
        // ... some mapping
        database.insert("table", null, values);
    }
    database.setTransactionSuccessful();
    database.endTransaction();
}
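(Side note: inside a single transaction I could probably also reuse a precompiled SQLiteStatement instead of building a fresh ContentValues per row. This is just a sketch; it assumes a single TEXT column called "something", and quotes the placeholder table name since "table" is a SQL keyword:)

public void putObject(Bean... beans) {
    database.beginTransaction();
    try {
        // compiled once, rebound for each row
        SQLiteStatement insert =
                database.compileStatement("INSERT INTO \"table\" (something) VALUES (?)");
        for (Bean bean : beans) {
            insert.bindString(1, bean.getSomething());
            insert.executeInsert();
        }
        database.setTransactionSuccessful();
    } finally {
        database.endTransaction();
    }
}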
So now I wonder: which of these two is the better way to handle roughly 400-500 objects? I feel that beginning and ending a transaction 400-500 times is bad practice. On the other hand, I feel bad about creating and holding quite a big collection of data in memory.
I'm aware that on the devices I target (Android 4.+) this does not really matter, as they have efficient CPUs and plenty of RAM; however, I feel that is not an excuse for inefficient code.
So, what do you suggest? One big batch, one transaction per object, or maybe inserting a bunch of (e.g.) 100 objects at a time?
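For the chunked option, I imagine something like this (the CHUNK_SIZE of 100 is just a guess, not a measured value):

final int CHUNK_SIZE = 100; // assumed chunk size
Bean[] chunk = new Bean[CHUNK_SIZE];
int filled = 0;
for (int i = 0; i < receivedJsonArray.length; i++) {
    Bean bean = new Bean();
    // matching some parameters
    // ...
    chunk[filled++] = bean;
    if (filled == CHUNK_SIZE) {
        dao.putObject(chunk); // one transaction per 100 rows
        filled = 0;
    }
}
if (filled > 0) {
    // flush the last, partially filled chunk
    dao.putObject(java.util.Arrays.copyOf(chunk, filled));
}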