I am working on a project using greendao 1.3.1. Some of the tables contain about 200,000 entities (each with only a few properties).
I read the entities from a CSV file and, to speed things up, I developed a small solution, which might also help with your OOM issue.
For explanation:
greendao uses a cache: after each insert it updates the entity to get its row id and probably also puts the entity into that cache. On top of that, greendao starts a transaction for every insert or update call if there isn't already one running. Both of these slow down bulk inserts and increase memory usage.
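To illustrate, a plain import loop (with hypothetical MyEntity/MyEntityDao names) runs into exactly this: every insert() call opens its own transaction and attaches the freshly inserted entity to the identity scope.

MyEntityDao dao = daoSession.getMyEntityDao();
for (MyEntity entity : parsedEntities) { // parsedEntities: hypothetical list read from the CSV
    // each call: implicit transaction + row-id update + identity-scope/cache insert
    dao.insert(entity);
}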
What I did:
Performance (time)
To speed things up I start a transaction before doing any inserts. That way greendao does not start a transaction for every single insert, and all inserts and updates end up in the same transaction, which has additional benefits for data consistency.
You can use code like this:
SQLiteDatabase db = dao.getDatabase();
db.beginTransaction();
try {
    // do all your inserts and so on here
    db.setTransactionSuccessful();
} catch (Exception ex) {
    // log/handle the error; setTransactionSuccessful() was not reached,
    // so endTransaction() below rolls everything back
} finally {
    db.endTransaction();
}
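If your greendao version already provides AbstractDaoSession.runInTx(), the same pattern can be expressed without touching the SQLiteDatabase directly; a minimal sketch, again with hypothetical names:

final MyEntityDao dao = daoSession.getMyEntityDao();
final List<MyEntity> parsedEntities = parseCsv(); // hypothetical CSV parser
daoSession.runInTx(new Runnable() {
    @Override
    public void run() {
        for (MyEntity entity : parsedEntities) {
            dao.insert(entity); // all inserts share the one surrounding transaction
        }
    }
});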
But this won't help you with your OOM problem yet.
Memory usage
Solution 1
If you don't want to mess with the greendao code, you can issue a DaoSession.clear() every once in a while.
This is definitely the simpler solution, but it will be less performant than solution 2.
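A minimal sketch of that (hypothetical names, clearing every 10,000 rows):

int count = 0;
for (MyEntity entity : parsedEntities) {
    dao.insert(entity);
    if (++count % 10000 == 0) {
        daoSession.clear(); // drops the identity scope so the inserted entities can be garbage collected
    }
}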
Solution 2
To prevent greendao from updating the entity and inserting it into its cache, you can replace the method private long executeInsert(T entity, SQLiteStatement stmt) in AbstractDao.java with this code:
/**
 * Insert an entity into the table associated with a concrete DAO.
 *
 * @param update if true, the entity's key is updated after the insert and the entity
 *               is attached to the identity scope (greendao's default behavior)
 * @return row ID of newly inserted entity
 */
public long insertOrReplace(T entity, boolean update) {
    return executeInsert(entity, statements.getInsertOrReplaceStatement(), update);
}

private long executeInsert(T entity, SQLiteStatement stmt) {
    return executeInsert(entity, stmt, true);
}

private long executeInsert(T entity, SQLiteStatement stmt, boolean update) {
    long rowId;
    if (db.isDbLockedByCurrentThread()) {
        synchronized (stmt) {
            bindValues(stmt, entity);
            rowId = stmt.executeInsert();
        }
    } else {
        // Do TX to acquire a connection before locking the stmt to avoid deadlocks
        db.beginTransaction();
        try {
            synchronized (stmt) {
                bindValues(stmt, entity);
                rowId = stmt.executeInsert();
            }
            db.setTransactionSuccessful();
        } finally {
            db.endTransaction();
        }
    }
    if (update) {
        updateKeyAfterInsertAndAttach(entity, rowId, true);
    }
    return rowId;
}
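With that patch in place, a bulk import can combine the transaction trick with update = false so that nothing gets cached; roughly (hypothetical names again):

db.beginTransaction();
try {
    for (MyEntity entity : parsedEntities) {
        // false = skip updateKeyAfterInsertAndAttach(), so the entity never enters the cache
        dao.insertOrReplace(entity, false);
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}

Note that with update = false the generated row id is not written back to the entity, so entity.getId() will stay null after the insert.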