
I have a program that trims items from a SQLite database based on a timestamp field. When I run it on a large existing database, it takes a very long time and hogs the resources on my already very low-end embedded machine.

Can anyone suggest any ideas on how to improve my code to get it to run better/faster?

Call from main program:

query = SQLiteInterface.getInstance().delete(entry, "SampleInfo_source_timestamp < '" + timestamp + "'");
dbCall(query);
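(For reference, the same call could be written with a bound parameter instead of string concatenation. This is a minimal sketch, assuming cpds is the same connection pool used in dbCall below; the table name still has to be concatenated because JDBC can only bind values, not identifiers:)

    String sql = "delete from " + entry + " where SampleInfo_source_timestamp < ?";
    try (Connection conn = cpds.getConnection();
         PreparedStatement ps = conn.prepareStatement(sql))
    {
        ps.setString(1, timestamp); // binds the cutoff value safely
        ps.executeUpdate();
    }
    catch (SQLException e)
    {
        errorLog.error("delete failed", e);
    }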

SQLiteInterface delete method:

/**
 * This creates a SQLite-formatted 'delete' statement that allows for where clauses
 *
 * @param table - The table that the items are being deleted from
 * @param filters - The list of where clauses
 * @return - A SQLite-formatted delete query
 */
public String delete(String table, String filters)
{
    if (filters == null)
    {
        return "delete from " + table;
    }
    else
    {
        return "delete from " + table + " where " + filters;
    }
}

Code that actually sends the DB command using JDBC:

/**
 * Sends a SQLite-formatted query to the database on a pooled connection and
 * commits it.
 *
 * @param query - A SQLite formatted DB query
 */
public void dbCall(String query)
{
    // Good practice to create a new statement and result set instead of reusing
    Statement stmt = null;
    Connection conn = null;
    ResultSet rs = null;
    try
    {
        // Get a connection from the pool and start a transaction
        conn = cpds.getConnection();
        conn.setAutoCommit(false);

        // Make sure the query request isn't empty, if it is, there is no point in sending it to the DB
        if (!query.isEmpty())
        {
            // Initialize the new statement
            stmt = conn.createStatement();
            stmt.executeUpdate(query);
        }

        conn.commit();

    }
    catch (SQLException e)
    {
        if (e.getMessage().contains("no such column"))
        {
            // This table is not one that needs to be cleaned up.
        }
        else
        {
            errorLog.error("", e);
            // Table busy, try again.
            dbCall(query);
        }
    }
    catch (Exception e2)
    {
        errorLog.error("", e2);
    }
    finally
    {
        if (rs != null)
        {
            try
            {
                rs.close();
            }
            catch (SQLException e)
            {
                errorLog.error("Failed to close JDBC result set.");
            }
        }
        if (stmt != null)
        {
            try
            {
                stmt.close();
            }
            catch (SQLException e)
            {
                errorLog.error("Failed to close JDBC statement.");
            }
        }
        if (conn != null)
        {
            try
            {
                conn.close();
            }
            catch (SQLException e)
            {
                errorLog.error("Failed to close JDBC connection.");
            }
        }

    }
}
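For comparison, here is a sketch of the same method using try-with-resources and a bounded retry loop in place of the unbounded recursion above (maxRetries is a hypothetical cap; the sketch assumes the same cpds pool and errorLog as the original):

    public void dbCall(String query)
    {
        if (query.isEmpty())
        {
            return; // nothing to send
        }

        final int maxRetries = 3; // hypothetical cap, tune for your workload
        for (int attempt = 1; attempt <= maxRetries; attempt++)
        {
            // try-with-resources closes the statement and connection automatically
            try (Connection conn = cpds.getConnection();
                 Statement stmt = conn.createStatement())
            {
                conn.setAutoCommit(false);
                stmt.executeUpdate(query);
                conn.commit();
                return;
            }
            catch (SQLException e)
            {
                if (e.getMessage() != null && e.getMessage().contains("no such column"))
                {
                    return; // this table is not one that needs to be cleaned up
                }
                errorLog.error("attempt " + attempt + " failed", e);
            }
        }
    }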
  • Well if there's a lot of data to delete and you have a slow CPU and SQLite not known for its performance, you might not be able to do much except maybe redesign your data model so you don't need to execute a mass delete like that. – Kayaman Feb 05 '20 at 15:52
  • Do you have an index on the relevant column? – Shawn Feb 05 '20 at 16:07
  • @Shawn No, I have other indexes though. I thought each table could only have one index. Well, it could have more, but it would only use one. My tables have indexes on timestamp + IOID + machine number, but this will just use timestamp. If I have a second index on timestamp only, won't a request that has timestamp + ioid + machine number try to use the timestamp-only index, or is it smart enough? – Tacitus86 Feb 05 '20 at 16:26
  • Possibly try batch deletes: https://stackoverflow.com/a/37995249/2711811. Would require restructuring your delete approach to "add" multiple deletes before executing batch. –  Feb 05 '20 at 17:21
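Pulling together the index and batch-delete suggestions from the comments above, here is a sketch of what that could look like. The index name idx_sampleinfo_ts and the batch size of 500 are made up for illustration; the rowid subquery form works on stock SQLite builds, since DELETE ... LIMIT requires a compile-time option:

    try (Connection conn = cpds.getConnection();
         Statement stmt = conn.createStatement())
    {
        // One-time schema change: an index on the timestamp column lets
        // SQLite find expired rows without scanning the whole table.
        stmt.executeUpdate("create index if not exists idx_sampleinfo_ts on "
                + entry + " (SampleInfo_source_timestamp)");

        // Delete in small batches so each transaction stays short and the
        // device stays responsive. Loop until no matching rows remain.
        try (PreparedStatement ps = conn.prepareStatement(
                "delete from " + entry + " where rowid in ("
                + "select rowid from " + entry
                + " where SampleInfo_source_timestamp < ? limit 500)"))
        {
            ps.setString(1, timestamp);
            int deleted;
            do
            {
                deleted = ps.executeUpdate(); // rows removed in this batch
            } while (deleted > 0);
        }
    }
    catch (SQLException e)
    {
        errorLog.error("batched delete failed", e);
    }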

0 Answers