
I have a web application, running under GlassFish 4.1, that contains a couple of features that require JMS/MDB. In particular, I am having problems with the generation of a report using JMS/MDB, that is, obtaining data from a table and dumping it into a file.

This is what happens: I have a JMS/MDB message that performs a couple of tasks in an Oracle database, and once the final result is in a table, I would like to produce a CSV report from that table (which usually holds 30M+ records).

So, while inside the JMS/MDB, this is what generates the report:

public boolean handleReportContent() {

    Connection conn = null;

    try {
        System.out.println("Handling report content... " + new Date());
        conn = DriverManager.getConnection(data.getUrl(), data.getUsername(), data.getPassword());
        int reportLine = 1;
        String sql = "SELECT FIELD_NAME, VALUE_A, VALUE_B, DIFFERENCE FROM " + data.getDbTableName() + " WHERE SET_PK IN ( SELECT DISTINCT SET_PK FROM " + data.getDbTableName() + " WHERE IS_VALID=? )";
        PreparedStatement ps = conn.prepareStatement(sql);
        ps.setBoolean(1, false);
        ResultSet rs = ps.executeQuery();

        List<ReportLine> lst = new ArrayList<>();
        int columns = data.getLstFormats().size();
        int size = 0;
        int linesDone = 0;

        while (rs.next()) {

            ReportLine rl = new ReportLine(reportLine, rs.getString("FIELD_NAME"), rs.getString("VALUE_A"), rs.getString("VALUE_B"), rs.getString("DIFFERENCE"));
            lst.add(rl);
            linesDone = columns * (reportLine - 1);
            size++;
            if ((size - linesDone) == columns) {
                reportLine++;

                if (lst.size() > 4000) {
                    appendReportContentNew(lst);
                    lst.clear();
                }
            }
        }

        if (lst.size() > 0) {
            appendReportContentNew(lst);
            lst.clear();
        }

        ps.close();
        conn.close();
        return true;
    } catch (Exception e) {
        System.out.println("exception handling report content new: " + e.toString());
        return false;
    }
}

This works, but I am aware it is slow and inefficient, and most likely there is a better way to perform the same operation. What this method does is:

  • collect the data from the ResultSet;
  • dump it into a List;
  • every 4,000 objects, call the method appendReportContentNew();
  • dump the data from the List into the file.

public void appendReportContentNew(List<ReportLine> lst) {

    File f = new File(data.getJobFilenamePath());

    try {
        if (!f.exists()) {
            f.createNewFile();
        }

        FileWriter fw = new FileWriter(data.getJobFilenamePath(), true);
        BufferedWriter bw = new BufferedWriter(fw);

        for (ReportLine rl : lst) {
            String rID = "R" + rl.getLine();
            String fieldName = rl.getFieldName();
            String rline = rID + "," + fieldName + "," + rl.getValue1() + "," + rl.getValue2() + "," + rl.getDifference();
            bw.append(rline);
            bw.append("\n");
        }

        bw.close();

    } catch (IOException e) {
        System.out.println("exception appending report content: " + e.toString());
    }
}

With this method it wrote 800k lines (a 30 MB file) in 20 minutes, and the file usually grows to 4 GB or more. This is what I want to improve, if possible.
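
For comparison, a streaming variant of the same export might look roughly like the sketch below: each row is written to the BufferedWriter as it is read, with a larger JDBC fetch size, instead of accumulating 4,000 ReportLine objects in a List first. The method name handleReportContentStreaming is hypothetical, and the column names and `data` helper are assumed from the code above; this is a sketch, not tested code.

public boolean handleReportContentStreaming() {
    // Sketch: stream rows straight from the ResultSet into the file,
    // skipping the intermediate List<ReportLine>.
    String sql = "SELECT FIELD_NAME, VALUE_A, VALUE_B, DIFFERENCE FROM " + data.getDbTableName()
            + " WHERE SET_PK IN ( SELECT DISTINCT SET_PK FROM " + data.getDbTableName() + " WHERE IS_VALID=? )";

    try (Connection conn = DriverManager.getConnection(data.getUrl(), data.getUsername(), data.getPassword());
         PreparedStatement ps = conn.prepareStatement(sql);
         BufferedWriter bw = new BufferedWriter(new FileWriter(data.getJobFilenamePath(), true))) {

        ps.setBoolean(1, false);
        ps.setFetchSize(5000); // fetch rows from Oracle in larger batches (the driver default is 10)

        try (ResultSet rs = ps.executeQuery()) {
            int line = 1;
            while (rs.next()) {
                bw.write("R" + line++ + "," + rs.getString("FIELD_NAME") + ","
                        + rs.getString("VALUE_A") + "," + rs.getString("VALUE_B") + ","
                        + rs.getString("DIFFERENCE"));
                bw.newLine();
            }
        }
        return true;
    } catch (Exception e) {
        System.out.println("exception streaming report content: " + e.toString());
        return false;
    }
}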

So I decided to try OpenCSV, and I came up with the following method:

public boolean handleReportContentv2() {

    Connection conn = null;

    try {
        FileWriter fw = new FileWriter(data.getJobFilenamePath(), true);
        System.out.println("Handling report content v2... " + new Date());
        conn = DriverManager.getConnection(data.getUrl(), data.getUsername(), data.getPassword());
        String sql = "SELECT NLINE, FIELD_NAME, VALUE_A, VALUE_B, DIFFERENCE FROM " + data.getDbTableName() + " WHERE SET_PK IN ( SELECT DISTINCT SET_PK FROM " + data.getDbTableName() + " WHERE IS_VALID=? )";
        PreparedStatement ps = conn.prepareStatement(sql);
        ps.setBoolean(1, false);
        ps.setFetchSize(500);
        ResultSet rs = ps.executeQuery();

        BufferedWriter out = new BufferedWriter(fw);
        CSVWriter writer = new CSVWriter(out, ',', CSVWriter.NO_QUOTE_CHARACTER);
        writer.writeAll(rs, false);

        writer.close(); // closing the CSVWriter also flushes and closes the underlying writers
        rs.close();
        ps.close();
        conn.close();
        return true;
    } catch (Exception e) {
        System.out.println("exception handling report content v2: " + e.toString());
        return false;
    }
}

So I am collecting all the data from the ResultSet and dumping it into the CSVWriter. In the same 20 minutes, this operation only wrote 7k lines.
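
One side note on resource handling (my observation, not part of the original post): writer.close() flushes the CSVWriter's buffer into the writers underneath it, so the FileWriter must not be closed first. With Java 7+ a try-with-resources block takes care of the ordering; a sketch with a hypothetical helper:

private boolean writeReportCsv(String sql) {
    // Sketch: the same writeAll() call, with try-with-resources guaranteeing
    // the CSVWriter is flushed and closed before the writers underneath it.
    try (Connection conn = DriverManager.getConnection(data.getUrl(), data.getUsername(), data.getPassword());
         PreparedStatement ps = conn.prepareStatement(sql)) {

        ps.setBoolean(1, false);
        ps.setFetchSize(500);

        try (ResultSet rs = ps.executeQuery();
             CSVWriter writer = new CSVWriter(
                     new BufferedWriter(new FileWriter(data.getJobFilenamePath(), true)),
                     ',', CSVWriter.NO_QUOTE_CHARACTER)) {
            writer.writeAll(rs, false); // false = no column headers in the output
            return true;
        }
    } catch (Exception e) {
        System.out.println("exception writing csv report: " + e.toString());
        return false;
    }
}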

But the same method, used outside the JMS/MDB, performs incredibly differently: in just the first 4 minutes it wrote 3M rows to the file, and over the same 20 minutes it generated a file of 500 MB+.

Clearly OpenCSV is by far the best option if I want to improve the performance. My question is: why doesn't it perform the same way inside the JMS/MDB? If that is not possible, is there any other way to improve the same task?

I appreciate any feedback and help on this matter; I am trying to understand why the behavior/performance differs in and out of the JMS/MDB.

**EDIT:**

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "MessageQueue")})
public class JobProcessorBean implements MessageListener {

private static final int TYPE_A_ID = 0;
private static final int TYPE_B_ID = 1;

@Inject
JobDao jobsDao;

@Inject
private AsyncReport generator;

public JobProcessorBean() {
}

@Override
public void onMessage(Message message) {
    int jobId = -1;
    ObjectMessage msg = (ObjectMessage) message;
    try {
        boolean valid = true;
        JobWrapper jobw = (JobWrapper) msg.getObject();
        jobId = jobw.getJob().getJobId().intValue();

        switch (jobw.getJob().getJobTypeId().getJobTypeId().intValue()) {
            case TYPE_A_ID:
                jobsDao.updateJobStatus(jobId, 0);
                valid = processTask1(jobw);
                if(valid) {
                    jobsDao.updateJobFileName(jobId, generator.getData().getJobFilename());
                    System.out.println(":: :: JOBW FileName :: "+generator.getData().getJobFilename());
                    jobsDao.updateJobStatus(jobId, 0);
                }
                else {
                    System.out.println("error...");
                    jobsDao.updateJobStatus(jobId, 1);
                }
                boolean validfile = handleReportContentv2(); // <-- the call in question
                if(!validfile) {
                    System.out.println("error file...");
                    jobsDao.updateJobStatus(jobId, 1);
                }
                break;
            case TYPE_B_ID:
                (...)
        }
        if(valid) {        
            jobsDao.updateJobStatus(jobw.getJob().getJobId().intValue(), 2); //updated to complete
        }
        System.out.println("***********---------Finished JOB " + jobId + "-----------****************");
        System.out.println();
        jobw = null;
    } catch (JMSException ex) {
        Logger.getLogger(JobProcessorBean.class.getName()).log(Level.SEVERE, null, ex);
        jobsDao.updateJobStatus(jobId, 1);
    } catch (Exception ex) {
        Logger.getLogger(JobProcessorBean.class.getName()).log(Level.SEVERE, null, ex);
        jobsDao.updateJobStatus(jobId, 1);
    } finally {
        msg = null;
    }
}

private boolean processTask1(JobWrapper jobw) throws Exception {

    boolean valid = true;
    jobsDao.updateJobStatus(jobw.getJob().getJobId().intValue(), 0);

    generator.setData(jobw.getData());
    valid = generator.deployGenerator();
    if(!valid) return false;
    jobsDao.updateJobParameters(jobw.getJob().getJobId().intValue(),new ReportContent());

    Logger.getLogger(JobProcessorBean.class.getName()).log(Level.INFO, null, "Job Finished");
    return true;
}

So if the same method, handleReportContent(), is executed inside generator.deployGenerator(), it has those slow results. If I wait for everything inside that method and build the file in this JobProcessorBean instead, it is much faster. I am just trying to figure out why the behavior works out like this.

  • What is your `@TransactionAttribute` annotation on the message-driven bean? How do you call your `handleReportContentv2` method outside of the MDB? – MaDa Mar 12 '15 at 09:53
  • @MaDa I have a class `JobProcessorBean` that has the following annotations: `@MessageDriven(activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "MessageQueue") })` This bean injects another bean where I invoke a `deploy()` method. The `handleReportContentv2()` is used inside that bean where the deploy is made, and I get those results. If I copy the same method and run it in a different bean I get different results. I will add that portion of code. – mpc Mar 12 '15 at 11:35
  • Try adding `@TransactionAttribute(NOT_SUPPORTED)` on the bean and see if this has an impact on performance. Also, take a look at http://stackoverflow.com/questions/19139426/how-to-write-a-file-to-resource-images-folder-of-the-app for other issues with saving a file directly into web application folders. – MaDa Mar 12 '15 at 12:37
  • @MaDa Will do, I will let you know if there is any difference. – mpc Mar 12 '15 at 12:57
  • @MaDa That solved the issue, it performs the same way inside the MDB. Feel free to answer the question with your previous comment, so I can accept it. Thank you for your time and help. :) – mpc Mar 12 '15 at 15:12

1 Answer


Adding the @TransactionAttribute(NOT_SUPPORTED) annotation on the bean might solve the problem (and it did, as your comment indicates).

Why is this so? Because if you don't put any transaction annotation on a message-driven bean, the default is @TransactionAttribute(REQUIRED), so everything the bean does is supervised by a transaction manager. Apparently, this slows things down.
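
For reference, a minimal sketch of where the annotation would go (the class mirrors the code in the question; the body is elided):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;

// NOT_SUPPORTED tells the container to run onMessage() outside of any
// container-managed transaction, so the long-running export is no longer
// supervised by the transaction manager.
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "MessageQueue")})
public class JobProcessorBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // ... generate the report here ...
    }
}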
