
I am trying to export data from a PostgreSQL database, where the volume of data I have to export is about 1 million rows. I have tried various approaches but haven't found a solution. Even when I call the CSV-export API from Postman, Postman shuts down. When I trigger the download from React, it keeps loading for hours. Here is the export code:

public String populateCsvReport(SearchDto searchDto){

    List<DetailRecord> myDetailRecord = itsCustomRepo.getDetail(searchDto);
    StringWriter sw = new StringWriter();
    try(CSVPrinter csvPrinter = new CSVPrinter(sw, CSVFormat.DEFAULT.withHeader("Supplier Number",
        "Supplier name" ........
      ))){
        myDetailRecord.forEach(mydetail -> {
            csvPrinter.printRecord(
                mydetail.getSuplNum(),
                mydetail.getSuplName(),
                ......................
            );
        });
    }
    return sw.toString();
}

I have also tried changing the code to:

 myDetailRecord.forEach(mydetail->{
    mydetail.getSuplNum(),
    mydetail.getSuplName(),

   ......................
   });

csvPrinter.printRecord(
  myDetailRecord
);

But it didn't make any difference.

And in my controller I am doing:

@GetMapping(path = "/get-export-detail/csv")
public ResponseEntity<String> generateMydetailExport(SearchDto searchDto){
    return ResponseEntity.ok()
        .header("Content-Disposition", "attachment; filename=" + "myDetails.csv")
        .contentType(MediaType.parseMediaType("text/csv"))
        .body(callingService.populateCSVForDetail(searchDto));
}

And here is the React code that triggers the export:

const exportOnClick = () => {
  callingDetailsService.export(param)
    .then((response) => {
      let mime = "text/csv";
      let fileName = "myDetail.csv";
      util.downloadFile(response.data, fileName, mime);
    });
};

Here is my custom repository code:

@Repository
public class ItsCustomRepo {

    @PersistenceContext
    private EntityManager entityManager;

    public List<DetailRecord> getDetail(List<SearchCriteria> params){
        List<DetailRecord> listOfDetail = new ArrayList<>();
        CriteriaBuilder cb = entityManager.getCriteriaBuilder();
        CriteriaQuery<DetailEntity> cQuery = cb.createQuery(DetailEntity.class);
        Root<DetailEntity> rootE = cQuery.from(DetailEntity.class);
        String sqlQuery = "select .............";
        if(params.size() > 0){
            for(SearchCriteria param : params){
                if(param.getValue() != null && !param.getValue().equals("")){
                    if(param.getOperation().equalsIgnoreCase(CriteriaOperationEnum.GREATER_THAN_OR_EQUAL.getOperation())){
                        if(rootE.get(param.getKey()).getJavaType() == LocalDate.class){
                            //some op
                        }
                    } else if(param.getOperation().equalsIgnoreCase(CriteriaOperationEnum.LESS_THAN_OR_EQUAL.getOperation())){
                        //some op
                    } else if(param.getOperation().equalsIgnoreCase(CriteriaOperationEnum.LIKE.getOperation())){
                        //some op
                    }
                }
            }
        }

        Query query = entityManager.createNativeQuery(sqlQuery);
        List<Object[]> objectList = query.getResultList();
        int count = objectList.size();
        objectList.forEach(glObject -> {
            DetailRecord detailRecord = DetailRecord.builder()
                .supl_num(glObject[0])
                ...................
                .build();
            listOfDetail.add(detailRecord);
        });

        return listOfDetail;
    }
}

My code is simple, but I don't understand where it is failing. When I run the query directly against the DB, the count comes back fast, and while debugging I can see execution reach the controller smoothly, but after that it hangs for hours and hours. I have tried opencsv, apache-poi, etc. Can't understand where it is failing; someone please help me.

Mandrek
  • It looks like your code will read the entire ~1m row result set into memory before starting to write the csv data. That's going to take a lot of time and a lot of memory. If your `itsCustomRepo` has a method like `processDetail(SearchDto searchDto, Consumer callback)` then you could write csv as the rows are coming back from the database. – Erik Mar 15 '21 at 17:15
  • use a scrollable result, see https://stackoverflow.com/questions/5067619/jpa-what-is-the-proper-pattern-for-iterating-over-large-result-sets – Turo Mar 15 '21 at 17:18
  • @Erik can you post a sample code or edit mine ? – Mandrek Mar 15 '21 at 17:26
  • By the way, Postgres can directly export to CSV via the `COPY` command. See sister site, [DBA Stack Exchange](https://dba.stackexchange.com/q/36235/19079) – Basil Bourque Mar 15 '21 at 17:29
  • @BasilBourque can you give me some example code – Mandrek Mar 15 '21 at 17:31
  • You did not post any code for the `itsCustomRepo` so I can't tell what methods are available there. – Erik Mar 15 '21 at 17:36
  • @Erik adding that – Mandrek Mar 15 '21 at 17:47
  • @Erik I have added my code , can you please check ? – Mandrek Mar 15 '21 at 18:17
  • The `EntityManager` API is not well suited for this stuff. You have more flexibility at the JDBC level, but to get there depends on your JPA implementation. (See https://stackoverflow.com/questions/3493495/getting-database-connection-in-pure-jpa-setup) Which implementation are you using? – Erik Mar 15 '21 at 18:51
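To illustrate the `COPY` suggestion from the comments: the statement can be issued over plain JDBC, and the server writes the file itself, so no rows ever pass through the JVM. A minimal sketch; the table, columns, and path are invented, and the connecting role needs `pg_write_server_files` (or superuser) since the file lands on the database host. Use psql's client-side `\copy` if you need the file locally instead.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ServerSideCsvExport {

    // Build a server-side COPY statement; the query and path are illustrative.
    static String copySql(String innerSelect, String serverPath) {
        return "COPY (" + innerSelect + ") TO '" + serverPath
                + "' WITH (FORMAT csv, HEADER)";
    }

    // Runs COPY on the server: the Postgres backend writes the CSV directly,
    // so the JVM never materializes the result set.
    static void export(Connection conn, String innerSelect, String serverPath)
            throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(copySql(innerSelect, serverPath));
        }
    }

    public static void main(String[] args) throws SQLException {
        // args[0] is a JDBC URL such as jdbc:postgresql://dbhost/mydb?user=...
        try (Connection conn = DriverManager.getConnection(args[0])) {
            export(conn, "select supl_num, supl_name from detail", "/tmp/myDetails.csv");
        }
    }
}
```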

1 Answer


Here is some sample code that generates CSV in two ways. The first one is similar to yours: it reads all rows into a list and then creates the CSV. The second method is more "streaming" in that it writes out each row as soon as it gets it from the database. With 1M rows, that makes a big difference.

import org.h2.jdbcx.JdbcDataSource;
import org.springframework.jdbc.core.JdbcTemplate;

import java.util.List;
import java.util.function.Consumer;

public class CsvSample {

  static class Player {
    int id;
    String name;
    int teamId;
    Player(int id, String name, int teamId) {
      this.id = id;
      this.name = name;
      this.teamId = teamId;
    }
  }

  interface PlayerRepo {
    void save(Player player);
    List<Player> findPlayers(int teamId);
    int processPlayers(int teamId, Consumer<Player> callback);
  }

  static class SimplePlayerRepo implements PlayerRepo {
    JdbcTemplate jdbc;

    SimplePlayerRepo(JdbcTemplate jdbc) {
      this.jdbc = jdbc;
      this.jdbc.execute("create table if not exists Player(id int primary key, name varchar(30), team int)");
    }

    @Override
    public void save(Player player) {
      int n = jdbc.update(
          "update Player set name=?, team=? where id=?",
          player.name, player.teamId, player.id);
      if (n == 0) {
        jdbc.update(
            "insert into Player(name, team, id) values (?, ?, ?)",
            player.name, player.teamId, player.id);
      }
    }

    @Override
    public List<Player> findPlayers(int teamId) {
      return jdbc.query(
          "select id, name, team from Player where team=?",
          (rs, n) -> new Player(rs.getInt(1), rs.getString(2), rs.getInt(3)),
          teamId);
    }
    @Override
    public int processPlayers(int teamId, Consumer<Player> callback) {
      return jdbc.query(
          "select id, name, team from Player where team=?",
          rs -> {
            int n = 0;
            while (rs.next()) {
              Player p = new Player(rs.getInt(1), rs.getString(2), rs.getInt(3));
              callback.accept(p);
              n++;
            }
            return n;
          },
          teamId);
    }
  }

  public static void main(String[] args) throws Exception {
    JdbcDataSource dataSource = new JdbcDataSource();
    dataSource.setUrl("jdbc:h2:mem:csvsample;DB_CLOSE_DELAY=-1");
    PlayerRepo repo = new SimplePlayerRepo(new JdbcTemplate(dataSource));

    // add some players
    repo.save(new Player(1, "Kobe", 1));
    repo.save(new Player(2, "LeBron", 1));
    repo.save(new Player(3, "Shaq", 1));
    repo.save(new Player(4, "Kareem", 1));
    repo.save(new Player(5, "Magic", 1));
    repo.save(new Player(6, "Larry", 2));
    repo.save(new Player(7, "Jason", 2));

    // generate CSV from List
    repo.findPlayers(1).forEach(player -> {
        System.out.println(player.id + "," + player.name);
    });

    System.out.println("----");

    // generate CSV with callback
    repo.processPlayers(1, player -> {
        System.out.println(player.id + "," + player.name);
    });
  }
}
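The same contrast, stripped down to plain `java.io` with no database at all: the list-based path materializes every row before the first byte is written, while the streaming path hands each row to a `Consumer` and writes it immediately. All names here are invented for illustration.

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.util.List;
import java.util.function.Consumer;

public class StreamingCsvSketch {

    // Quote a field per RFC 4180 if it contains a comma, quote, or newline.
    static String csvField(String s) {
        if (s.contains(",") || s.contains("\"") || s.contains("\n")) {
            return "\"" + s.replace("\"", "\"\"") + "\"";
        }
        return s;
    }

    // List-based: the whole result set must exist in memory first.
    static String csvFromList(List<String[]> rows) {
        StringWriter sw = new StringWriter();
        rows.forEach(row -> writeRow(sw, row));
        return sw.toString();
    }

    // Streaming: the producer pushes rows into the callback one at a time;
    // only one row is in memory at once. Stand-in for a ResultSet loop.
    static void produceRows(Consumer<String[]> callback) {
        callback.accept(new String[]{"S-1", "Acme, Inc."});
        callback.accept(new String[]{"S-2", "Globex"});
    }

    static void writeRow(Writer w, String[] row) {
        try {
            for (int i = 0; i < row.length; i++) {
                if (i > 0) w.write(',');
                w.write(csvField(row[i]));
            }
            w.write("\r\n");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        StringWriter sw = new StringWriter();
        produceRows(row -> writeRow(sw, row));
        System.out.print(sw);
    }
}
```

With a `Writer` that wraps the HTTP response stream instead of a `StringWriter`, the same callback shape sends rows to the client as they arrive.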

So, in your case I would add a method to your repository class. It should contain all the logic from your `getDetail` method up to the line that says `Query query = entityManager.createNativeQuery(sqlQuery);`:

public int processSearchResults(List<SearchCriteria> params, Consumer<DetailRecord> callback){
    // ... build sqlQuery exactly as in getDetail, then:

    // instead of this:
    // Query query = entityManager.createNativeQuery(sqlQuery);

    Session session = entityManager.unwrap(Session.class);
    return session.doReturningWork(connection -> {
        // The Postgres JDBC driver only streams results when autocommit is
        // off and a fetch size is set; otherwise it buffers the whole result.
        connection.setAutoCommit(false);
        try (Statement stmt = connection.createStatement()) {
            stmt.setFetchSize(1000);
            try (ResultSet rs = stmt.executeQuery(sqlQuery)) {
                int n = 0;
                while (rs.next()) {
                    DetailRecord detailRecord = DetailRecord.builder()
                            .supl_num(rs.getObject(1))
                            // .....
                            .build();
                    callback.accept(detailRecord);
                    n++;
                }
                return n;
            }
        }
    });
}

Then, your generateMydetailExport may look something like this:

@GetMapping(path = "/get-export-detail/csv", produces = "text/csv")
public void generateMydetailExport(SearchDto searchDto, PrintWriter out) throws IOException {
    try (CSVPrinter csvPrinter = new CSVPrinter(out, CSVFormat.DEFAULT.withHeader("Supplier Number" /* , ... */))) {
        itsCustomRepo.processSearchResults(searchDto, detail -> {
            try {
                csvPrinter.printRecord(
                    detail.getSuplNum(),
                    detail.getSuplName()
                    // .....
                );
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
    }
}
Erik