I want to parse a large CSV file as quickly and efficiently as possible.
Currently, I am using the openCSV library, but it takes approximately 10 seconds to parse a CSV file with 10,776 records and 24 headings, and I need to parse files with millions of records.
<dependency>
    <groupId>com.opencsv</groupId>
    <artifactId>opencsv</artifactId>
    <version>4.1</version>
</dependency>
I am parsing it with the code snippet below:
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.List;

import com.opencsv.bean.CsvToBean;
import com.opencsv.bean.CsvToBeanBuilder;
import com.opencsv.bean.HeaderColumnNameMappingStrategy;
import com.opencsv.enums.CSVReaderNullFieldIndicator;

public <T> List<T> convertStreamtoObject(InputStream inputStream, Class<T> clazz) throws IOException {
    HeaderColumnNameMappingStrategy<T> ms = new HeaderColumnNameMappingStrategy<>();
    ms.setType(clazz);
    // try-with-resources closes the reader (and the underlying stream) even if parsing fails
    try (Reader reader = new InputStreamReader(inputStream)) {
        CsvToBean<T> cb = new CsvToBeanBuilder<T>(reader)
                .withType(clazz)
                .withMappingStrategy(ms)
                .withSkipLines(0)
                .withSeparator('|')
                .withFieldAsNull(CSVReaderNullFieldIndicator.EMPTY_SEPARATORS)
                .withThrowExceptions(true)
                .build();
        return cb.parse();
    }
}
I am looking for suggestions for another way to parse a CSV file with millions of records in less time.
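One library I have seen recommended for raw parsing speed is univocity-parsers (Maven artifact com.univocity:univocity-parsers). A minimal sketch, not tested against my data; the method name is a placeholder, and the target class would need univocity's @Parsed field annotations instead of openCSV's:

import java.io.Reader;
import java.util.List;

import com.univocity.parsers.common.processor.BeanListProcessor;
import com.univocity.parsers.csv.CsvParser;
import com.univocity.parsers.csv.CsvParserSettings;

public <T> List<T> parseWithUnivocity(Reader reader, Class<T> clazz) {
    // Collects each parsed row into a bean; the bean's fields must carry @Parsed annotations
    BeanListProcessor<T> processor = new BeanListProcessor<>(clazz);

    CsvParserSettings settings = new CsvParserSettings();
    settings.getFormat().setDelimiter('|');    // same pipe delimiter as the openCSV setup
    settings.setHeaderExtractionEnabled(true); // treat the first record as the header row
    settings.setProcessor(processor);

    // parse() streams the entire input through the processor
    new CsvParser(settings).parse(reader);
    return processor.getBeans();
}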
--- Update: tried Apache Commons CSV ----
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.List;

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

Reader reader = new InputStreamReader(in); // 'in' is the source InputStream
CSVParser csvParser = new CSVParser(reader, CSVFormat.DEFAULT
        .withFirstRecordAsHeader()
        .withDelimiter('|')
        .withIgnoreHeaderCase()
        .withTrim());
// getRecords() materializes every record in memory at once
List<CSVRecord> recordList = csvParser.getRecords();
for (CSVRecord csvRecord : recordList) {
    csvRecord.get("headername");
}
csvParser.close();
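Note that getRecords() pulls the whole file into memory, which will not scale to millions of records. CSVParser is Iterable, so the records can be streamed one at a time instead; a sketch using the same format settings:

import java.io.InputStreamReader;

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

// Iterating the parser directly keeps only one record in memory at a time
try (CSVParser parser = CSVFormat.DEFAULT
        .withFirstRecordAsHeader()
        .withDelimiter('|')
        .withIgnoreHeaderCase()
        .withTrim()
        .parse(new InputStreamReader(in))) {
    for (CSVRecord record : parser) {
        record.get("headername"); // process each record as it is read
    }
}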