I have 5,000,000 insert queries in a file. I want to read them from the file and write them to Cassandra with the Java driver's executeAsync method, in a loop like the following code:
public static void main(String[] args) {
    // try-with-resources closes the reader even if an exception is thrown
    // (the original closed fr before br and leaked both on failure)
    try (BufferedReader br = new BufferedReader(new FileReader("the-file-name.txt"))) {
        String sCurrentLine;
        long time1 = System.currentTimeMillis();
        while ((sCurrentLine = br.readLine()) != null) {
            session.executeAsync(sCurrentLine);
        }
        System.out.println(System.currentTimeMillis() - time1);
    } catch (IOException e) { // also covers FileNotFoundException
        e.printStackTrace();
    }
}
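Note that the loop above fires executeAsync without looking at the returned futures, so nothing slows the producer down if the cluster falls behind. A minimal sketch of bounding the number of in-flight requests with a Semaphore, using CompletableFuture and a thread pool as a stand-in for the driver's session and executeAsync (the class name, limits, and the simulated write are assumptions for illustration):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedAsyncDemo {
    // Issues `total` async tasks but never allows more than `limit` in flight.
    static int run(int total, int limit) throws InterruptedException {
        Semaphore inFlight = new Semaphore(limit);
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < total; i++) {
            inFlight.acquire(); // blocks the loop once `limit` writes are pending
            CompletableFuture
                .runAsync(() -> { /* stand-in for session.executeAsync(line) */ }, pool)
                .whenComplete((r, err) -> {
                    done.incrementAndGet();
                    inFlight.release(); // free a slot when the write completes
                });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(10_000, 256)); // all 10,000 tasks complete
    }
}
```

With the real driver the same pattern would acquire before each executeAsync call and release in a callback on the returned future, so a slow cluster throttles the file reader instead of silently dropping writes.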
My table definition is:
CREATE TABLE test.climate (
city text,
date text,
time text,
temprature int,
PRIMARY KEY ((city, date), time)
) WITH CLUSTERING ORDER BY (time ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
But after running the program, the row count in the table is only 2,569,725:

cqlsh:test> select count(*) from climate;

 count
---------
 2569725

I tested more than 10 times, and each time the result of select count(*) was between 2,400,000 and 2,600,000.