
I am trying to write 2GB (the limit for a single Cassandra key/value) into a single (or several) column(s) using the DataStax driver and CQL3 on a single Windows node. I can barely get 100MB into one column, and only after hitting almost every kind of exception and changing the config. To write 100MB I have to set "commitlog_segment_size_in_mb: 200", which works; beyond that Cassandra kills itself. Is there any way I can insert up to 2GB into one column (at least), or spread across many columns, and measure the timing?
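
For reference, a minimal sketch of what a timed single-column blob insert might look like with the DataStax Java driver (the contact point, keyspace `big`, table `blobs`, and the 100 MB all-zero payload are illustrative assumptions, not a tested benchmark; the write is also bounded by the maximum mutation size, which by default is half of commitlog_segment_size_in_mb, and by native_transport_max_frame_size_in_mb):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import java.nio.ByteBuffer;

public class SingleBlobInsert {
    public static void main(String[] args) {
        // Assumed schema: CREATE TABLE big.blobs (id text PRIMARY KEY, data blob)
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("big")) {

            PreparedStatement ps =
                session.prepare("INSERT INTO blobs (id, data) VALUES (?, ?)");

            // 100 MB payload; anything larger needs commitlog_segment_size_in_mb
            // (and therefore the max mutation size) raised accordingly.
            ByteBuffer payload = ByteBuffer.wrap(new byte[100 * 1024 * 1024]);

            long start = System.nanoTime();
            session.execute(ps.bind("key-1", payload));
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Inserted 100 MB in " + elapsedMs + " ms");
        }
    }
}
```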

  • Why are you inserting 2GB of data in a single row? What do you want to store? – Guillaume S Jun 13 '16 at 13:30
  • I want to test how long insert/read operations take when the data (plain text) is in the range of 500MB to 2GB (which Cassandra supports). I have not seen any timing benchmarks in this range; it's all theory only. – Deepak Dabi Jun 13 '16 at 16:52
  • Are you trying to insert the data all at once? Maybe you can insert a row with 64 columns: each column would hold 32 MB of data and be written with a separate update statement (see the sketch after these comments). – Guillaume S Jun 14 '16 at 19:53
  • Okay, but why? Why can't we insert at least 1 GB (if the limit is 2GB)? This is mentioned in the Cassandra documentation, but all kinds of drivers fail even at 100 MB (unless you increase commitlog_segment_size_in_mb; with it set past 1000 MB, C* killed itself while inserting 200 MB). What I fail to understand is that the claimed and actually supported sizes are not even close. The most I can insert is 100 MB, that's it. Am I the only one seeing this? Note: C* version 3.6, Windows, 8GB RAM. – Deepak Dabi Jun 14 '16 at 22:00
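
A minimal sketch of the chunked approach suggested in the comments, again with the DataStax Java driver (the keyspace `big`, table `blob_chunks`, contact point, and the reused all-zero 32 MB buffer are illustrative assumptions; note that a 32 MB mutation still exceeds the default maximum mutation size of roughly 16 MB, so commitlog_segment_size_in_mb would likely still need to be raised to at least 64):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import java.nio.ByteBuffer;

public class ChunkedBlobInsert {
    public static void main(String[] args) {
        // Assumed schema: CREATE TABLE big.blob_chunks
        //   (id text, chunk_no int, data blob, PRIMARY KEY (id, chunk_no))
        final int CHUNK_MB = 32;
        final int CHUNKS = 64; // 64 x 32 MB = 2 GB total

        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("big")) {

            PreparedStatement ps = session.prepare(
                "INSERT INTO blob_chunks (id, chunk_no, data) VALUES (?, ?, ?)");

            byte[] chunk = new byte[CHUNK_MB * 1024 * 1024]; // one 32 MB buffer, reused

            long start = System.nanoTime();
            for (int i = 0; i < CHUNKS; i++) {
                session.execute(ps.bind("key-1", i, ByteBuffer.wrap(chunk)));
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Inserted " + (CHUNK_MB * CHUNKS) + " MB in "
                + CHUNKS + " chunks in " + elapsedMs + " ms");
        }
    }
}
```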

0 Answers