
I am using Cassandra to store all the events happening in my application. After 6 months it has grown to several million rows.

I am getting the following error while reading all the records from Cassandra.

From cqlsh:

cqlsh> use abc;
cqlsh:abc> select count(*) from interaction;
OperationTimedOut: errors={}, last_host=localhost

From Spring Data (using ALLOW FILTERING):

Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded)
    at com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:69)
    at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
    at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172)
    at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
    at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:36)
    at com.germinait.replica.CassandraTransfer.startTransfer(CassandraTransfer.java:43)
    at com.germinait.replica.DemoApplication.run(DemoApplication.java:22)
    at org.springframework.boot.SpringApplication.runCommandLineRunners(SpringApplication.java:673)
    ... 5 more
Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded)
    at com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:69)
    at com.datastax.driver.core.Responses$Error.asException(Responses.java:94)
    at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:108)
    at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:235)
    at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:379)
    at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:584)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:

I am not sure how to read such a large amount of data.

Let me know how to read data at this scale using Spring Data, or some alternative.

Vaibhav Shah
  • Cassandra is good at returning values for a specific key or range. It is not good at returning millions of rows. In fact, unbound queries are an anti-pattern. – Aaron Sep 16 '16 at 15:00
  • answered here: http://stackoverflow.com/questions/29394382/operation-time-out-error-in-cqlsh-console-of-cassandra/29394935#29394935 if that helps – Chris Lohfink Sep 16 '16 at 15:00

2 Answers

  1. Check and, if necessary, increase your driver's read timeout

https://datastax.github.io/java-driver/manual/socket_options/

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.SocketOptions;

Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")
    .withSocketOptions(
            new SocketOptions()
                    .setConnectTimeoutMillis(2000)
                    // per-request read timeout (driver default: 12000 ms)
                    .setReadTimeoutMillis(20000))
    .build();
  2. Change the Cassandra server configuration to allow longer reads and writes

The relevant settings in cassandra.yaml are read_request_timeout_in_ms, range_request_timeout_in_ms, write_request_timeout_in_ms, cas_contention_timeout_in_ms, and truncate_request_timeout_in_ms; see the sketch below.
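For example, in cassandra.yaml (the raised values below are illustrative only, shown next to the stock defaults; tune them to your workload):

# cassandra.yaml -- server-side timeout settings (example values)
read_request_timeout_in_ms: 20000       # default: 5000
range_request_timeout_in_ms: 30000      # default: 10000
write_request_timeout_in_ms: 5000       # default: 2000
cas_contention_timeout_in_ms: 2000      # default: 1000
truncate_request_timeout_in_ms: 60000   # default: 60000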

Also, look at this answer for more information.

Cassandra read timeout
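Note that raising timeouts only postpones the failure if the client then tries to hold millions of rows in memory. A safer approach is to stream the table with the driver's automatic paging; here is a minimal sketch (assuming the DataStax Java driver 2.x seen in the stack trace, a local contact point, and the abc.interaction table from the question; the class name and the fetch size of 500 are arbitrary examples):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PagedRead {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            Session session = cluster.connect("abc");

            // Ask for 500 rows per page instead of the whole table at once;
            // the driver fetches the next page transparently during iteration.
            Statement stmt = new SimpleStatement("SELECT * FROM interaction");
            stmt.setFetchSize(500);

            ResultSet rs = session.execute(stmt);
            for (Row row : rs) {
                // process one row at a time; memory use stays bounded
                System.out.println(row);
            }
        } finally {
            cluster.close();
        }
    }
}

With a fetch size set, the driver pulls the result set one page at a time as the loop advances, so the read never has to materialize the whole table on either side. Automatic paging requires Cassandra 2.0+ (native protocol v2).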

Sreekar
  • I tried this solution. It returns a result set, but it is too huge to process in Java. I am looking for a pagination kind of solution. – Vaibhav Shah Sep 16 '16 at 14:52

If you want to know how many rows a table contains, you can maintain a counter table that is incremented when a record is inserted and decremented when rows are deleted, as sketched below.
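A minimal sketch of that idea in CQL (the interaction_count table and its column names are hypothetical, for illustration); note that counter columns can only be modified with UPDATE:

CREATE TABLE abc.interaction_count (
    table_name text PRIMARY KEY,
    total counter
);

-- on every insert into interaction:
UPDATE abc.interaction_count SET total = total + 1 WHERE table_name = 'interaction';

-- on every delete from interaction:
UPDATE abc.interaction_count SET total = total - 1 WHERE table_name = 'interaction';

-- reading the count is now a cheap single-partition lookup:
SELECT total FROM abc.interaction_count WHERE table_name = 'interaction';

Reading the counter is instant, at the cost of one extra write per event; be aware that counter updates are not idempotent, so retries after timeouts can make the count slightly inaccurate.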

Guillaume S