
I am firing a multiget query with 330 keys and 750 columns per row.

It's dying somewhere in the phpcassa code. The worst thing is, it's not throwing any exception.

The script is getting terminated abruptly. Is there any setting I should change?

It works fine if I fetch only a few of these 750 columns.

The following is my API call:

multiget($dataCFKeys, $superColumns, "", "", false, $columnCount, null, 1, 100);
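For context, the call above would typically be wired up something like this (a sketch against the pre-namespaced phpcassa API; the pool, keyspace, and column family names are placeholders, not from my actual code):

```php
<?php
require_once 'phpcassa/connection.php';
require_once 'phpcassa/columnfamily.php';

// Placeholder names -- substitute your own keyspace/CF/servers.
$pool = new ConnectionPool('MyKeyspace', array('127.0.0.1:9160'));
$cf   = new ColumnFamily($pool, 'MyColumnFamily');

// 330 keys, 750 columns per row, read at consistency ONE,
// fetched 100 keys per underlying Thrift request ($buffer_size).
$rows = $cf->multiget($dataCFKeys, $superColumns, "", "", false,
                      $columnCount, null, 1, 100);
```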

Am I missing something, or is there any configuration that can help me get this to work?

Thanks in advance, Manish

  • When it dies in your phpcassa code, did you check the Cassandra node's log for any exceptions? – Jasonw Mar 13 '12 at 14:50

2 Answers


To answer the question as posed: you're probably hitting PHP's max_execution_time limit (see the PHP configuration directives max_execution_time and max_input_time).
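If that limit is the culprit, it can be raised for this one script; a minimal sketch (the values are illustrative):

```php
<?php
// Raise the execution-time ceiling for this script only
// (equivalent to max_execution_time in php.ini; 0 = no limit,
// which is reasonable for CLI/batch jobs but risky for web requests).
set_time_limit(0);

// Note: max_input_time cannot be changed with ini_set() at runtime;
// it must be set in php.ini or per-directory server config.

// ...run the large multiget here...
```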

More generally, though, I would say that this is not a good way to model data in Cassandra. If you need to crunch through a lot of data, use Hadoop (http://wiki.apache.org/cassandra/HadoopSupport); otherwise, you should model things so you can get the data you want from a single row or from an index, as sketched below.
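For illustration, here is roughly what those two access patterns look like with the same phpcassa API as in the question ($cf is the column family handle; the 'state' column and its secondary index are placeholders):

```php
<?php
// Single-row read: one key, one lookup, no 330-key fan-out.
$row = $cf->get('some_row_key');

// Index read: assumes a secondary index exists on the 'state' column.
$expr   = CassandraUtil::create_index_expression('state', 'active');
$clause = CassandraUtil::create_index_clause(array($expr));
$rows   = $cf->get_indexed_slices($clause);
```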

jbellis
  • Thanks for the answer, Jonathan. We are using Hadoop for crunching, but our data is so huge that even after crunching we still need to pull this much data to show in the UI. – MANISH ZOPE Apr 11 '12 at 05:19

After spending some time on this bug, I figured out the problem area.

The problem was not with phpcassa or Cassandra.

The problem lies in the maximum memory limit (memory_limit) set for PHP on my server.
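Most likely the script was hitting PHP's fatal "Allowed memory size ... exhausted" error, which is not a catchable exception; that would explain the abrupt termination with no exception thrown. The ceiling can be checked and raised per script (a sketch; the value you need depends on keys x columns x value sizes):

```php
<?php
// memory_limit IS changeable at runtime, unlike max_input_time.
echo ini_get('memory_limit') . "\n";   // e.g. "128M"
ini_set('memory_limit', '512M');       // illustrative value

// Instrument around the big multiget to see what is actually used:
// $rows = $cf->multiget($dataCFKeys, ...);
echo memory_get_peak_usage(true) . " bytes at peak\n";
```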

MANISH ZOPE