
I am starting to learn SnappyData and running the SnappyData examples as per the documentation. When I start the SnappyData server like this:

$SNAPPY_HOME/sbin/snappy-start-all.sh

./bin/run-example snappydata.JDBCExample

the examples execute fine. However, when I start SnappyData in rowstore mode like this:

" $SNAPPY_HOME$ ./sbin/./sbin/snappy-start-all.sh rowstore "

./bin/run-example snappydata.JDBCExample

the example does not run. Does anyone know how to run the Snappy examples against a rowstore cluster? Please also share a link to the SnappyData rowstore documentation. Thank you.

Karthik GB
  • First, I'm not sure what this has to do with apache-spark? – eliasah Mar 10 '17 at 09:23
  • All the SnappyData examples run on top of Apache Spark, using spark-shell and spark-submit. – Karthik GB Mar 10 '17 at 09:33
  • Yes, I read about it a bit. But this is still not a Spark problem, even though it has Spark under the hood. – eliasah Mar 10 '17 at 09:34
  • Yes, this is a SnappyData issue, but the examples are executed with spark-shell and spark-submit via the run-example script. – Karthik GB Mar 10 '17 at 09:36
  • Read about how to ask a good question. This is not a minimal, reproducible, verifiable example, so answerers will avoid it, knowing they would also need to look into a new framework to be able to answer. – eliasah Mar 10 '17 at 09:38

1 Answer


The JDBCExample is written for a SnappyData cluster and will not work against a rowstore-mode cluster. Also, the correct command to start a rowstore cluster is:

./sbin/snappy-start-all.sh rowstore

Note that the rowstore option has no hyphen (not "row-store").

The JDBCExample uses SnappyData-specific DDL to create its tables, which will not parse in pure rowstore mode.
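For illustration, here is a minimal sketch of a plain JDBC program that should run against a rowstore-mode cluster, since it avoids SnappyData-specific clauses and uses only standard row-table DDL. The connection URL assumes the default client port 1527 mentioned later in this thread, and the table APP.QUOTES is hypothetical:

    import java.sql.DriverManager

    // Assumed default locator/client port; adjust to your environment.
    val conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/")
    val stmt = conn.createStatement()

    // Plain row-table DDL only: no PARTITION/EVICTION/HDFSSTORE clauses,
    // so it should parse in pure rowstore mode as well.
    stmt.execute("DROP TABLE IF EXISTS APP.QUOTES")
    stmt.execute("CREATE TABLE APP.QUOTES (id INT NOT NULL PRIMARY KEY, price INT)")
    stmt.execute("INSERT INTO APP.QUOTES VALUES (1, 100)")

    // Read the row back to confirm the round trip.
    val rs = stmt.executeQuery("SELECT id, price FROM APP.QUOTES")
    while (rs.next()) println(s"${rs.getInt(1)} -> ${rs.getInt(2)}")
    conn.close()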

  • I tried the command you mentioned; the GfxdServerLauncher and GfxdLocatorLauncher started successfully, but when I run the example it fails with a parse error listing expected tokens: "buckets" ... "eviction" ... "expire" ... "initsize" ... "local" ... "maxpartsize" ... "partition" ... "persistent". At the same time, if I start with the plain snappy-start-all.sh, it shows the correct output. – Karthik GB Mar 10 '17 at 10:01
  • The example is not supposed to work when the cluster is started in rowstore mode. It will work with a SnappyData cluster (that is, when started using ./sbin/snappy-start-all.sh without the rowstore option). – Shirish Deshmukh Mar 10 '17 at 10:16
  • OK. If that is not supported, then how do I run jobs, queries, and programs against rowstore SnappyData? Do you have any idea about that? – Karthik GB Mar 10 '17 at 10:20
  • I have now created an example and tested it on a local cluster; it creates the table in SnappyData as well as HDFS. But I need to load data from a CSV file and store it in a Snappy table. If anyone knows how to do that, please share. This is my code: – Karthik GB Mar 10 '17 at 11:55
    import java.sql.DriverManager

    // Connection URL as used elsewhere in this thread.
    val url = "jdbc:snappydata://localhost:1527/"
    val c1 = DriverManager.getConnection(url)
    val s1 = c1.createStatement()
    s1.execute("DROP TABLE IF EXISTS APP.CUSTOMER2")
    // Partitioned table with criteria-based eviction to an HDFS store.
    s1.execute("CREATE TABLE APP.CUSTOMER2 (id INT NOT NULL PRIMARY KEY, Open_data INT, High_data INT, Low_data INT) PARTITION BY PRIMARY KEY EVICTION BY CRITERIA (id < 50) EVICTION FREQUENCY 180 SECONDS HDFSSTORE (streamingstore)")
    val p1 = c1.prepareStatement("INSERT INTO APP.CUSTOMER2 VALUES (?, ?, ?, ?)")
    for (x <- 1 to 10) {
      p1.setInt(1, x * 10)
      p1.setInt(2, x)
      p1.setInt(3, x * 1000)
      p1.setInt(4, x * 1000)
      p1.addBatch()
    }
    p1.executeBatch()

    – Karthik GB Mar 10 '17 at 11:58
  • Karthik, you might try the Spark way of loading a CSV file, as found here: http://stackoverflow.com/a/39533431/3723346 – plamb Mar 10 '17 at 16:07
  • But I have to store the data in the Snappy rowstore, so I am using jdbc:snappydata://localhost:1527/ to connect the client to the Snappy data store. In this case, I need to read the data from a CSV file and insert it into the rowstore table, but the Spark API does not connect to the Snappy store to create the table. Could you suggest any solution for this case? Thank you. – Karthik GB Mar 11 '17 at 08:46
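One way to do this without the Spark API, sketched below under the same assumptions as the thread (a locator on localhost:1527 and the APP.CUSTOMER2 table already created by the code above): read the CSV with plain Scala I/O and batch-insert over the same JDBC connection. The file path and the no-header, comma-delimited layout are assumptions:

    import java.sql.DriverManager
    import scala.io.Source

    // Hypothetical CSV path; each line assumed to be id,open,high,low with no header.
    val csvPath = "/tmp/customer2.csv"

    val conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/")
    val ps = conn.prepareStatement("INSERT INTO APP.CUSTOMER2 VALUES (?, ?, ?, ?)")

    // Parse each CSV line into the four INT columns and add it to the batch.
    for (line <- Source.fromFile(csvPath).getLines()) {
      val cols = line.split(",").map(_.trim)
      ps.setInt(1, cols(0).toInt)
      ps.setInt(2, cols(1).toInt)
      ps.setInt(3, cols(2).toInt)
      ps.setInt(4, cols(3).toInt)
      ps.addBatch()
    }
    ps.executeBatch()
    conn.close()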