I have a dataframe with 10609 rows, and I want to convert 100 rows at a time to JSON and send each batch to a web service.
I have tried using SQL's LIMIT clause, like
temptable = spark.sql("select item_code_1 from join_table limit 100")
This returns the first 100 rows, but when I tried the MySQL-style offset syntax to get the next 100 rows, it failed:
temptable = spark.sql("select item_code_1 from join_table limit 100, 200")
Error: Py4JJavaError: An error occurred while calling o22.sql. : org.apache.spark.sql.catalyst.parser.ParseException: mismatched input ',' expecting (line 1, pos 44)
== SQL ==
select item_code_1 from join_table limit 100, 200
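For context, the batching-and-serialization part I'm after looks roughly like the sketch below. It uses a plain Python list of dicts as a hypothetical stand-in for the rows collected from `join_table` (the names `rows` and `batches` are mine, not from Spark); the open question is how to page through the Spark table 100 rows at a time, since `LIMIT offset, count` isn't accepted.

```python
import json

# Hypothetical stand-in for the 10609 rows of item_code_1 pulled from
# join_table; in the real job these would come from the Spark dataframe.
rows = [{"item_code_1": f"code-{i}"} for i in range(10609)]

BATCH_SIZE = 100

def batches(seq, size):
    """Yield successive chunks of at most `size` rows."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

# One JSON payload per batch, ready to POST to the web service.
payloads = [json.dumps(chunk) for chunk in batches(rows, BATCH_SIZE)]

print(len(payloads))                  # 107 batches: 106 full, plus a final 9-row batch
print(len(json.loads(payloads[0])))   # 100
print(len(json.loads(payloads[-1])))  # 9
```

With 10609 rows this produces 106 full batches and one trailing batch of 9 rows; the missing piece is the Spark-side equivalent of the offset step.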