0

Is there a way to process different Spark SQL queries (read queries with different filters and group-bys) on a static dataset, received from the front-end, in parallel rather than in a FIFO manner, so that users do not have to wait in a queue?

One way is to submit the queries from different threads of a thread pool, but wouldn't concurrent threads then compete for the same resources, i.e. the RDDs? Source

Is there a more efficient way to achieve this using Spark or any other big data framework? Currently I'm using Spark SQL, and the data is stored in parquet format (200 GB).
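The thread-pool approach mentioned above can be sketched with plain Python. Here `run_query` is a hypothetical stand-in for a call into a shared `SparkSession` (e.g. `spark.sql(query).collect()`); concurrent read-only queries against the same cached dataset do not conflict, since nothing mutates the data:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for spark.sql(query).collect(); in a real
# application this would call into one shared SparkSession.
def run_query(query):
    return f"result of: {query}"

queries = [
    "SELECT category, COUNT(*) FROM events GROUP BY category",
    "SELECT * FROM events WHERE user_id = 42",
    "SELECT AVG(price) FROM events WHERE category = 'books'",
]

# Submit all queries at once; they run concurrently instead of FIFO,
# so one long-running query does not block the others.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_query, q) for q in queries]
    results = [f.result() for f in futures]

for r in results:
    print(r)
```

Note that by default Spark still schedules the resulting jobs FIFO within the application; the Fair Scheduler (see the answer below's link to the job-scheduling docs) is what lets concurrent jobs share executors evenly.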

Divya Bansal
  • 85
  • 1
  • 8

1 Answer

0

I assume you mean different users submitting their own programs or spark-shell activities, and not parallelism within the same application per se.

That being so, Fair Scheduler Pools or Spark Dynamic Resource Allocation would be the best bets. Both are described here: https://spark.apache.org/docs/latest/job-scheduling.html
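As a sketch of the Fair Scheduler route: pools are declared in an allocation file (the pool name below is illustrative), following the format shown in the job-scheduling docs linked above:

```xml
<?xml version="1.0"?>
<!-- fairscheduler.xml: an example pool for interactive user queries -->
<allocations>
  <pool name="interactive">
    <schedulingMode>FAIR</schedulingMode>
    <weight>2</weight>
    <minShare>1</minShare>
  </pool>
</allocations>
```

You then enable it with `spark.scheduler.mode=FAIR` and point `spark.scheduler.allocation.file` at this file; within the application, a thread assigns its jobs to a pool via `sc.setLocalProperty("spark.scheduler.pool", "interactive")`.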

This area is somewhat hard to follow, as the documentation notes the following:

... "Note that none of the modes currently provide memory sharing across applications. If you would like to share data this way, we recommend running a single server application that can serve multiple requests by querying the same RDDs."

One can find opposing statements on Stack Overflow regarding this point. Apache Ignite is the kind of system meant here; it may well serve you as well.
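The "single server application" pattern the quote recommends can be illustrated with stdlib Python. The in-memory list below is a stand-in for a dataset loaded and cached once from parquet (in Spark: `spark.read.parquet(...).cache()`); each request thread only reads it, so the filter and group-by queries can run concurrently without locking:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a dataset loaded and cached once at startup
# (in Spark: spark.read.parquet(...).cache()).
DATA = [
    {"category": "books", "price": 10},
    {"category": "books", "price": 30},
    {"category": "music", "price": 20},
]

# A group-by style query: count rows per category.
def count_by_category(rows):
    return Counter(r["category"] for r in rows)

# A filter + aggregate style query: sum prices for one category.
def total_price(rows, category):
    return sum(r["price"] for r in rows if r["category"] == category)

# Two concurrent read-only requests against the same shared dataset.
with ThreadPoolExecutor(max_workers=2) as pool:
    counts = pool.submit(count_by_category, DATA)
    books = pool.submit(total_price, DATA, "books")
    print(counts.result())
    print(books.result())  # 40
```

Thrift Server (Spark's JDBC/ODBC server) is the stock example of such a long-running server application; Apache Ignite keeps the shared data outside any single Spark application.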

thebluephantom
  • 16,458
  • 8
  • 40
  • 83