Is there a way to process different Spark SQL queries (read queries with different filters and group-bys), received from the front end, against a static dataset in parallel rather than in FIFO order, so that users don't have to wait in a queue?
One way is to submit the queries from different threads of a thread pool (roughly along the lines of the sketch below), but wouldn't concurrent threads then compete for the same resources, i.e. the underlying RDDs? (Source)
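To make the thread-pool idea concrete, this is roughly what I have in mind: a minimal Scala sketch assuming a single shared SparkSession, the FAIR scheduler, and placeholder paths/queries (the table name, file path, and SQL strings are just examples, not my real workload):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration
import org.apache.spark.sql.SparkSession

object ConcurrentQueries {
  def main(args: Array[String]): Unit = {
    // One shared session; FAIR scheduling so jobs submitted from different
    // threads share executors instead of running strictly FIFO.
    val spark = SparkSession.builder()
      .appName("concurrent-sql")
      .config("spark.scheduler.mode", "FAIR")
      .getOrCreate()

    // Static dataset: read once, register as a view, and cache it so every
    // query reuses the same in-memory data.
    val data = spark.read.parquet("/path/to/data.parquet") // placeholder path
    data.createOrReplaceTempView("events")
    spark.catalog.cacheTable("events")

    // Hypothetical queries arriving from the front end.
    val queries = Seq(
      "SELECT country, count(*) FROM events GROUP BY country",
      "SELECT device, avg(duration) FROM events WHERE country = 'US' GROUP BY device"
    )

    // Run each query on its own thread; Spark jobs started from separate
    // threads of the same SparkSession can execute concurrently.
    implicit val ec: ExecutionContext =
      ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(8))

    val results = queries.map { q =>
      Future {
        spark.sql(q).collect() // or stream the result back per request
      }
    }

    Await.result(Future.sequence(results), Duration.Inf)
      .foreach(rows => println(s"rows returned: ${rows.length}"))

    spark.stop()
  }
}
```

My worry is whether queries submitted this way genuinely run in parallel, or whether they still end up contending for the same cached data and executor cores.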
Is there a more efficient way to achieve this with Spark or any other big-data framework? Currently I'm using Spark SQL, and the data is stored in Parquet format (200 GB).