I need to create several DataFrames that load raw data into my context so I can build ETL logic on top of them. For that purpose, I want to load all the raw-data DataFrames in parallel. I'm using a Databricks notebook for code creation/execution on dedicated clusters.
Any pointers on how to load two DataFrames in parallel in the same Databricks notebook?
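For context, the pattern I have in mind is something like the sketch below: submitting each load from its own thread via `concurrent.futures`, since Spark jobs triggered from separate driver threads can be scheduled concurrently. Here `load_orders` and `load_customers` are hypothetical stand-ins for my actual `spark.read` calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical loader functions; in the real notebook these would be
# e.g. spark.read.parquet("/mnt/raw/orders") and similar.
def load_orders():
    return "orders_df"  # placeholder for a Spark DataFrame

def load_customers():
    return "customers_df"  # placeholder for a Spark DataFrame

# Submit both loads so they run concurrently on the driver;
# each thread would trigger its own Spark job.
with ThreadPoolExecutor(max_workers=2) as pool:
    orders_future = pool.submit(load_orders)
    customers_future = pool.submit(load_customers)
    orders_df = orders_future.result()
    customers_df = customers_future.result()
```

Is this thread-based approach the right direction on Databricks, or is there a better-supported mechanism?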