I have a cluster of 4 nodes with Spark already installed. I use pyspark or spark-shell to launch Spark and start programming.
I know how to use Zeppelin, but I would like to use Jupyter instead as my programming interface (IDE) because I find it more useful.
I read that I should export these two variables in my .bashrc to make it work:
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
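From what I understand, after adding those lines the next steps would be something like the following (assuming pyspark is on my PATH and Jupyter is installed for the same Python that the driver uses):

# Reload the shell configuration so the new variables take effect
source ~/.bashrc

# With PYSPARK_DRIVER_PYTHON set to jupyter, this should open a
# Jupyter Notebook server instead of the usual interactive shell
pyspark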
Is that all there is to it? How can I use PySpark with Jupyter?