SparkContext:
Since Spark 1.x, SparkContext has been the entry point to Spark. It is defined in the org.apache.spark package and is used to programmatically create Spark RDDs, accumulators, and broadcast variables on the cluster. Its object, sc, is the default variable available in spark-shell, and it can also be created programmatically using the SparkContext class.
SparkContext is a client of Spark's execution environment.
SparkContext is the entry point for a Spark execution job.
SparkContext acts as the master of the Spark application.
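A minimal sketch of creating a SparkContext programmatically in Scala (the app name "SparkContextExample" and the local[*] master are illustrative choices, not requirements):

import org.apache.spark.{SparkConf, SparkContext}

// Configure and create the context; local[*] runs Spark locally on all cores.
val conf = new SparkConf()
  .setAppName("SparkContextExample")
  .setMaster("local[*]")
val sc = new SparkContext(conf)

// The context creates RDDs, accumulators, and broadcast variables.
val rdd = sc.parallelize(Seq(1, 2, 3, 4, 5))
val acc = sc.longAccumulator("sum")         // Spark 2.x accumulator API
val lookup = sc.broadcast(Map("a" -> 1))    // read-only shared variable

rdd.foreach(x => acc.add(x))
println(acc.value)                          // 15

sc.stop()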
SparkSession:
Since Spark 2.0, SparkSession has been the entry point to Spark for working with RDDs, DataFrames, and Datasets. Prior to 2.0, SparkContext was the entry point. Here, I will focus on explaining what SparkSession is, how to create one, and how to use the default ‘spark’ variable from spark-shell.
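In spark-shell, no setup is needed; the session is pre-created as ‘spark’ (and the context as ‘sc’), for example:

scala> spark.version
scala> spark.range(5).show()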
From Apache Spark 2.0 onwards, SparkSession is the new entry point for Spark applications.
All the functionality provided by SparkContext is available in SparkSession.
SparkSession provides APIs to work with Datasets and DataFrames.
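As a minimal sketch of creating a SparkSession in application code (the app name and the sample rows are illustrative):

import org.apache.spark.sql.SparkSession

// getOrCreate returns the existing session or builds a new one.
val spark = SparkSession.builder()
  .appName("SparkSessionExample")
  .master("local[*]")
  .getOrCreate()

// SparkContext functionality is still reachable through the session.
val sc = spark.sparkContext

// Dataset and DataFrame APIs come directly from the session.
import spark.implicits._
val df = Seq(("Alice", 30), ("Bob", 25)).toDF("name", "age")
df.show()

spark.stop()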
Prior to Spark 2.0:
SparkContext was the entry point for Spark jobs.
RDD was one of the main APIs then, and it was created and manipulated using SparkContext.
Every other API required a different context: for SQL, an SQLContext was required.
SQLContext:
In Spark 1.x, SQLContext (org.apache.spark.sql.SQLContext) is the entry point to SQL for working with structured data (rows and columns); with 2.0, SQLContext has been replaced by SparkSession.
Apache Spark SQLContext is the entry point to Spark SQL, the Spark 1.x module for structured data (rows and columns) processing.
An SQLContext is initialized from, and can be obtained through, an existing SparkContext.
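A Spark 1.x-style sketch (in 2.0+ this construction is deprecated in favor of SparkSession); the JSON path is a hypothetical placeholder:

import org.apache.spark.sql.SQLContext

// Build an SQLContext from an existing SparkContext (sc).
val sqlContext = new SQLContext(sc)

// Read structured data and query it with SQL.
val df = sqlContext.read.json("path/to/people.json")  // hypothetical path
df.registerTempTable("people")   // Spark 1.x API (createOrReplaceTempView in 2.x)
sqlContext.sql("SELECT * FROM people").show()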
JavaSparkContext:
JavaSparkContext does for JavaRDDs what SparkContext does for RDDs, but in a Java implementation.
JavaSparkContext is a Java-friendly version of org.apache.spark.SparkContext that returns org.apache.spark.api.java.JavaRDDs and works with Java collections instead of Scala ones.
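A minimal sketch, shown from Scala for consistency with the examples above, of wrapping an existing SparkContext in a JavaSparkContext and parallelizing a Java collection into a JavaRDD:

import java.util.Arrays
import org.apache.spark.api.java.JavaSparkContext

// Wrap the existing SparkContext (sc) in the Java-friendly API.
val jsc = new JavaSparkContext(sc)

// parallelize accepts a java.util.List and returns a JavaRDD.
val javaRdd = jsc.parallelize(Arrays.asList(1, 2, 3, 4))
println(javaRdd.count())  // 4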