I'm running a Spark cluster in standalone mode. Both the master and the worker nodes are reachable, and their logs show up in the Spark Web UI.
I'm trying to load data into a PySpark session so I can work on Spark DataFrames.
Following several examples (among them, one from the official documentation), I tried different methods, all of which fail with the same error. For example:
from pyspark.conf import SparkConf
from pyspark.context import SparkContext
from pyspark.sql import SparkSession, SQLContext

# Point the application at the standalone master
conf = SparkConf().setAppName('NAME').setMaster('spark://HOST:7077')
sc = SparkContext(conf=conf)
spark = SparkSession.builder.getOrCreate()

# One attempt: the DataFrameReader on the SparkSession
df = spark.read.load('/path/to/file.csv', format='csv', sep=',', header=True)

# Another attempt: the older SQLContext API
sql_ctx = SQLContext(sc)
df = sql_ctx.read.csv('/path/to/file.csv', header=True)

# ... and a few other tries
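For reference, here is a condensed, self-contained version of the first attempt in one piece (the app name, master URL and file path are placeholders for my real values):

from pyspark.sql import SparkSession

# Build the session directly against the standalone master
spark = (SparkSession.builder
         .appName('NAME')
         .master('spark://HOST:7077')
         .getOrCreate())

# This read is the call that raises the error below
df = spark.read.csv('/path/to/file.csv', sep=',', header=True)
df.show(5)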
Every time, I get the same error:
Py4JJavaError: An error occurred while calling o81.csv.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 192.168.X.X, executor 0):
java.io.StreamCorruptedException: invalid stream header: 0000000B
I'm loading data from both JSON and CSV (tweaking the method calls appropriately, of course), and the error is the same for both, every time.
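For JSON the session is set up the same way and only the reader call changes, roughly like this (the .json path is a placeholder, like the CSV one above):

# Same session as above; only the reader method changes for JSON input
df_json = spark.read.json('/path/to/file.json')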
Does anyone understand what the problem is?