
I am using PySpark and am trying to run it locally on my desktop. I import the libraries as follows:

from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark import SparkContext
import pandas as pd

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)

Read in the CSV:

df = sqlContext.read.load('D:/Databases/Datasets/file.csv', 
                      header='true', 
                      inferSchema='true') 

The output when the read runs is:

Caused by: java.io.IOException: Could not read footer for file: FileStatus{path=file:/D:/Databases/Datasets/file.csv; isDirectory=false; length=3293305101; replication=0; blocksize=0; modification_time=0; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false}
    at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$readParquetFootersInParallel$1.apply(ParquetFileFormat.scala:526)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$readParquetFootersInParallel$1.apply(ParquetFileFormat.scala:513)
    at scala.collection.parallel.AugmentedIterableIterator$class.flatmap2combiner(RemainsIterator.scala:132)
    at scala.collection.parallel.immutable.ParVector$ParVectorIterator.flatmap2combiner(ParVector.scala:62)
    at scala.collection.parallel.ParIterableLike$FlatMap.leaf(ParIterableLike.scala:1072)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
    at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
    at scala.collection.parallel.ParIterableLike$FlatMap.tryLeaf(ParIterableLike.scala:1068)
    at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
    at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
    at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinTask.doJoin(ForkJoinTask.java:341)
    at scala.concurrent.forkjoin.ForkJoinTask.join(ForkJoinTask.java:673)
    at scala.collection.parallel.ForkJoinTasks$WrappedTask$class.sync(Tasks.scala:378)
    at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:443)
    at scala.collection.parallel.ForkJoinTasks$class.executeAndWaitResult(Tasks.scala:426)
    at scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:56)
    at scala.collection.parallel.ParIterableLike$ResultMapping.leaf(ParIterableLike.scala:958)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
    at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
    at scala.collection.parallel.ParIterableLike$ResultMapping.tryLeaf(ParIterableLike.scala:953)
    at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
    at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
    at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: file:/D:/Databases/Datasets/file.csv is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [0, 0, 0, 0]
    at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:476)
    at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:445)
    at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:421)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$readParquetFootersInParallel$1.apply(ParquetFileFormat.scala:519)

There seems to be a problem when reading the file into the Spark DataFrame. When I access the DataFrame as follows:

df.show(2, truncate=True)

The error is the following:

NameError                                 Traceback (most recent call last)
<ipython-input-20-c8f1d4ce926c> in <module>()
----> 1 df.show(2,truncate= True)

NameError: name 'df' is not defined

Is there a step I am missing when reading a local file into a PySpark DataFrame?

Sade
  • Missing `format` argument or option. – zero323 Oct 04 '18 at 11:18
  • I have tried most of the examples in https://stackoverflow.com/questions/28782940/load-csv-file-with-spark and it throws an error: java.lang.OutOfMemoryError: Java heap space – Sade Oct 04 '18 at 13:50
  • I solved it by using this : conf = SparkConf().setAppName("App") conf = (conf.setMaster('local[*]') .set('spark.executor.memory', '4G') .set('spark.driver.memory', '45G') .set('spark.driver.maxResultSize', '10G')) sc = SparkContext(conf=conf) – Sade Oct 04 '18 at 14:00
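Pulling the two comments above together: calling `load` without a `format` option makes Spark fall back to its default data source (Parquet), which is why the "is not a Parquet file" footer error appears for a plain CSV; and because that read throws, `df` is never assigned, which explains the later `NameError`. Below is a minimal sketch, assuming Spark 2.x and reusing the path and memory values quoted in the comments (they are illustrations, not recommendations):

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

# Memory settings taken from the comment above; adjust to your machine.
conf = (SparkConf()
        .setAppName("App")
        .setMaster('local[*]')
        .set('spark.executor.memory', '4G')
        .set('spark.driver.memory', '45G')
        .set('spark.driver.maxResultSize', '10G'))

sc = SparkContext.getOrCreate(conf=conf)
sqlContext = SQLContext(sc)

# Explicit format so Spark does not fall back to its default source (Parquet).
df = sqlContext.read.load('D:/Databases/Datasets/file.csv',
                          format='csv',
                          header='true',
                          inferSchema='true')

# Equivalent shortcut:
# df = sqlContext.read.csv('D:/Databases/Datasets/file.csv',
#                          header=True, inferSchema=True)

df.show(2, truncate=True)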
