When I run the following code in a Python script (invoking it with python directly), I get the error below. But when I start an interactive pyspark session, import koalas, create the DataFrame, and call head(), it runs fine and prints the expected output.
Is there a specific way the SparkSession needs to be set up for koalas to work?
from pyspark.sql import SparkSession
import pandas as pd
import databricks.koalas as ks
spark = SparkSession.builder \
    .master("local[*]") \
    .appName("Pycedro Spark Application") \
    .getOrCreate()

kdf = ks.DataFrame({"a": [4, 5, 6],
                    "b": [7, 8, 9],
                    "c": [10, 11, 12]})
print(kdf.head())
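For comparison, this is roughly what I do in the interactive pyspark shell, where it works (the shell already provides a SparkSession as spark):

>>> import databricks.koalas as ks
>>> kdf = ks.DataFrame({"a": [4, 5, 6], "b": [7, 8, 9], "c": [10, 11, 12]})
>>> kdf.head()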
Error when running it as a Python script:
File "/usr/local/Cellar/apache-spark/3.1.1/libexec/python/lib/pyspark.zip/pyspark/worker.py", line 586, in main
func, profiler, deserializer, serializer = read_command(pickleSer, infile)
File "/usr/local/Cellar/apache-spark/3.1.1/libexec/python/lib/pyspark.zip/pyspark/worker.py", line 69, in read_command
command = serializer._read_with_length(file)
File "/usr/local/Cellar/apache-spark/3.1.1/libexec/python/lib/pyspark.zip/pyspark/serializers.py", line 160, in _read_with_length
return self.loads(obj)
File "/usr/local/Cellar/apache-spark/3.1.1/libexec/python/lib/pyspark.zip/pyspark/serializers.py", line 430, in loads
return pickle.loads(obj, encoding=encoding)
AttributeError: Can't get attribute '_fill_function' on <module 'pyspark.cloudpickle' from '/usr/local/Cellar/apache-spark/3.1.1/libexec/python/lib/pyspark.zip/pyspark/cloudpickle/__init__.py'>
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:517)
[...]
Versions: koalas 1.7.0, pyspark 3.0.2
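Possibly relevant: the traceback references the Homebrew Spark install at 3.1.1, while the pyspark package reports 3.0.2. A minimal check I ran to compare the two (assuming SPARK_HOME is the relevant environment variable; it may be unset):

import os
import pyspark

print(pyspark.__version__)            # 3.0.2 here
print(os.environ.get("SPARK_HOME"))   # the Homebrew Spark lives under /usr/local/Cellar/apache-spark/3.1.1/libexec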