I am super new to Spark. This error occurred when I was trying to collect results from RDD_new after passing a top-level external function into RDD_old.reduceByKey.

Firstly, I defined a treeStruct:

class treeStruct(object):
    def __init__(self, node, edge):
        self.node = node    # dictionary of nodes
        self.edge = edge    # dictionary of edges

After that, I converted two treeStructs into an RDD with sc.parallelize:

RDD = sc.parallelize([treeStruct1,treeStruct2])
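For completeness, the surrounding setup is roughly the following; the SparkContext arguments and the dictionary contents here are placeholders, not my actual data:

from pyspark import SparkContext

sc = SparkContext("local", "pkltreeSpark")

# placeholder node/edge dictionaries; the real contents don't matter for the error
treeStruct1 = treeStruct({"n1": 1}, {"e1": (1, 2)})
treeStruct2 = treeStruct({"n2": 2}, {"e2": (2, 3)})

RDD = sc.parallelize([treeStruct1, treeStruct2])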

Then, I passed a top-level function, defined outside the driver code, into reduceByKey. The function contains several for loops, something like:

def func(tree1, tree2):
    if some_condition(tree1, tree2):    # a condition on certain attributes
        for dummy in tree1.node:
            # do something to the attributes
            pass
    if other_condition(tree1, tree2):   # another condition on certain attributes
        for dummy2 in tree2.edge:
            # do something to the attributes
            pass
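The reduce-and-collect step itself, reconstructed from the traceback below (the variable names are real, the rest of the driver code is omitted), is essentially:

matchingOutcome = RDD.reduceByKey(func)
tmp = matchingOutcome.collect()    # line 196 in the traceback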

And when I tried to collect the outcome, this error occurred:

Driver stacktrace:
17/03/07 13:38:37 INFO DAGScheduler: Job 0 failed: collect at /mnt/hgfs/VMshare/ditto-dev/pkltreeSpark_RDD.py:196, took 3.088593 s
Traceback (most recent call last):
  File "/mnt/hgfs/VMshare/pkltreeSpark_RDD.py", line 205, in <module>
startTesting(1,1)
  File "/mnt/hgfs/VMshare/pkltreeSpark_RDD.py", line 196, in startTesting
tmp = matchingOutcome.collect()
  File "/usr/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 809, in collect
  File "/usr/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/usr/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/spark/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main
process()
  File "/usr/spark/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2407, in pipeline_func
  File "/usr/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 346, in func
  File "/usr/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1828, in combineLocally
  File "/usr/spark/python/lib/pyspark.zip/pyspark/shuffle.py", line 236, in mergeValues
    for k, v in iterator:
TypeError: 'treeStruct' object is not iterable

I'm confused. Does this mean that I shouldn't use for loops inside the function? Or that I shouldn't construct my object the way I did?

Also, this error seems to be about how to iterate over certain attributes of an RDD, not about key-value pairs.

Any help would be great!

1 Answer

I finally came to understand that this problem was introduced by my class definition. reduceByKey expects to iterate over each element as a key-value pair (the worker does "for k, v in iterator", as the traceback shows), but my treeStruct doesn't define an iterator, so it is not iterable. The problem can be addressed by adding an iterator to the class.

class treeStruct(object):
    def __init__(self, node, edge):
        self.node = node    # dictionary of nodes
        self.edge = edge    # dictionary of edges

    # add an iterator that yields the node dictionary, then the edge dictionary
    def __iter__(self):
        for x in [self.node, self.edge]:
            yield x
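With __iter__ in place, each treeStruct unpacks into exactly two items, which is what the worker's "for k, v in iterator" line needs. A quick check, using placeholder dictionaries:

t = treeStruct({"n1": 1}, {"e1": (1, 2)})
k, v = t       # unpacking now works: k is the node dict, v is the edge dict
print(k)       # {'n1': 1}
print(v)       # {'e1': (1, 2)}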

Anyway, thank y'all for your help! :)