animals_population_file = sc.textFile("input/myFile1.txt")
animals_place_file = sc.textFile("input/myFile2.txt")
animals_population_file:
Dogs, 5
Cats, 6
animals_place_file:
Dogs, Italy
Cats, Italy
Dogs, Spain
Now I want to join animals_population_file and animals_place_file, using the animal type as the key. The result should be:
Dogs, [Italy, Spain, 5]
Cats, [Italy, 6]
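
I suppose I first have to turn each line into a (key, value) pair, maybe with something like this (split_line is just my own guess at a helper, I'm not sure it's right):

def split_line(line):
    # "Dogs, 5" -> ("Dogs", "5"): split on the comma and strip spaces
    key, value = line.split(",")
    return (key.strip(), value.strip())

population = animals_population_file.map(split_line)
places = animals_place_file.map(split_line)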
I tried joined = animals_population_file.join(animals_place_file) on the original RDDs, but I don't know how to define the key. Also, when I run joined.collect(), I get this error:
298 raise Py4JJavaError(
299 'An error occurred while calling {0}{1}{2}.\n'.
--> 300 format(target_id, '.', name), value)
301 else:
302 raise Py4JError(
Py4JJavaError: An error occurred while calling o247.collect.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 21.0 failed 1 times, most recent failure: Lost task 0.0 in stage 21.0 (TID 29, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/lib/spark/python/pyspark/worker.py", line 101, in main
process()
File "/usr/lib/spark/python/pyspark/worker.py", line 96, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/lib/spark/python/pyspark/serializers.py", line 236, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/usr/lib/spark/python/pyspark/rdd.py", line 1807, in <lambda>
map_values_fn = lambda (k, v): (k, f(v))
ValueError: too many values to unpack
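
I guess the ValueError means that join expects the RDD elements to be (key, value) pairs and mine are still plain strings, but I'm not sure. How should I define the key so that the join produces the result above?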