
Inspired by this question, I wrote some code to store an RDD (read from a Parquet file) with a schema of (photo_id, data) as tab-delimited pairs, base64-encoding the data along the way, like this:

import base64
import cPickle

def do_pipeline(itr):
    ...
    item_id = x.photo_id

def toTabCSVLine(data):
    return '\t'.join(str(d) for d in data)

serialize_vec_b64pkl = lambda x: (x[0], base64.b64encode(cPickle.dumps(x[1])))

def format(data):
    return toTabCSVLine(serialize_vec_b64pkl(data))

dataset = sqlContext.read.parquet('mydir')
lines = dataset.map(format)
lines.saveAsTextFile('outdir')
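
For reference, each saved line ends up as the photo_id, a tab, and then the base64 text. A rough illustration with a made-up record (the id and payload below are hypothetical):

import base64
import cPickle

record = (42, {'vector': [0.1, 0.2]})  # hypothetical (photo_id, data) pair
line = '\t'.join(str(d) for d in (record[0], base64.b64encode(cPickle.dumps(record[1]))))
# line is now something like "42\tKGRw...": the id, a tab, and the base64-encoded pickle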

So now, the point of interest: how do I read that dataset back and, for example, print its deserialized data?

I am using Python 2.6.6.


My attempt is below; just to verify that everything can be done, I wrote this code:

deserialize_vec_b64pkl = lambda x: (x[0], cPickle.loads(base64.b64decode(x[1])))

base64_dataset = sc.textFile('outdir')
collected_base64_dataset = base64_dataset.collect()
print(deserialize_vec_b64pkl(collected_base64_dataset[0].split('\t')))

which calls collect(), which is OK for testing, but in a real-world scenario it would struggle...
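
(A lighter-weight spot check that avoids pulling the whole RDD to the driver would be to take just a few records instead of collecting everything; a minimal sketch:)

# Look at only a couple of records rather than collecting the whole RDD
for line in base64_dataset.take(2):
    print(deserialize_vec_b64pkl(line.split('\t')))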


Edit:

When I tried zero323's suggestion:

foo = (base64_dataset.map(str.split).map(deserialize_vec_b64pkl)).collect()

I got this error, which boils down to this:

PythonRDD[2] at RDD at PythonRDD.scala:43
16/08/04 18:32:30 WARN TaskSetManager: Lost task 4.0 in stage 0.0 (TID 4, gsta31695.tan.ygrid.yahoo.com): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/grid/0/tmp/yarn-local/usercache/gsamaras/appcache/application_1470212406507_56888/container_e04_1470212406507_56888_01_000009/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/grid/0/tmp/yarn-local/usercache/gsamaras/appcache/application_1470212406507_56888/container_e04_1470212406507_56888_01_000009/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/grid/0/tmp/yarn-local/usercache/gsamaras/appcache/application_1470212406507_56888/container_e04_1470212406507_56888_01_000009/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
UnpicklingError: NEWOBJ class argument has NULL tp_new

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

16/08/04 18:32:30 ERROR TaskSetManager: Task 12 in stage 0.0 failed 4 times; aborting job
16/08/04 18:32:31 WARN TaskSetManager: Lost task 14.3 in stage 0.0 (TID 38, gsta31695.tan.ygrid.yahoo.com): TaskKilled (killed intentionally)
16/08/04 18:32:31 WARN TaskSetManager: Lost task 13.3 in stage 0.0 (TID 39, gsta31695.tan.ygrid.yahoo.com): TaskKilled (killed intentionally)
16/08/04 18:32:31 WARN TaskSetManager: Lost task 16.3 in stage 0.0 (TID 42, gsta31695.tan.ygrid.yahoo.com): TaskKilled (killed intentionally)
---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
/homes/gsamaras/code/read_and_print.py in <module>()
     17     print(base64_dataset.map(str.split).map(deserialize_vec_b64pkl))
     18 
---> 19     foo = (base64_dataset.map(str.split).map(deserialize_vec_b64pkl)).collect()
     20     print(foo)

/home/gs/spark/current/python/lib/pyspark.zip/pyspark/rdd.py in collect(self)
    769         """
    770         with SCCallSiteSync(self.context) as css:
--> 771             port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    772         return list(_load_from_socket(port, self._jrdd_deserializer))
    773 

/home/gs/spark/current/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
    811         answer = self.gateway_client.send_command(command)
    812         return_value = get_return_value(
--> 813             answer, self.gateway_client, self.target_id, self.name)
    814 
    815         for temp_arg in temp_args:

/home/gs/spark/current/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    306                 raise Py4JJavaError(
    307                     "An error occurred while calling {0}{1}{2}.\n".
--> 308                     format(target_id, ".", name), value)
    309             else:
    310                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
  • Why not `base64_dataset.map(str.split).map(deserialize_vec_b64pkl)`? – zero323 Aug 04 '16 at 07:54
  • @zero323 I didn't know that we could use `str.split`, I am still new to this, so please bear with me; I am pretty sure that if someone explains it I will be able to get along afterwards. So what you are proposing should result in an RDD. Just to be sure that everything works, how can I view the first element? I tried to `collect()` what you said, but that resulted in an error (`Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.`). Maybe it would help if I understood the data layout of that resulting RDD. – gsamaras Aug 04 '16 at 18:04
  • @zero323 I am using Python 2, it would be enough to cover that, I mean from there I can get to Python 3, if needed! – gsamaras Aug 04 '16 at 19:10
  • 2.x should work as well. I posted the answer with a [mcve]. I hope it helps. – zero323 Aug 04 '16 at 19:18

1 Answer


Let's try a simple example. For convenience I'll be using the handy toolz library, but it is not really required here.

import sys
import base64

if sys.version_info < (3, ):
    import cPickle as pickle
else:
    import pickle


from toolz.functoolz import compose

rdd = sc.parallelize([(1, {"foo": "bar"}), (2, {"bar": "foo"})])

Now, your code is not exactly portable right now. In Python 2, base64.b64encode returns str, while in Python 3 it returns bytes. Let's illustrate that:

  • Python 2

    type(base64.b64encode(pickle.dumps({"foo": "bar"})))
    ## str
    
  • Python 3

    type(base64.b64encode(pickle.dumps({"foo": "bar"})))
    ## bytes
    

So let's add decoding to the pipeline:

# Equivalent to 
# def pickle_and_b64(x):
#     return base64.b64encode(pickle.dumps(x)).decode("ascii")

pickle_and_b64 = compose(
    lambda x: x.decode("ascii"),
    base64.b64encode,
    pickle.dumps
)
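
A quick sanity check with a throwaway value shows the result is plain text in both versions (unicode on Python 2, str on Python 3), so it can safely be written out as a text line:

encoded = pickle_and_b64({"foo": "bar"})  # throwaway example value
print(type(encoded))  # unicode on Python 2, str on Python 3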

Please note that this doesn't assume any particular shape of the data. Because of that, we'll use mapValues to serialize only the values:

serialized = rdd.mapValues(pickle_and_b64)
serialized.first()
## (1, u'KGRwMApTJ2ZvbycKcDEKUydiYXInCnAyCnMu')

Now we can follow it with a simple format step and save:

from tempfile import mkdtemp
import os

outdir = os.path.join(mkdtemp(), "foo")

serialized.map(lambda x: "{0}\t{1}".format(*x)).saveAsTextFile(outdir)
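
To eyeball what actually landed on disk, the first saved line can be read straight back (just a spot check):

# The first saved line should be the key, a tab, and the base64 payload
print(sc.textFile(outdir).first())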

To read the file we reverse the process:

# Equivalent to
# def b64_and_unpickle(x):
#     return pickle.loads(base64.b64decode(x))

b64_and_unpickle = compose(
    pickle.loads,
    base64.b64decode
)

decoded = (sc.textFile(outdir)
    .map(lambda x: x.split("\t"))  # In Python 3 we could simply use str.split
    .mapValues(b64_and_unpickle))

decoded.first()
## (u'1', {'foo': 'bar'})
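
Note that the key comes back as the text u'1' rather than the original integer 1, since everything went through a plain text file. If the original key type matters, a cast can be added on the way back (a small sketch assuming integer keys):

# Restore integer keys that were written out as text
decoded_typed = decoded.map(lambda kv: (int(kv[0]), kv[1]))
decoded_typed.first()
## (1, {'foo': 'bar'})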
  • Also, if you're on Python 2.x: a) `str.split` may not work, use a complete function instead; b) for testing, `pickle` is slightly more verbose when providing error messages. – zero323 Aug 04 '16 at 19:44
  • 2.6?!! Haven't seen this one for a while :) I don't even have an environment I can use to test it. Not to mention Spark dropped 2.6 support in the latest release and the branch reached its end-of-life quite a few years ago. Regarding toolz - no particular reason other than convenience. I am spoiled and find nesting function calls tedious. I added full-featured functions. – zero323 Aug 04 '16 at 20:00
  • Oh, I should have written a function, silly of me, sorry! All good now, I will debug my code, thanks! – gsamaras Aug 05 '16 at 00:03