
I'm getting this error:

pickle.PicklingError: Pickling client objects is explicitly not supported. Clients have non-trivial state that is local and unpickleable.

when trying to use beam.ParDo to call a DoFn that looks like this:

import apache_beam as beam
from google.cloud import storage

class ExtractBlobs(beam.DoFn):
    def start_bundle(self):
        self.storageClient = storage.Client()

    def process(self, element):
        client = self.storageClient
        bucket = client.get_bucket(element)
        blobs = list(bucket.list_blobs(max_results=100))
        return blobs

I thought the whole point of start_bundle was to initialize self.someProperty there and then use that self.someProperty in the process method, precisely to avoid this pickling problem (per the sources below). Could anyone point me in the right direction on how to solve this?

[+] What I've read:

https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3191

How do I resolve a Pickling Error on class apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum?
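The failure mode in question can be reproduced without Beam or GCS at all. The sketch below (my own illustration, with a hypothetical FakeClient standing in for storage.Client) shows why an attribute created in __init__ gets pickled along with the DoFn, while an attribute created lazily on the worker does not:

```python
import pickle
import threading


class FakeClient:
    """Hypothetical stand-in for storage.Client: holds an unpicklable handle."""
    def __init__(self):
        self._lock = threading.Lock()  # locks cannot be pickled


class EagerDoFn:
    def __init__(self):
        # Created at construction time, so it travels with the pickled DoFn.
        self.client = FakeClient()


class LazyDoFn:
    def __init__(self):
        self.client = None  # nothing unpicklable is stored yet

    def process(self, element):
        if self.client is None:  # created on the worker, after unpickling
            self.client = FakeClient()
        return element


# Pickling the eager DoFn fails; the lazy one round-trips fine.
try:
    pickle.dumps(EagerDoFn())
    eager_ok = True
except (TypeError, pickle.PicklingError):
    eager_ok = False

lazy_ok = pickle.loads(pickle.dumps(LazyDoFn())) is not None
```

This is the mechanic that start_bundle (which runs on the worker, after deserialization) is meant to exploit.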


1 Answer


UPDATED: The issue was actually a dependency problem. I had to pin the apache-beam SDK to versions compatible with the google-cloud libraries:

gapic-google-cloud-pubsub-v1==0.15.4

gax-google-logging-v2==0.8.3

gax-google-pubsub-v1==0.8.3

google-api-core==1.1.2

google-api-python-client==1.6.7

google-apitools==0.5.10

google-auth==1.4.1

google-auth-httplib2==0.0.3

google-cloud-bigquery==1.1.0

google-cloud-core==0.28.1

google-cloud-datastore==1.6.0

google-cloud-pubsub==0.26.0

google-cloud-storage==1.10.0

google-gax==0.12.5

apache-beam==2.3.0

Was able to solve this with what seems to be a combination of things: first, I don't serialize anything (the ugly one-liner in the yield), and second, using threading.local():

import threading

import apache_beam as beam
from google.cloud import storage

class ExtractBlobs(beam.DoFn):
    def start_bundle(self):
        # Thread-local storage keeps the client out of the pickled DoFn state.
        self.threadLocal = threading.local()
        self.threadLocal.client = storage.Client()

    def process(self, element):
        # Reuse the per-thread client rather than storing it directly on self.
        yield list(self.threadLocal.client.get_bucket(element).list_blobs(max_results=100))