I'm using `S3.Client.upload_fileobj()` with a `BytesIO` stream as input to upload a file to S3 from a stream. My function should not return before the upload is finished, so I need a way to wait for it.
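For reference, here's roughly what I'm doing; the bucket name, key, and payload are placeholders:

```python
import boto3
from io import BytesIO

s3 = boto3.client("s3")
buf = BytesIO(b"example payload")  # placeholder data

# Does this call return before the upload has actually finished?
s3.upload_fileobj(buf, "my-bucket", "my-key")
```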
From the documentation there is no obvious way to wait for the transfer to finish, but there are some hints of what could work:
- Use the `Callback` argument to wait until progress is at 100%. In JavaScript this would be trivial using callbacks or promises, but in Python I'm not so sure (see the sketch after this list).
- Use a `S3.Waiter` object that checks whether the object exists. But it does so by polling every 5 seconds and seems very inefficient. Also, I'm not sure it would wait until the object is complete.
- There's a class `S3.MultipartUpload` with a `.complete()` method, but I doubt that does what I want.
- Do a loop that checks whether the object is completely uploaded and, if not, sleeps for a bit. But how do I check if the object is complete?
I've been googling, but it seems nobody is asking the same question. Also, most results about related issues use a different API (I believe `upload_fileobj()` is rather new).
EDIT
I found out about `S3.Client.put_object()`, which also accepts a file-like object and blocks until the server has responded. But would that work in combination with streams? I'm not sure how Python multithreading works here. The stream originally comes from a `S3.Client.download_fileobj()`, gets piped through a `subprocess.Popen()`, and is then supposed to be uploaded back to S3. Both the download and the subprocess run in parallel threads/processes as far as I can tell.
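For context, here's a minimal sketch of the pipeline I'm describing; `gzip -c` stands in for my actual filter, and the bucket names and keys are placeholders:

```python
import subprocess
import threading

import boto3

s3 = boto3.client("s3")

# Stand-in filter; the real subprocess is different.
proc = subprocess.Popen(
    ["gzip", "-c"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)

def feed():
    # Stream the source object into the subprocess, then close stdin
    # so the filter sees EOF and flushes the rest of its output.
    s3.download_fileobj("my-bucket", "input-key", proc.stdin)
    proc.stdin.close()

feeder = threading.Thread(target=feed)
feeder.start()

# This is the call in question: I need the upload of proc.stdout to be
# finished before my function returns.
s3.upload_fileobj(proc.stdout, "my-bucket", "output-key")

feeder.join()
proc.wait()
```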