
My Android app is for data collection in an industrial context. Full-time employees use the app to take many observations (photos, geotags, text, etc.), which are then uploaded in bulk when wifi is available.

I use Amazon's Android SDK to perform bulk S3 uploads. Under the hood, this library uses a thread-pool thread per upload, and occasionally I encounter a RejectedExecutionException. I'm curious whether I can handle this in a more resilient way.

Please note that my code does not perform parallel uploads: I use a single thread which sequentially invokes s3.transfer.TransferManager.upload() for each photo, waits for completion, and then continues. So at any given time there should only be ~2 relevant threads here: one for my code, and one for the thread dispatched by Amazon.
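
To be concrete, the loop is conceptually like this. This is a minimal, SDK-free sketch: the upload() method and the fixed pool below are stand-ins for TransferManager.upload() and its internal executor, not the real AWS API.

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SequentialUploader {
    // Stand-in for TransferManager's internal pool (sizes are arbitrary here).
    private final ExecutorService pool = Executors.newFixedThreadPool(10);

    // Stand-in for TransferManager.upload(): dispatches the work to a pool
    // thread and returns a handle to wait on, like Upload.waitForCompletion().
    Future<?> upload(File photo) {
        return pool.submit(() -> {
            // the real code streams the file to S3 here
        });
    }

    public static void main(String[] args) throws Exception {
        SequentialUploader uploader = new SequentialUploader();
        List<File> photos = Arrays.asList(new File("a.jpg"), new File("b.jpg"));
        for (File photo : photos) {
            // one upload in flight at a time: submit, then block until done
            uploader.upload(photo).get();
        }
        uploader.pool.shutdown();
        System.out.println("uploaded " + photos.size());
    }
}
```

So at most one task should ever be pending in the pool at once.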

But occasionally my users hit a RejectedExecutionException when the thread pool has no more room:

java.util.concurrent.RejectedExecutionException: pool=0/10, queue=0
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1961)  
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:794)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1315)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:107)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.<init>(Unknown Source)
at com.amazonaws.services.s3.transfer.TransferManager.upload(Unknown Source)
at com.amazonaws.services.s3.transfer.TransferManager.upload(Unknown Source)
at com.amazonaws.services.s3.transfer.TransferManager.upload(Unknown Source)
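
As I understand it, this is the rejection path the trace shows: a ThreadPoolExecutor whose AbortPolicy throws when it can neither start a new worker nor queue the task. A self-contained illustration (the pool and queue sizes here are arbitrary, not TransferManager's internals):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        // 1 worker, queue of 1, AbortPolicy -- the same rejection handler
        // shown in the stack trace (ThreadPoolExecutor$AbortPolicy)
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        };
        pool.execute(blocker);  // occupies the single worker
        pool.execute(blocker);  // fills the queue
        try {
            pool.execute(blocker);  // no worker, no queue space -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("rejected");
        }
        release.countDown();
        pool.shutdown();
    }
}
```

(I'm unsure whether my pool is genuinely saturated like this or in some other state; the "pool=0/10, queue=0" in the message is what puzzles me.)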

Looking across all the other thread stacks included in the crash report, I wonder whether the other worker threads are simply left over from past uploads and have not yet had a chance to clean up? e.g. I see many of these:

java.lang.Object.wait(Native Method)
java.lang.Thread.parkFor(Thread.java:1424)
java.lang.LangAccessImpl.parkFor(LangAccessImpl.java:48)
sun.misc.Unsafe.park(Unsafe.java:337)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:157)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2016)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:411)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1021)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1081)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:581)
java.lang.Thread.run(Thread.java:1019)

Is it possible that these threads simply need a chance to clean up? And if so, is there a relatively straightforward mechanism to accomplish this with predictable time and risk, so that I can continue with my bulk uploads?

I wonder if the (conceptual) equivalent of a Thread.sleep(0) on my main upload thread would give the pool a chance to clean up, after which I could retry the upload?
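
Something like this hypothetical retry wrapper is what I have in mind: catch the rejection, back off briefly, and resubmit. Note that submitWithRetry is my own sketch, not an SDK call; in the real app the retried operation would be the TransferManager.upload() call itself.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;

public class RetryingSubmit {
    // Hypothetical helper: retry a submission a few times with a growing
    // backoff whenever the pool rejects it, rethrowing if all attempts fail.
    static <T> Future<T> submitWithRetry(ExecutorService pool, Callable<T> task,
                                         int maxAttempts, long backoffMillis)
            throws InterruptedException {
        RejectedExecutionException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return pool.submit(task);
            } catch (RejectedExecutionException e) {
                last = e;
                // back off, giving idle workers a chance to be reclaimed
                Thread.sleep(backoffMillis * attempt);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> f = submitWithRetry(pool, () -> 42, 3, 50);
        System.out.println(f.get());
        pool.shutdown();
    }
}
```

But I don't know whether a rejection here is transient (retry would eventually succeed) or permanent (e.g. the pool is in a state where no retry will ever be accepted), which is really the heart of my question.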

I would greatly appreciate any thoughts or experience. I have been unable to reproduce this in-house and it is not predictable, so my ability to experiment is limited here.

Thanks.

Mike Repass