
I build up a batch query in a variable, "gBatchWrite". I try to keep it to around 200 operations, per Amazon's suggestions. Any retry after a "ConcurrentModificationException" fails. It seems that I would have to rebuild the gBatchWrite variable from scratch, which is much more expensive than just retrying. Keep in mind that I am running this code in separate threads whose work overlaps slightly, which is what causes the "ConcurrentModificationException".

I was expecting to be able to retry "await gBatchWrite.iterate()" with exponential backoff after a "ConcurrentModificationException" error. Instead, every retry fails. It keeps failing even after all other threads have finished and this is the only thread left. If I run the same data in a single thread, I do not get any exceptions, but it is much slower.

let gBatchWrite = g
for (const jtom of batch) {
    // some code here to build gBatchWrite
}
await exponentialDoTillTruthy(
    async () => {
        try {
            // Submit the batched write traversal to Neptune.
            await gBatchWrite.iterate()
            return true
        } catch (err) {
            // Neptune returns the error details as a JSON string on statusMessage.
            if (err.statusMessage && typeof err.statusMessage === "string") {
                const errStatus = JSON.parse(err.statusMessage)
                if (typeof errStatus === "object" && errStatus !== null && errStatus.code === 'ConcurrentModificationException') {
                    // Return false so the helper backs off and retries.
                    return false
                }
            }
            throw err
        }
    },
    50
)
  • You may want to examine the sample code here: https://docs.aws.amazon.com/neptune/latest/userguide/lambda-functions-examples.html#lambda-functions-examples-javascript. This is for use with AWS Lambda, but the retry logic and libraries used in this code example may help with the current issue you're seeing. – Taylor Riggan Mar 09 '23 at 15:46
  • I changed my code to rebuild "gBatchWrite": I begin by assigning gBatchWrite = g, then rebuild the query, and finally rerun the iterate (see the sketch below). That fixed the problem. – Louis Rebolloso Mar 13 '23 at 15:39
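A minimal sketch of that rebuild-and-retry approach, assuming g is an existing GraphTraversalSource and that batch and exponentialDoTillTruthy are the same objects used above; buildBatchTraversal is a hypothetical helper standing in for the original "some code here to build gBatchWrite" loop:

// Hypothetical helper: rebuilds the batch write traversal from scratch.
function buildBatchTraversal(g, batch) {
    let gBatchWrite = g
    for (const jtom of batch) {
        // same per-item steps as the original loop, e.g.
        // gBatchWrite = gBatchWrite.addV(...).property(...)
    }
    return gBatchWrite
}

await exponentialDoTillTruthy(
    async () => {
        // Rebuild the traversal on every attempt instead of re-iterating
        // the same gBatchWrite object that already failed.
        const gBatchWrite = buildBatchTraversal(g, batch)
        try {
            await gBatchWrite.iterate()
            return true
        } catch (err) {
            if (err.statusMessage && typeof err.statusMessage === "string") {
                const errStatus = JSON.parse(err.statusMessage)
                if (typeof errStatus === "object" && errStatus !== null && errStatus.code === 'ConcurrentModificationException') {
                    return false // back off and retry
                }
            }
            throw err
        }
    },
    50
)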

0 Answers