
According to the DynamoDB DAX documentation, DAX maintains two separate caches: one for objects and one for queries. Which is OK, I guess.

The trouble is, if you change an object and the new value should affect a result stored in the query cache, there appears to be no way to inform DAX about it, meaning the query cache will be stale until its TTL expires.

This is rather limiting and there doesn't appear to be any easy way to work around it.
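To make the failure mode concrete, here is a toy simulation (plain Python, no AWS involved; the class, keys, and TTL value are made up for illustration) of a write-through item cache sitting next to a query cache that only TTL can evict:

```python
class TwoCacheStore:
    """Toy model of DAX's two caches; not the real client."""

    def __init__(self, query_ttl_seconds):
        self.backend = {}       # stands in for the DynamoDB table
        self.item_cache = {}    # refreshed on every write-through
        self.query_cache = {}   # request-key -> (result, expiry); TTL-only eviction
        self.query_ttl = query_ttl_seconds

    def put_item(self, key, value):
        self.backend[key] = value
        self.item_cache[key] = value  # item cache sees the write immediately
        # The query cache is deliberately left alone -- this is the complaint.

    def query_all(self, now):
        entry = self.query_cache.get("all-items")
        if entry is not None and entry[1] > now:
            return entry[0]           # may be stale until the TTL expires
        result = dict(self.backend)
        self.query_cache["all-items"] = (result, now + self.query_ttl)
        return result
```

With a 300-second TTL, a write followed by a query keeps returning the old result set for up to five minutes, which is exactly the window described above.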

Someone tell me I don't know what I'm talking about and there is a way to advise DAX to evict query cache values.

Haroldo Gondim
Eli
  • This is precisely why I was researching DAX just now... read perf of DDB is great, but query/paging not so much. Ugh. I really wish I could go back in time and tell past me to shoehorn my app into an RDBMS and call it a day. DDB is a great k/v store, but despite the lure of more -- that's all it is. – Cory Mawhorter Mar 28 '19 at 04:20

2 Answers


I wish there were a better answer, but unfortunately there is currently no way to update the query cache values except by waiting for TTL expiry. The item cache values are immediately updated by any Put or Update requests made through DAX, but not by changes made directly to DynamoDB.

However, keep in mind that the key for the query cache is the full request; changing any field of the request therefore triggers a cache miss. Obviously this is not a real solution, but it can serve as a work-around (hack) for the current limitation.
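As a concrete sketch of that hack (hypothetical and untested against a real DAX cluster): vary a tautological FilterExpression whose placeholder name carries an arbitrary "generation" number of your own. `cache_generation` below is not a DAX or DynamoDB feature, just a value you bump after a relevant write; because the request bytes change, the query cache should miss, while the always-true filter leaves the result set unchanged:

```python
def build_query_request(table_name, pk_value, cache_generation=None):
    """Build Query kwargs; optionally vary them to defeat DAX's query cache.

    cache_generation is NOT an AWS feature -- it is our own arbitrary number.
    Bumping it changes the request (via a tautological FilterExpression),
    which should force a query-cache miss without changing the results.
    """
    request = {
        "TableName": table_name,
        "KeyConditionExpression": "#pk = :pk",
        "ExpressionAttributeNames": {"#pk": "pk"},
        "ExpressionAttributeValues": {":pk": {"S": pk_value}},
    }
    if cache_generation is not None:
        # Always-true filter whose placeholder name varies per generation.
        buster = "cb_{}".format(cache_generation)
        request["FilterExpression"] = (
            "attribute_exists(#{0}) OR attribute_not_exists(#{0})".format(buster)
        )
        request["ExpressionAttributeNames"]["#" + buster] = buster
    return request
```

You would pass the resulting kwargs to the DAX client's `query` call and bump the generation whenever a write must be visible immediately. Note the filter is evaluated server-side on every (now uncached) request, so this trades away some of the caching benefit by design.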

Jeff Hardy
  • Ouch. I was afraid that would be the answer. Do you know: is there any motion in the works to improve this? – Eli Feb 15 '18 at 23:04
  • It's certainly something we've discussed, but we haven't come to a conclusion on what the behaviour should be. We're open to suggestions, although the AWS Forums (https://forums.aws.amazon.com/forum.jspa?forumID=131) are probably a better place for discussion than here. – Jeff Hardy Feb 20 '18 at 17:05
  • This should really be at the top of the DAX documentation in a large font. – Adrian Baker Apr 05 '18 at 02:00
  • Is there a field that could be (ab)used to force it to be treated as a new request, while still using the same condition, projection etc? Something like a session ID that you could randomize a new value for when you want to "start over" for a particular set of queries, e.g., forcing cache misses? – JHH Nov 23 '18 at 12:36

As per the DynamoDB documentation, you have to pass your update requests through DAX.

DAX supports the following write operations: PutItem, UpdateItem, DeleteItem, and BatchWriteItem. When you send one of these requests to DAX, it does the following:

1. DAX sends the request to DynamoDB.
2. DynamoDB replies to DAX, confirming that the write succeeded.
3. DAX writes the item to its item cache.
4. DAX returns success to the requester.

If a write to DynamoDB fails for any reason, including throttling, then the item will not be cached in DAX and the exception for the failure will be returned to the requester. This ensures that data is not written to the DAX cache unless it is first written successfully to DynamoDB.
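The four steps plus the failure rule describe a classic write-through cache. A minimal sketch in plain Python (the names are stand-ins for illustration, not the real DAX client):

```python
class WriteThroughCache:
    """Mirrors the documented DAX write path: backend first, cache only on success."""

    def __init__(self, backend_put):
        self.backend_put = backend_put   # stand-in for the DynamoDB write
        self.item_cache = {}

    def put_item(self, key, item):
        # Steps 1-2: send the write to the backend and wait for confirmation.
        # If it raises (e.g. throttling), the exception propagates to the
        # requester and the cache below is never touched.
        self.backend_put(key, item)
        # Step 3: only now is the item cached, so cache and table cannot diverge.
        self.item_cache[key] = item
        # Step 4: report success to the requester.
        return True
```

The ordering is the whole point: because the cache write happens strictly after the backend confirms, a failed write can never leave a phantom item in the cache.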

So instead of updating the item directly in DynamoDB, send the UpdateItem request through DAX.

To dig deeper, you can refer to this link.

Anurag pareek