
This document mentions that the default read_policy setting is ndb.EVENTUAL_CONSISTENCY.

After I did a bulk delete of entity items from the Datastore, the versions of the app I pulled up continued to read the old data, so I've tried to figure out how to change this to STRONG_CONSISTENCY, with no success. My attempts include:

  • entity.query().fetch(read_policy=ndb.STRONG_CONSISTENCY) and
  • ...fetch(options=ndb.ContextOptions(read_policy=ndb.STRONG_CONSISTENCY))

The error I get is

BadArgumentError: read_policy argument invalid ('STRONG_CONSISTENCY')

How does one change this default? More to the point, how can I ensure that NDB will go to the Datastore to load a result rather than relying on an old cached value? (Note that after the bulk delete the datastore browser tells me the entity is gone.)


1 Answer


You cannot change that default; it is also the only option available. From the very doc you referenced (no other options are mentioned):

Description

Set this to ndb.EVENTUAL_CONSISTENCY if, instead of waiting for the Datastore to finish applying changes to all returned results, you wish to get possibly-not-current results faster.

The same is confirmed by inspecting the google.appengine.ext.ndb.context.py file (no STRONG_CONSISTENCY definition in it):

# Constant for read_policy.
EVENTUAL_CONSISTENCY = datastore_rpc.Configuration.EVENTUAL_CONSISTENCY

The EVENTUAL_CONSISTENCY ends up in ndb via the google.appengine.ext.ndb.__init__.py:

from context import *
__all__ += context.__all__
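
As a quick sanity check (a sketch, not from the original answer), you can confirm what ndb actually re-exports:

from google.appengine.ext import ndb

# EVENTUAL_CONSISTENCY is re-exported from context.py via the star import above.
print ndb.EVENTUAL_CONSISTENCY                    # the only read_policy constant
print getattr(ndb, 'STRONG_CONSISTENCY', None)    # None - no such attribute on ndb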

You might be able to avoid the error using a hack like this:

from google.appengine.datastore.datastore_rpc import Configuration

...fetch(options=ndb.ContextOptions(read_policy=Configuration.STRONG_CONSISTENCY))
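
For completeness, here is a fuller sketch of how that hack might look in context (MyModel is a placeholder model name, not from the original question):

from google.appengine.datastore.datastore_rpc import Configuration
from google.appengine.ext import ndb

class MyModel(ndb.Model):
    # placeholder model, just for illustration
    name = ndb.StringProperty()

# Passing the constant from datastore_rpc (rather than the non-existent
# ndb.STRONG_CONSISTENCY) might get past the BadArgumentError, but note the
# caveat below about the query's index itself being eventually consistent.
results = MyModel.query().fetch(
    options=ndb.ContextOptions(read_policy=Configuration.STRONG_CONSISTENCY))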

However, I think that only applies to reading the entities for the keys obtained through the query, not to obtaining the list of keys itself. That list comes from the index the query uses, which is always eventually consistent - this is the root cause of your deleted entities still appearing in the results (for a while, until the index is updated). From Keys-only Global Query Followed by Lookup by Key:

But it should be noted that a keys-only global query can not exclude the possibility of an index not yet being consistent at the time of the query, which may result in an entity not being retrieved at all. The result of the query could potentially be generated based on filtering out old index values. In summary, a developer may use a keys-only global query followed by lookup by key only when an application requirement allows the index value not yet being consistent at the time of a query.
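
For illustration, that keys-only-query-followed-by-lookup pattern looks roughly like this in ndb (again, MyModel is a placeholder):

from google.appengine.ext import ndb

class MyModel(ndb.Model):
    name = ndb.StringProperty()  # placeholder model

# The ndb.get_multi() reads are strongly consistent, but the keys still come
# from the eventually consistent index, so a just-deleted entity's key may
# still be returned - it will simply resolve to None and can be filtered out.
keys = MyModel.query().fetch(keys_only=True)
entities = [e for e in ndb.get_multi(keys) if e is not None]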

Potentially of interest: Bulk delete datastore entity older than 2 days

  • That's very clear. What seems odd in the document is that it suggests "setting" a parameter for which there turn out to be no choices. But this is helpful. – greylock Mar 29 '18 at 06:24