According to the docs the limit is 1MB, which I assumed meant 1024**2 bytes, but apparently not.
I've got a simple function which stringifies large Python objects into JSON, splits the JSON string into smaller chunks, and puts the chunks (each as a BlobProperty) plus a separate index entity into the datastore (and memcache, via ndb). And another function that reverses this.
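For reference, here's a stripped-down sketch of what I mean (model and function names are just illustrative, and the real code relies on ndb's built-in caching for the memcache part):

```python
import json

from google.appengine.ext import ndb

CHUNK_SIZE = 1000 ** 2  # the chunk size in question


class JsonChunk(ndb.Model):
    # one slice of the JSON string
    data = ndb.BlobProperty()


class JsonIndex(ndb.Model):
    # keys of the chunks, in order
    chunk_keys = ndb.KeyProperty(repeated=True)


def put_large_json(obj, chunk_size=CHUNK_SIZE):
    """Stringify obj, split the JSON and store the chunks plus an index."""
    blob = json.dumps(obj)
    chunks = [JsonChunk(data=blob[i:i + chunk_size])
              for i in range(0, len(blob), chunk_size)]
    chunk_keys = ndb.put_multi(chunks)
    return JsonIndex(chunk_keys=chunk_keys).put()  # key of the index entity


def get_large_json(index_key):
    """Reassemble the JSON string from its chunks and parse it back."""
    index = index_key.get()
    chunks = ndb.get_multi(index.chunk_keys)
    return json.loads(''.join(c.data for c in chunks))
```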
First I tried splitting into chunks of 1024**2 bytes, but the datastore complained about it. Currently I'm using 1000**2-byte chunks and it has worked without errors. I could've answered my own question already here, if it weren't for this comment by Guido, with code that splits into 950000-byte chunks. If Guido does it, I figured, it must be for a reason. Why the 50K safety margin?
Maybe we can get a definitive answer on this, so not a single byte is wasted. I'm aware of Blobstore.