Context
I have a Django project with a database model containing a field of type models.BinaryField(max_length=50000000, default=b'0'), so a single record can hold roughly 50 MB of binary data.
Problem
When the server is asked to serve this binary data, I hit my server's RAM limit even though there are never more than 1-2 concurrent requests. I read the data using io.BytesIO, along with stream.seek(index) and stream.close() (I use the term "stream" loosely here; I am simply seeking into and closing a stream of bytes).
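For reference, this is a minimal sketch of the access pattern described above, with a small placeholder payload standing in for the BinaryField value. The key point is that io.BytesIO copies the whole bytes object into its own buffer, so a 50 MB field briefly costs roughly twice that in RAM while the stream is open:

```python
import io

# Hypothetical payload standing in for record.binary_field; in production
# this can be up to ~50 MB, and it is fully materialized in memory.
payload = b"0123456789" * 1000  # 10 kB here, 50 MB in the real project

# BytesIO copies the entire bytes object into its internal buffer,
# so peak memory is roughly 2x the field size while the stream exists.
stream = io.BytesIO(payload)
stream.seek(20)          # jump to an offset, as in stream.seek(index)
chunk = stream.read(10)  # read a slice starting at that offset
stream.close()           # frees the BytesIO buffer, not the original bytes

print(chunk)  # b'0123456789'
```

Note that closing the stream releases only the BytesIO copy; the original bytes object returned by the ORM stays alive as long as something references it.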
Question
Do I need to store this binary data differently to avoid exhausting RAM?
Solutions that I have tried that didn't work
I wondered whether splitting the data into smaller chunks would reduce the server's RAM usage. When I attempted this, chunking the file into smaller pieces and relating them to the parent record with a many-to-one relationship, I found the overhead on the database to be prohibitive.
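The chunking scheme described above can be sketched in plain Python as follows; the names and the 1 MB chunk size are illustrative, not the actual implementation. Each list entry corresponds to one child row in the many-to-one table, which is why reads incur one database round trip per chunk:

```python
CHUNK_SIZE = 1024 * 1024  # assumed 1 MB per child row

def split_blob(blob: bytes, chunk_size: int = CHUNK_SIZE) -> list:
    """Split the blob into ordered pieces, one per child row."""
    return [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]

def join_chunks(chunks: list) -> bytes:
    """Reassemble on read; fetching each chunk separately is the
    per-row database overhead the question mentions."""
    return b"".join(chunks)

# Usage sketch: a ~3 MB blob becomes 4 rows (3 full chunks + remainder).
data = b"x" * (3 * CHUNK_SIZE + 5)
chunks = split_blob(data)
print(len(chunks))                 # 4
print(join_chunks(chunks) == data) # True
```

The trade-off is that while each individual query is small, a 50 MB blob split into 1 MB pieces means 50 inserts on write and 50 fetches on read, plus index and row overhead per chunk.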