
Using:

dd = {'ID': ['H576','H577','H578','H600', 'H700'],
      'CD': ['AAAAAAA', 'BBBBB', 'CCCCCC','DDDDDD', 'EEEEEEE']}
df = pd.DataFrame(dd)

Before pandas 0.25, the following worked:

# set
redisConn.set("key", df.to_msgpack(compress='zlib'))
# get
pd.read_msgpack(redisConn.get("key"))

Now there are deprecation warnings:

FutureWarning: to_msgpack is deprecated and will be removed in a future version.
It is recommended to use pyarrow for on-the-wire transmission of pandas objects.

The read_msgpack is deprecated and will be removed in a future version.
It is recommended to use pyarrow for on-the-wire transmission of pandas objects.

How does pyarrow work? And how do I get pyarrow objects into and back from Redis?

Reference: How to set/get pandas.DataFrame to/from Redis?

– Merlin

5 Answers


Here's a full example that uses pyarrow to serialize a pandas DataFrame and store it in Redis:

apt-get install python3 python3-pip redis-server
pip3 install pandas pyarrow redis

and then in Python:

import pandas as pd
import pyarrow as pa
import redis

df = pd.DataFrame({'A': [1, 2, 3]})
r = redis.Redis(host='localhost', port=6379, db=0)

# serialize the DataFrame to bytes and store them in Redis under "key"
context = pa.default_serialization_context()
r.set("key", context.serialize(df).to_buffer().to_pybytes())

# fetch the bytes back and deserialize into a DataFrame
context.deserialize(r.get("key"))
   A
0  1
1  2
2  3

I just submitted PR 28494 to pandas to include this pyarrow example in the docs.


– Shadi

  • This is really nice. I'm assuming that a defensive programmer should check the size of the dataframe before pushing to Redis, since to my knowledge the 512MB limit still exists (see the sketch after these comments). https://github.com/antirez/redis/issues/757 – Brian Wylie Mar 05 '20 at 17:29
  • @BrifordWylie: I use the `bz2` package to compress data before pushing it to Redis. – Javiar Sandra Apr 30 '20 at 19:30
  • I am getting the error below at `context.deserialize(r.get("key"))`: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 16: invalid start byte – sumon c Aug 05 '20 at 03:11
  • @sumonc what do you get with `r.get("key")` alone? – Shadi Aug 05 '20 at 04:26
  • Is the above answer doing any compression at all? In to_pybytes()? – sray Jan 13 '21 at 18:06
  • The docs don't say so: https://arrow.apache.org/docs/python/generated/pyarrow.Buffer.html – Shadi Jan 14 '21 at 09:25
  • @sumonc I assume you have figured it out by now, but for completeness: In the redis.Redis constructor, don't put `decode_responses=True` in there. – oerpli Aug 25 '21 at 08:52
  • In the latest version of pyarrow, the context isn't needed; just `pa.serialize(..)` and `pa.deserialize(..)` should work. – Aziz Alto Mar 05 '22 at 00:14
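
Putting the comments above together: a minimal sketch (mine, not from the answer) that bz2-compresses the serialized bytes and checks them against Redis's 512 MB string limit before pushing. It assumes a local Redis and the same (deprecated) default_serialization_context API as the answer; the REDIS_MAX_STRING name is illustrative.

import bz2

import pandas as pd
import pyarrow as pa
import redis

REDIS_MAX_STRING = 512 * 1024 * 1024  # Redis caps a single string value at 512 MB

# Note: no decode_responses=True here, or get() would try to UTF-8 decode the bytes
r = redis.Redis(host='localhost', port=6379, db=0)

df = pd.DataFrame({'A': [1, 2, 3]})

context = pa.default_serialization_context()
payload = bz2.compress(context.serialize(df).to_buffer().to_pybytes())

if len(payload) >= REDIS_MAX_STRING:
    raise ValueError("serialized DataFrame exceeds Redis's 512 MB value limit")
r.set("key", payload)

# round trip: fetch, decompress, deserialize
df_back = context.deserialize(bz2.decompress(r.get("key")))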

Here is how I do it now that default_serialization_context is deprecated; things are a bit simpler:

import pyarrow as pa
import redis

pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
r = redis.Redis(connection_pool=pool)

def storeInRedis(alias, df):
    # serialize the DataFrame with pyarrow and store the bytes under the alias
    df_bytes = pa.serialize(df).to_buffer().to_pybytes()
    res = r.set(alias, df_bytes)
    if res:
        print(f'{alias} cached')

def loadFromRedis(alias):
    # r.get returns None for a missing key, which makes deserialize raise
    data = r.get(alias)
    try:
        return pa.deserialize(data)
    except Exception:
        print("No data")


storeInRedis('locations', locdf)  # locdf is an existing DataFrame

loadFromRedis('locations')
– ety

If you would like to compress the data in Redis, you can use pandas' built-in support for Parquet and gzip:

import io

import pandas as pd
import redis

# REDIS_HOST and REDIS_PORT are assumed to be defined elsewhere

def openRedisCon():
    pool = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, db=0)
    r = redis.Redis(connection_pool=pool)
    return r

def storeDFInRedis(alias, df):
    """Store the dataframe object in Redis
    """
    buffer = io.BytesIO()
    df.to_parquet(buffer, compression='gzip')
    buffer.seek(0)  # rewind to the beginning after writing
    r = openRedisCon()
    res = r.set(alias, buffer.read())

def loadDFFromRedis(alias, useStale: bool = False):
    """Load the named key from Redis into a DataFrame and return the DF object
    """
    r = openRedisCon()
    try:
        buffer = io.BytesIO(r.get(alias))
        buffer.seek(0)
        df = pd.read_parquet(buffer)
        return df
    except Exception:
        return None

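For reference, a hypothetical round trip with these helpers (REDIS_HOST and REDIS_PORT are assumed to be defined, as in the code above):

df = pd.DataFrame({'A': [1, 2, 3]})
storeDFInRedis('mydf', df)          # stored as gzip-compressed Parquet bytes
df_back = loadDFFromRedis('mydf')   # returns None if the key is missing or unreadable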

– rossco

Pickle and zlib can be an alternative to pyarrow:

import pandas as pd
import redis
import zlib
import pickle

df = pd.DataFrame({'A': [1, 2, 3]})
r = redis.Redis(host='localhost', port=6379, db=0)
r.set("key", zlib.compress(pickle.dumps(df)))     # pickle, compress, store
df = pickle.loads(zlib.decompress(r.get("key")))  # fetch, decompress, unpickle
You can also use pickle directly, without compression:

import pandas as pd
import redis
import pickle

r = redis.Redis(host='localhost', port=6379, db=0)

data = {
    "calories": ["v1", 'v2', 'v3'],
    "duration": [50, 40, 45]
}
df = pd.DataFrame(data, index=["day1", "day2", "day3"])

r.set("key", pickle.dumps(df))
print(pickle.loads(r.get("key")))

Alternatively, you can use direct-redis:

import pandas as pd
from direct_redis import DirectRedis

r = DirectRedis(host='localhost', port=6379)
>>> df = pd.DataFrame([[1, 2, 3, '235', '@$$#@'],
...                    ['a', 'b', 'c', 'd', 'e']])
>>> print(df)
   0  1  2    3      4
0  1  2  3  235  @$$#@
1  a  b  c    d      e

>>> r.set('df', df)

>>> r.get('df')
   0  1  2    3      4
0  1  2  3  235  @$$#@
1  a  b  c    d      e

>>> type(r.get('df'))
<class 'pandas.core.frame.DataFrame'>