
I have some sample data, shown below:

        test_a   test_b   test_c   test_d   test_date
    -------------------------------------------------
    1   a        500      0.1      111      20191101
    2   a        NaN      0.2      NaN      20191102
    3   a        200      0.1      111      20191103
    4   a        400      NaN      222      20191104
    5   a        NaN      0.2      333      20191105
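
For anyone who wants to reproduce it, the frame can be rebuilt directly in pandas (a sketch; the dtypes are an assumption, with `test_d` and `test_date` kept as strings):

    import numpy as np
    import pandas as pd

    # hypothetical reconstruction of the sample data shown above
    df = pd.DataFrame({
        "test_a": ["a"] * 5,
        "test_b": [500, np.nan, 200, 400, np.nan],
        "test_c": [0.1, 0.2, 0.1, np.nan, 0.2],
        "test_d": ["111", np.nan, "111", "222", "333"],
        "test_date": ["20191101", "20191102", "20191103", "20191104", "20191105"],
    }, index=[1, 2, 3, 4, 5])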

I would like to store this data in HBase, and I use the code below to do that.

    from test.db import impala, hbasecon, HiveClient
    import pandas as pd

    sql = """
        SELECT test_a
               ,test_b
               ,test_c
               ,test_d
               ,test_date
        FROM table_test
        """

    conn_impa = HiveClient().getcon()
    # read in chunks of 50,000 rows so the whole table is never held in memory at once
    all_df = pd.read_sql(sql=sql, con=conn_impa, chunksize=50000)

    num = 0

    # hintltable is the HBase table handle (obtained via hbasecon, not shown here)
    for df in all_df:
        df = df.fillna('')
        # build the HBase row key from test_d and test_date
        df["k"] = df["test_d"] + df["test_date"]
        if len(df) > 0:
            with hintltable.batch(batch_size=1000) as b:
                df.apply(lambda row: b.put(row["k"], {
                    'test:test_a': str(row["test_a"]),
                    'test:test_b': str(row["test_b"]),
                    'test:test_c': str(row["test_c"]),
                }), axis=1)

                num += len(df)
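
`hintltable` above is the HBase table handle. The `batch`/`put` calls look like happybase, so it would be opened roughly like this (host, port, and table name are assumptions, not from the original code):

    import happybase

    # connect to the HBase Thrift server and open the target table
    connection = happybase.Connection(host="hbase-thrift-host", port=9090)
    hintltable = connection.table("test")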

When I query HBase with `get 'test', 'a201911012'`, I get the result below:

    COLUMN                           CELL
     test:test_a                      timestamp=1578389750838, value=a
     test:test_b                      timestamp=1578389788675, value=
     test:test_c                      timestamp=1578389775471, value=0.2
     test:test_d                      timestamp=1578449081388, value=

How can I make sure null values are not stored in HBase when writing from pandas in Python? We don't want null or empty-string values; the expected result is:

    COLUMN                           CELL
     test:test_a                      timestamp=1578389750838, value=a
     test:test_c                      timestamp=1578389775471, value=0.2

1 Answer


You should be able to do this by creating a custom function and calling it in your lambda. For example, you could have a function:

    def makeEntry(a, b, c):
        entrydict = {}
        # using the fact that NaN == NaN is False and empty strings are falsy
        if a == a and a:
            entrydict["test:test_a"] = str(a)
        if b == b and b:
            entrydict["test:test_b"] = str(b)
        if c == c and c:
            entrydict["test:test_c"] = str(c)
        return entrydict

and then change your `apply` call to:

    df.apply(lambda row: b.put(row["k"],
                               makeEntry(row["test_a"], row["test_b"], row["test_c"])), axis=1)

This way you only put in values that are not NaN instead of all values.
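
If you prefer not to hard-code one `if` per column, the same filtering can be written as a dict comprehension; this is just a sketch, and the column list and family name are assumptions:

    import pandas as pd

    def make_entry(row, family="test", columns=("test_a", "test_b", "test_c")):
        # keep only the columns whose value is neither NaN/None nor an empty string
        return {
            "{}:{}".format(family, col): str(row[col])
            for col in columns
            if pd.notna(row[col]) and row[col] != ""
        }

Applied the same way (`b.put(row["k"], make_entry(row))`), columns whose value is NaN, `None`, or `''` simply never get written.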

Karan Shishoo
  • Thanks so much for your answer. I tried your way and got an error at `dict["test:test_a"] = str(a)`: TypeError: ("'type' object does not support item assignment", u'occurred at index 0') –  Jan 08 '20 at 08:08
  • @nullfearless Ohh, it should be fine now. I messed up when I did not change all the variable names after renaming the dict; they should all have been `entrydict` – Karan Shishoo Jan 08 '20 at 08:10
  • Thank you so much, you saved my day. I just found there are also `None` values in my data; do you know how to ignore those too? ☕️☕️ –  Jan 08 '20 at 08:23
  • Can I use `if(a==a and a is not None)`? –  Jan 08 '20 at 08:25
  • @nullfearless The function as is should ignore the `None` values (unless it's a `"None"` string), since all `None` values are falsy, but yes, you could use `if(a==a and a is not None)` – Karan Shishoo Jan 08 '20 at 08:25
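
For reference, a quick check of why the `a == a and a` test filters all three cases (NaN, `None`, and the empty string) while letting real values through:

    for v in [float("nan"), None, "", "0.2"]:
        # NaN fails `v == v`; None and "" are falsy; a real value passes both checks
        print("{!r:>6} -> {}".format(v, bool(v == v and v)))
    # nan, None and '' all map to False; '0.2' maps to True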