hadoop: 3.2.1
hbase: 2.3.4
spark: 2.4.7
python: 3.7.6

HBase table: "tmp"

hbase(main):001:0> scan "tmp"
ROW                                          COLUMN+CELL                                                                                                                       
 1                                           column=cols:age, timestamp=2021-06-22T14:17:31.735, value=10                                                               
 1                                           column=cols:name, timestamp=2021-06-22T14:17:23.037, value=tom                                                                  
 2                                           column=cols:age, timestamp=2021-06-22T14:17:40.157, value=11                                                                    
 2                                           column=cols:name, timestamp=2021-06-22T14:17:48.516, value=dim  

pyspark shell:
pyspark \
--master yarn \
--deploy-mode client \
--num-executors 5 \
--executor-cores 1 \
--driver-memory 6g \
--executor-memory 1g \
--packages org.apache.hbase.connectors.spark:hbase-spark:1.0.0

spark code:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Map the row key (:key) and the two cells in the "cols" family
# to DataFrame columns via the hbase-spark connector.
df = (spark.read.format("org.apache.hadoop.hbase.spark")
                .option("hbase.table", "tmp")
                .option("hbase.columns.mapping", "col1 STRING :key, col2 STRING cols:name, col3 STRING cols:age")
                .load())
df.show()

I run this code in the pyspark shell, but I get the following error:

Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/lib/spark-current/python/pyspark/sql/readwriter.py", line 172, in load
        return self._df(self._jreader.load())
    File "/usr/lib/spark-current/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    File "/usr/lib/spark-current/python/pyspark/sql/utils.py", line 67, in deco
        return f(*a, **kw)
    File "/usr/lib/spark-current/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o169.load.
: java.lang.NullPointerException
    at org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:138)
    at org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:69)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:365)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:242)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:230)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:186)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
  • Try setting the conf properties "spark.hbase.host" and "spark.hbase.port" on the SparkSession and the [hadoop configuration](https://stackoverflow.com/a/32661336/4307136); see the sketch after these comments. – ggordon Jun 24 '21 at 14:38
  • @ggordon, I tried this way (config("spark.hbase.host", "xxx").config("spark.hbase.port", "xxx")), but I get the same error. – TenDim Jun 25 '21 at 10:02
  • I guess you might be missing `hbase-spark-protocol-shaded` package. – mazaneicha Jun 27 '21 at 15:27
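A minimal sketch combining the two comment suggestions above. The ZooKeeper values ("zk-host", "2181") are placeholders, and the hbase-spark-protocol-shaded coordinates are assumed to sit in the same org.apache.hbase.connectors.spark group as the connector — verify both against your cluster and Maven Central before relying on them:

# Launch with the shaded protocol artifact added to --packages
# (coordinates assumed, not verified):
# pyspark ... --packages org.apache.hbase.connectors.spark:hbase-spark:1.0.0,org.apache.hbase.connectors.spark:hbase-spark-protocol-shaded:1.0.0

from pyspark.sql import SparkSession

# "zk-host" and "2181" are placeholders for the actual ZooKeeper quorum
# host and client port of this HBase cluster.
spark = (SparkSession.builder
         .config("spark.hbase.host", "zk-host")
         .config("spark.hbase.port", "2181")
         .getOrCreate())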

0 Answers