I would like to create a table in Spark SQL using the data below.
[{
"empstr": "Blogspan",
"empbyte": 48,
"empshort": 457,
"empint": 935535,
"emplong": 36156987676070,
"empfloat": 6985.98,
"empdoub": 6392455.0,
"empdec": 0.447,
"empbool": 0,
"empdate": "09/29/2018",
"emptime": "2018-03-24 12:56:26"
}, {
"empstr": "Lazzy",
"empbyte": 9,
"empshort": 460,
"empint": 997408,
"emplong": 37564196351623,
"empfloat": 7464.75,
"empdoub": 5805694.86,
"empdec": 0.303,
"empbool": 1,
"empdate": "08/14/2018",
"emptime": "2018-06-17 18:31:15"
}]
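(For context on the error: as far as I understand, Spark 2.1.1's `DataFrameReader.json` expects JSON Lines input by default — one complete JSON document per line — so a pretty-printed array like the one above gets parsed line by line and lands in `_corrupt_record`. If rewriting the file is an option, the same two records in JSON Lines form, which `hiveContext.read().json(path)` should accept directly, would look like this:)

```
{"empstr": "Blogspan", "empbyte": 48, "empshort": 457, "empint": 935535, "emplong": 36156987676070, "empfloat": 6985.98, "empdoub": 6392455.0, "empdec": 0.447, "empbool": 0, "empdate": "09/29/2018", "emptime": "2018-03-24 12:56:26"}
{"empstr": "Lazzy", "empbyte": 9, "empshort": 460, "empint": 997408, "emplong": 37564196351623, "empfloat": 7464.75, "empdoub": 5805694.86, "empdec": 0.303, "empbool": 1, "empdate": "08/14/2018", "emptime": "2018-06-17 18:31:15"}
```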
But when I try to print the schema, it shows `_corrupt_record`. Could anyone please help me with how to read a nested JSON record in Java with Spark 2.1.1? My code is attached below:
case "readjson":
    tempTable = hiveContext.read().json(hiveContext.sparkContext().wholeTextFiles("1.json", 0));
    /* In the above line I get a compile error at .json:
       "The method json(String...) in the type DataFrameReader is not
        applicable for the arguments (RDD<Tuple2<String,String>>)" */
    // tempTable = hiveContext.read().json(componentBean.getHdfsPath());
    tempTable.printSchema();
    tempTable.show();
    tempTable.createOrReplaceTempView(componentKey);
    break;
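One workaround I would expect to compile (a minimal sketch, assuming Spark 2.1.x and that `hiveContext` can be replaced by, or built from, a `SparkSession` — the class name, view name, and file path here are placeholders, not from my real code): wrap the `SparkContext` in a `JavaSparkContext` so `wholeTextFiles` returns a `JavaPairRDD<String, String>`, keep only the file contents with `values()`, and pass the resulting `JavaRDD<String>` to `json(...)`, which does accept an RDD of strings.

```java
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadWholeFileJson {

    // Reads each file under `path` as ONE JSON document, so a pretty-printed
    // JSON array spanning many lines still parses instead of becoming
    // _corrupt_record.
    static Dataset<Row> readWholeFileJson(SparkSession spark, String path) {
        JavaSparkContext jsc =
                JavaSparkContext.fromSparkContext(spark.sparkContext());
        // wholeTextFiles yields (fileName, fileContent) pairs; keep the content
        JavaRDD<String> rawJson = jsc.wholeTextFiles(path).values();
        // json(JavaRDD<String>) parses each element as a JSON document; a
        // top-level array yields one row per array element
        return spark.read().json(rawJson);
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("readjson")
                .master("local[*]")
                .getOrCreate();
        Dataset<Row> tempTable = readWholeFileJson(spark, "1.json");
        tempTable.printSchema();
        tempTable.show();
        tempTable.createOrReplaceTempView("emp"); // hypothetical view name
        spark.stop();
    }
}
```

Note that Spark 2.2 later added `spark.read().option("multiLine", true).json(path)` for exactly this case, but on 2.1.1 the `wholeTextFiles` route above is, to my knowledge, the usual approach.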