How to parse an XML file that contains XML data within one of its columns?
In one of our projects, we receive XML files in which some columns store another XML document. When this data is loaded into a DataFrame, the inner XML gets converted to StringType (which is not what we want), so we cannot reach its nodes when querying the data (using the dot operator).
I have searched the web extensively, but with no luck. I did find one open GitHub issue that matches my use case exactly:
https://github.com/databricks/spark-xml/issues/140
After loading, my data looks like this (the inner xml column is truncated in the display):
+------+--------------------+
| id | xml |
+------+--------------------+
| 6723 |<?xml version="1....|
| 6741 |<?xml version="1....|
| 6774 |<?xml version="1....|
| 6735 |<?xml version="1....|
| 6828 |<?xml version="1....|
| 6764 |<?xml version="1....|
| 6732 |<?xml version="1....|
| 6792 |<?xml version="1....|
| 6754 |<?xml version="1....|
| 6833 |<?xml version="1....|
+------+--------------------+
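For context, here is a minimal sketch of how we read the file; the path, row tag, and app name below are placeholders, not our actual values.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("nested-xml").getOrCreate()

    // Read the outer XML with spark-xml; "row" is a placeholder row tag.
    val df = spark.read
      .format("com.databricks.spark.xml")
      .option("rowTag", "row")
      .load("/path/to/source.xml")

    df.printSchema()
    // root
    //  |-- id: long (nullable = true)
    //  |-- xml: string (nullable = true)   <-- the inner XML is inferred as a plain string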
SQL Server has a dedicated XML data type for storing XML in a database column, but Spark SQL has no equivalent type.
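To make the goal concrete, we would like the xml column to be parsed into a struct so that dot-path queries work; the node names below are invented purely for illustration.

    import org.apache.spark.sql.types._

    // Hypothetical target schema for the inner document (node names are invented).
    val desiredInnerSchema = StructType(Seq(
      StructField("order", StructType(Seq(
        StructField("customer", StructType(Seq(
          StructField("name", StringType)
        )))
      )))
    ))

    // If the xml column carried this struct type, a dot-path query such as
    //   df.select("xml.order.customer.name")
    // would work; today it fails because xml is a plain StringType.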
Has anyone run into the same issue and found a workaround? If so, please share. We're using Spark with Scala.