
How can I parse an XML file containing a list of nodes of the same type in Apache Spark?

Example of a file:

<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6" generator="CGImap 0.4.0 (25361 thorn-02.openstreetmap.org)" copyright="OpenStreetMap and contributors" attribution="http://www.openstreetmap.org/copyright" license="http://opendatacommons.org/licenses/odbl/1-0/">
 <bounds minlat="48.8306100" minlon="2.3310900" maxlat="48.8337900" maxlon="2.3389100"/>
 <node id="430785" visible="true" version="8" changeset="24482318" timestamp="2014-08-01T14:24:53Z" user="dhuyp" uid="1779584" lat="48.8340725" lon="2.3309196"/>
 <node id="661209" visible="true" version="6" changeset="9914127" timestamp="2011-11-22T21:46:44Z" user="lapinos03" uid="33634" lat="48.8337517" lon="2.3333992"/>
 <node id="24912996" visible="true" version="2" changeset="806076" timestamp="2009-03-14T10:38:25Z" user="Goon" uid="24657" lat="48.8302268" lon="2.3338015">
  <tag k="crossing" v="uncontrolled"/>
  <tag k="highway" v="traffic_signals"/>
 </node>
 <node id="24912994" visible="true" version="5" changeset="5904801" timestamp="2010-09-28T15:32:01Z" user="maouth-" uid="322872" lat="48.8301333" lon="2.3309869">
  <tag k="highway" v="mini_roundabout"/>
 </node>
</osm>
poiuytrez
    Possible duplicate of [How to read XML files from apache spark framework?](http://stackoverflow.com/questions/20225129/how-to-read-xml-files-from-apache-spark-framework) – Justin Pihony Oct 22 '15 at 14:00
    Possible duplicate of [Xml processing in Spark](http://stackoverflow.com/questions/33078221/xml-processing-in-spark) – zero323 Oct 22 '15 at 17:01

2 Answers


As mentioned in the other answer, spark-xml from Databricks is one way to read XML. However, there is currently a bug in spark-xml that prevents you from importing self-closing elements. To work around this, you can read the entire document as a single row (using the root osm element as the row tag) and then explode the nested node array, like this:

// Read the whole document as a single row by using the root <osm> element as the row tag
val pathToYourData = "Z:/test.xml"
val osm = sqlContext.read.format("com.databricks.spark.xml").option("rowTag", "osm").load(pathToYourData)
// Explode the array of <node> elements into one row per node, then flatten the struct
val nodes = osm.selectExpr("explode(node) as node")
nodes.select("node.*").show()
/*
+------+----------+--------+----------+---------+--------------------+-------+---------+--------+--------+--------------------+
|#VALUE|@changeset|     @id|      @lat|     @lon|          @timestamp|   @uid|    @user|@version|@visible|                 tag|
+------+----------+--------+----------+---------+--------------------+-------+---------+--------+--------+--------------------+
|  null|  24482318|  430785|48.8340725|2.3309196|2014-08-01T14:24:53Z|1779584|    dhuyp|       8|    true|                null|
|  null|   9914127|  661209|48.8337517|2.3333992|2011-11-22T21:46:44Z|  33634|lapinos03|       6|    true|                null|
|  null|    806076|24912996|48.8302268|2.3338015|2009-03-14T10:38:25Z|  24657|     Goon|       2|    true|[[null,crossing,u...|
|  null|   5904801|24912994|48.8301333|2.3309869|2010-09-28T15:32:01Z| 322872|  maouth-|       5|    true|[[null,highway,mi...|
+------+----------+--------+----------+---------+--------------------+-------+---------+--------+--------+--------------------+
*/
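From there, if you would rather work with plain typed columns than the attribute-prefixed struct fields, you can select and cast them. A minimal sketch, assuming the default "@" attribute prefix shown in the output above; the column names come from that output, while the casts and aliases are just illustrative:

import org.apache.spark.sql.functions.col

// Pull a few attributes out of the node struct and give them friendlier names and types
val coords = nodes.select(
  col("node.@id").cast("long").as("id"),
  col("node.@lat").cast("double").as("lat"),
  col("node.@lon").cast("double").as("lon"),
  col("node.@user").as("user")
)
coords.show()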
Nick

Use spark-xml from Databricks: https://github.com/databricks/spark-xml

val pathToYourData = "Z:/test.xml"
// rowTag is the repeating element you want one row per record for ("node" in the file above)
val df = sqlContext.read
  .format("com.databricks.spark.xml")
  .option("rowTag", "node")
  .load(pathToYourData)
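Once loaded, the DataFrame can be queried like any other. A minimal sketch, assuming the default "@" attribute prefix; the table name and filter are chosen purely for illustration:

// Register the DataFrame so it can be queried with SQL (Spark 1.x API, matching sqlContext above)
df.registerTempTable("osm_nodes")
sqlContext.sql("SELECT `@id`, `@lat`, `@lon` FROM osm_nodes WHERE `@lat` > 48.83").show()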
elcomendante