
I want to select a specific element: select("File.columns.column._name")

 |-- File: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- _Description: string (nullable = true)
 |    |    |-- _RowTag: string (nullable = true)
 |    |    |-- _name: string (nullable = true)
 |    |    |-- _type: string (nullable = true)
 |    |    |-- columns: struct (nullable = true)
 |    |    |    |-- column: array (nullable = true)
 |    |    |    |    |-- element: struct (containsNull = true)
 |    |    |    |    |    |-- _Hive_Final_Table: string (nullable = true)
 |    |    |    |    |    |-- _Hive_Final_column: string (nullable = true)
 |    |    |    |    |    |-- _Hive_Table1: string (nullable = true)
 |    |    |    |    |    |-- _Hive_column1: string (nullable = true)
 |    |    |    |    |    |-- _Path: string (nullable = true)
 |    |    |    |    |    |-- _Type: string (nullable = true)
 |    |    |    |    |    |-- _VALUE: string (nullable = true)
 |    |    |    |    |    |-- _name: string (nullable = true)

I got this error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve 'File.columns.column[_name]' due to data type mismatch: argument 2 requires integral type, however, '_name' is of string type.;
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:65)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:57)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:334)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:332)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:332)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:281)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:321)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:332)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionUp$1(QueryPlan.scala:108)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:118)

Can you help me, please?

Thomas Rollet

2 Answers


The path File.columns.column traverses two arrays, so Spark parses the trailing ._name as an array index and expects an integer. You need the explode function to flatten the arrays and get the required column:

explode(Column e) Creates a new row for each element in the given array or map column.

import org.apache.spark.sql.functions._
import spark.implicits._

val df1 = df.select(explode($"File").as("File")).select($"File.columns.column".as("column"))

The first explode flattens the File array, leaving the nested array of column structs in the column field.

val finalDF = df1.select(explode($"column").as("column")).select($"column._name".as("_name"))

The second explode flattens the column array, so you can select the _name field.
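Putting the two steps together, here is a minimal end-to-end sketch, assuming df is a DataFrame with the schema shown in the question and a SparkSession named spark is in scope (needed for the $ syntax):

import org.apache.spark.sql.functions._
import spark.implicits._

// one row per File struct, then one row per column struct, then keep _name
val names = df
  .select(explode($"File").as("File"))
  .select(explode($"File.columns.column").as("column"))
  .select($"column._name".as("_name"))

names.show()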

Hope this helps!

koiralo

Looking at your schema, you can do the following to select _name from the nested structs of the DataFrame:

import org.apache.spark.sql.functions._
// (0)(0) takes the first element of the File array and the first element
// of the nested column array, then extracts that struct's _name field
df.select(col("File.columns.column")(0)(0)("_name").as("_name"))
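Note that this returns a single value: the _name of the first column struct inside the first File element. If you need every _name, you still have to flatten both arrays with explode, as in the other answer.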
Ramesh Maharjan