Column Names:
Production_uint_id,batch_id,items_produced,items_discarded
Data:
P188    gv962   {'scissor': 141, 'paper': 274, 'rock': 218}    {'scissor': 14, 'paper': 135, 'rock': 24}
P258    mr005   {'scissor': 151, 'paper': 143, 'rock': 225}    {'scissor': 24, 'paper': 60, 'rock': 17}

Code:

from pyspark.sql.types import *
sc = spark.sparkContext
production_rdd = sc.textFile("/Production_logs.tsv")
production_parts = production_rdd.map(lambda l: l.split("\t"))
production = production_parts.map(lambda p: (p[0], p[1], p[2], p[3].strip()))
schemaStringProduction = "production_unit_id batch_id items_produced items_discarded"
fieldsProduction = [StructField(field_name, StringType(), True) for field_name in schemaStringProduction.split()]
schemaProduction = StructType(fieldsProduction)
schemaProductionDF = spark.createDataFrame(production, schemaProduction)

I am trying to explode the items_produced column:
exploding = schemaProductionDF.select("production_unit_id",  explode("items_produced").alias("item_p", "item_p_count"), "items_discarded")

I am getting this error:

pyspark.sql.utils.AnalysisException: u"cannot resolve 'explode(`items_produced`)' due to data type mismatch: 
input to function explode should be array or map type, not string;

Please help

  • All columns are set as `StringType` in your schema, `fieldsProduction = [StructField(field_name, StringType(), True) for field_name in schemaStringProduction.split()]`. Change them to the proper data types. – Suresh Jan 21 '19 at 07:10
  • Hi Suresh, which data type should I mention? – vishal kumar Jan 21 '19 at 07:18
  • You can use `MapType(StringType(), LongType())` for the columns `items_produced` and `items_discarded` (see the sketch after these comments). – Suresh Jan 21 '19 at 08:27
  • Possible duplicate of [How to query JSON data column using Spark DataFrames?](https://stackoverflow.com/questions/34069282/how-to-query-json-data-column-using-spark-dataframes) – 10465355 Jan 21 '19 at 10:50
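
A minimal sketch of the fix the comments point to (an editorial addition, not from the original thread): the file stores Python-style dict literals with single quotes, so `from_json` will not parse them directly; one option is a UDF that parses each string with `ast.literal_eval` into a `MapType(StringType(), LongType())` column, after which `explode` resolves. The helper name `parse_map` is made up for illustration; the DataFrame and column names follow the question's code.

import ast
from pyspark.sql.functions import explode, udf
from pyspark.sql.types import MapType, StringType, LongType

# Hypothetical helper: turn "{'scissor': 141, ...}" into a real map column.
# ast.literal_eval is safer than eval() for data you did not write.
parse_map = udf(lambda s: ast.literal_eval(s) if s else None,
                MapType(StringType(), LongType()))

parsedDF = (schemaProductionDF
            .withColumn("items_produced", parse_map("items_produced"))
            .withColumn("items_discarded", parse_map("items_discarded")))

# explode on a map column yields one row per (key, value) pair.
exploding = parsedDF.select(
    "production_unit_id",
    explode("items_produced").alias("item_p", "item_p_count"),
    "items_discarded")
exploding.show(truncate=False)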

1 Answer


explode is a UDTF (table-generating function) that returns a new row for each element of an array or map. For background, see: Explode in PySpark
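
For instance (a minimal illustration added here, not part of the original answer):

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode

spark = SparkSession.builder.getOrCreate()

# explode turns one row holding a two-element array into two rows.
df = spark.createDataFrame([("P188", ["rock", "paper"])], ["unit", "items"])
df.select("unit", explode("items").alias("item")).show()
# +----+-----+
# |unit| item|
# +----+-----+
# |P188| rock|
# |P188|paper|
# +----+-----+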

For your question, please try the code below:

from pyspark.sql import SparkSession

# A SparkSession is needed so that rdd.toDF() works below.
spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Raw string so the backslashes in the Windows path are not treated as escapes.
rdd1 = sc.textFile(r"D:\MOCK_DATA\*_dict.txt")
lineRDD = rdd1.map(lambda line: line.split("\t"))

header = "Production_uint_id,batch_id,items_produced,items_discarded"
col_name = header.split(',')

# eval() turns the dict-literal strings into real dicts; list(...values())
# keeps only the counts, so the two map columns become array columns.
# This relies on every dict listing scissor, paper, rock in that order.
flatRDD = lineRDD.map(lambda a: (a[0], a[1],
                                 list(eval(a[2]).values()),
                                 list(eval(a[3]).values())))
DF1 = flatRDD.toDF(col_name)
DF1.printSchema()

# Flatten each array column into one column per item, e.g.
# "items_produced.scissor" (dotted names need backticks if selected later).
DF2 = DF1
lst = 'scissor,paper,rock'
col_lst = 'items_produced,items_discarded'
for col_ele in col_lst.split(","):
    for count, i in enumerate(lst.split(',')):
        DF2 = DF2.withColumn(col_ele + '.' + i, DF2[col_ele][count])

DF2.show()
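
As a follow-up note (an editorial addition): because the code above stores dict.values() as array columns, the asker's original explode now works directly on them, for example:

from pyspark.sql.functions import explode

# One row per produced count; valid because items_produced is now array<bigint>.
DF1.select("Production_uint_id",
           explode("items_produced").alias("produced_count")).show()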