
I have a file like this (I am providing sample data, but the actual file is very large):

QQ
1
2
3
ZZ
b
QQ
4
5
6
ZZ
a
QQ
9
8
23

I want to read the data between QQ and ZZ, so the dataframe should look like:

[1,2,3]
[4,5,6]
[9,8]

The code I have tried is below, but it fails (runs far too long) for large data.

from pyspark.sql.types import *
from pyspark import SparkContext
from pyspark.sql import SQLContext

path ="/tmp/Poonam.Raskar/Sample.txt"
sc =SparkContext()
sqlContext = SQLContext(sc)
sc.setLogLevel("ERROR")
textFile = sc.textFile(path)

wi = textFile.zipWithIndex()
startPos = wi.filter(lambda x: x[0].startswith('QQ')).map(lambda x: x[1]).collect()
endPos = wi.filter(lambda x: x[0].startswith('ZZ')).map(lambda x: x[1]).collect()
finalPos = zip(startPos, endPos)
dtlRow = []

for pos in finalPos:
        # look up every line between a QQ/ZZ pair with a separate filter/collect
        dtlRow1 = [[wi.filter(lambda x: x[1] == i).map(lambda x: x[0]).collect() for i in range(pos[0], pos[1])]]  # program is taking a long time while executing this statement
        dtlRow.append(dtlRow1)


cSchema = StructType([StructField("DataFromList", ArrayType(StringType()))])
df = sqlContext.createDataFrame(dtlRow,schema=cSchema)
df.show()
Poonam
You mean that it works with the sample data you provided? And what do you mean by "failing"? Where in the code, and what's the exact error? – desertnaut Nov 21 '17 at 12:27
Yes, the code works for the sample data, but for large data it takes an extremely long time to finish. – Poonam Nov 22 '17 at 04:36

1 Answer


I suspect the issue with large data is that your method has an intermediate step where you collect the RDD for every row, which will not scale. Here is a way using only RDD/DataFrame operations:

# get a DF with a row number
from pyspark.sql import Row, Window
import pyspark.sql.functions as f

lst = ['QQ', '1', '2', '3', 'ZZ', 'b', 'QQ', '4', '5', '6', 'ZZ', 'a', 'QQ', '9', '8', '23']
df = sc.parallelize(lst).zipWithIndex()\
  .map(lambda xi: Row(**{'col': xi[0], 'rownum': xi[1]})).toDF()

# hack to count cumulative occurrences of QQ
winspec=Window.partitionBy().orderBy('rownum')
df=df.withColumn('QQ_indicator', f.expr("case when col='QQ' then 1 else 0 end"))
df=df.withColumn('QQ_indicator_cum', f.sum('QQ_indicator').over(winspec))

# ditto for ZZ
df=df.withColumn('ZZ_indicator', f.expr("case when col='ZZ' then 1 else 0 end"))
df=df.withColumn('ZZ_indicator_cum', f.sum('ZZ_indicator').over(winspec))

df.filter("QQ_indicator_cum=ZZ_indicator_cum+1 and not(col='QQ')")\
  .groupby('QQ_indicator_cum')\
  .agg(f.collect_list('col').alias('result'))\
  .select('result')\
  .show(3)
ags29
@https://stackoverflow.com/users/8671053/ags29: Is it possible to use col.startswith in place of ("case when col='QQ' then 1 else 0 end")? Because in my file the data starts with something like QQ123, QQ456, QQ789 – Poonam Nov 22 '17 at 05:15
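
For the prefixed markers asked about in the last comment, a minimal sketch, assuming every start marker begins with the literal prefix QQ and every end marker with ZZ (e.g. QQ123 ... ZZ456): only the two indicator columns and the final filter change, while the cumulative sums and the grouping stay exactly as in the answer.

# build the indicators from a prefix test instead of an exact match
# (assumes markers such as QQ123 / ZZ456 always start with QQ / ZZ)
df = df.withColumn('QQ_indicator', f.when(f.col('col').startswith('QQ'), 1).otherwise(0))
df = df.withColumn('ZZ_indicator', f.when(f.col('col').startswith('ZZ'), 1).otherwise(0))

# the final filter must also drop the marker rows by prefix rather than equality:
# df.filter("QQ_indicator_cum = ZZ_indicator_cum + 1 and col not like 'QQ%'")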