
I have seen and tried many existing Stack Overflow posts regarding this issue, but none of them work. I guess my Java heap space is not as large as expected for my large dataset. My dataset contains 6.5M rows, and my Linux instance has 64 GB RAM with 4 cores. As per this suggestion I need to fix my code, but I think making a dictionary from a PySpark dataframe should not be very costly. Please advise me if there is any other way to compute that.

I just want to make a Python dictionary from my PySpark dataframe. This is the content of my PySpark dataframe:

property_sql_df.show() shows:

+--------------+------------+--------------------+--------------------+
|            id|country_code|                name|    hash_of_cc_pn_li|
+--------------+------------+--------------------+--------------------+
|  BOND-9129450|          US|Scotron Home w/Ga...|90cb0946cf4139e12...|
|  BOND-1742850|          US|Sited in the Mead...|d5c301f00e9966483...|
|  BOND-3211356|          US|NEW LISTING - Com...|811fa26e240d726ec...|
|  BOND-7630290|          US|EC277- 9 Bedroom ...|d5c301f00e9966483...|
|  BOND-7175508|          US|East Hampton Retr...|90cb0946cf4139e12...|
+--------------+------------+--------------------+--------------------+

What I want is to make a dictionary with hash_of_cc_pn_li as the key and a list of matching id values as the value.

Expected Output

{
  "90cb0946cf4139e12": ["BOND-9129450", "BOND-7175508"]
  "d5c301f00e9966483": ["BOND-1742850","BOND-7630290"]
}

What I have tried so far:

Way 1: causes java.lang.OutOfMemoryError: Java heap space

%%time
duplicate_property_list = {}
for ind in property_sql_df.collect():
    hashed_value = ind.hash_of_cc_pn_li
    property_id = ind.id
    if hashed_value in duplicate_property_list:
        duplicate_property_list[hashed_value].append(property_id)
    else:
        duplicate_property_list[hashed_value] = [property_id]
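
For reference, a commonly suggested mitigation for Way 1 (a hedged sketch, not something from the original post) is to stream rows with toLocalIterator() instead of collect(), so only one partition at a time is transferred to the driver; the final dictionary still has to fit in driver memory, though:

duplicate_property_list = {}
# toLocalIterator() pulls one partition at a time instead of the whole dataframe at once
for row in property_sql_df.select("id", "hash_of_cc_pn_li").toLocalIterator():
    duplicate_property_list.setdefault(row.hash_of_cc_pn_li, []).append(row.id)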

Way 2: not working because Spark SQL has no native OFFSET support (a possible workaround is sketched after the code below)

%%time
i = 0
limit = 1000000
for offset in range(0, total_record,limit):
    i = i + 1
    if i != 1:
        offset = offset + 1
        
    duplicate_property_list = {}
    duplicate_properties = {}
    
    # Preparing dataframe
    url = '''select id, hash_of_cc_pn_li from properties_df LIMIT {} OFFSET {}'''.format(limit,offset)  
    properties_sql_df = spark.sql(url)
    
    # Grouping dataset
    rows = properties_sql_df.groupBy("hash_of_cc_pn_li").agg(F.collect_set("id").alias("ids")).collect()
    duplicate_property_list = { row.hash_of_cc_pn_li: row.ids for row in rows }
    
    # Filter the dictionary to keep only hashes whose duplicate count is >= 2
    duplicate_properties = filterTheDict(duplicate_property_list, lambda elem: len(elem[1]) >= 2)
    
    # Writing to file
    with open('duplicate_detected/duplicate_property_list_all_'+str(i)+'.json', 'w') as fp:
        json.dump(duplicate_property_list, fp)
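
Since Spark SQL has no native OFFSET, one possible workaround (a sketch based on assumptions, not something tried in the original post; it assumes the source table is registered as properties_df, that id gives a stable ordering, and that it runs inside the same loop so offset and limit are in scope) is to number the rows with a window function and slice by row ranges:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# row_number() needs a global ordering, which forces an expensive sort,
# but it emulates the missing OFFSET.
w = Window.orderBy("id")
numbered_df = spark.table("properties_df").withColumn("rn", F.row_number().over(w))

# One chunk of `limit` rows starting after `offset` rows.
chunk_df = numbered_df.filter((F.col("rn") > offset) & (F.col("rn") <= offset + limit))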

What I get now on the console:

java.lang.OutOfMemoryError: Java heap space

and this error is shown in the Jupyter notebook output:

ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server (127.0.0.1:33097)

This is the follow-up question that I asked here: Creating dictionary from Pyspark dataframe showing OutOfMemoryError: Java heap space

A l w a y s S u n n y
  • [did this solution not work..?](https://stackoverflow.com/a/63103691/9840637) – anky Jul 27 '20 at 06:38
  • 1
    This is an XY problem. You seem convinced that your design (generating a huge dictionary on the driver with collected data) should work. It is only an intermediate step to a solution that we have no idea about (and that's beside the fact that your solution is obviously not scalable). You need to add more info: 1. What you are intending to do with the dict (that can't be done on workers) 2. What your Spark memory settings are (and how much of your 64gb is actually being used before the OOM occurs) 3. How much memory is needed by your 65m rows 4. Any transformations before/into `property_sql_df` – ernest_k Jul 27 '20 at 07:08
  • @anky No sir, I tried that but no luck, same memory error – A l w a y s S u n n y Jul 27 '20 at 07:40

1 Answer


Why not keep as much of the data and processing in the executors, rather than collecting to the driver? If I understand this correctly, you could use PySpark transformations and aggregations and save directly to JSON, thereby leveraging the executors, and then load that JSON output (likely partitioned) back into Python as a dictionary. Admittedly, this introduces IO overhead, but it should let you get around the OOM heap space errors. Step by step:

from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.getOrCreate()
data = [
    ("BOND-9129450", "90cb"),
    ("BOND-1742850", "d5c3"),
    ("BOND-3211356", "811f"),
    ("BOND-7630290", "d5c3"),
    ("BOND-7175508", "90cb"),
]
df = spark.createDataFrame(data, ["id", "hash_of_cc_pn_li"])

df.groupBy(
    f.col("hash_of_cc_pn_li"),
).agg(
    f.collect_set("id").alias("id")  # use f.collect_list() here if you're not interested in deduplication of BOND-XXXXX values
).write.json("./test.json")

Inspecting the output path:

ls -l ./test.json

-rw-r--r-- 1 jovyan users  0 Jul 27 08:29 part-00000-1fb900a1-c624-4379-a652-8e5b9dee8651-c000.json
-rw-r--r-- 1 jovyan users 50 Jul 27 08:29 part-00039-1fb900a1-c624-4379-a652-8e5b9dee8651-c000.json
-rw-r--r-- 1 jovyan users 65 Jul 27 08:29 part-00043-1fb900a1-c624-4379-a652-8e5b9dee8651-c000.json
-rw-r--r-- 1 jovyan users 65 Jul 27 08:29 part-00159-1fb900a1-c624-4379-a652-8e5b9dee8651-c000.json
-rw-r--r-- 1 jovyan users  0 Jul 27 08:29 _SUCCESS
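
Most of the part files are empty because the groupBy shuffle uses the default spark.sql.shuffle.partitions (200). If that is a nuisance, coalescing before the write is an optional tweak (not part of the original answer):

df.groupBy("hash_of_cc_pn_li").agg(
    f.collect_set("id").alias("id")
).coalesce(1).write.mode("overwrite").json("./test.json")  # one part file instead of ~200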

Loading to Python as dict:

import json
from glob import glob

data = []
for file_name in glob('./test.json/*.json'):
    with open(file_name) as f:
        try:
            data.append(json.load(f))
        except json.JSONDecodeError:  # there is definitely a better way - this is here because some partitions might be empty
            pass
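
Note that Spark writes JSON Lines output (one JSON object per line), so json.load() only works above because each tiny part file happens to hold at most one record. A more general loader (a hedged variant, not part of the original answer) reads each file line by line:

import json
from glob import glob

data = []
for file_name in glob('./test.json/*.json'):
    with open(file_name) as json_file:
        for line in json_file:      # JSON Lines: one record per line
            line = line.strip()
            if line:                # also skips empty partition files
                data.append(json.loads(line))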

Finally

{item['hash_of_cc_pn_li']:item['id'] for item in data}

{'d5c3': ['BOND-7630290', 'BOND-1742850'],
 '811f': ['BOND-3211356'],
 '90cb': ['BOND-9129450', 'BOND-7175508']}
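
And since the original goal was to keep only hashes that map to more than one id, the same filter as the question's filterTheDict helper can be applied to the dict built above (the first line just names the comprehension from the previous step):

result = {item['hash_of_cc_pn_li']: item['id'] for item in data}
duplicate_properties = {k: v for k, v in result.items() if len(v) >= 2}
# with the sample data: {'d5c3': [...], '90cb': [...]} and '811f' is dropped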

I hope this helps! Thank you for the good question!

Napoleon Borntoparty