I am trying to write PySpark code for the SQL query below:
CREATE TABLE table1 AS
SELECT a.ip_address, a.ip_number, b.ip_start_int, b.ip_end_int, b.post_code_id, b.city, b.region_name, b.two_letter_country
FROM nk_ip_address_check a
JOIN ip_additional_pulse b
  ON a.ip_number BETWEEN b.ip_start_int AND b.ip_end_int
The query joins the two tables using a BETWEEN condition inside the ON clause. I have written a UDF that does the same thing, but it is very slow. Is there any way to express this query in PySpark that gives better performance?
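From what I have read, the range condition could in principle be expressed directly as a non-equi join instead of a UDF. Below is a minimal sketch of what I have in mind, assuming both tables are registered with the SparkSession and that ip_additional_pulse is small enough to broadcast (without the broadcast hint, Spark would have to fall back to a much slower nested-loop style join for a non-equi condition):

from pyspark.sql import functions as F

# Assumed to be registered tables; load them as aliased DataFrames.
a = spark.table("nk_ip_address_check").alias("a")
b = spark.table("ip_additional_pulse").alias("b")

# Spark accepts an arbitrary boolean column as the join condition,
# so the SQL BETWEEN translates directly into Column.between().
joined = a.join(
    F.broadcast(b),
    on=F.col("a.ip_number").between(F.col("b.ip_start_int"), F.col("b.ip_end_int")),
    how="inner",
).select(
    "a.ip_address", "a.ip_number",
    "b.ip_start_int", "b.ip_end_int",
    "b.post_code_id", "b.city", "b.region_name", "b.two_letter_country",
)

joined.write.saveAsTable("table1")

I am not sure whether this is the recommended approach, which is why I am asking.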
Below is the code I am currently using:
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

def ip_mapping(ip_int):
    # Look up the row whose [ip_start_int, ip_end_int] range contains ip_int
    ip_qry = "select country_code, region_code, city_code, postal_code from de_pulse_ip_pqt where ip_start_int < {} and ip_end_int > {}".format(ip_int, ip_int)
    result = spark.sql(ip_qry)
    country_code = result.rdd.map(lambda x: x['country_code']).first()
    return country_code

ip_mapped = udf(ip_mapping, IntegerType())
df_final = df.withColumn("country_code", ip_mapped("ip_int"))
This is very inefficient. Moreover, if I also need region_code, I have to change the return value of the ip_mapping function and call the UDF a second time:
df_final = df.withColumn("region_code", ip_mapped("ip_int"))
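Ideally I would like something like the sketch below, where a single range join brings back all the lookup columns at once instead of one UDF call per column (again, this assumes de_pulse_ip_pqt is registered as a table and is small enough to broadcast; the left join is my assumption for keeping rows whose IP matches no range):

from pyspark.sql import functions as F

# One range join returns every lookup column in a single pass.
lookup = spark.table("de_pulse_ip_pqt")

df_final = df.join(
    F.broadcast(lookup),
    on=df["ip_int"].between(lookup["ip_start_int"], lookup["ip_end_int"]),
    how="left",
).select(
    df["*"],
    lookup["country_code"], lookup["region_code"],
    lookup["city_code"], lookup["postal_code"],
)

Is this the right direction, or is there a better pattern for this kind of range lookup?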