
I have a Spark dataframe with a column (assigned_products) of type string that contains values such as the following:

"POWER BI PRO+Power BI (free)+AUDIO CONFERENCING+OFFICE 365 ENTERPRISE E5 WITHOUT AUDIO CONFERENCING"

I would like to count the occurrences of + in the string and return that value in a new column.

I tried the following, but it keeps throwing errors.

from pyspark.sql.functions import col

# Fails: a Column has no Python str.count() method, so this raises
# TypeError: 'Column' object is not callable
DF.withColumn('Number_Products_Assigned', col("assigned_products").count("+"))

I'm running my code in Azure Databricks on a cluster running Apache Spark 2.3.1.


4 Answers


Here's a non-UDF solution: split the string on the character you're counting, and the value you want is the length of the resulting array minus 1:

from pyspark.sql.functions import col, size, split
DF.withColumn('Number_Products_Assigned', size(split(col("assigned_products"), r"\+")) - 1)

You have to escape the + because it's a special regex character.

+--------------------+------------------------+
|   assigned_products|Number_Products_Assigned|
+--------------------+------------------------+
|POWER BI PRO+Powe...|                       3|
+--------------------+------------------------+
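
As an aside, a character class works too if you'd rather avoid the backslash escape, since + has no special meaning inside brackets; this is a minimal variant of the same approach:

from pyspark.sql.functions import col, size, split

# "[+]" matches a literal + without needing a backslash escape
DF.withColumn('Number_Products_Assigned', size(split(col("assigned_products"), "[+]")) - 1)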
If you prefer a UDF:

from pyspark.sql.functions import col, udf

@udf(returnType='int')  # return type given as a DDL type string
def cnt(s):
    # Plain Python string count of the '+' characters
    return s.count('+')

DF.withColumn('Number_Products_Assigned', cnt(col("assigned_products")))
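
One caveat: if assigned_products can contain nulls, s.count('+') will raise a TypeError inside the Python worker. A small guard fixes that (a sketch, assuming null rows should stay null):

@udf(returnType='int')
def cnt(s):
    # Map null input to null output instead of raising
    return s.count('+') if s is not None else None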

replace substitutes each occurrence of the sub-string with an empty string (Spark's replace(str, search[, replace]) defaults the replacement to ''). So we can count the occurrences by comparing the string lengths before and after the replacement, as follows:

Using SparkSQL:

SELECT length(x) - length(replace(x, '+')) AS substring_count
FROM (SELECT 'abc+def+ghi++aaa' AS x) -- Sample data

Output:

substring_count
---------------
4

Using PySpark functions:

import pyspark.sql.functions as F

df1 = spark.sql("select 'abc+def+ghi++aaa' as x")  # Sample data
df1.withColumn('substring_count',
               F.length(F.col('x'))
               - F.length(F.regexp_replace(F.col('x'), r'\+', ''))
              ).show()

Output:

+----------------+---------------+
|               x|substring_count|
+----------------+---------------+
|abc+def+ghi++aaa|              4|
+----------------+---------------+
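
Note that this length-difference trick counts single characters directly; for a multi-character substring, divide the difference by the substring's length. A sketch, using a hypothetical two-character substring '++':

import pyspark.sql.functions as F

sub = '++'  # hypothetical multi-character substring to count
df1.withColumn('substring_count',
               (F.length(F.col('x'))
                - F.length(F.regexp_replace(F.col('x'), r'\+\+', '')))
               / len(sub)  # note: division returns a double
              ).show()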

Spark 3.4+ has regexp_count

F.expr(r"regexp_count(col_name, '\\+')")

Full example:

from pyspark.sql import functions as F
df = spark.createDataFrame(
    [("POWER BI PRO+Power BI (free)+AUDIO CONFERENCING+OFFICE 365 ENTERPRISE E5 WITHOUT AUDIO CONFERENCING",)],
    ["assigned_products"])

df = df.withColumn('Number_Products_Assigned', F.expr(r"regexp_count(assigned_products, '\\+')"))

df.show()
# +--------------------+------------------------+
# |   assigned_products|Number_Products_Assigned|
# +--------------------+------------------------+
# |POWER BI PRO+Powe...|                       3|
# +--------------------+------------------------+
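
If I'm not mistaken, Spark 3.5+ also exposes this as a native DataFrame function, so the expr string can be dropped (regexp_count expects the pattern as a Column, hence F.lit):

# Spark 3.5+ only; the pattern is passed as a Column via F.lit
df = df.withColumn('Number_Products_Assigned',
                   F.regexp_count('assigned_products', F.lit(r'\+')))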