
Using an OCR tool, I extracted text from screenshots (about 1-5 sentences each). However, when manually verifying the extracted text, I noticed several errors that occur from time to time.

Given the text "Hello there ! I really like Spark ❤️!", I noticed that:

1) Letters like "I", "!", and "l" get replaced by "|".

2) Emojis are not extracted correctly; they are replaced by other characters or left out.

3) Blank spaces are removed from time to time.

As a result, I might end up with a string like this: "Hello there 7l | real|y like Spark!"

Since I am trying to match these strings against a dataset containing the correct text (in this case "Hello there ! I really like Spark ❤️!"), I am looking for an efficient way to match the strings in Spark.

Can anyone suggest an efficient algorithm for Spark that allows me to compare the extracted texts (~100,000) against my dataset (~100 million)?
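For a sense of scale, a brute-force pairwise comparison of those two collections is hopeless regardless of the string metric used, which is why some form of blocking or approximate matching is needed. Quick arithmetic (plain Python, illustration only):

```python
# Why brute force is out: comparing every extracted text against every
# reference string pairwise is on the order of 10^13 comparisons.
queries = 100_000         # ~100,000 OCR-extracted texts
references = 100_000_000  # ~100 million reference strings
pairs = queries * references
print(f"{pairs:.0e} candidate pairs")  # 1e+13 candidate pairs
```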

mrtnsd
    This does not appear to be a Spark question but rather an OCR issue, as you say in (2) some characters are not correctly extracted. So any matching will obviously fail. Would suggest re-writing the question as how to improve OCR matching by showing examples of your screenshots, OCR software and configuration. – danny May 12 '17 at 14:06

1 Answer


I wouldn't use Spark in the first place, but if you are really committed to this particular stack, you can combine a number of ML transformers to get the best matches. You'll need a Tokenizer (or split):

import org.apache.spark.ml.feature.RegexTokenizer

val tokenizer = new RegexTokenizer()
  .setPattern("")
  .setInputCol("text")
  .setMinTokenLength(1)
  .setOutputCol("tokens")

an NGram (for example, a 3-gram):

import org.apache.spark.ml.feature.NGram

val ngram = new NGram()
  .setN(3)
  .setInputCol("tokens")
  .setOutputCol("ngrams")
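Together, these first two stages turn each string into overlapping character 3-grams. A plain-Python sketch of the idea (an approximation of Spark's behavior, not a replacement for it: RegexTokenizer lowercases by default and splits on the empty pattern into single characters, and NGram joins each run of n tokens with spaces):

```python
# Plain-Python approximation of RegexTokenizer(pattern="") + NGram(n=3).
def tokenize(text):
    # Lowercased single-character tokens.
    return list(text.lower())

def ngrams(tokens, n=3):
    # Each n-gram is n consecutive tokens joined by spaces.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams(tokenize("Spark")))  # ['s p a', 'p a r', 'a r k']
```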

Vectorizer (for example CountVectorizer or HashingTF):

import org.apache.spark.ml.feature.HashingTF

val vectorizer = new HashingTF()
  .setInputCol("ngrams")
  .setOutputCol("vectors")
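HashingTF uses the hashing trick: each n-gram is hashed to a fixed-size index and counted, so no vocabulary has to be built. A rough sketch (Spark uses MurmurHash3 with 2^18 features by default; Python's built-in hash is used here only for illustration, so the indices will not match Spark's):

```python
from collections import Counter

def hashing_tf(terms, num_features=262144):
    # Map each term to a bucket index and count occurrences; the result
    # is a sparse vector represented as {index: count}.
    return Counter(hash(t) % num_features for t in terms)

vec = hashing_tf(["s p a", "p a r", "a r k", "a r k"])
print(sum(vec.values()))  # 4 term occurrences in total
```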

and LSH:

import org.apache.spark.ml.feature.{MinHashLSH, MinHashLSHModel}

// Increase numHashTables in practice.
val lsh = new MinHashLSH()
  .setInputCol("vectors")
  .setOutputCol("lsh")
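The idea behind MinHashLSH: under a random hash function, the probability that two sets share the same minimum equals their Jaccard similarity, so averaging over many hash functions estimates it cheaply. A plain-Python sketch (Spark's actual hash family differs; this is illustration only):

```python
import random

def minhash_signature(items, seeds):
    # One minimum per hash function; hash((seed, x)) acts as a cheap
    # family of random hash functions indexed by seed.
    return [min(hash((seed, x)) for x in items) for seed in seeds]

def estimated_jaccard(a, b, num_hashes=200, rng_seed=0):
    rng = random.Random(rng_seed)
    seeds = [rng.random() for _ in range(num_hashes)]
    sig_a = minhash_signature(a, seeds)
    sig_b = minhash_signature(b, seeds)
    # Fraction of matching minima estimates the Jaccard similarity.
    return sum(x == y for x, y in zip(sig_a, sig_b)) / num_hashes

a = {"s p a", "p a r", "a r k"}
b = {"s p a", "p a r", "a r c"}
print(estimated_jaccard(a, b))  # true Jaccard is 2/4 = 0.5
```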

Combine everything with a Pipeline:

import org.apache.spark.ml.Pipeline

val pipeline = new Pipeline().setStages(Array(tokenizer, ngram, vectorizer, lsh))

Fit on example data:

import spark.implicits._  // needed for toDF on local collections

val query = Seq("Hello there 7l | real|y like Spark!").toDF("text")
val db = Seq(
  "Hello there ! I really like Spark ❤️!", 
  "Can anyone suggest an efficient algorithm"
).toDF("text")

val model = pipeline.fit(db)

Transform both:

val dbHashed = model.transform(db)
val queryHashed = model.transform(query)

and join:

model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxSimilarityJoin(dbHashed, queryHashed, 0.75).show
+--------------------+--------------------+------------------+                  
|            datasetA|            datasetB|           distCol|
+--------------------+--------------------+------------------+
|[Hello there ! ...|[Hello there 7l |...|0.5106382978723405|
+--------------------+--------------------+------------------+
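For context, distCol is the Jaccard distance (1 − Jaccard similarity) between the sets of nonzero vector indices, i.e. (hash collisions aside) between the distinct character 3-grams of the two texts. A plain-Python sanity check on the example strings (the value may differ slightly from Spark's 0.5106… because of hashing):

```python
# Jaccard distance over distinct character 3-grams, mirroring what the
# approxSimilarityJoin above compares.
def char_3gram_set(text):
    chars = list(text.lower())
    return {" ".join(chars[i:i + 3]) for i in range(len(chars) - 2)}

a = char_3gram_set("Hello there ! I really like Spark ❤️!")
b = char_3gram_set("Hello there 7l | real|y like Spark!")
dist = 1 - len(a & b) / len(a | b)
print(dist)  # under the 0.75 join threshold, so the pair is returned
```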

The same approach can be used in PySpark:

from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, NGram, HashingTF, MinHashLSH

query = spark.createDataFrame(
    ["Hello there 7l | real|y like Spark!"], "string"
).toDF("text")

db = spark.createDataFrame([
    "Hello there ! I really like Spark ❤️!", 
    "Can anyone suggest an efficient algorithm"
], "string").toDF("text")


model = Pipeline(stages=[
    RegexTokenizer(
        pattern="", inputCol="text", outputCol="tokens", minTokenLength=1
    ),
    NGram(n=3, inputCol="tokens", outputCol="ngrams"),
    HashingTF(inputCol="ngrams", outputCol="vectors"),
    MinHashLSH(inputCol="vectors", outputCol="lsh")
]).fit(db)

db_hashed = model.transform(db)
query_hashed = model.transform(query)

model.stages[-1].approxSimilarityJoin(db_hashed, query_hashed, 0.75).show()
# +--------------------+--------------------+------------------+
# |            datasetA|            datasetB|           distCol|
# +--------------------+--------------------+------------------+
# |[Hello there ! ...|[Hello there 7l |...|0.5106382978723405|
# +--------------------+--------------------+------------------+


Alper t. Turker
    How fast is this? I have two datasets with 10 million and 70 million rows. I have to compare strings in them. How long would it take? And as mentioned in this answer, what would you do if not spark? – Ravi Ranjan Jan 11 '18 at 05:08
  • @RaviRanjan Fast compare to what? – Alper t. Turker Jan 21 '18 at 15:23
  • I am working to calculate the Levenshtein distance between tables of 10 million and 70 million rows. That would of course take a really long time. I had two questions: how fast is the algorithm above, and what would you do if not Spark? – Ravi Ranjan Jan 22 '18 at 05:54
  • Seems numHashTables=5 needs to be set explicitly for the python version – Роман Коптев Jun 10 '19 at 15:41
  • @hi-zir This is still slow for the data size that we are working on in PySpark. Any chance would you be aware of a faster approach ? – stormfield Jul 01 '19 at 10:51
  • So the size of the ngram has impact. How to decide which value? Also text written in reverse order not taken into account? – thebluephantom Jul 24 '19 at 15:36
  • @Alpert.Turker Can you please look into my question I'm able to run this small set like 100 records but when I do it for like 10K records I'm running into this error https://stackoverflow.com/questions/73968030/failed-to-execute-user-defined-functionlshmodellambda – Chris_007 Oct 06 '22 at 02:18