
I am using Python to scrape PDFs for links, and I have a regex that works for the most part.

URL_REGEX = r"""
    (?i)\b
    (?i)\b((?:https?:(?:/{1,3}|[a-z0-9%])|[a-z0-9.\-]+[.](?:com|net|org|edu|
    (?:
    gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro|tel|
        [a-z][\w-]+://
    travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|
        (?:\S+(?::\S*)?@)?
    be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|
        (?:
    cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|
            (?:
    fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|
                [1-9]\d?|1\d\d|2[01]\d|22[0-3]
    hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|
                |25[0-5]|[1-9]\d|\d
    kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk|ml|
            )\.(?:
    mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|
                [1-9]\d?|1\d\d|2[0-4]\d|25[0-5]
    nz|om|pa|pe|pf|pg|ph|
                |[1-9]\d|\d
    pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|sk|
            )\.(?:
    sl|sm|sn|so|sr|ss|st|su|sv|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|
                [1-9]\d?|1\d\d|2[0-4]\d|25[0-5]
    tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vn|vu|wf|ws|ye|yt|za|zm|zw)/
                |[1-9]\d|\d
    [^\s()<>{}\[\]]+[^\s`!()\[\]{};:\'".,<>?«»“”‘’]))"""

But if a URL continues onto the next line, it gets cut off. So I made the small change below in the hope of solving the problem, but now I am getting false positives instead: the first word of the next line gets appended to the URL even when it is not part of the URL.

import re

def extract_urls(text):
    """
    This function will return all the unique URLs found in the `text` argument.
    - First we use the regex to find all matches for URLs
    - Finally we turn the list into a set, so we only end up with unique URLs
      (no duplicates)
    """
    # Join the lines so a URL that wraps onto the next line is rejoined.
    text = text.replace("\n", '')
    return set(re.findall(URL_REGEX, text, re.IGNORECASE))
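Here is a made-up snippet that reproduces the problem (the URL and the surrounding text are placeholders):

# Hypothetical PDF text: the URL ends at the line break,
# but the next line starts with an ordinary word.
pdf_text = "See the link:\nhttps://example.com\nNext we discuss methods."

print(extract_urls(pdf_text))
# {'https://example.comNext'}  <- "Next" from the following line is glued onto the URL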

Any help would be appreciated.

  • Do existing libraries not already do the job correctly? https://stackoverflow.com/questions/34837707/how-to-extract-text-from-a-pdf-file – Dean Taylor Dec 07 '21 at 22:00
  • Have you tried `re.MULTILINE`? Or stripping lines before splitting? – Kaz Dec 07 '21 at 22:16
  • I do use PDFMiner to extract the text. This regex is specifically used to identify URLs accurately. None of the ones on that thread do it correctly. That is why I made my own. – Marshal Miller Dec 08 '21 at 01:18
  • I have not tried re.MULTILINE. I'm not familiar with that, but I can look into it. I've never stripped lines either, so I'll have to investigate that one too. Those sound like possibilities. If you want to share an example, your help is welcome, but at least I have a direction to go in. Thank you. – Marshal Miller Dec 08 '21 at 01:22
  • I tried re.MULTILINE. It was still unable to detect URLs that ran onto the second line. I also tried stripping lines. It had the same issue as text = text.replace("\n", ' '): it grabbed some of the first words from the next line. So, more false positives. – Marshal Miller Dec 08 '21 at 02:22

0 Answers