
I can query all occurrences of a certain base URL within a given Common Crawl index, save them all to a file, and get a specific article (test_article_num) using the code below. However, I have not come across a way to extract the raw HTML for that article from the specific crawl data ('filename' in the output), even though I know the offset and length of the data I want. I feel like there should be a way to do this in Python similar to this, maybe using requests and warcio (perhaps something akin to this), but I'm not sure. Any help is greatly appreciated.

EDIT:

I found exactly what I needed here.

import requests
import pathlib
import json

# Query the CC-MAIN-2022-05 CDX index for every capture of the site
news_website_base = 'hobbsnews.com'
URL = "https://index.commoncrawl.org/CC-MAIN-2022-05-index?url=" + news_website_base + "/*&output=json"
website_output = requests.get(URL)
pathlib.Path('data.json').write_bytes(website_output.content)

# The index returns one JSON record per line; load them all
news_articles = []
test_article_num = 300
for line in open('data.json', 'r'):
    news_articles.append(json.loads(line))
print(news_articles[test_article_num])

# Each record names the WARC file plus the record's byte offset and length
news_URL = news_articles[test_article_num]['url']
news_warc_file = news_articles[test_article_num]['filename']
news_offset = news_articles[test_article_num]['offset']
news_length = news_articles[test_article_num]['length']

Code output:

{'urlkey': 'com,hobbsnews)/2020/03/22/no-new-positive-covid-19-tests-in-lea-in-last-24-hours/{{%20data.link', 'timestamp': '20220122015439', 'url': 'https://www.hobbsnews.com/2020/03/22/no-new-positive-covid-19-tests-in-lea-in-last-24-hours/%7B%7B%20data.link', 'mime': 'text/html', 'mime-detected': 'text/html', 'status': '404', 'digest': 'GY2UDG4G3V3S5TXDL3H7HE6VCSRBD3XR', 'length': '40062', 'offset': '21016412', 'filename': 'crawl-data/CC-MAIN-2022-05/segments/1642320303729.69/crawldiagnostics/CC-MAIN-20220122012907-20220122042907-00614.warc.gz'} https://www.hobbsnews.com/2020/03/22/no-new-positive-covid-19-tests-in-lea-in-last-24-hours/%7B%7B%20data.link crawl-data/CC-MAIN-2022-05/segments/1642320300343.4/crawldiagnostics/CC-MAIN-20220117061125-20220117091125-00631.warc.gz 21016412 40062
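
Note that the record above has status 404 and comes from the crawldiagnostics subset, so its payload is the crawler's capture of an error page rather than the article itself. A minimal filter for successful captures, as a hedged sketch using the field names visible in the output above:

# Keep only captures that returned HTTP 200 and are stored in the main
# 'warc' subset ('crawldiagnostics' holds 404s, redirects, and similar)
ok_articles = [a for a in news_articles
               if a.get('status') == '200' and '/warc/' in a['filename']]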

js16
  • Pulled the solution shared in the [link](https://github.com/commoncrawl/cc-notebooks/blob/main/cc-index-table/bulk-url-lookups-by-table-joins.ipynb) into an answer. – Sebastian Nagel Dec 02 '22 at 11:33

1 Answer


With the WARC URL and the WARC record offset and length, it's simply:

  • download the byte range from offset to offset+length-1
  • pass the downloaded bytes to a WARC parser

Using curl and the warcio CLI:

# Fetch just the record's byte range (offset to offset+length-1)
curl -s -r250975924-$((250975924+6922-1)) \
   https://data.commoncrawl.org/crawl-data/CC-MAIN-2021-10/segments/1614178365186.46/warc/CC-MAIN-20210303012222-20210303042222-00595.warc.gz \
   >warc_temp.warc.gz
# Extract the HTTP payload of the record at offset 0 within the downloaded file
warcio extract --payload warc_temp.warc.gz 0

Or with Python requests and warcio (cf. here):

import io

import requests
import warcio

warc_filename = 'crawl-data/CC-MAIN-2021-10/segments/1614178365186.46/warc/CC-MAIN-20210303012222-20210303042222-00595.warc.gz'
warc_record_offset = 250975924
warc_record_length = 6922

# Fetch only the record's bytes via an HTTP Range request
response = requests.get(f'https://data.commoncrawl.org/{warc_filename}',
                        headers={'Range': f'bytes={warc_record_offset}-{warc_record_offset + warc_record_length - 1}'})

# Parse the downloaded gzipped bytes as a WARC and read the HTML payload
with io.BytesIO(response.content) as stream:
    for record in warcio.ArchiveIterator(stream):
        html = record.content_stream().read()
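
As a sanity check (an addition for illustration, not part of the original answer), you can confirm the server honored the Range header and decode the payload bytes; the UTF-8 fallback is an assumption, since a page may declare a different charset:

# 206 Partial Content means the Range request was honored; a plain 200
# would mean the server returned the entire multi-gigabyte WARC file.
assert response.status_code == 206

# `html` holds raw bytes; decoding as UTF-8 with replacement characters
# is an assumption -- the page may declare another encoding.
print(html.decode('utf-8', errors='replace')[:200])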
Sebastian Nagel
  • Thanks, Sebastian, for detailing the solution. Finding that link yesterday helped me out tremendously. Keep up the great work! – js16 Dec 02 '22 at 17:04
  • Is there reference on how to find `warc_record_offset` and `warc_record_length`? – camel_case Jul 17 '23 at 22:47
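
Regarding the last comment: warc_record_offset and warc_record_length are simply the offset and length fields of the CDX index record fetched in the question. Note that the index returns them as strings, so a minimal sketch (reusing news_articles and test_article_num from the question's code) is:

# The CDX index returns offset/length as strings -- convert before use
record = news_articles[test_article_num]
warc_filename = record['filename']
warc_record_offset = int(record['offset'])
warc_record_length = int(record['length'])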