I am trying to crawl Wikipedia to get some data for text mining. I am using Python's urllib2 and BeautifulSoup. My question is: is there an easy way of stripping the unnecessary tags (like 'a' links or 'span's) from the text I read?

For this scenario:

import urllib2
from BeautifulSoup import *

opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
infile = opener.open("http://en.wikipedia.org/w/index.php?title=data_mining&printable=yes")
pool = BeautifulSoup(infile.read())
res = pool.findAll('div', attrs={'class': 'mw-content-ltr'}) # to get to the content directly
paragraphs = res[0].findAll("p") # get all paragraphs

I get the paragraphs with lots of reference tags like:

paragraphs[0] =

<p><b>Data mining</b> (the analysis step of the <b>knowledge discovery in databases</b> process,<sup id="cite_ref-Fayyad_0-0" class="reference"><a href="#cite_note-Fayyad-0"><span>[</span>1<span>]</span></a></sup> or KDD), a relatively young and interdisciplinary field of <a href="/wiki/Computer_science" title="Computer science">computer science</a><sup id="cite_ref-acm_1-0" class="reference"><a href="#cite_note-acm-1"><span>[</span>2<span>]</span></a></sup><sup id="cite_ref-brittanica_2-0" class="reference"><a href="#cite_note-brittanica-2"><span>[</span>3<span>]</span></a></sup> is the process of discovering new patterns from large <a href="/wiki/Data_set" title="Data set">data sets</a> involving methods at the intersection of <a href="/wiki/Artificial_intelligence" title="Artificial intelligence">artificial intelligence</a>, <a href="/wiki/Machine_learning" title="Machine learning">machine learning</a>, <a href="/wiki/Statistics" title="Statistics">statistics</a> and <a href="/wiki/Database_system" title="Database system">database systems</a>.<sup id="cite_ref-acm_1-1" class="reference"><a href="#cite_note-acm-1"><span>[</span>2<span>]</span></a></sup> The goal of data mining is to extract knowledge from a data set in a human-understandable structure<sup id="cite_ref-acm_1-2" class="reference"><a href="#cite_note-acm-1"><span>[</span>2<span>]</span></a></sup> and involves database and <a href="/wiki/Data_management" title="Data management">data management</a>, <a href="/wiki/Data_Pre-processing" title="Data Pre-processing">data preprocessing</a>, <a href="/wiki/Statistical_model" title="Statistical model">model</a> and <a href="/wiki/Statistical_inference" title="Statistical inference">inference</a> considerations, interestingness metrics, <a href="/wiki/Computational_complexity_theory" title="Computational complexity theory">complexity</a> considerations, post-processing of found structure, <a href="/wiki/Data_visualization" title="Data visualization">visualization</a> and <a href="/wiki/Online_algorithm" title="Online algorithm">online updating</a>.<sup id="cite_ref-acm_1-3" class="reference"><a href="#cite_note-acm-1"><span>[</span>2<span>]</span></a></sup></p>

Any ideas on how to remove them and get just the pure text?

– pacodelumberg

3 Answers


This is how you could do it with lxml (and the lovely requests):

import requests
import lxml.html as lh
from BeautifulSoup import UnicodeDammit

URL = "http://en.wikipedia.org/w/index.php?title=data_mining&printable=yes"
HEADERS = {'User-agent': 'Mozilla/5.0'}

def lhget(*args, **kwargs):
    # fetch the page and parse it into an lxml tree; UnicodeDammit guesses
    # the encoding, since the page doesn't declare one in its metadata
    r = requests.get(*args, **kwargs)
    html = UnicodeDammit(r.content).unicode
    tree = lh.fromstring(html)
    return tree

def remove(el):
    # detach an element from its parent (i.e. delete it from the tree)
    el.getparent().remove(el)

tree = lhget(URL, headers=HEADERS)

# first <p> of the article body
el = tree.xpath("//div[@class='mw-content-ltr']/p")[0]

# drop the [1], [2], ... citation superscripts
for ref in el.xpath("//sup[@class='reference']"):
    remove(ref)

print lh.tostring(el, pretty_print=True)

print el.text_content()
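
A small variation, not part of the original answer: to strip the references from every paragraph rather than only the first, loop over all of them with the same helpers:

for p in tree.xpath("//div[@class='mw-content-ltr']/p"):
    for ref in p.xpath(".//sup[@class='reference']"):
        remove(ref)
    print p.text_content()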
– Acorn

  • Thanks for the answer. Any idea how to remove all the tags after the references using xpath's remove function? Basically, after getting the whole content with el=tree.xpath("//div[@class='mw-content-ltr']"), how can we remove the rest of the tags that come after a given tag? – pacodelumberg Nov 09 '11 at 10:56
  • Updated to remove references. – Acorn Nov 09 '11 at 19:08
  • `requests` and `BeautifulSoup` are completely unnecessary here. `lxml.html.parse()` accepts urls. – jfs Nov 09 '11 at 21:50
  • `requests` was used for setting the user-agent string as per OP's snippet. `BeautifulSoup` was used for detecting the document's encoding because it isn't specified in the document's metadata and therefore `lxml` doesn't know what to do. – Acorn Nov 09 '11 at 22:22
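
As jfs points out in the comments above, lxml can also consume the URL (or an open file object) directly; here is a minimal sketch of that approach, using urllib2 only to set the User-Agent, with the caveat Acorn mentions that lxml may misdetect the encoding when the page doesn't declare it:

import urllib2
import lxml.html as lh

req = urllib2.Request(
    "http://en.wikipedia.org/w/index.php?title=data_mining&printable=yes",
    headers={'User-agent': 'Mozilla/5.0'})
tree = lh.parse(urllib2.urlopen(req))  # parse straight from the file-like response
el = tree.xpath("//div[@class='mw-content-ltr']/p")[0]
for ref in el.xpath(".//sup[@class='reference']"):
    ref.getparent().remove(ref)       # drop the citation superscripts
print el.text_content()
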
To get just the plain text from the paragraphs you already have, print their text nodes:

for p in paragraphs:
    print ''.join(p.findAll(text=True))

Additionally you could use api.php instead of index.php:

#!/usr/bin/env python
import sys
import time
import urllib, urllib2
import xml.etree.cElementTree as etree

# prepare request
maxattempts = 5 # how many times to try the request before giving up
maxlag = 5 # seconds http://www.mediawiki.org/wiki/Manual:Maxlag_parameter
params = dict(action="query", format="xml", maxlag=maxlag,
              prop="revisions", rvprop="content", rvsection=0,
              titles="data_mining")
request = urllib2.Request(
    "http://en.wikipedia.org/w/api.php?" + urllib.urlencode(params), 
    headers={"User-Agent": "WikiDownloader/1.2",
             "Referer": "http://stackoverflow.com/q/8044814"})
# make request
for _ in range(maxattempts):
    response = urllib2.urlopen(request)
    if response.headers.get('MediaWiki-API-Error') == 'maxlag':
        t = response.headers.get('Retry-After', 5)
        print "retrying in %s seconds" % (t,)
        time.sleep(float(t))
    else:
        break # ready to read
else: # exhausted all attempts
    sys.exit(1)

# download & parse xml 
tree = etree.parse(response)

# find rev data 
rev_data = tree.findtext('.//rev')
if not rev_data:
    print 'MediaWiki-API-Error:', response.headers.get('MediaWiki-API-Error')
    tree.write(sys.stdout)
    print
    sys.exit(1)

print(rev_data)

Output

{{Distinguish|analytics|information extraction|data analysis}}

'''Data mining''' (the analysis step of the '''knowledge discovery in databases..
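
Note that api.php returns raw wiki markup rather than HTML, so the citation superscripts are gone but wiki syntax ('''...''', {{...}}, [[...]]) remains. One possible follow-up step, not part of the original answer, is to flatten that markup with a wikitext parser such as mwparserfromhell:

import mwparserfromhell  # third-party: pip install mwparserfromhell

wikicode = mwparserfromhell.parse(rev_data)
print wikicode.strip_code()  # templates, links and quote markup stripped out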
– jfs


These work on Beautiful Soup tag nodes: the parentNode passed in is modified in place so that the relevant tags are removed, and the removed tags are also returned to the caller as lists.

# Static methods of a helper class; they assume Beautiful Soup 4 and need
# `from bs4 import element` in scope for the Comment check.
@staticmethod
def seperateCommentTags(parentNode):
    # collect every HTML comment under parentNode, then detach them
    commentTags = []
    for descendant in parentNode.descendants:
        if isinstance(descendant, element.Comment):
            commentTags.append(descendant)
    for commentTag in commentTags:
        commentTag.extract()
    return commentTags

@staticmethod
def seperateScriptTags(parentNode):
    # detach every <script> element under parentNode and return them
    scripttags = parentNode.find_all('script')
    scripts = []
    for scripttag in scripttags:
        script = scripttag.extract()
        if script is not None:
            scripts.append(script)
    return scripts
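
A hypothetical usage sketch (the wrapper class name SoupCleaner and the input file page.html are assumptions, not part of the answer):

from bs4 import BeautifulSoup

soup = BeautifulSoup(open("page.html").read())
comments = SoupCleaner.seperateCommentTags(soup)  # HTML comments, now detached
scripts = SoupCleaner.seperateScriptTags(soup)    # <script> elements, now detached
print soup.get_text()                             # what's left, as plain text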
– andrew pate