
I am trying to grab and parse multiple URLs using urllib and BeautifulSoup, but I get the following error:

AttributeError: 'list' object has no attribute 'timeout'

From what I understand, the parser is telling me that I submitted a list and it is looking for a single URL. How can I process multiple URLs?

Here is my code:

from bs4 import BeautifulSoup
from bs4.element import Comment
import urllib.request


def tag_visible(element):
    if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
        return False
    if isinstance(element, Comment):
        return False
    return True

addresses = ["https://en.wikipedia.org", "https://stackoverflow.com", "https://techcrunch.com"]

def text_from_html(body):
    soup = BeautifulSoup(body, 'html.parser')
    texts = soup.findAll(text=True)
    visible_texts = filter(tag_visible, texts)  
    return u" ".join(t.strip() for t in visible_texts)

html = urllib.request.urlopen(addresses).read()
print(text_from_html(html))
alexrodri
  • Can you provide the full error? I don't see `timeout`, so I'm not sure which line is causing the problem. – mindfolded Oct 19 '18 at 01:26
  •
    You can't `urlopen` a list of addresses, see here: https://docs.python.org/3/library/urllib.request.html – Rocky Li Oct 19 '18 at 01:28
  • @mindfolded here it is: Traceback (most recent call last): File "test3.py", line 21, in html = urllib.request.urlopen(addresses).read() File "C:\Users\Pavel\AppData\Local\Programs\Python\Python37\lib\urllib\request.py", line 222, in urlopen return opener.open(url, data, timeout) File "C:\Users\Pavel\AppData\Local\Programs\Python\Python37\lib\urllib\request.py", line 516, in open req.timeout = timeout AttributeError: 'list' object has no attribute 'timeout' – alexrodri Oct 19 '18 at 01:30
  • thanks @RockyLi, I'll take a look at the docs – alexrodri Oct 19 '18 at 01:31

1 Answer


Your error says it plainly: 'list' object has no attribute 'timeout'.

That's because urlopen doesn't accept a list, only a single URL. You should loop over the addresses instead, like this:

my_texts = []
for each in addresses:
    html = urllib.request.urlopen(each).read()
    print(text_from_html(html))  # or collect the results:
    my_texts.append(text_from_html(html))
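Note that if any one URL fails, the whole loop dies on the exception. A minimal sketch of a more defensive version, assuming you just want to skip URLs that fail and keep going (the skip-and-continue behaviour and the `fetch_all` name are my own, not part of the original answer):

```python
import urllib.request
import urllib.error

def fetch_all(urls):
    """Fetch each URL in turn, skipping any that raise a URLError."""
    pages = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                pages[url] = resp.read()
        except urllib.error.URLError:
            # network/HTTP failure on this URL: skip it and keep going
            continue
    return pages
```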

I would also suggest using a better module for HTTP than urllib: use requests instead (import requests).
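The same loop with requests might look like the sketch below. Note that requests is a third-party package (`pip install requests`), and the error handling and the `fetch_with_requests` name are my own additions for illustration:

```python
import requests

def fetch_with_requests(urls):
    """Return {url: html} for every URL that answers successfully."""
    pages = {}
    for url in urls:
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()  # raise on 4xx/5xx status codes
            pages[url] = resp.text
        except requests.RequestException:
            # connection error, timeout, or bad status: skip this URL
            continue
    return pages
```

You could then feed each page to `text_from_html` exactly as before.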

Rocky Li