
I am using Beautiful Soup with the requests package in Python 3 for web scraping. This is my code:

import csv
from datetime import datetime
import requests
from bs4 import BeautifulSoup


quote_page = ['http://10.69.161.179:8080'];

data = []

page = requests.get(quote_page)

soup = BeautifulSoup(page.content,'html.parser')

name_box = soup.find('div', attrs={'class':'caption span10'})

name = name_box.text.strip()  # strip() removes leading and trailing whitespace

print(name);

data.append(name)

    

with open('sample.csv', 'a') as csv_file:  
    writer = csv.writer(csv_file)
    writer.writerow([name])

print ("Success");

When I execute the above code, I get the following error.

Traceback (most recent call last):
  File "first_try.py", line 21, in <module>
    page = requests.get(quote_page);
  File "C:\Python\lib\site-packages\requests-2.13.0-py3.6.egg\requests\api.py", line 70, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Python\lib\site-packages\requests-2.13.0-py3.6.egg\requests\api.py", line 56, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Python\lib\site-packages\requests-2.13.0-py3.6.egg\requests\sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python\lib\site-packages\requests-2.13.0-py3.6.egg\requests\sessions.py", line 603, in send
    adapter = self.get_adapter(url=request.url)
  File "C:\Python\lib\site-packages\requests-2.13.0-py3.6.egg\requests\sessions.py", line 685, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for '['http://10.69.161.179:8080/#/main/dashboard/metrics']'

Can anyone help me with this? :(

Vicky

1 Answer


requests.get() only accepts a URL as a string, not a list. You need to take the string out of the list [], for example by looping over it:

quote_page = ['http://10.69.161.179:8080']
for url in quote_page:
    page = requests.get(url)
    # ..... rest of your scraping code

By the way, although the semicolon is harmless in the following statement, you should avoid it unless you need it for some reason:

quote_page = ['http://10.69.161.179:8080'];
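
For completeness, here is roughly how the whole script could look once the loop is in place. This is only a sketch based on the code in your question (same CSS class, same CSV file); the None check is an extra guard I added in case the div is missing on some page.

import csv
import requests
from bs4 import BeautifulSoup

quote_page = ['http://10.69.161.179:8080']   # list of pages to scrape (from the question)
data = []

for url in quote_page:
    page = requests.get(url)                              # each request gets a plain string URL
    soup = BeautifulSoup(page.content, 'html.parser')

    name_box = soup.find('div', attrs={'class': 'caption span10'})
    if name_box is None:                                  # skip pages that lack the element
        continue

    name = name_box.text.strip()                          # drop leading/trailing whitespace
    print(name)
    data.append(name)

with open('sample.csv', 'a', newline='') as csv_file:     # newline='' avoids blank rows on Windows
    writer = csv.writer(csv_file)
    for name in data:
        writer.writerow([name])

print("Success")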
mootmoot