I'm trying to scrape a local site with BeautifulSoup in JupyterLab. The site has only one page, but that page contains a large amount of content. When I run this code:
import requests
from bs4 import BeautifulSoup

login_url = 'http://192.168.1.18/index.php?go=login'
login_success = 'http://192.168.1.18/cashier'

# Form fields for the login POST
payload = {
    'is_submitted': 1,
    'username': 'admin',
    'password': 'admin',
    'submit': 'Submit',
}
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.64',
}

# Log in, then fetch the protected page with the same session
s = requests.Session()
r = s.post(login_url, data=payload)
soup = BeautifulSoup(r.content, 'html.parser')

req = s.get(login_success, headers=headers)
soups = BeautifulSoup(req.content, 'html.parser')
print(soups.prettify())
it throws this error:
IOPub data rate exceeded.
The Jupyter server will temporarily stop sending output to the client
in order to avoid crashing it.
To change this limit, set the config variable
--ServerApp.iopub_data_rate_limit.

Current values:
ServerApp.iopub_data_rate_limit=1000000.0 (bytes/sec)
ServerApp.rate_limit_window=3.0 (secs)
I already tried the fix from IOPub data rate exceeded in Jupyter notebook (when viewing image); you can check that question for more details.
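Specifically, I tried raising the limit the error message points at, adapted for the ServerApp name my error uses (the linked answer uses the older NotebookApp name). This is roughly what I put in my Jupyter Server config file, the one that jupyter server --generate-config creates; the exact value is arbitrary, just much larger than the default:

# ~/.jupyter/jupyter_server_config.py
c = get_config()  # provided by Jupyter when it loads this file

# Raise the IOPub output limit well above the default 1,000,000 bytes/sec
# reported in the error (the value is an arbitrary large number, not one
# suggested by the error message itself)
c.ServerApp.iopub_data_rate_limit = 1.0e10

As far as I understand, the same thing can be passed on the command line, e.g. jupyter lab --ServerApp.iopub_data_rate_limit=1.0e10.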
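Since the page really is that large, I'm also wondering whether I should avoid sending the whole document through IOPub in the first place. Something like this is what I have in mind (a minimal sketch reusing the soups object from above; cashier.html is just an example file name):

# Write the prettified markup to a file instead of printing it,
# so the full document never travels over the IOPub channel
with open('cashier.html', 'w', encoding='utf-8') as f:
    f.write(soups.prettify())

# Or only inspect a small slice in the notebook output
print(soups.prettify()[:2000])

Is writing to a file (or slicing the output) the better approach here, or is raising the IOPub limit the intended fix?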