Your page is blocking the user agent "python". The user agent is basically "who is making the request". Install the Python module fake-useragent and add a header to the request pretending it comes from another client, like Google Chrome, Mozilla Firefox, etc. If you want a specific user agent, I recommend you look at fake-useragent.
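You can install it from PyPI:

pip install fake-useragent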
I don't know off-hand how you add a header with urllib (probably with an extra call), but here is a simple example using the requests module:
import requests
from fake_useragent import UserAgent

ua = UserAgent()
# Send a random real-browser User-Agent instead of the default "python-requests"
header = {
    "User-Agent": ua.random
}
r = requests.get('https://www.zacks.com/stock/quote/MA', headers=header)
r.text  # your HTML code
After this you can use Beautiful Soup with r.text like you did:

from bs4 import BeautifulSoup

soup = BeautifulSoup(r.text, "lxml")
soup
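For example, here is a minimal self-contained sketch that fetches the page and prints its title, just to check the request got through (the title lookup is only an illustration, not something from your code):

import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent

ua = UserAgent()
r = requests.get('https://www.zacks.com/stock/quote/MA',
                 headers={"User-Agent": ua.random})
soup = BeautifulSoup(r.text, "lxml")
# If the block is gone, this prints the real page title instead of an error page
print(soup.title.get_text())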
EDIT:
Looking a bit more: if you want to do it with urllib, you can do this:
import urllib.request
from fake_useragent import UserAgent

ua = UserAgent()
# In Python 3 the Request class lives in urllib.request, not urllib itself
q = urllib.request.Request('https://www.zacks.com/stock/quote/MA')
q.add_header('User-Agent', ua.random)
a = urllib.request.urlopen(q).read()  # raw HTML as bytes
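Note that a holds the raw HTML as bytes; Beautiful Soup accepts bytes directly, so you can parse it exactly the same way:

from bs4 import BeautifulSoup

soup = BeautifulSoup(a, "lxml")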