You can use a regular expression to match the links, then urljoin to resolve them into absolute URLs.
import requests
import re
try:
    from urlparse import urljoin  # Python 2
except ImportError:
    from urllib.parse import urljoin  # Python 3
from bs4 import BeautifulSoup

url = 'https://uk-air.defra.gov.uk/latest/currentlevels'
r = requests.get(url, headers={'User-Agent': 'Not blank'})
soup = BeautifulSoup(r.text, 'html.parser')

# Find every <a> whose href contains "site_id" and resolve it against the page URL.
for elem in soup('a', href=re.compile(r'site_id')):
    print(elem.text)
    print(urljoin(url, elem['href']))
Outputs:
Auchencorth Moss
https://uk-air.defra.gov.uk/networks/site-info?site_id=ACTH
Bush Estate
https://uk-air.defra.gov.uk/networks/site-info?site_id=BUSH
Dumbarton Roadside
https://uk-air.defra.gov.uk/networks/site-info?site_id=DUMB
Edinburgh St Leonards
https://uk-air.defra.gov.uk/networks/site-info?site_id=ED3
Glasgow Great Western Road
https://uk-air.defra.gov.uk/networks/site-info?site_id=GGWR
Glasgow High Street
https://uk-air.defra.gov.uk/networks/site-info?site_id=GHSR
...
If you just want Aberdeen use:
for elem in soup('a', href=re.compile(r'site_id'), string='Aberdeen'):
instead of:
for elem in soup('a', href=re.compile(r'site_id')):
Outputs:
Aberdeen
https://uk-air.defra.gov.uk/networks/site-info?site_id=ABD
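If you'd rather collect all the links instead of printing them, the same pattern works in a dict comprehension. Here is a sketch run against a small inline HTML sample (a hypothetical snippet mimicking the page's relative hrefs, not the live page, so it works offline):

```python
import re
from urllib.parse import urljoin
from bs4 import BeautifulSoup

base = 'https://uk-air.defra.gov.uk/latest/currentlevels'
# Hypothetical sample markup; the real page is fetched with requests as above.
html = '''
<a href="../networks/site-info?site_id=ABD">Aberdeen</a>
<a href="../networks/site-info?site_id=ACTH">Auchencorth Moss</a>
<a href="#top">Back to top</a>
'''
soup = BeautifulSoup(html, 'html.parser')

# Map site name -> absolute URL for every link whose href contains "site_id";
# the "Back to top" link is skipped because its href doesn't match.
links = {elem.text: urljoin(base, elem['href'])
         for elem in soup('a', href=re.compile(r'site_id'))}
print(links['Aberdeen'])
# https://uk-air.defra.gov.uk/networks/site-info?site_id=ABD
```

urljoin handles the `../` in the relative href, so you get the same absolute URLs as in the output above.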