Also, after that, how do I handle the "Load More" button at the end of the page so I can extract all of the links? There are 317 links in total on the page.
from bs4 import BeautifulSoup
import requests

# requests.get() blocks until the response has arrived, so no sleep is
# needed before parsing.
r = requests.get('https://www.tradeindia.com/manufacturers/a3-paper.html')
soup = BeautifulSoup(r.text, 'lxml')

for div in soup.find_all('div', class_='company-name'):
    link = div.find('a')
    if link is not None:
        print(link['href'])
Could someone please help me find the best way to extract all 317 links on the page?
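Here is a sketch of what I have in mind, assuming the "Load More" button just fetches additional result pages via a page-number query parameter. I have not verified that this site actually works that way; the real request (it may be an XHR call to a different endpoint) would need to be confirmed in the browser's network tab. The `?page=N` parameter, `max_pages`, and `delay` values below are all my own assumptions:

```python
import time
import requests
from bs4 import BeautifulSoup

URL = 'https://www.tradeindia.com/manufacturers/a3-paper.html'

def extract_links(html):
    # Parse one page of results and return the company-page hrefs.
    soup = BeautifulSoup(html, 'html.parser')
    return [div.find('a')['href']
            for div in soup.find_all('div', class_='company-name')
            if div.find('a')]

def scrape_all(max_pages=50, delay=2):
    # Hypothetical pagination: assumes "Load More" maps to a ?page=N
    # query parameter, which is NOT confirmed for this site.
    all_links = []
    for page in range(1, max_pages + 1):
        r = requests.get(URL, params={'page': page})
        page_links = extract_links(r.text)
        if not page_links:        # empty page -> no more results
            break
        all_links.extend(page_links)
        time.sleep(delay)         # be polite between requests
    return all_links
```

If the button instead triggers a JavaScript request, the same `extract_links` helper would still work, but the page fetching would have to go through that endpoint (or a browser-automation tool such as Selenium).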