The site requires JavaScript, which doesn't run when you fetch the page with a plain HTTP request and parse it with BeautifulSoup. A workaround has been suggested here: use Selenium to open the page in an actual browser (so the JavaScript actually executes), then hand the rendered HTML to BeautifulSoup for parsing.
Something like this should work:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Point Selenium at your local ChromeDriver executable
service = Service("../chromedriver.exe")
driver = webdriver.Chrome(service=service)

def parse():
    url = 'https://www.pinterest.ie/'
    driver.get(url)                     # open the page in a real browser so the JavaScript runs
    html = driver.page_source           # the fully rendered HTML
    soup = BeautifulSoup(html, 'lxml')  # hand it to BeautifulSoup
    print(soup.find_all('a'))

parse()
driver.quit()
You will, of course, need some idea of how to use Selenium. The official docs should help.
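One Selenium feature worth knowing about in this case is explicit waits: because Pinterest fills the page in with JavaScript after it loads, the links may not be there the instant driver.get() returns. Here is a minimal sketch building on the snippet above; the 10-second timeout and waiting for an a tag are just placeholder choices, not anything Pinterest specifically requires.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def parse_with_wait():
    driver.get('https://www.pinterest.ie/')
    # Wait up to 10 seconds for at least one <a> element to appear,
    # i.e. for the JavaScript-rendered content to show up
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, 'a'))
    )
    soup = BeautifulSoup(driver.page_source, 'lxml')
    print(soup.find_all('a'))

If the content you want still isn't there, scrolling the page with driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") often triggers more items to load on infinite-scroll sites like Pinterest.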