I have the following code snippet, which is part of a larger program that extracts image filenames from links.
for a in soup.find_all('a', href=True):
    url = a['href']
    path, file = url.rsplit('/', 1)   # split off the last path segment
    name, ext = file.rsplit('.', 1)   # split the filename from its extension
It works well; however, the data (which comes from an external source) occasionally contains malformed entries. Specifically, the last line in the snippet above raises:
name, ext = file.rsplit('.', 1)
ValueError: not enough values to unpack (expected 2, got 1)
What is the best way to ignore this error (or any entry whose input is not in the expected format) and continue on to the next entry?
I would have thought a try/except is the right approach here, but when I searched for how to handle this particular error I did not find anything.
Is it possible to use a try block to catch this type of error? If not, why not, and what is the better approach?
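For reference, this is roughly what I had in mind. It is only a sketch: I have replaced the soup lookup with a hypothetical hard-coded list of hrefs so it runs standalone, but the splitting logic is the same as in my snippet:

```python
# Hypothetical sample data standing in for the hrefs pulled out of soup.
hrefs = [
    'http://example.com/images/cat.jpg',  # well-formed
    'http://example.com/about',           # no '.' in last segment -> ValueError
]

results = []
for url in hrefs:
    try:
        path, file = url.rsplit('/', 1)
        name, ext = file.rsplit('.', 1)
    except ValueError:
        # Input not in the expected path/name.ext shape; skip this entry.
        continue
    results.append((name, ext))

print(results)  # [('cat', 'jpg')]
```

Is something along these lines considered the idiomatic way to skip bad entries, or should I be validating the string before splitting instead?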