Selenium is for web automation; data scraping (which is what you seem to be doing) is better suited to requests and Beautiful Soup. There have been posts showing Selenium working for this, but requests and Beautiful Soup are easier because you don't have to launch a web browser at all.
import re
import requests
from bs4 import BeautifulSoup

r = requests.get("https://ca.iherb.com/pr/Life-Extension-BioActive-Complete-B-Complex-60-Vegetarian-Capsules/67051")
soup = BeautifulSoup(r.content, 'html.parser')
list_items = soup.find('div', itemprop="description")
found = str(re.findall(r'itemprop="description"><ul><li>(\D+)', str(list_items)))
This takes just a second, whereas the other methods can take much longer because a browser has to load and navigate to the site. Once you have the page and have used a regex to find the appropriate tag, you can use further regexes to clean it down to just the text.
newfound = re.sub(r"</li>|[\[']", '', found)
newfound2 = re.sub(r"<li>", ', ', newfound)
stripped = newfound2.split('\\xa0', 1)[0]
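To see what those three cleanup lines actually do, here is the same pipeline run on a toy string standing in for str(list_items) — the snippet and item names are made up, not the real iHerb markup:

```python
import re

# Toy stand-in for str(list_items); the \xa0 mimics the non-breaking
# space that appears in the real page source (assumed markup).
html = 'itemprop="description"><ul><li>Thiamine</li><li>Riboflavin</li>\xa0footnote</ul>'

found = str(re.findall(r'itemprop="description"><ul><li>(\D+)', html))
newfound = re.sub(r"</li>|[\[']", '', found)   # drop closing tags plus the [ and ' that str() added
newfound2 = re.sub(r"<li>", ', ', newfound)    # turn remaining <li> tags into separators
stripped = newfound2.split('\\xa0', 1)[0]      # str() escaped the \xa0, so split on the escape sequence
print(stripped)  # → Thiamine, Riboflavin
```

Note that the split is on the four characters \xa0 rather than a real non-breaking space, because str() on the findall list returns the repr, which escapes it.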
The strings itemprop="description"><ul><li> and \xa0 both come from viewing the source of the page and finding the list element there.
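If you'd rather not mirror chunks of the page source in a regex, BeautifulSoup can walk the <li> elements for you — a minimal sketch against a hard-coded snippet standing in for the real description div (the snippet and item names are assumptions):

```python
from bs4 import BeautifulSoup

# Stand-in for the product page's description block (assumed markup).
html = '<div itemprop="description"><ul><li>Thiamine</li><li>Riboflavin\xa0*</li></ul></div>'

soup = BeautifulSoup(html, 'html.parser')
description = soup.find('div', itemprop='description')

# get_text() removes the tags; splitting on the real '\xa0' drops the
# footnote marker, mirroring the regex cleanup above.
items = [li.get_text().split('\xa0', 1)[0] for li in description.find_all('li')]
print(items)  # → ['Thiamine', 'Riboflavin']
```

This avoids tying the code to the exact character sequence of the HTML, so it keeps working if the markup's whitespace or attribute order changes.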
Here's some info on regex: https://www.guru99.com/python-regular-expressions-complete-tutorial.html