
Here is my code. I've scraped the data from a website, but it just returns one long list.

How do I manipulate the data to fall under the headings? I'm getting this error message:

ValueError: 8 columns passed, passed data had 2648 columns.

Any help is greatly appreciated.

from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
import pandas as pd
from pandas import DataFrame
import html5lib
import time

url = "https://www.loudnumber.com/screeners/cashflow"
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get(url)


time.sleep(5)

html = driver.page_source
soup = BeautifulSoup(html,'html.parser')
table = soup.find('table')
l = []
for tr in table:
    td = table.find_all('td') #cols
    rows = [table.text.strip() for tr in td if tr.text.strip()] 
    if rows:
        l.append(rows)
                            

driver.quit()


df = pd.DataFrame(list(l), columns=["Ticker","Company","Industry","Current Price"
                                    ,"Instrinsic Value","IV to CP ratio",
                                    "Dividend","Dividend Yield"])

print(df)
Hansel313

2 Answers


Quite a few things are wrong here, so I fixed your code. The error comes from your loop: you call table.find_all('td') instead of tr.find_all('td'), and the list comprehension reads table.text instead of each cell's text, so every appended row ends up with all 2648 cells of the table instead of the 8 you expect.

Note: put the Selenium driver in your Python folder (or give its path) to make this code work; see selenium - chromedriver executable needs to be in PATH.
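If you are on Selenium 4, you can also pass the path explicitly instead of relying on PATH; a minimal sketch (the path below is just an example, adjust it to your machine):

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Example path (an assumption) - point this at your own chromedriver binary
service = Service(r"C:\Program Files\ChromeDriver\chromedriver.exe")
driver = webdriver.Chrome(service=service)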

Here is the complete code:

from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd
import time

url = "https://www.loudnumber.com/screeners/cashflow"
driver = webdriver.Chrome()
driver.get(url)
time.sleep(3)

html = driver.page_source
soup = BeautifulSoup(html,'html.parser')
table = soup.find("div", class_="table-responsive").tbody.find_all("tr")  # all <tr> rows in the table body
l = []

for tr in table:
    td = tr.find_all('td')  # cells in this row
    rows = [i.text.strip() for i in td if i.text.strip()]  # text of each non-empty cell
    l.append(rows)

driver.quit()

df = pd.DataFrame(l, columns=["Ticker", "Company", "Industry", "Current Price",
                              "Intrinsic Value", "IV to CP ratio",
                              "Dividend", "Dividend Yield"])
print(df)

Here is the result: [screenshot of the resulting DataFrame]
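As a side note, if you'd rather not hardcode the column names, you could pull them from the table's own header row; a minimal sketch, assuming the page renders its headers as <th> cells:

# Assumes the table header uses <th> cells (not guaranteed for every page)
header = [th.text.strip() for th in soup.find("div", class_="table-responsive").find_all("th")]
df = pd.DataFrame(l, columns=header)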

yf879

Just in case: a version without BeautifulSoup.

You don't need bs4 to get a proper result; let pandas take care of it. read_html() does all the magic for you (it parses every table in the page source and returns a list of DataFrames, which is why the [0] index is there).

Example

from selenium import webdriver
import time
import pandas as pd

url = "https://www.loudnumber.com/screeners/cashflow"
driver = webdriver.Chrome(r'C:\Program Files\ChromeDriver\chromedriver.exe')
driver.get(url)
time.sleep(5)

df = pd.read_html(driver.page_source, header=0, skiprows=(-1,0))[0]  # take the first table on the page
df = df[:-1]  # drop the trailing row
driver.close()

print(df)
HedgeHog