13

Since Yahoo discontinued their API support, pandas_datareader now fails:

import pandas_datareader.data as web
import datetime
start = datetime.datetime(2016, 1, 1)
end = datetime.datetime(2017, 5, 17)
web.DataReader('GOOGL', 'yahoo', start, end)

HTTPError: HTTP Error 401: Unauthorized

Is there any unofficial library that lets us temporarily work around the problem? Anything on Quandl, maybe?

Scilear
  • The unsupported Yahoo finance API is shut down: https://forums.yahoo.net/t5/Yahoo-Finance-help/Is-Yahoo-Finance-API-broken/td-p/250503 – He Shiming May 19 '17 at 02:20
  • pshep123, great advice, I never think to search Stack Overflow!!! But like many other people, apart from knowing that Yahoo discontinued their API, I had no temporary solution – Scilear May 19 '17 at 18:34

9 Answers

7

The fix_yahoo_finance package has been renamed to yfinance, so you can try this code:

import yfinance as yf
data = yf.download('MSFT', start = '2012-01-01', end='2017-01-01')
Kamaldeep Singh
6

I found the "fix-yahoo-finance" workaround at https://pypi.python.org/pypi/fix-yahoo-finance useful, for example:

from pandas_datareader import data as pdr
import fix_yahoo_finance

data = pdr.get_data_yahoo('AAPL', start='2017-04-23', end='2017-05-24')

Note that the last two data columns now come back as 'Adj Close' and 'Volume', i.e. not in the previous format. To restore the old column order:

cols = ['Open', 'High', 'Low', 'Close', 'Volume', 'Adj Close']
data = data.reindex(columns=cols)  # 'Date' is the index, so only the data columns are reordered
artDeco
  • I get an error about the volume column on the call to get_data_yahoo, but thanks, I will look into it – Scilear May 29 '17 at 18:42
  • Yes @Scilear, so did I at first - try reinstalling pandas_datareader to the latest version and it should be fine. – artDeco May 29 '17 at 20:26
2

So they've changed their URL and now use cookie protection (and possibly JavaScript), so I fixed my own problem using dryscrape, which emulates a browser. This is just an FYI, as it surely breaks their terms and conditions... so use at your own risk. I'm looking at Quandl for an alternative EOD price source.

I could not get anywhere with a CookieJar, so I ended up using dryscrape to "fake" a user download:

import dryscrape
from bs4 import BeautifulSoup
import time
import datetime
import re

#we visit the main page to initialise sessions and cookies
session = dryscrape.Session()
session.set_attribute('auto_load_images', False)
session.set_header('User-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36')

#call this once as it is slow(er) and then you can do multiple downloads, though there seems to be a limit after which you have to reinitialise...
session.visit("https://finance.yahoo.com/quote/AAPL/history?p=AAPL")
response = session.body()


#get the download link
soup = BeautifulSoup(response, 'lxml')
for taga in soup.findAll('a'):
    if taga.has_attr('download'):
        url_download = taga['href']
print(url_download)

#now replace the default start and end dates that yahoo provides
s = "2017-02-18"
period1 = '%.0f' % time.mktime(datetime.datetime.strptime(s, "%Y-%m-%d").timetuple())
e = "2017-05-18"
period2 = '%.0f' % time.mktime(datetime.datetime.strptime(e, "%Y-%m-%d").timetuple())

#now we replace the period parameters in the download URL with our dates; feel free to improve, I suck at regex
m = re.search('period1=(.+?)&', url_download)
if m:
    to_replace = m.group(m.lastindex)
    url_download = url_download.replace(to_replace, period1)        
m = re.search('period2=(.+?)&', url_download)
if m:
    to_replace = m.group(m.lastindex)
    url_download = url_download.replace(to_replace, period2)

#and now visit it and get the body and you have your csv
session.visit(url_download)
csv_data = session.body()

#and finally if you want to get a dataframe from it
import sys
if sys.version_info[0] < 3: 
    from StringIO import StringIO
else:
    from io import StringIO

import pandas as pd
df = pd.read_csv(StringIO(csv_data), index_col=[0], parse_dates=True)
df
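
Since Quandl comes up above as a possible EOD alternative, here is a rough sketch of what a pull through the quandl package could look like (this assumes you have a Quandl API key; 'WIKI/AAPL' is just an illustrative dataset code):

import quandl

quandl.ApiConfig.api_key = 'YOUR_API_KEY'  # placeholder - use your own key

# daily EOD prices for the illustrative WIKI/AAPL dataset over the same window as above
aapl = quandl.get('WIKI/AAPL', start_date='2017-02-18', end_date='2017-05-18')
print(aapl.tail())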
Scilear
2

I changed from Yahoo to Google Finance and it works for me, so from

data.DataReader(ticker, 'yahoo', start_date, end_date)

to

data.DataReader(ticker, 'google', start_date, end_date)

and adapted my "old" Yahoo! symbols from:

tickers = ['AAPL','MSFT','GE','IBM','AA','DAL','UAL', 'PEP', 'KO']

to

tickers = ['NASDAQ:AAPL','NASDAQ:MSFT','NYSE:GE','NYSE:IBM','NYSE:AA','NYSE:DAL','NYSE:UAL', 'NYSE:PEP', 'NYSE:KO']
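
Put together, a minimal self-contained sketch of this switch could look like the following (assuming pandas_datareader's 'google' source is still available to you; the date range and tickers are just examples):

import datetime
from pandas_datareader import data

start_date = datetime.datetime(2016, 1, 1)
end_date = datetime.datetime(2017, 5, 17)
tickers = ['NASDAQ:AAPL', 'NASDAQ:MSFT', 'NYSE:GE', 'NYSE:IBM']

# one DataFrame per Google-style symbol
prices = {ticker: data.DataReader(ticker, 'google', start_date, end_date)
          for ticker in tickers}
print(prices['NASDAQ:AAPL'].head())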
Alex L.
2

Try this out:

import fix_yahoo_finance as yf
data = yf.download('SPY', start = '2012-01-01', end='2017-01-01')
vibhu_singh
1

Yahoo Finance works well with pandas_datareader. Use it like this:

import pandas as pd
from pandas_datareader import data as wb

ticker = 'GOOGL'
start_date = '2019-1-1'
data_source = 'yahoo'

# DataReader already returns a DataFrame, so the pd.DataFrame() call simply copies it
ticker_data = wb.DataReader(ticker, data_source=data_source, start=start_date)
df = pd.DataFrame(ticker_data)
0

Make the thread sleep between each read. This may work most of the time, so try 5-6 times and save the data to a CSV file so next time you can read from the file (a sketch of that caching follows the code below).

### code is here ###
import pandas_datareader as web
import time
import datetime as dt
import pandas as pd

startDate = dt.datetime(2017, 1, 1)  # example dates; pick your own range
endDate = dt.datetime(2017, 4, 30)

symbols = ['AAPL', 'MSFT', 'AABA', 'DB', 'GLD']
webData = pd.DataFrame()
for stockSymbol in symbols:
    webData[stockSymbol] = web.DataReader(stockSymbol, data_source='yahoo',
                                          start=startDate, end=endDate,
                                          retry_count=10)['Adj Close']
    time.sleep(22)  # thread sleep for 22 seconds
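
As a rough sketch of the save/read-back caching mentioned above (webData.csv and fetch_from_yahoo() are illustrative names, the latter standing in for the download loop above wrapped in a function):

import os
import pandas as pd

cacheFile = 'webData.csv'  # illustrative cache filename
if os.path.exists(cacheFile):
    # reuse the previously saved data instead of hitting Yahoo again
    webData = pd.read_csv(cacheFile, index_col=0, parse_dates=True)
else:
    webData = fetch_from_yahoo()  # hypothetical helper: the download loop wrapped in a function
    webData.to_csv(cacheFile)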
Dipen Lama
0

The question is quite old, but here I am. On the yfinance pypi.org project page I found a section titled 'pandas_datareader override'. It states:

"If your code uses pandas_datareader and you want to download data faster, you can "hijack" pandas_datareader.data.get_data_yahoo() method to use yfinance while making sure the returned data is in the same format as pandas_datareader's get_data_yahoo()."

They also provide the following code sample, which is currently working:

from pandas_datareader import data as pdr

import yfinance as yf
yf.pdr_override() # <== that's all it takes :-)

# download dataframe
data = pdr.get_data_yahoo("SPY", start="2017-01-01", end="2017-04-30")
Tony Shouse
0

To add to Tony Shouse's answer above, the following code works for me (run in Visual Studio Code) if you would like to gather the Adjusted Close column for multiple ticker symbols at once.

import numpy as np
import pandas as pd
from pandas_datareader import data as pdr
import matplotlib.pyplot as plt
import yfinance as yf
yf.pdr_override() # <== that's all it takes :-)

tickers = ['PG', 'MSFT', 'F', 'GE']
portfolio = pd.DataFrame()
for t in tickers:
    portfolio[t] = pdr.get_data_yahoo(t, start="2017-01-01", end="2017-04-30")['Adj Close']