First of all, requests.get returns a Response object; you need to parse that response with a library such as BeautifulSoup or lxml.
Also, you need to attach a valid header, otherwise the server terminates the request. Refer to the links below:
adding header to python requests module
How to use Python requests to fake a browser visit?
If you are just looking to get the data somehow, you can run it using Selenium (the full code is further below and works), although this can also be handled with requests alone.
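For reference, here is a minimal sketch of the requests-based approach. The User-Agent value and the assumption that a single browser-like header is enough are mine; NSE may also require cookies or additional headers, in which case fall back to the Selenium version.

import requests
from bs4 import BeautifulSoup

url = ('https://www1.nseindia.com/live_market/dynaContent/live_watch/'
       'get_quote/getHistoricalData.jsp?symbol=ZEEL&series=EQ'
       '&fromDate=undefined&toDate=undefined&datePeriod=3months')

# Browser-like User-Agent so the server does not drop the connection.
# This exact string is an assumption; any realistic browser UA should do.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

response = requests.get(url, headers=headers)  # Response object
response.raise_for_status()                    # fail early on HTTP errors

# Parse the returned HTML and grab the first table, same as the Selenium
# version below does with driver.page_source.
soup = BeautifulSoup(response.text, 'html.parser')
table = soup.find('table')
print(table.prettify() if table else 'No table found in the response')

The Selenium version follows: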
from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd

webpage = 'https://www1.nseindia.com/live_market/dynaContent/live_watch/get_quote/getHistoricalData.jsp?symbol=ZEEL&series=EQ&fromDate=undefined&toDate=undefined&datePeriod=3months'

# Load the page in a real browser so the rendered table is available
driver = webdriver.Chrome(executable_path='Your/path/to/chromedriver.exe')
driver.get(webpage)
html = driver.page_source

# Parse the rendered HTML and collect the text of every table row
soup = BeautifulSoup(html, "html.parser")
table = soup.find('table')
table_rows = table.find_all('tr')

res = []
for tr in table_rows:
    cells = tr.find_all('td')
    row = [cell.text.strip() for cell in cells if cell.text.strip()]
    if row:  # skip header/empty rows that have no <td> text
        res.append(row)

df = pd.DataFrame(res, columns=["Date", "Symbol", "Series", "Open Price", "High Price", "Low Price", "Last Traded Price", "Close Price", "Total Traded Quantity", "Turnover (in Lakhs)"])
print(df)
driver.quit()