Try the following:
import requests

r = requests.get('http://127.0.0.1/some_path/small.csv')
# r.text is the decoded response body; subtract 1 for the trailing newline
print(len(r.text.split('\n')) - 1)
Result:
10
for a small.csv file with the following contents:
1lpcfgokakmgnkcojhhkbfbldkacnbeo,6B5108
pjkljhe2ncpnkpknbcohdijeoejaedia,678425
apdfllc5aahabafndbhieahigkjlhalf,651374
aohghmighlieiainnegkcijnfilokake,591116
coobgpohoikkiipiblmjeljniedjpjpf,587200
dmgjnkhnkblpmfjpdakehnaikgdjllic,540979
felcaaldnbdncclmgdcncolpebgiejap,480535
aapocclcgogkmnckokdopfmhonfmgoek,480441
pdehmppfilefbolgganhfihpbmjlgebh,273609
nafaimnnclfjfedmmabolbppcngeolgf,105979
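The -1 is needed because the file ends with a newline, so split('\n') produces one empty trailing element. A variant sketch using str.splitlines(), which sidesteps that off-by-one (same placeholder URL as above):

import requests

r = requests.get('http://127.0.0.1/some_path/small.csv')
# splitlines() does not yield an empty trailing element, so no -1 is needed
print(len(r.text.splitlines()))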
Edit (as suggested by MHawke):
import requests

line_cnt = 0
r = requests.get('http://127.0.0.1/some_path/small.csv', stream=True)
# iter_lines() yields the body one line at a time instead of
# loading the whole response into memory
for line in r.iter_lines():
    if line.strip():  # skip blank lines
        line_cnt += 1
print(line_cnt)
This version does not count blank lines, and it should be more efficient for a large file because iter_lines (together with stream=True) avoids reading the whole response into memory at once:
iter_lines(chunk_size=512, decode_unicode=None, delimiter=None)
Iterates over the response data, one line at a time. When
stream=True is set on the request, this avoids reading the content at
once into memory for large responses.
(Note: not re-entrant safe)
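By default iter_lines yields bytes; if you want str lines instead, the decode_unicode parameter from the signature above can do the decoding for you. A minimal sketch of the same count under that assumption:

import requests

line_cnt = 0
r = requests.get('http://127.0.0.1/some_path/small.csv', stream=True)
# decode_unicode=True decodes each line using the response's encoding,
# so the loop sees str rather than bytes
for line in r.iter_lines(decode_unicode=True):
    if line.strip():
        line_cnt += 1
print(line_cnt)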