I found a way to do streaming reads in Python in the top-voted answer of this post: Stream large binary files with urllib2 to file.
But something goes wrong: when I do some time-consuming work after each chunk has been read, I only get the front part of the data.
from urllib2 import urlopen
from urllib2 import HTTPError
import sys
import time

CHUNK = 1024 * 1024 * 16

try:
    response = urlopen("XXX_domain/XXX_file_in_net.gz")
except HTTPError as e:
    print e
    sys.exit(1)

while True:
    chunk = response.read(CHUNK)
    print 'CHUNK:', len(chunk)
    # some time-consuming work, just as an example
    time.sleep(60)
    if not chunk:
        break
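A variant like the following can be used to check the totals below against the server's Content-Length header (assuming the server sends one; the URL is the same placeholder as above):

from urllib2 import urlopen
import time

CHUNK = 1024 * 1024 * 16

response = urlopen("XXX_domain/XXX_file_in_net.gz")
# assumes the server sends a Content-Length header
expected = int(response.info().getheader('Content-Length'))
total = 0
while True:
    chunk = response.read(CHUNK)
    if not chunk:
        break
    total += len(chunk)
    time.sleep(60)  # the time-consuming work
print 'read %d of %d bytes' % (total, expected)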
Without the sleep, the output is correct (the chunk sizes were verified to sum to the actual file size):
CHUNK: 16777216
CHUNK: 16777216
CHUNK: 6888014
CHUNK: 0
With the sleep:
CHUNK: 16777216
CHUNK: 766580
CHUNK: 0
When I decompressed these chunks, I found that only the front part of the gz file had been read.
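My guess is that the server closes the connection while it sits idle during the long pause, so the later reads return only what was already buffered. A minimal sketch of a workaround I am considering: drain the socket in a separate thread so the download never stalls during the slow work (process() is a hypothetical stand-in for that work):

from urllib2 import urlopen
from Queue import Queue
from threading import Thread

CHUNK = 1024 * 1024 * 16
q = Queue()  # unbounded: may hold the whole file in memory in the worst case

def download():
    response = urlopen("XXX_domain/XXX_file_in_net.gz")
    while True:
        chunk = response.read(CHUNK)
        q.put(chunk)  # the final empty chunk acts as an end-of-stream sentinel
        if not chunk:
            break

Thread(target=download).start()

while True:
    chunk = q.get()
    if not chunk:
        break
    process(chunk)  # hypothetical: slow work runs while the thread keeps reading

The unbounded queue trades memory for keeping the connection busy; for a file of roughly 40 MB, as here, that seems acceptable.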