cURL: I'm trying to fetch and save the HTML of a Blogspot blog that uses one of Google's "dynamic" templates, for example:
My simple attempt from the DOS command line:
"D:\EXE_UTIL\CURL\curl.exe" -o "d:\temp.html" "http://jackturf.blogspot.fr/"
Result: 21597 bytes received.
But saving the same page from Google Chrome with Ctrl-S ("Webpage, Complete") produces about 160 KB of HTML!
I've been using cURL for many years, always successfully (even with cookies), but with this Google dynamic template I can't get the full page. How can I retrieve the complete HTML?
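For reference, here is a variant of the command above with browser-like request options. All three flags exist in curl 7.39, though they may not change what the server returns: if the extra ~140 KB is assembled by JavaScript after the page loads, curl will never see it, since curl does not execute scripts. The User-Agent string is only an example, not a known requirement. The sketch just prints the candidate command so it can be copied to the DOS prompt:

```shell
# Build a browser-like curl command line (a sketch, not a confirmed fix):
#   -L            follow HTTP redirects
#   --compressed  request gzip/deflate and decode the response body
#   -A            send a browser-style User-Agent (example string)
CMD='curl -L --compressed -A "Mozilla/5.0 (Windows NT 6.1)" -o "d:\temp.html" "http://jackturf.blogspot.fr/"'
echo "$CMD"
```

If the dynamic template fetches post content over HTTP after load, the raw HTML curl receives may simply be the small bootstrap page, and no combination of flags would close the gap.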
My cURL version (I also tried a few earlier versions):
curl 7.39.0 (i386-pc-win32) libcurl/7.39.0 OpenSSL/1.0.0o zlib/1.2.8 libidn/1.18 libssh2/1.4.3 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS IDN Largefile SSPI SPNEGO NTLM SSL libz
Does anybody have a solution that works from the DOS command line?