`warc.open()` is a shorthand for `warc.WARCFile()`, and `warc.WARCFile()` can receive a `fileobj` argument, where `sys.stdin` is exactly a file object. (Note that the `warc.open()` shorthand itself only forwards a filename and mode, so the `fileobj` argument has to go to `warc.WARCFile()` directly.) So what you need is simply something like this:
```python
import sys
import warc

# pass stdin straight to the WARC reader
f = warc.WARCFile(fileobj=sys.stdin)
for record in f:
    print record['WARC-Target-URI'], record['Content-Length']
```
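You can sanity-check this locally before submitting the streaming job by piping an uncompressed WARC file into the script, e.g. `cat example.warc | python mapper.py` (where `mapper.py` is just whatever you named the script above).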
But things get a little more difficult under Hadoop streaming when your input file is a `.gz`, as Hadoop will replace every `\r\n` in the WARC file with `\n`, which breaks the WARC format (refer to this question: hadoop converting \r\n to \n and breaking ARC format). Since the `warc` package uses the regular expression `"WARC/(\d+.\d+)\r\n"` to match the version line (matching `\r\n` exactly), you will probably get this error:

```
IOError: Bad version line: 'WARC/1.0\n'
```
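You can reproduce the mismatch outside the library. The pattern below is the one quoted above; the variable names are mine:

```python
import re

# The version-line pattern used by the warc package, requiring a literal \r\n
version_re = re.compile(r"WARC/(\d+.\d+)\r\n")

print version_re.match("WARC/1.0\r\n")  # matches: a proper WARC version line
print version_re.match("WARC/1.0\n")    # None: what Hadoop hands the parser
```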
So you will either have to modify the `PipeMapper.java` file, as recommended in the question referenced above, or write your own parsing script that parses the WARC file line by line.
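If you go the second route, here is a minimal sketch of such a line-by-line parser. It is my own illustration, not part of the `warc` package: it delimits records by the version line rather than by `Content-Length`, so Hadoop's `\r\n` to `\n` rewriting doesn't matter, and it naively assumes each header fits on a single line (real WARC headers may be folded across lines):

```python
import re
import sys

# Accept both "WARC/1.0\r\n" and the "WARC/1.0\n" that Hadoop produces
VERSION_RE = re.compile(r"^WARC/(\d+\.\d+)\r?\n$")

def read_records(stream):
    """Yield (headers, body_lines) for each record, using the version
    line instead of Content-Length to find record boundaries."""
    headers, body, in_headers = None, [], False
    for line in stream:
        if VERSION_RE.match(line):
            if headers is not None:
                yield headers, body          # emit the previous record
            headers, body, in_headers = {}, [], True
        elif headers is None:
            continue                         # garbage before the first record
        elif in_headers:
            if line.strip():
                name, _, value = line.partition(":")
                headers[name.strip()] = value.strip()
            else:
                in_headers = False           # blank line ends the header block
        else:
            body.append(line)
    if headers is not None:
        yield headers, body                  # emit the last record

for headers, body in read_records(sys.stdin):
    print headers.get('WARC-Target-URI'), headers.get('Content-Length')
```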
By the way, simply modifying `warc.py` to match `\n` instead of `\r\n` in the headers won't work either, because it then reads exactly `Content-Length` bytes of content and expects two empty lines after that. Since Hadoop has removed one byte from every `\r\n`, the actual length of the content no longer matches the `Content-Length` attribute, which causes another error like:

```
IOError: Expected '\n', found 'abc\n'
```
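To see the mismatch concretely, count the bytes yourself (the record body below is made up for illustration):

```python
# Each \r\n that Hadoop collapses to \n drops one byte, so a body with
# three CRLFs comes up three bytes short of its declared Content-Length.
original = "abc\r\ndef\r\nghi\r\n"            # 15 bytes, as Content-Length says
rewritten = original.replace("\r\n", "\n")    # 12 bytes after Hadoop
print len(original), len(rewritten)           # prints: 15 12
```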