I am trying to extract the body of an HTTP request whose payload is gzip-compressed and sent with chunked transfer encoding. The code I am using:
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

byte[] d;                              // the *whole* raw request body
ByteArrayOutputStream b = new ByteArrayOutputStream();
int c = 0;                             // current chunk size
int p = 0;                             // position of the last '\n' seen
int s = 0;                             // 0 = expecting a size line, 1 = expecting the CRLF after chunk data
for (int i = 0; i < d.length; ++i) {
    if (s == 0 && d[i] == '\r' && d[i + 1] == '\n') {
        // parse the hex chunk size between the previous '\n' and this CRLF
        c = Integer.parseInt(new String(Arrays.copyOfRange(d, p + 1, i)), 16);
        if (c == 0) break;             // size 0 marks the last chunk
        b.write(Arrays.copyOfRange(d, i + 2, i + 2 + c));  // copy the chunk data
        p = i + 1;
        i += c + 1;                    // jump past the chunk data
        s = 1;
    } else if (s == 1 && d[i] == '\r' && d[i + 1] == '\n') {
        p = i + 1;                     // record the '\n' that closes the data line
        s = 0;
    }
}
// here comes the part where I decompress b.toByteArray()
In short, the program reads each chunk size, writes the corresponding slice of the request (from the byte after the '\n' up to '\n' + chunk size) to the ByteArrayOutputStream b, and repeats the process until the chunk of size 0 is found.
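For reference, the kind of input I am testing against can be generated like this (a minimal sketch; the class name `ChunkedRepro`, the method `buildChunkedGzip`, and the chunk size of 8 are just illustrative choices of mine, not part of the real traffic):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class ChunkedRepro {
    // Gzip the payload, then wrap it in HTTP/1.1 chunked framing:
    // each chunk is "<size-in-hex>\r\n<data>\r\n", terminated by "0\r\n\r\n".
    public static byte[] buildChunkedGzip(byte[] payload, int chunkSize) throws IOException {
        ByteArrayOutputStream gz = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(gz)) {
            out.write(payload);
        }
        byte[] compressed = gz.toByteArray();

        ByteArrayOutputStream chunked = new ByteArrayOutputStream();
        for (int off = 0; off < compressed.length; off += chunkSize) {
            int len = Math.min(chunkSize, compressed.length - off);
            chunked.write(Integer.toHexString(len).getBytes(StandardCharsets.US_ASCII));
            chunked.write('\r'); chunked.write('\n');
            chunked.write(compressed, off, len);
            chunked.write('\r'); chunked.write('\n');
        }
        chunked.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
        return chunked.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] wire = buildChunkedGzip("hello chunked world".getBytes(StandardCharsets.US_ASCII), 8);
        String s = new String(wire, StandardCharsets.ISO_8859_1);
        if (!s.endsWith("0\r\n\r\n")) throw new AssertionError("missing terminating chunk");
    }
}
```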
If I try to decompress the collected data, I always get a corrupted-data error, e.g. java.util.zip.ZipException: invalid distance too far back.
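The decompression step itself is nothing special; it is roughly the standard GZIPInputStream read loop (a sketch of what runs on b.toByteArray(); the names `GunzipSketch` and `gunzip` are mine):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class GunzipSketch {
    // Decompress a gzip byte array into its original bytes.
    public static byte[] gunzip(byte[] gzipped) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        return out.toByteArray();
    }
}
```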
Any thoughts on what I might be doing wrong?