Unfortunately the only answer currently is: don't process such ZIP files in streaming mode the way ZipInputStream does. All currently available streaming ZIP components, both ZipInputStream from the JRE and ZipArchiveInputStream from Apache commons-compress, cannot handle such ZIP files reliably.
There is a very good description of the problem on the Apache commons-compress documentation page:
ZIP archives know a feature called the data descriptor which is a way
to store an entry's length after the entry's data. This can only work
reliably if the size information can be taken from the central
directory or the data itself can signal it is complete, which is true
for data that is compressed using the DEFLATED compression algorithm.
ZipFile has access to the central directory and can extract entries
using the data descriptor reliably. The same is true for
ZipArchiveInputStream as long as the entry is DEFLATED. For STORED
entries ZipArchiveInputStream can try to read ahead until it finds the
next entry, but this approach is not safe and has to be enabled by a
constructor argument explicitly.
https://commons.apache.org/proper/commons-compress/zip.html
Solution
The only way to avoid this problem is to use ZipFile. However, the ZipFile implementation in the JRE requires a real file, so you may have to save the data to a temporary file first.
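A minimal sketch of the temp-file approach using only the JDK (the helper name `extractAll` and the target-directory parameter are my own; adapt them to your code):

```java
import java.io.*;
import java.nio.file.*;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ZipViaTempFile {

    // Copies the (non-seekable) input stream to a temporary file so that
    // java.util.zip.ZipFile can read the central directory, then extracts
    // every entry into targetDir. The temp file is deleted afterwards.
    static void extractAll(InputStream in, File targetDir) throws IOException {
        Path temp = Files.createTempFile("upload", ".zip");
        try {
            Files.copy(in, temp, StandardCopyOption.REPLACE_EXISTING);
            try (ZipFile zipFile = new ZipFile(temp.toFile())) {
                Enumeration<? extends ZipEntry> entries = zipFile.entries();
                while (entries.hasMoreElements()) {
                    ZipEntry entry = entries.nextElement();
                    if (entry.isDirectory()) {
                        continue;
                    }
                    Path out = targetDir.toPath().resolve(entry.getName());
                    Files.createDirectories(out.getParent());
                    try (InputStream entryStream = zipFile.getInputStream(entry)) {
                        Files.copy(entryStream, out, StandardCopyOption.REPLACE_EXISTING);
                    }
                }
            }
        } finally {
            Files.deleteIfExists(temp); // clean up the temporary file
        }
    }
}
```

Because ZipFile works from the central directory, STORED entries with data descriptors are read with their correct sizes, which ZipInputStream cannot guarantee.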
Alternatively, if you use ZipFile from Apache commons-compress and you already have the ZIP file completely in memory, you can avoid the temporary file by reading from a SeekableInMemoryByteChannel instead.
EDIT: a solution using the in-memory ZipFile from Apache commons-compress (Kotlin):
ByteArrayOutputStream().use { byteArrayOutputStream ->
    // Buffer the whole ZIP in memory so that ZipFile can seek in it
    inputStream.copyTo(byteArrayOutputStream)
    ZipFile(SeekableInMemoryByteChannel(byteArrayOutputStream.toByteArray())).use { zipFile ->
        // ZipFile reads the central directory, so entry sizes are always known
        for (entry in zipFile.entries) {
            zipFile.getInputStream(entry).use { entryStream ->
                entryStream.copyTo(someOutputStream)
            }
        }
    }
}