In my team, we have an issue with a specific endpoint which, when called with certain parameters, returns a huge JSON response in chunks. For example, if the JSON had 1,000 rows, then about 30 seconds after opening the URL in a browser (it's a GET endpoint) we get 100 rows, then after a few more seconds the next 200, and so on until the JSON is exhausted. This is a problem for us because our application times out before retrieving the whole JSON. For debugging purposes, we want to emulate the behavior of that endpoint with an example endpoint of our own.
So far, the following is what I have. For simplicity, I'm not even reading a JSON, just a randomly generated string. The logs show me that I'm reading the data a few bytes at a time, writing it, and flushing the OutputStream. The crucial difference is that my browser (and Postman) only show the data at the very end, not in chunks. Is there anything I can do so that I can see the data coming back in chunks?
private static final int READ_BUF_SIZE = 10;
private static final int GENERATED_STRING_SIZE = READ_BUF_SIZE * 10_000;

@GetMapping(path = "/v2/payload/mocklargepayload")
public void simulateLargePayload(HttpServletResponse response) {
    try (InputStream inputStream = IOUtils.toInputStream(
             RandomStringUtils.randomAlphanumeric(GENERATED_STRING_SIZE), StandardCharsets.UTF_8);
         OutputStream outputStream = response.getOutputStream()) {
        final byte[] buffer = new byte[READ_BUF_SIZE];
        for (int i = 0; i < GENERATED_STRING_SIZE; i += READ_BUF_SIZE) {
            // Read one byte fewer than the buffer size so the last slot can hold a newline.
            int bytesRead = inputStream.read(buffer, 0, READ_BUF_SIZE - 1);
            buffer[buffer.length - 1] = '\n';
            log.info("Read {} bytes: {}", bytesRead, new String(buffer, StandardCharsets.UTF_8));
            outputStream.write(buffer);
            log.info("Wrote bytes {}", new String(buffer, StandardCharsets.UTF_8));
            Thread.sleep(500);
            log.info("Flushing stream");
            outputStream.flush();
        }
    } catch (IOException e) {
        log.error("Received exception: {}", e.getClass().getSimpleName());
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        log.error("Interrupted while streaming");
    }
}
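For reference, here is a self-contained sketch (outside Spring) of the same idea, using the JDK's built-in `com.sun.net.httpserver.HttpServer`. Passing a response length of 0 to `sendResponseHeaders` makes the server use chunked transfer encoding, and a plain `HttpURLConnection` client reading the stream a few bytes at a time shows the chunks arriving incrementally rather than all at the end. The class name, port (ephemeral), chunk count, and delays are arbitrary choices for the demo, not part of our real endpoint:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ChunkedDemo {
    public static void main(String[] args) throws Exception {
        // Minimal server that streams 5 small chunks, flushing after each one.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/chunks", exchange -> {
            // Response length 0 => chunked Transfer-Encoding, arbitrary amount of data.
            exchange.sendResponseHeaders(200, 0);
            try (OutputStream out = exchange.getResponseBody()) {
                for (int i = 0; i < 5; i++) {
                    out.write(("chunk-" + i + "\n").getBytes(StandardCharsets.UTF_8));
                    out.flush(); // push the chunk to the client immediately
                    try {
                        Thread.sleep(200);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        // Client: read the body a few bytes at a time and log when each piece arrives.
        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/chunks").openConnection();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[16];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.printf("+%dms received %d bytes: %s",
                        System.currentTimeMillis() - start, n,
                        new String(buf, 0, n, StandardCharsets.UTF_8));
            }
        }
        server.stop(0);
    }
}
```

Running this prints each chunk roughly 200 ms apart, which is the behavior I would like to observe from the Spring endpoint in the browser.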