I have a (GET) endpoint that sends data in chunks (Transfer-Encoding: chunked). The data is JSON encoded and sent line by line.

Is there a way to consume the data sent by this endpoint in an asynchronous manner in JavaScript (or using some JavaScript library)?

To be clear, I know how to perform an asynchronous GET, but I would like the GET request not to wait for the whole payload to be transferred, but instead to read the data line by line as it arrives. For instance, when doing:

curl http://localhost:8081/numbers

the lines below are shown one by one as they become available (the example server I made waits a second between sending one line and the next):

{"age":1,"name":"John"}
{"age":2,"name":"John"}
{"age":3,"name":"John"}
{"age":4,"name":"John"}

I would like to reproduce the same behavior curl exhibits, but in the browser. I don't want to leave the user waiting until all the data becomes available before showing anything.

Damian Nadales
  • The future answer would likely be https://developer.mozilla.org/en-US/docs/Web/API/Streams_API – Dan D. Mar 10 '18 at 08:39
  • Thanks! That's why I cannot find an answer to this problem anywhere. – Damian Nadales Mar 10 '18 at 09:00
  • Well Streams_API doesn't look like coming to Firefox anytime soon but [ReadableStream](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream) is already available with [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API). You might find [this article on how to handle streams with Fetch API](https://jakearchibald.com/2015/thats-so-fetch/#streams) interesting. – Redu Mar 10 '18 at 14:18
  • Damn! I wish I had seen your answer before re-implementing the end-point to use server sent events :) I'm gonna give `ReadableStream` a try. – Damian Nadales Mar 10 '18 at 14:29
  • Right, you may save some workload and a websockets library dependency at the server side. – Redu Mar 10 '18 at 14:39
  • The good news I have a working version of this! The bad news: it only works on Chrome (Firefox 58 will give the `TypeError: response.body is undefined`). – Damian Nadales Mar 10 '18 at 15:55
  • This isn't valid `Transfer-Encoding: chunked` stream format. It's missing the chunk length. – gre_gor Apr 08 '23 at 13:23

1 Answer

Thanks to Dan and Redu I was able to put together an example that consumes data incrementally, using the Fetch API. The caveat is that this does not work in Internet Explorer, and in Firefox it has to be enabled by the user:

/** This works on Edge, Chrome, and Firefox (from version 57). To use this
    example in Firefox, navigate to about:config and set these preferences
    to true:

    - dom.streams.enabled
    - javascript.options.streams

    See https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream
*/

fetch('http://localhost:8081/numbers').then(function(response) {

  console.log(response);

  const reader = response.body.getReader();
  const decoder = new TextDecoder("utf-8"); // reuse one decoder for all chunks

  function go() {
    reader.read().then(function(result) {
      if (!result.done) {
        // Assumes each chunk holds exactly one JSON line
        const num = JSON.parse(decoder.decode(result.value));
        console.log("Got number " + num.intVal);
        go();
      }
    });
  }

  go();
});

The full example (with the server) is available at my sandbox. I find it illustrative of the limitations of XMLHttpRequest to compare this version with this one, which does not use the Fetch API.
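As an aside, on browsers that support async/await the recursive reader can be rewritten as a plain loop, which avoids growing a promise chain per chunk. This is a sketch with the same assumption as the code above, namely that each read() yields exactly one JSON line (which, as noted in the comments, the spec does not guarantee); the `consume` name is mine, not a library function.

```javascript
// Loop-based variant of the recursive reader above. Reads the stream to
// completion, parsing each chunk as a standalone JSON value.
async function consume(stream, onValue) {
  const reader = stream.getReader();
  const decoder = new TextDecoder("utf-8"); // created once, reused per chunk
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return;
    onValue(JSON.parse(decoder.decode(value)));
  }
}

// Usage (same hypothetical endpoint as above):
// fetch('http://localhost:8081/numbers')
//   .then(response => consume(response.body, num => console.log(num)));
```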

Damian Nadales
  • Glad to see that you have worked it out with the `ReadStream`. Just one quick reminder; unlike in Haskell, in JS using recursion in production code might turn out to be hazardous since your call stack might get blown up eventually if you have like 200K+ chunks to read. So if you know the chunk count in advance you might as well fill an array of that size with `reader.read()` promises and then chain their `.then()` stages up by reducing the array, sequencing the promises. Just an idea. – Redu Mar 10 '18 at 18:03
  • Thanks for the reminder Redu. I spent too much time doing Haskell :) – Damian Nadales Mar 10 '18 at 18:50
  • Note that there is no guarantee that the `body` stream will give you the full chunks in one read (https://stackoverflow.com/questions/57412098/does-fetchs-response-body-chunks-correspond-to-http-chunks). You might be better off using NDJSON (buffering the decoded text until you hit a newline, then parsing the JSON in the buffer up until then). – Vidar Aug 10 '19 at 15:40
  • @Vidar your answer pretty much covers every use case, but i have a hard time applying ndjson parsing to my reader, do you have any usage sample? – moxched Jan 01 '21 at 17:28
  • @moxched This might be of some assistance: https://github.com/deanhume/streams – Vidar Feb 01 '21 at 09:46
  • As already mentioned, this fails to handle multiple received chunks in one read or partially read chunks. – gre_gor Apr 08 '23 at 13:21
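The buffering approach Vidar and gre_gor describe in the comments above can be sketched as follows: decode each chunk in streaming mode, accumulate text in a buffer, and only parse complete newline-terminated lines. This handles chunks that arrive split or coalesced across read() calls. The `readNDJSON` name is illustrative, not from any library.

```javascript
// Parse newline-delimited JSON from a ReadableStream, tolerating JSON
// objects split across chunks or several objects arriving in one chunk.
async function readNDJSON(stream, onObject) {
  const reader = stream.getReader();
  const decoder = new TextDecoder("utf-8");
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters intact across chunk boundaries
    buffer += decoder.decode(value, { stream: true });
    let newline;
    while ((newline = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (line) onObject(JSON.parse(line));
    }
  }
  const rest = buffer.trim();
  if (rest) onObject(JSON.parse(rest)); // final line without a trailing newline
}

// Usage:
// fetch('http://localhost:8081/numbers')
//   .then(response => readNDJSON(response.body, obj => console.log(obj)));
```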