
Is it possible to transfer an ImageBitmap directly from JS to WebAssembly without converting it to a typed array? Can ImageBitmap be some kind of "transferable" object between JS and WebAssembly, as it is between the main thread and a web worker?

The idea is to replace CanvasRenderingContext2D.getImageData() with something faster. Currently, I do this to apply filters on video:

  1. context.drawImage(imageBitmap, 0, 0);
  2. let imageData = context.getImageData(0, 0, canvas.width, canvas.height);
  3. loop through imageData and apply a chroma key filter.
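For reference, step 3 above can be written as a plain loop over the RGBA bytes returned by getImageData(). This is only a sketch: the function name `chromaKey`, the key colour, and the tolerance are illustrative values, not part of the question.

```javascript
// Chroma-key pass over raw RGBA pixel data, e.g. from
// context.getImageData(0, 0, w, h).data. Pixels within `tolerance`
// of the key colour (pure green by default) get their alpha zeroed.
function chromaKey(data, keyR = 0, keyG = 255, keyB = 0, tolerance = 100) {
  for (let i = 0; i < data.length; i += 4) {
    const dr = data[i] - keyR;
    const dg = data[i + 1] - keyG;
    const db = data[i + 2] - keyB;
    // Squared Euclidean distance in RGB space avoids a Math.sqrt per pixel.
    if (dr * dr + dg * dg + db * db < tolerance * tolerance) {
      data[i + 3] = 0; // fully transparent
    }
  }
  return data;
}
```

After the loop, the modified buffer is drawn back with `context.putImageData(imageData, 0, 0)`.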

I want to try to replace this with the following (if it's possible):

  1. transfer ImageBitmap to WebAssembly;
  2. convert ImageBitmap to ImageData;
  3. loop through ImageData and apply the chroma key filter;
  4. transfer changed ImageData back to JS and draw it on canvas.
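For context on step 1: an ImageBitmap cannot itself be handed to WebAssembly, but raw pixel bytes can be copied into a module's linear memory, where an exported function could filter them in place. The sketch below shows only the JS side; `copyPixelsIntoWasmMemory` and the `chromaKey` export mentioned in the comment are hypothetical names, not a real API.

```javascript
// Copy raw RGBA bytes into a WebAssembly.Memory at offset `ptr`,
// returning a view over that region. A wasm export could then
// process the bytes in place before they are read back out.
function copyPixelsIntoWasmMemory(memory, pixels, ptr = 0) {
  const view = new Uint8Array(memory.buffer, ptr, pixels.length);
  view.set(pixels);
  // wasmInstance.exports.chromaKey(ptr, pixels.length); // hypothetical export
  return view;
}
```

On the way back, `new ImageData(new Uint8ClampedArray(view), width, height)` rebuilds a drawable object from the same bytes.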

I'm unfamiliar with the possibilities of WebAssembly, so apologies if this is a silly question.

Thanks.

Liubomyr

1 Answer


I'm not very familiar with the WebAssembly side of the force either, but I really don't think there is any way to consume an ImageBitmap object from there.

But anyway, what's slow about reading the pixel data from an ImageBitmap object is the read-back operation from the GPU to the CPU. WebAssembly won't really help here.

You may want to look at WebGL instead, which IIRC has paths to render and then read back with minimal operations in between; that might be a bit faster than with a 2D context. Another place to look, though with less browser support, is the incoming WebGPU API. Once again I'm not very familiar with it, but it's definitely worth a look if you want the best performance.
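A rough sketch of the WebGL read-back path mentioned above, assuming you already have a WebGL context `gl` and an ImageBitmap `bitmap`; the function name is illustrative. The bitmap is uploaded as a texture, attached to a framebuffer, and read back with `gl.readPixels`:

```javascript
// Upload an ImageBitmap as a texture, attach it to a framebuffer,
// and read the RGBA bytes back to the CPU in one round trip.
function readBackPixels(gl, bitmap) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);

  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);

  // readPixels reads from the currently bound framebuffer.
  const out = new Uint8Array(bitmap.width * bitmap.height * 4);
  gl.readPixels(0, 0, bitmap.width, bitmap.height,
                gl.RGBA, gl.UNSIGNED_BYTE, out);

  gl.deleteFramebuffer(fb);
  gl.deleteTexture(tex);
  return out;
}
```

Note this still pays the GPU-to-CPU read-back cost; the point is only that it can skip some of the intermediate work a 2D context does.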

However, the best option in your case might be to step back to the point where you generate that ImageBitmap. You don't say where it comes from, but the source used to generate it can probably give you access to its pixel data already, possibly faster than going through a bitmap. For instance, if you've got a Blob representation of an image file, you could pass its inner ArrayBuffer data to your WebAssembly module and do the decoding there. If you have a video, you could use the WebCodecs API to extract the YUV planes, etc.
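To illustrate the last suggestion: once the Y, U and V planes are extracted (e.g. via `VideoFrame.copyTo()`), converting a sample to RGB is a small fixed formula. This sketch assumes BT.601 limited-range data; the per-sample function shape is illustrative, since real code would loop over whole planes and handle chroma subsampling:

```javascript
// BT.601 limited-range YUV -> RGB for a single sample.
// Y is in [16, 235]; U and V are centred on 128.
function yuv601ToRgb(y, u, v) {
  const c = y - 16, d = u - 128, e = v - 128;
  const clamp = (x) => Math.min(255, Math.max(0, Math.round(x)));
  return [
    clamp(1.164 * c + 1.596 * e),             // R
    clamp(1.164 * c - 0.392 * d - 0.813 * e), // G
    clamp(1.164 * c + 2.017 * d),             // B
  ];
}
```

In practice you would check `VideoFrame.format` first, since the plane layout (I420, NV12, RGBA, ...) determines where each sample lives.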

Kaiido
  • Thank you for the answer. The ImageBitmap comes from a VideoFrame, which I get from a MediaStreamTrackProcessor. I tried VideoFrame.copyTo(Uint8ClampedArray) and then creating ImageData(Uint8ClampedArray, .., ..), but it's even slower (and this ImageData draws with artifacts). So probably there is no faster way than context.getImageData() for now. – Liubomyr Sep 03 '22 at 23:51
  • `VideoFrame.copyTo` will copy the data of all the frame's planes (generally YUV(A)). You have to extract each plane yourself, possibly resampling some planes based on the [format](https://developer.mozilla.org/en-US/docs/Web/API/VideoFrame/format) of your VideoFrame. However, reading the spec's issues, it seems that the current implementation is indeed at least as slow as actually rendering the frame + read-back. Is the source video really a MediaStream? Doing all the decoding on the CPU might actually be faster, but with MediaStream I don't think we have a path for that... – Kaiido Sep 06 '22 at 02:41
  • Somehow the demo from [this](https://webrtc.ventures/2023/02/background-removal-using-insertable-streams) article doesn't use getImageData() for detecting humans in a VideoFrame. The VideoFrame is just passed to the selfieSegmentation.send method and I'm curious how it's processed next. Does it somehow retrieve pixel data from the VideoFrame, or is the VideoFrame maybe passed to WebAssembly? And what's interesting: the difference between the performance of the demo mentioned above and [this one, for example](https://webrtchacks.github.io/transparent-virtual-background/playground.html) is significant. – Liubomyr Mar 20 '23 at 23:48
  • [the demo from mentioned article](https://webrtcventures.github.io/background-removal-insertable-streams/) – Liubomyr Mar 20 '23 at 23:49