
Is it possible to play an incoming stream of BMP images as video with ultra-low latency (each image shown in under 20 ms)?

The images will arrive at a rate of 20 images per second.

Is this naive solution feasible, or should the images be encoded using H.264/H.265, for example?

How should this problem be approached?

Kind regards

Snaky

1 Answer


Q : "Is this naive solution possible...?"


Yes.


You might also like this related piece of reading.


Q : "...should the images be encoded using H264/5?"


Well, that depends.

Given the stated 20 Hz BMP-image ingress rate, there are about 50 ms per image for the entire visual part of the (principally distributed) MVC system.

Within those 50 ms, no time ought to be wasted and nothing may ever block.

So the receiving engine must keep a steady data-flow on the ingress, with no traffic overloads from any other, un-coordinated bandwidth eater (memory, I/O, ...; the BMP images' size has not been mentioned so far), and it must provide some fallback for what gets fed into the presenter engine whenever the "next" frame due to be shown is incomplete or missing altogether.
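A minimal sketch of that receive-and-fall-back idea, assuming fixed-size raw frames arriving over a plain TCP socket (the frame size, port and function names below are illustrative assumptions, not anything given in the question):

    import socket
    import threading
    import time

    FRAME_BYTES = 1920 * 1080 * 3      # assumed raw 24-bit frame size (illustrative)
    PERIOD_S    = 1.0 / 20             # 20 Hz ingress -> ~50 ms per frame

    latest_frame = None                # last *complete* frame received
    lock = threading.Lock()

    def receiver(host="0.0.0.0", port=5000):
        """Keep a steady ingress: block on the socket, publish only complete frames."""
        global latest_frame
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        while True:
            buf = bytearray()
            while len(buf) < FRAME_BYTES:              # re-assemble one full frame
                chunk = conn.recv(FRAME_BYTES - len(buf))
                if not chunk:
                    return                             # sender gone
                buf += chunk
            with lock:
                latest_frame = bytes(buf)              # atomically swap in the new frame

    def presenter():
        """Repaint every ~50 ms; if no new frame is ready, re-show the previous one."""
        shown = None
        while True:
            t0 = time.perf_counter()
            with lock:
                frame = latest_frame
            if frame is not None:
                shown = frame                          # else keep showing the last good frame
            # ... blit `shown` to the display device here ...
            time.sleep(max(0.0, PERIOD_S - (time.perf_counter() - t0)))

    threading.Thread(target=receiver, daemon=True).start()
    presenter()

Because the receiver publishes only complete frames, an incomplete or late "next" frame never reaches the presenter; the presenter simply re-shows the last good frame and stays on its ~50 ms cadence.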

So what about the compression?

Compression is a double-edged sword. You obviously reduce the data volume (with some SER/DES codecs even at the cost of losing part of the original data richness, yes, exactly: knowingly lossy compression schemes), while typically adding some data re-framing and, perhaps, R/S or other "line-code" error-detection/error-correction, so the final volume of data to transmit need not be as small as the pure compression part alone allows in theory.

Result?

All that comes at a remarkable cost on both sides: on the SER/coder side, to push as little data as possible into the (knowingly low-bandwidth, with fuzzy and most often un-manageable latency) transport, and on the decoder/DES side.

So, given the 20 Hz refresh rate leaves no more than a total of 50 ms for one frame repaint, the sum of the receiver-engine processing and the presenter-engine processing cannot spend more than those 50 ms per frame. Any decode-related and deserialiser-related processing is a deciding factor here.
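A quick way to sanity-check that budget on a target machine is to time the decode step against the 50 ms ceiling. The sketch below uses zlib purely as a stand-in for whatever codec might be chosen, together with an assumed 1920x1080, 24-bit frame size; both are illustrative assumptions:

    import os
    import time
    import zlib

    PERIOD_MS   = 1000.0 / 20          # 20 Hz -> 50 ms total budget per frame
    FRAME_BYTES = 1920 * 1080 * 3      # assumed raw frame size (illustrative)

    raw  = os.urandom(FRAME_BYTES)     # worst-case, incompressible test payload
    blob = zlib.compress(raw, 1)       # fastest setting; a real codec will differ

    t0 = time.perf_counter()
    zlib.decompress(blob)              # the decoder/DES-side cost paid on the receiver
    decode_ms = (time.perf_counter() - t0) * 1000.0

    print(f"decode: {decode_ms:.1f} ms,"
          f" left for receive + repaint: {PERIOD_MS - decode_ms:.1f} ms")

Whatever the real codec's numbers turn out to be, they get subtracted from the same 50 ms before a single pixel has been repainted.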

Yet one may succeed, if proper design and flawless engineering are in place to do this right and robustly enough.

Check your target device for all of:

  • transport resource limits
    (i.e. how much time gets burnt and which resources get allocated / locked per arrival),
  • memory-I/O
    (latency and memory-I/O concurrency limits for any interleaved data-flow patterns),
  • cache hierarchy
    (if present on the device: sizes, costs and I/O limits),
  • processing limits
    (if multicore, the more so if NUMA, beware of non-uniform memory-I/O traps),
  • presenter-engine hardware bottlenecks
    (memory-I/O, display-device buffer-I/O limits and any other add-on latencies)

since any of these details may de-rail your smooth flow of (error-resilient) data that has to be presented on the target device in due time for the desired 20 FPS target. A rough probe of one of them, the memory-I/O cost of merely moving a frame around, is sketched below.
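The probe can be as small as timing a plain in-memory copy of one frame on the target device (the frame size is again an assumed placeholder):

    import time

    FRAME_BYTES = 1920 * 1080 * 3      # assumed raw frame size (illustrative)
    N_COPIES    = 100

    frame = bytearray(FRAME_BYTES)

    t0 = time.perf_counter()
    for _ in range(N_COPIES):
        _copy = bytes(frame)           # one full memory-to-memory pass per frame
    elapsed_ms = (time.perf_counter() - t0) * 1000.0

    print(f"~{elapsed_ms / N_COPIES:.2f} ms per frame-sized copy"
          f" out of the 50 ms per-frame budget")

Every such copy, buffer hand-over or cache miss is paid out of the same 50 ms, before decoding and repainting even start.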

Good luck!

Nota bene:
If you can harness data reduction right at the source, grab that chance and do it. For example, in any case where you know a priori that all target presenter-engines are B/W, never "send" colourful BMPs: strip off all the per-frame colour-table and high-level colour-profile tricks and stream not a bit more than just the raw, right-sized raster data that matches the worst-case processing and latency ceiling of your target terminal device(s).
Review carefully all those redundant and principally wasted (as repeating) parts of the generic BMP data-format definition and do not re-broadcast 'em.
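A minimal sketch of that stripping step, assuming each frame arrives as an ordinary in-memory BMP file (the BITMAPFILEHEADER's bfOffBits field at byte offset 10 gives the start of the raster data; the function name is illustrative):

    import struct

    def strip_bmp_to_raster(bmp_bytes: bytes) -> bytes:
        """Drop the repeating BMP file/DIB headers (and any colour table) and
        return only the raw pixel raster, which is all that needs streaming."""
        if bmp_bytes[:2] != b"BM":
            raise ValueError("not a BMP payload")
        # BITMAPFILEHEADER: bytes 10..13 hold bfOffBits, the offset of the pixel data
        (pixel_offset,) = struct.unpack_from("<I", bmp_bytes, 10)
        return bmp_bytes[pixel_offset:]

On the receiving side, the agreed-upon fixed width, height and pixel format take the place of the stripped headers, so nothing that merely repeats per frame ever travels over the wire.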

;)

user3666197
  • Thank you very much, I just have one question before picking your answer as the best answer. Where does the limitation of 50 ms as the lowest possible latency come from? Why does it have to be at least one frame behind? – Snaky Oct 05 '20 at 11:24
  • Glad you found the answer to be the best answer :o) Just the opposite - the "limitation of having 50 ms as the lowest possible latency" was not my idea. There are two principal problems - what is a maximum tolerable E2E latency (different for bidirectional H2H interactive streaming, different for a phone-like conversation, different for stable & robust hard-RTOS distributed-system automation, different for jitter-, wander- & error-resilient control systems). In any case, your resources' limits decide how much latency-masking could help, if free resources suffice. No other magic helps :o) – user3666197 Oct 05 '20 at 13:57
  • Sorry for being annoying, one last thing. From what I understood, this problem is deeply embedded within the OS itself and there are no workarounds. I find this very weird, since we're not talking about an encoded video: there is no buffer here, and an image should be rendered as soon as it arrives. Please bear with me, what am I missing here? And you deserve best answer, you put me on the right track :) – Snaky Oct 06 '20 at 18:22
  • I mean, by that reasoning, if we send an image every 1 hour, would the minimum latency be one hour :O Something feels wrong (probably in my understanding) – Snaky Oct 06 '20 at 18:44
  • If a system is composed of no matter how many subsystems: an image gets pushed from an acquisition source at T0, through a Reference_point_A, then passed across some network of several hops with (un)controlled transport latency, next transformed at a Reference_point_B into a transcoded image and again moved across a few legs of other transport-latency-denominated links / network(s) so as to reach, at T1, a Reference_point_C, where a processing node decodes the image using ultra-low-latency memory / processing and displays it at T2 - the overall latency is T2-T0, of which T1-T0 might get masked. – user3666197 Oct 07 '20 at 20:14
  • If you have a satellite with an ultra-fast ~ 10,000 Hz hi-res acquisition camera, say 8K, but somewhere deep in space, from where you may receive imagery only at a speed under 2,400 baud (net, let's ignore line-code handshaking and other low-level tricks), then yes, you will have to wait weeks to first see a first image (due to real-life bandwidth constraints and the principal signal-propagation ceiling of the speed of light), not to speak of the need to collect data for any next one. Here the end-to-end latency goes indeed wild. So always speak about latency between which pair of Reference Points. – user3666197 Oct 07 '20 at 20:23