
I'm getting both Depth & Color frames from the Kinect 2, using the Kinect SDK ( C# ), and I'm sending them to Python clients using ZeroMQ.

this.shorts     = new ushort[ 217088]; //  512 *  424
this.depthBytes = new   Byte[ 434176]; //  512 *  424 * 2
this.colorBytes = new   Byte[4147200]; // 1920 * 1080 * 2 ( raw YUY2, 2 B per pixel )

public void SendDepthFrame(DepthFrame depthFrame)
{
    // Copy the 16-bit depth pixels, reinterpret them as raw bytes, publish.
    depthFrame.CopyFrameDataToArray(this.shorts);
    Buffer.BlockCopy(this.shorts, 0, this.depthBytes, 0, this.depthBytes.Length);
    this.depthPublisher.SendByteArray(this.depthBytes);
}

public void SendColorFrame(ColorFrame colorFrame, WriteableBitmap map)
{
    // Copy the raw ( YUY2 ) color frame and publish it; `map` is unused here.
    colorFrame.CopyRawFrameDataToArray(this.colorBytes);
    this.colorPublisher.SendByteArray(this.colorBytes);
}

Since I'm sending uncompressed data, I'm overloading the network, and I'd like to compress these frames.

Is this possible for continuous stream-processing?

I know that I can do it by compressing to a PNG/JPEG format, but I would like to maintain the notion of a video stream.

The goal is to compress the data in C#, send it, and then decode it in Python.

Are there any libraries that allow doing that?
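A minimal sketch of one possible route ( an illustration added here, not tested against the asker's setup ): raw DEFLATE via System.IO.Compression on the C# side is directly decodable by Python's standard zlib module, so no extra library is strictly required on either end:

    using System.IO;
    using System.IO.Compression;

    // Sketch: DEFLATE-compress a frame buffer before handing it to the publisher.
    // Depth frames ( smooth 16-bit gradients ) usually compress well; raw YUY2
    // color compresses less, so measure before committing to this.
    private static byte[] Compress(byte[] raw)
    {
        using (var output = new MemoryStream())
        {
            using (var deflate = new DeflateStream(output, CompressionMode.Compress))
            {
                deflate.Write(raw, 0, raw.Length);
            } // the DeflateStream must be closed before the buffer is complete
            return output.ToArray();
        }
    }

    // Python side, for reference: zlib.decompress(payload, -15)
    // ( wbits = -15 selects a raw DEFLATE stream, matching DeflateStream ).

Note this is per-frame ( intra-frame ) compression only; it does not exploit temporal redundancy the way a real video codec would.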

  • **What transport-class** do you use for ZeroMQ, **how many clients** do you distribute the frame-data to, and **what is your target FPS** to meet? Design shall follow some quantitative metrics, verifiable in a PoC, before any code starts to get typed. – user3666197 May 23 '16 at 15:04
  • Currently I'm using TCP, but I'll probably switch to UDP. I distribute data to 1 or 2 clients, at around 25 FPS. – May 23 '16 at 15:23

1 Answer


You may forget about compression for the moment and downscale for the PoC.

If your design indeed makes sense, try to focus first on the core CV-functionality, at the cost of reduced ( downscaled ) FPS, color-depth and resolution ( in this order of priority ); a minimal frame-decimation sketch follows below.
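As a first downscaling step ( an illustrative sketch built on the question's own fields, not part of the original answer ), plain frame-decimation cuts the egress bandwidth before any codec enters the picture:

    // Sketch: publish only every n-th depth frame ( 25 FPS -> ~12.5 FPS at n = 2 ).
    private int frameCounter;
    private const int FrameDivisor = 2;

    public void SendDepthFrameDecimated(DepthFrame depthFrame)
    {
        if (++this.frameCounter % FrameDivisor != 0)
            return; // drop this frame entirely
        depthFrame.CopyFrameDataToArray(this.shorts);
        Buffer.BlockCopy(this.shorts, 0, this.depthBytes, 0, this.depthBytes.Length);
        this.depthPublisher.SendByteArray(this.depthBytes);
    }

The same divisor idea applies to the color stream, which dominates the bandwidth.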

Your indicated data produces an egress data-stream of roughly 1 Gbps, on which the forthcoming CV-processing will choke anyway, given the CV-process performance costs ( delay / latency ) and the memory-management bottlenecks of the interim data-representations.
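For the record, the arithmetic behind that figure: ( 434 176 B depth + 4 147 200 B color ) per frame × 25 FPS × 8 bit/B ≈ 916 Mbit/s, i.e. right at the practical ceiling of a single gigabit link.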

This said, the PoC may benefit from a 1/4 - 1/10 slower FPS acquisition/stream-processing, and the fine-tuned solution will show you how many nanoseconds-per-frame your code has left as a stream-processing margin ( so as to finally decide whether there is enough time and processing-power to include any sort of CODEC-processing into the otherwise working pipeline ).
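A minimal measurement sketch for exactly that margin-per-frame question ( an illustration added here; the class and its names are hypothetical ), using the high-resolution Stopwatch:

    using System;
    using System.Diagnostics;

    // Sketch: time the existing copy + publish path of each frame and report
    // how much of the per-frame budget ( 40 ms at 25 FPS ) remains for a codec.
    public sealed class FrameBudgetMeter
    {
        private readonly Stopwatch sw = Stopwatch.StartNew();

        public void Measure(Action sendFrame, double targetFps = 25.0)
        {
            long t0 = this.sw.ElapsedTicks;
            sendFrame(); // e.g. () => SendDepthFrame(frame)
            double usedMs = (this.sw.ElapsedTicks - t0) * 1000.0 / Stopwatch.Frequency;
            double budgetMs = 1000.0 / targetFps;
            Console.WriteLine($"used {usedMs:F3} ms, margin {budgetMs - usedMs:F3} ms");
        }
    }

If the reported margin stays safely positive over thousands of frames, there is headroom for a CODEC stage; if not, downscale first.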

[Screenshot: an openCV stream-processing pipeline; the lower-left window lists the per-stage delays in [usec].]

The openCV processing latencies pictured there cover about 1/4 of one of your FullHD still images, in real-world processing at a much smaller FPS, on a single-threaded i7 / 3.33 GHz device. On such a device, the L3 cache can carry as much as 15 MB of imagery-data at the fastest latencies of less than 13 ns ( core-local access case ) .. 40 ns ( core-remote NUMA access case ), and the block-nature of the CV-orchestrated image-processing benefits a lot from a minimal, if not zero, cache-miss-rate. This is, however, not a universal deployment-hardware scenario to rely on:

[Screenshot: a table of typical memory-access latencies.]

The cost ( penalty ) of each cache-miss, which forces an access to data in the main DDR-RAM, is about +100 ns: https://stackoverflow.com/a/33065382/3666197
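To put that penalty in scale: one raw color frame is ~4.1 MB, so a pass touching each byte makes ~4 million memory accesses; even a 1 % cache-miss-rate then costs ~4 million × 1 % × 100 ns ≈ 4 ms per frame, a full tenth of the 40 ms frame-budget at 25 FPS.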

Without a working pipeline, there are no quantitative data about the sustained stream-processing and its margin-per-frame, so the CODEC-dilemma cannot be decided a-priori of the proposed PoC-implementation.

– user3666197
  • Well, I reduced the frame-rate because, in the end, I don't really need to get 25 FPS, so I'm around 15-17 right now and it's "ok". Now, when I'm streaming both color & depth images, I'm sending at 250 Mbps. – May 27 '16 at 07:31