I'm getting both depth and color frames from the Kinect 2, using the Kinect SDK (C#), and I'm sending them to Python clients using ZeroMQ.
this.shorts = new ushort[217088];     // 512 * 424
this.depthBytes = new byte[434176];   // 512 * 424 * 2
this.colorBytes = new byte[4147200];  // 1920 * 1080 * 2 (raw YUY2 color data)
public void SendDepthFrame(DepthFrame depthFrame)
{
    // Copy the 16-bit depth values, then reinterpret them as raw bytes for ZeroMQ.
    depthFrame.CopyFrameDataToArray(this.shorts);
    Buffer.BlockCopy(this.shorts, 0, this.depthBytes, 0, this.depthBytes.Length);
    this.depthPublisher.SendByteArray(this.depthBytes);
}

public void SendColorFrame(ColorFrame colorFrame, WriteableBitmap map)
{
    // Copy the raw (YUY2) color data and publish it as-is.
    colorFrame.CopyRawFrameDataToArray(this.colorBytes);
    this.colorPublisher.SendByteArray(this.colorBytes);
}
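For context, depthPublisher and colorPublisher are ZeroMQ PUB sockets; the setup looks roughly like this (a sketch assuming the NetMQ binding, with SendByteArray as a thin wrapper around SendFrame and a made-up port):

using NetMQ;
using NetMQ.Sockets;

// Assumed setup, not shown above: one PUB socket per stream.
var depthPublisher = new PublisherSocket();
depthPublisher.Bind("tcp://*:5556");          // hypothetical endpoint

byte[] depthBytes = new byte[512 * 424 * 2];
depthPublisher.SendFrame(depthBytes);         // one ZeroMQ message per Kinect frame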
Since I'm sending uncompressed data, I'm overloading the network, and I'd like to compress these frames. Is this possible for continuous stream processing?
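The simplest thing I can think of is per-frame lossless compression, e.g. gzip on the depth buffer before publishing (Python can decompress that with its standard gzip/zlib modules). A minimal sketch, with a hypothetical Compress helper that is not part of my current code:

using System.IO;
using System.IO.Compression;

// Hypothetical helper: gzip-compress a frame buffer before sending it out.
public byte[] Compress(byte[] raw)
{
    using (var output = new MemoryStream())
    {
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            gzip.Write(raw, 0, raw.Length);
        }
        return output.ToArray();
    }
}

// Usage inside SendDepthFrame:
// this.depthPublisher.SendByteArray(Compress(this.depthBytes));

This helps, but it only exploits redundancy within a single frame, not the temporal redundancy between consecutive frames.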
I also know I could compress each frame individually as PNG/JPEG, but I would like to keep the notion of a video stream.
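That per-frame route would look roughly like this for the color frames (a sketch assuming the WPF JpegBitmapEncoder and the SDK's CopyConvertedFrameDataToArray; still image-by-image, not a real video codec):

using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Kinect;

// Sketch: convert one color frame to BGRA and encode it as a JPEG byte array.
public byte[] EncodeColorFrameAsJpeg(ColorFrame colorFrame)
{
    int width = colorFrame.FrameDescription.Width;    // 1920
    int height = colorFrame.FrameDescription.Height;  // 1080

    byte[] bgra = new byte[width * height * 4];
    colorFrame.CopyConvertedFrameDataToArray(bgra, ColorImageFormat.Bgra);

    var source = BitmapSource.Create(width, height, 96, 96,
                                     PixelFormats.Bgra32, null, bgra, width * 4);

    var encoder = new JpegBitmapEncoder { QualityLevel = 75 };   // quality value is a guess
    encoder.Frames.Add(BitmapFrame.Create(source));

    using (var stream = new MemoryStream())
    {
        encoder.Save(stream);
        return stream.ToArray();   // much smaller than the ~4 MB raw frame
    }
}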
The goal is to send the compressed data from C# and then decode it in Python. Are there any libraries that allow this?