I work at a telehealth company and we use connected medical devices to provide the doctor with real-time information from this equipment. The equipment is operated by a trained health professional.
These devices produce video and audio. Right now we stream with PeerJS (a peer-to-peer connection), but we want to move away from that and use a Raspberry Pi whose only job is to stream the data (audio and video).
Because the equipment is meant to be operated under a doctor's instructions, the doctor needs to receive the data in real time.
But the trained health professional also needs to see what they are doing, so we need a local feed from the equipment as well.
How we capture audio and video
We use ffmpeg with a Go client that manages the ffmpeg processes and streams them to an SRS server. This works, but we see a 2-3 second delay (RTMP from ffmpeg, HTTP-FLV on the front end).
ffmpeg settings (note: the original command combined `-x264opts keyint=15` with `-g 0`, which conflict since `-g` also sets the keyframe interval, so `-g 0` is dropped here):
("ffmpeg", "-f", "v4l2", "-i", "/dev/video0", "-f", "flv", "-vcodec", "libx264", "-x264opts", "keyint=15", "-preset", "ultrafast", "-tune", "zerolatency", "-fflags", "nobuffer", "-b:a", "160k", "-threads", "0", "rtmp://srs-url")
My questions
- Is there a way for this setup to achieve low latency (<1 s) for both the nurse and the doctor?
- Is my approach sound? Is there a better way?
Flow schema
Data exchange and use case flow:
Note: the nurse and doctor use HTTP-FLV to play the live stream, for low latency.