I'm trying to stream the audio from the default microphone on a Windows host to a Docker container. Mounting the devices through the layers of virtualization can be problematic, so I'm streaming over the network instead, but I cannot figure out how to correctly consume the incoming stream in my C# (.NET Core) code.
I am using the VLC Windows app to create the test stream on the Windows host, following the instructions here. I am streaming to rtp://127.0.0.1:5004, since everything will be running on the same machine. The code to consume the stream in my custom C# app looks like this:
using LibVLCSharp.Shared;

Core.Initialize(); // locate the native libvlc libraries

var libvlc = new LibVLC();
libvlc.SetLogFile("c:\\temp\\vlc.log");
var player = new MediaPlayer(libvlc);
var media = new Media(libvlc, "rtp://127.0.0.1:5004", FromType.FromLocation);
var status = await media.Parse(MediaParseOptions.ParseNetwork);
player.Playing += (sender, e) =>
{
    // Need to do something here?
};
player.Play(media);
What I expected to do was register an event handler on the MediaPlayer, Media, or some other VLC object and receive a buffer of audio bytes that I could convert to the expected format (PCM, 1 channel, 16,000 samples/sec, 16 bits per sample) and then write to the other stream (not shown in the code for simplicity). But I am obviously missing something.
So, my questions are: for my scenario, should I prefer HTTP, RTSP, or RTP streaming from the host? And once I have that set up correctly, how do I register for incoming audio events so I can process them and write the data to the other stream?
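For what it's worth, digging through the LibVLCSharp API it looks like the mechanism may not be an event at all, but rather MediaPlayer.SetAudioFormat plus MediaPlayer.SetAudioCallbacks, which hand you the decoded samples in a callback instead of sending them to a sound device. Below is a sketch of what I think that would look like for my target format. I'm assuming "S16N" is the correct libvlc format string for native-endian signed 16-bit PCM, and that the pause/resume/flush/drain callbacks can be left null; I haven't verified that this works:

```csharp
using System;
using System.Runtime.InteropServices;
using LibVLCSharp.Shared;

Core.Initialize(); // locate the native libvlc libraries

using var libvlc = new LibVLC();
using var player = new MediaPlayer(libvlc);
using var media = new Media(libvlc, "rtp://127.0.0.1:5004", FromType.FromLocation);

// Ask libvlc to decode to signed 16-bit PCM, 16000 Hz, mono
// (my target format), delivered to a callback rather than a device.
player.SetAudioFormat("S16N", 16000, 1);

// Only the play callback is needed; the others appear to be optional.
player.SetAudioCallbacks(
    (IntPtr data, IntPtr samples, uint count, long pts) =>
    {
        // count is the number of samples; 2 bytes each for 16-bit mono.
        var buffer = new byte[count * 2];
        Marshal.Copy(samples, buffer, 0, buffer.Length);
        // TODO: write buffer to the other stream here
    },
    null, null, null, null);

player.Play(media);
```

If that is the intended API, my remaining question is whether it plays nicely with the RTP source, or whether I still need to switch transports.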