
I have so far managed to run the following sample:

WebRTC native c++ to browser video streaming example

The sample shows how to stream video from a native C++ application (peerconnection_client.exe) to the browser (I am using Chrome). This works fine and I can see myself in the browser.

What I would like to do is to stream audio from the browser to the native application but I am not sure how. Can anyone give me some pointers please?

jpen

3 Answers


I'm trying to find a way to stream both video and audio from the browser to my native program. Here is my approach so far.

To stream video from the browser to your native program without a GUI, just follow the example here: https://chromium.googlesource.com/external/webrtc/+/refs/heads/master/examples/peerconnection/client/

Use AddOrUpdateSink to register your own VideoSinkInterface, and you will receive the frame data in the callback void OnFrame(const cricket::VideoFrame& frame). Instead of rendering the frame to a GUI as the example does, you can do whatever you want with it.

To stream audio from the browser to your native program without a real audio device, you can use a fake audio device:

  1. Set the variable rtc_use_dummy_audio_file_devices to true in the file https://chromium.googlesource.com/external/webrtc/+/master/webrtc/build/webrtc.gni
  2. Invoke the global static function to specify the output filename: webrtc::FileAudioDeviceFactory::SetFilenamesToUse("", "file_to_save_audio");
  3. Patch file_audio_device.cc with the code below. (As I write this answer, FileAudioDevice has some issues; they may already be fixed.)
  4. Recompile your program, touch file_to_save_audio, and you will see PCM data in file_to_save_audio after the WebRTC connection is established.

patch:

    diff --git a/webrtc/modules/audio_device/dummy/file_audio_device.cc b/webrtc/modules/audio_device/dummy/file_audio_device.cc
index 8b3fa5e..2717cda 100644
--- a/webrtc/modules/audio_device/dummy/file_audio_device.cc
+++ b/webrtc/modules/audio_device/dummy/file_audio_device.cc
@@ -35,6 +35,7 @@ FileAudioDevice::FileAudioDevice(const int32_t id,
     _recordingBufferSizeIn10MS(0),
     _recordingFramesIn10MS(0),
     _playoutFramesIn10MS(0),
+    _initialized(false),
     _playing(false),
     _recording(false),
     _lastCallPlayoutMillis(0),
@@ -135,12 +136,13 @@ int32_t FileAudioDevice::InitPlayout() {
       // Update webrtc audio buffer with the selected parameters
       _ptrAudioBuffer->SetPlayoutSampleRate(kPlayoutFixedSampleRate);
       _ptrAudioBuffer->SetPlayoutChannels(kPlayoutNumChannels);
+      _initialized = true;
   }
   return 0;
 }

 bool FileAudioDevice::PlayoutIsInitialized() const {
-  return true;
+  return _initialized;
 }

 int32_t FileAudioDevice::RecordingIsAvailable(bool& available) {
@@ -236,7 +238,7 @@ int32_t FileAudioDevice::StopPlayout() {
 }

 bool FileAudioDevice::Playing() const {
-  return true;
+  return _playing;
 }

 int32_t FileAudioDevice::StartRecording() {
diff --git a/webrtc/modules/audio_device/dummy/file_audio_device.h b/webrtc/modules/audio_device/dummy/file_audio_device.h
index a69b47e..3f3c841 100644
--- a/webrtc/modules/audio_device/dummy/file_audio_device.h
+++ b/webrtc/modules/audio_device/dummy/file_audio_device.h
@@ -185,6 +185,7 @@ class FileAudioDevice : public AudioDeviceGeneric {
   std::unique_ptr<rtc::PlatformThread> _ptrThreadRec;
   std::unique_ptr<rtc::PlatformThread> _ptrThreadPlay;

+  bool _initialized;
   bool _playing;
   bool _recording;
   uint64_t _lastCallPlayoutMillis;
Anil
simpx

I know this is an old question, but I struggled to find a solution myself, so I thought sharing it would be appreciated.

There is a more or less simple way to get an example running which streams from the browser to native code. You need the WebRTC source: http://www.webrtc.org/native-code/development

The two tools you need are the peerconnection server and client. Both can be found in the folder talk/example/peerconnection.

To get it working you need to patch the peerconnection client to enable DTLS. Apply the patch provided here https://code.google.com/p/webrtc/issues/detail?id=3872 and rebuild the client. Now you are set up on the native side!

For the browser I recommend the peer2peer example from https://github.com/GoogleChrome/webrtc. After starting the peerconnection_server and connecting the peerconnection_client, try to connect with the peer2peer example.

A connection constraint may be necessary:
{ "DtlsSrtpKeyAgreement": true }

user3761776

You could use the following example, which implements a desktop client for appRTC.

https://github.com/TemasysCommunications/appRTCDesk

This complements and interoperates with the web client, Android client, and iOS client provided by the open-source implementation at webrtc.org, giving you a full suite of clients to work with their free server. peerconnection_{client|server} is an old example from the libjingle era (pre-WebRTC) and does not interoperate with anything else.

Dr. Alex Gouaillard
  • Thanks for the response. I will take a look at it. I have managed to stream audio and video from the browser to peerconnection_client. Does the sample you provided handle saving streams into a file on the receiving side or do you know what native WebRTC functions I can use to achieve this? – jpen May 10 '14 at 13:22
  • The problem with the original peerconnection client and server is that they use pure sockets to connect, so you do not benefit from ICE, and scaling is very hard. Also, you do not have examples on iOS and Android to interoperate with, as you have with appRTC. It's a harder starting point. appRTCDesk does not provide saving/recording capacity. There is an effort in the standard to provide recording capacity directly in the browser, but it will not be out for quite some time. So in short, there is no native (JS) API for that now. – Dr. Alex Gouaillard May 11 '14 at 06:08
  • WebRTC's VoEFile class has some conversion functions for example "int ConvertPCMToCompressed(InStream* streamIn, OutStream* streamOut, CodecInst* compression)". I am trying to work out how I can use this function from within the receiving side (native C++ application). Also VoERTP_RTCP has StartRTPDump() but it is recommended for debugging purposes only. – jpen May 11 '14 at 12:48