
I need to move realtime audio between two Linux machines, which are both running custom software (of mine) which builds on top of GStreamer. (The software already has other communication between the machines, over a separate TCP-based protocol - I mention this in case having reliable out-of-band data makes a difference to the solution.)

The audio input will be a microphone / line-in on the sending machine, and normal audio output as the sink on the destination; alsasrc and alsasink are the most likely, though for testing I have been using the audiotestsrc instead of a real microphone.

GStreamer offers a multitude of ways to move data around over networks - RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There are also many examples on the web of streaming both audio and video - but none of them seem to work for me in practice: either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available.

In all cases, I'm testing on the command line with just gst-launch. No compression of the audio data is required - raw audio, or trivial WAV, uLaw or aLaw encoding is fine; what's more important is low-ish latency.

James Turner
    Here's an example of a pipeline that doesn't produce any errors, but also produces no output; the destination pipeline enters the PLAYING state, but no sound is heard. Source pipeline: gst-launch-0.10 audiotestsrc ! audioconvert ! rtpL16pay ! udpsink port=5005 host=localhost Destination pipeline: gst-launch-0.10 udpsrc port=5005 ! rtpL16depay ! alsasink – James Turner Apr 27 '10 at 10:22

4 Answers


To debug that kind of problem I would try:

  1. Run gst-launch audiotestsrc ! alsasink to check that sound output works
  2. Use a fakesink or filesink to see if any buffers arrive
  3. Try to find the pipeline problem with GST_DEBUG, for example check caps with GST_DEBUG=GST_CAPS:4 or use GST_DEBUG=*:2 to get all errors/warnings
  4. Use Wireshark to see if packets are sent
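The four steps above can be sketched as concrete commands (a rough walkthrough; the port and pipeline fragments are placeholders to adapt to your own setup):

```shell
# 1. Confirm that local sound output works at all:
gst-launch-0.10 audiotestsrc ! alsasink

# 2. Swap the real sink for fakesink to see whether any buffers flow at all:
gst-launch-0.10 udpsrc port=5000 ! fakesink dump=true

# 3. Raise the debug level to catch caps-negotiation failures...
GST_DEBUG=GST_CAPS:4 gst-launch-0.10 udpsrc port=5000 ! rtpL16depay ! alsasink
# ...or show errors and warnings from every category:
GST_DEBUG=*:2 gst-launch-0.10 udpsrc port=5000 ! rtpL16depay ! alsasink

# 4. Verify on the wire that packets actually arrive
#    (tshark is Wireshark's command-line tool; assumes it is installed):
tshark -i lo -f "udp port 5000"
```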

These pipelines work for me:

with RTP:

gst-launch-0.10 -v udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! audioconvert ! alsasink sync=false

gst-launch-0.10 audiotestsrc ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,rate=44100 ! rtpL16pay  ! udpsink host=localhost port=5000

with TCP:

gst-launch-0.10 tcpserversrc host=localhost port=3000 ! audio/x-raw-int, endianness="(int)1234", signed="(boolean)true", width="(int)16", depth="(int)16", rate="(int)44100", channels="(int)1" ! alsasink

gst-launch-0.10 audiotestsrc ! tcpclientsink host=localhost port=3000
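The question mentions GDP payloading; a variant of the TCP pair above that uses gdppay/gdpdepay serializes the caps along with the buffers, so the receiver needs no hand-written capsfilter (a sketch along the same lines, not tested here; host and port are placeholders):

```shell
# Receiver: the caps travel inside the GDP stream, so no capsfilter is needed
gst-launch-0.10 tcpserversrc host=localhost port=3000 ! gdpdepay ! alsasink

# Sender: gdppay wraps each buffer (and its caps) in a GDP header
gst-launch-0.10 audiotestsrc ! audioconvert ! gdppay ! tcpclientsink host=localhost port=3000
```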
tilljoel
  • Just a note that on Windows it seems to want an audioresample to make your receiver example work: "... ! rtpL16depay ! audioconvert ! audioresample ! directsoundsink" – OJW Sep 15 '11 at 17:01

My solution is very similar to tilljoel's, but I am using a microphone (which is what you need) as the source - hence some tweaking in the GStreamer pipeline.

Decode Audio from Microphone using TCP:

gst-launch-0.10 tcpserversrc host=localhost port=3000 !  audio/x-raw-int, endianness="(int)1234", signed="(boolean)true", width="(int)16", depth="(int)16", rate="(int)22000", channels="(int)1" ! alsasink

Encode Audio from Microphone using TCP:

gst-launch-0.10 pulsesrc ! audio/x-raw-int,rate=22000,channels=1,width=16 ! tcpclientsink host=localhost port=3000

Decode Audio from Microphone using RTP:

gst-launch-0.10 -v udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)22000, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! audioconvert ! alsasink sync=false

Encode Audio from Microphone using RTP:

gst-launch-0.10 pulsesrc ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,rate=22000 ! rtpL16pay  ! udpsink host=localhost port=5000
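Since the question says uLaw encoding is fine, here is a lower-bandwidth variant of the RTP pair above using mulawenc and the standard PCMU payload (type 0); a sketch along the same lines, with host and port as placeholders:

```shell
# Sender: resample to 8 kHz mono, mu-law encode, pay out as RTP PCMU
gst-launch-0.10 pulsesrc ! audioconvert ! audioresample ! \
    audio/x-raw-int,rate=8000,channels=1 ! mulawenc ! rtppcmupay ! \
    udpsink host=localhost port=5000

# Receiver: PCMU caps let rtppcmudepay negotiate without guesswork
gst-launch-0.10 udpsrc port=5000 \
    caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,payload=(int)0" ! \
    rtppcmudepay ! mulawdec ! audioconvert ! alsasink
```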
enthusiasticgeek
  • Can we receive audio on another port, like GStreamer receiving video on port 5000 and audio on port 5001? – DURGESH Jun 01 '17 at 05:26
  • @DURGESHKUMAR You would have to use a demuxer, obviously. Have a look at https://stackoverflow.com/questions/13904975/combining-an-audio-and-video-stream-using-gstreamer – enthusiasticgeek Jun 01 '17 at 15:46
  • can you send me backend.m file – DURGESH Jun 02 '17 at 04:12
  • @DURGESHKUMAR huh? Which .m file? MATLAB? Why don't you use a demuxer, as I stated earlier. See the Audio+Video section of http://trac.gateworks.com/wiki/Yocto/gstreamer/streaming . You will simply have to take my pipelines above and fit them with a demuxer element. There are tonnes of examples online. – enthusiasticgeek Jun 02 '17 at 14:46

Can you post some of the gst-launch pipelines you have tried? That might help in understanding why you are having issues. In general RTP/RTSP should work pretty easily.

Edit: A couple of items I can think of: 1. change host=localhost to host=<ip-address>, where <ip-address> is the actual IP address of the other Linux machine; 2. add caps="application/x-rtp, media=(string)audio" to the udpsrc element in the receiver.
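Applying both suggestions to the pipelines from the question's comment would look roughly like this (192.168.1.50 is a placeholder for the receiving machine's real address):

```shell
# Sender: point udpsink at the receiving machine, not localhost
gst-launch-0.10 audiotestsrc ! audioconvert ! rtpL16pay ! \
    udpsink host=192.168.1.50 port=5005

# Receiver: give udpsrc explicit RTP caps so rtpL16depay can negotiate
gst-launch-0.10 udpsrc port=5005 \
    caps="application/x-rtp,media=(string)audio,clock-rate=(int)44100,encoding-name=(string)L16,channels=(int)1,payload=(int)96" ! \
    rtpL16depay ! audioconvert ! alsasink
```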

Sid Heroor

A small update from 2023, using GStreamer 1.0.

sender:

gst-launch-1.0 pulsesrc ! audioconvert ! audio/x-raw,format=S16BE,channels=1,rate=44100 ! rtpL16pay ! \
udpsink host=192.168.1.108 port=5200

receiver:

gst-launch-1.0 -v udpsrc port=5200 ! "application/x-rtp,media=(string)audio, \
clock-rate=(int)44100, encoding-name=(string)L16, \
encoding-params=(string)1, channels=(int)1, \
payload=(int)96" ! rtpL16depay ! audioconvert ! autoaudiosink sync=false
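If the stream stutters on a real network, adding an rtpjitterbuffer between udpsrc and the depayloader trades a fixed amount of extra latency for smoother playback (a sketch; latency=50 ms is just a starting point to tune):

```shell
gst-launch-1.0 -v udpsrc port=5200 ! \
    "application/x-rtp,media=(string)audio,clock-rate=(int)44100,encoding-name=(string)L16,encoding-params=(string)1,channels=(int)1,payload=(int)96" ! \
    rtpjitterbuffer latency=50 ! rtpL16depay ! audioconvert ! autoaudiosink
```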