
I am testing the WebRTC AGC but I must be doing something wrong because the signal just passes through unmodified.

Here's how I create and initialize the AGC:

    agcConfig.compressionGaindB = 9;
    agcConfig.limiterEnable = 1;
    agcConfig.targetLevelDbfs = 9;   /* 9 dB below full scale */

    WebRtcAgc_Create(&agc);
    WebRtcAgc_Init(agc, minLevel, maxLevel, kAgcModeFixedDigital, 8000);
    WebRtcAgc_set_config(agc, agcConfig);

And then for each 10 ms sample block I do the following:

    WebRtcAgc_Process(agc, micData, NULL, 80, micData, NULL, micLevelIn, &micLevelOut, 0, &saturationWarning);

where `micLevelIn` is set to 0.

Can somebody tell me what I'm doing wrong?

I expected a full-scale sine tone to be attenuated down to the target dBFS level, and a low-level sine tone (e.g. -30 dBFS) to be amplified up to the target. But that's not what I'm seeing.

  • Are you sure there is absolutely no "spike" noise that is preventing the AGC from amplifying the input signal as you expect it to? Also take a look at this [**answer**](http://stackoverflow.com/a/12712228/319204); is `WebRtcAgc_Process()` expected to set `micLevelOut` appropriately and leave it at that?... – TheCodeArtist Apr 03 '14 at 16:11
  • A nice little description of [**`WebRtcAgc_Process()`**](http://osxr.org/android/source/external/webrtc/src/modules/audio_processing/agc/interface/gain_control.h#0133) to help sort out your expectations. – TheCodeArtist Apr 03 '14 at 16:20
  • Does `WebRtcAgc_Process()` consider the sine wave input as non-speech segment and hence skips it? Can you try passing an actual speech clip and test? – TheCodeArtist Apr 06 '14 at 07:26
  • Also checking out the source-code of webrtc, the param [`vadLogRatio`](http://code.google.com/p/webrtc/source/browse/trunk/webrtc/modules/audio_processing/agc/analog_agc.c?spec=svn5571&r=5571#944) is derived from `micLevelIn` passed to `WebRtcAgc_Process`. If this is set to **`0`** then it always happens to be less than the calculated `stt->vadThreshold`. Hence the input sample is NOT detected as speech and hence is passed out untouched. Just a thought... – TheCodeArtist Apr 06 '14 at 07:38
  • 4
    Please dont flag c++ code as C, it is confusing. – Vality Aug 08 '14 at 15:33
  • 2
    I have used the similar code . however in my case the output results in -1 ( error ) so far . Anyways can u share if you have received any saturationWarning so far ? Also additionally I understand that the speech output is a combined effect of resulting dbfs , compression gain adn few more parameters . I note that this might not be very helpful but I need to ensure that this works so that I can employ the same . Please share if you have cracked the problem already – Altanai Mar 03 '15 at 15:44

2 Answers


Here is the sequence of operations to be used for the WebRTC AGC:

  1. Create the AGC: `WebRtcAgc_Create`
  2. Initialize the AGC: `WebRtcAgc_Init`
  3. Set the config: `WebRtcAgc_set_config`
  4. Initialize `capture_level = 0`
  5. For `kAgcModeAdaptiveDigital`, invoke the virtual mic: `WebRtcAgc_VirtualMic`
  6. Process the buffer with `capture_level`: `WebRtcAgc_Process`
  7. Take the out capture level returned by `WebRtcAgc_Process` and assign it to `capture_level`
  8. Repeat steps 5-7 for each audio buffer
  9. Destroy the AGC: `WebRtcAgc_Free`

Check `webrtc/modules/audio_processing/gain_control_impl.cc` for reference.
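The steps above might look like this in code (a minimal sketch against the legacy C interface in `gain_control.h`; error handling is omitted, and `readFrame()` is a placeholder for your own capture code, not a WebRTC function). The key point for the original question is step 7: the level returned in `micLevelOut` must be fed back in as the input level on the next frame, rather than passing a constant 0:

```c
void* agc = NULL;
WebRtcAgcConfig cfg;
cfg.compressionGaindB = 9;
cfg.limiterEnable = 1;
cfg.targetLevelDbfs = 9;                                      /* -9 dBFS */

WebRtcAgc_Create(&agc);                                       /* step 1 */
WebRtcAgc_Init(agc, 0, 255, kAgcModeAdaptiveDigital, 8000);   /* step 2 */
WebRtcAgc_set_config(agc, cfg);                               /* step 3 */

int32_t capture_level = 0;                                    /* step 4 */
int16_t frame[80];            /* one 10 ms block at 8 kHz */
uint8_t saturationWarning;

while (readFrame(frame)) {    /* placeholder capture loop */
    int32_t micLevelOut = 0;
    /* step 5: adaptive digital mode only */
    WebRtcAgc_VirtualMic(agc, frame, NULL, 80, capture_level, &capture_level);
    /* step 6: process in place */
    WebRtcAgc_Process(agc, frame, NULL, 80, frame, NULL,
                      capture_level, &micLevelOut, 0, &saturationWarning);
    /* step 7: feed the returned level back in on the next frame */
    capture_level = micLevelOut;
}

WebRtcAgc_Free(agc);                                          /* step 9 */
```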


Try this:


    agcConfig.compressionGaindB = 9;
    agcConfig.limiterEnable = 1;
    agcConfig.targetLevelDbfs = 9;   /* 9dB below full scale */

    WebRtcAgc_Create(&agc);
    WebRtcAgc_Init(&agc, minLevel, maxLevel, kAgcModeFixedDigital, 8000);
    WebRtcAgc_set_config(&agc, &agcConfig);
