
What the code in my fiddle tries to do is the following (a rough sketch of the intended scheduling follows the list):

  • Play a "header" sound

  • Subsequently play the body/main content sound which should have a background track supporting it

  • Lastly, play an outro/footer sound
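For reference, the scheduling I'm aiming for looks roughly like the sketch below. The file names, the single-AudioContext setup and the 0.3 background gain are placeholders for illustration, not the exact code in my fiddle:

```javascript
// Rough sketch of the intended sequence; URLs and the gain value are placeholders
const ctx = new AudioContext();

function load(url) {
  return fetch(url)
    .then(res => res.arrayBuffer())
    .then(data => ctx.decodeAudioData(data)); // ArrayBuffer -> AudioBuffer
}

Promise.all([
  load('header.mp3'),
  load('body.mp3'),
  load('background.mp3'),
  load('outro.mp3')
]).then(([header, body, background, outro]) => {
  const t0 = ctx.currentTime;
  const bodyStart = t0 + header.duration;
  const outroStart = bodyStart + body.duration;

  // 1. header
  const headerSrc = ctx.createBufferSource();
  headerSrc.buffer = header;
  headerSrc.connect(ctx.destination);
  headerSrc.start(t0);

  // 2. body, with a quieter background track underneath it
  const bodySrc = ctx.createBufferSource();
  bodySrc.buffer = body;
  bodySrc.connect(ctx.destination);
  bodySrc.start(bodyStart);

  const bgSrc = ctx.createBufferSource();
  bgSrc.buffer = background;
  const bgGain = ctx.createGain();
  bgGain.gain.value = 0.3;              // keep the background under the body
  bgSrc.connect(bgGain);
  bgGain.connect(ctx.destination);
  bgSrc.start(bodyStart);
  bgSrc.stop(outroStart);               // stop the background when the body ends

  // 3. outro
  const outroSrc = ctx.createBufferSource();
  outroSrc.buffer = outro;
  outroSrc.connect(ctx.destination);
  outroSrc.start(outroStart);
});
```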

My needs are similar to those in the thread Mixing two audio buffers, put one on background of another by using web Audio Api, with minor differences here and there, although I don't understand all the promises in that thread. However, I believe my code is held back by a tiny oversight. So far I'm unable to test which of the outlined steps succeed, because my calls to AudioBufferSourceNode.start() don't produce any sound from the speakers.

I also inspected the data in the processed/resulting ArrayBuffers. It appears that every index holds 0 (i.e. the buffer is silent, though it should still play). You might notice I'm using OfflineAudioContext instances at times; I intend to pipe the final buffer to a library that will export it to MP3 format.
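For the export step, what I have in mind is roughly the following: render the final mix in an OfflineAudioContext, then hand the rendered channel data to the MP3 library. In the sketch below, encodeToMp3() is only a stand-in for that library, and the oscillator is a dummy source standing in for the real header/body/outro graph:

```javascript
// Sketch of the offline render + harvest step. encodeToMp3() is a placeholder
// for the MP3 export library; the oscillator is a dummy source.
const sampleRate = 44100;
const durationSeconds = 2;
const offlineCtx = new OfflineAudioContext(2, sampleRate * durationSeconds, sampleRate);

const osc = offlineCtx.createOscillator();
osc.connect(offlineCtx.destination);
osc.start(0);
osc.stop(durationSeconds);

offlineCtx.startRendering().then(renderedBuffer => {
  // renderedBuffer is an AudioBuffer; each channel is a Float32Array of samples
  const left = renderedBuffer.getChannelData(0);
  const right = renderedBuffer.getChannelData(1);
  // encodeToMp3(left, right, sampleRate); // hand off to the MP3 library here
});
```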

The code can be found in this fiddle. You can use any locally hosted audio files at your convenience.

I Want Answers

1 Answer


First of all, you can't call createMediaElementSource on an OfflineAudioContext; you have to use an AudioContext. Second, you probably should only create and use one AudioContext.
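For example, the minimal shape is something along these lines (the querySelector('audio') lookup is just an illustrative way to grab your media element):

```javascript
// createMediaElementSource exists on AudioContext, not OfflineAudioContext
const ctx = new AudioContext();                        // one context for everything
const mediaElement = document.querySelector('audio');  // whichever <audio> element you use
const source = ctx.createMediaElementSource(mediaElement);
source.connect(ctx.destination);
mediaElement.play();
```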

Raymond Toy
  • None of the examples I've seen show how to fill up a buffer with reasonable data, i.e. not white noise (one decodeAudioData-based approach is sketched after these comments). That's why I used createMediaElementSource. Another reason is that the tracks are not snippets but files over 2 minutes long. I'm using OfflineAudioContext because I need to harvest the buffers and export them to the MP3 library. The call to the `start()` method is just temporary; for now I want to hear whether I'm on the right track, pun intended. If they both inherit from the same BaseAudioContext interface, why does one lack a createMediaElementSource method? – I Want Answers Oct 09 '17 at 17:31
  • Check the spec: https://webaudio.github.io/web-audio-api/#BaseAudioContext; there is no `createMediaElementSource` defined on a BaseAudioContext. It's only defined for an AudioContext: https://webaudio.github.io/web-audio-api/#AudioContext – Raymond Toy Oct 09 '17 at 18:28
  • And createMediaElementSource isn't defined for OfflineAudioContext because it could be a media element (e.g. a streaming source) that we can't seek or consume data offline for, so it doesn't make sense - and thus isn't available. – cwilso Oct 09 '17 at 20:40
  • Also, the .buffer on a buffer source node should always be an AudioBuffer, not an ArrayBuffer. – cwilso Oct 09 '17 at 20:40
  • Apologies, I couldn't reply earlier because of erratic power supply. I've refactored, and the new code throws two errors: the first is about the number of frames provided to createBuffer() being less than 0, and the second says the value in setValueCurveAtTime overlaps with the one in setValueAtTime, so I made a new fiddle adding all gains to one arrayBuffer, although I'm unable to test because I'm on mobile now. But if it works, that means I'll have no control over the duration of the effects. Fiddle 1: http://fiddle.net/ka830Lqq. Fiddle 2: http://fiddle.net/ka830Lqq/1. Many thanks – I Want Answers Oct 10 '17 at 08:30
  • I think you meant jsfiddle.net/ka830Lqq and jsfiddle.net/ka830Lqq/1. And smaller examples that actually run from jsfiddle.net would be super helpful – Raymond Toy Oct 10 '17 at 16:03
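Following up on the comment about filling a buffer with real (non-noise) data: one route that does not need createMediaElementSource, and that also works with an OfflineAudioContext, is fetch + decodeAudioData + createBufferSource. A rough sketch, with a placeholder file name and an arbitrary render length:

```javascript
// Sketch: fill a buffer with real audio data without createMediaElementSource.
// 'body.mp3' is a placeholder file name. decodeAudioData is defined on
// BaseAudioContext, so it is available on an OfflineAudioContext too.
const sampleRate = 44100;
const offlineCtx = new OfflineAudioContext(2, sampleRate * 150, sampleRate); // ~2.5 min

fetch('body.mp3')
  .then(res => res.arrayBuffer())
  .then(data => offlineCtx.decodeAudioData(data))
  .then(audioBuffer => {
    const src = offlineCtx.createBufferSource();
    src.buffer = audioBuffer;              // an AudioBuffer, not an ArrayBuffer
    src.connect(offlineCtx.destination);
    src.start(0);
    return offlineCtx.startRendering();
  })
  .then(rendered => {
    // rendered.getChannelData(0) etc. are the Float32Arrays that would go
    // to the MP3 export library
  });
```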