
I'm getting this JavaScript warning on the latest version of Chrome Desktop, which now prevents auto-playing of sound without user interaction: http://bgr.com/2018/03/22/google-chrome-autoplay-videos-sound-block/

So I'm getting this error/warning in my console logs: "The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page"

Is there any way for me to resume sounds or the audio context of SoundJS upon user interaction, like a click or keyboard event on the page?

My setup is that I'm using PreloadJS to load my MP3s like so:

queue.loadManifest([
  {id: 'sndScore', src: 'sndScore.mp3'}
]);

Then I play the sounds like this:

createjs.Sound.play("sndScore");

I tried adding a flag, var unitInteracted = false;, which I then set to true when the user interacts with the page, and I modified my sound-playing code like so, to make sure that the page has been interacted with before I attempt to play sounds:

if (unitInteracted) {                                
  createjs.Sound.play("sndScore");
}

...but for some reason, I'm still getting the above error/warning and the page still ends up muted. Is there any way to unmute it once Chrome says: "The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page"?

Thanks!

object404
  • I don't know SoundJS, nor when it creates its AudioContext, but the error you've got tells you to initialize the context in the user gesture itself (i.e. synchronously in the event handler, or within something like 60ms after it). It seems it's not the same as for media elements, which only require that the page has ever received a user gesture. So your flag approach would not work, and depending on how and when SoundJS creates its context, it might be hard, or as easy as an initial splash screen where you'd bind a click event triggering `createjs.Sound.play(silence)` – Kaiido May 06 '18 at 04:27
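
A minimal sketch of that unlock trick, assuming a short silent clip silence.mp3 that you ship yourself (both the file name and the "silence" id are hypothetical):

// Register a (hypothetical) short silent clip up front...
createjs.Sound.registerSound("silence.mp3", "silence");

// ...and play it synchronously inside the first user gesture, so the
// browser lets the underlying AudioContext start.
document.addEventListener("click", function unlock() {
  createjs.Sound.play("silence");
}, { once: true });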

1 Answer


I haven't gone too deep into the source code of SoundJS, but it seems it is PreloadJS that triggers the creation of the AudioContext.

So to avoid that, you might need to start fetching your resources only from the user-gesture event: e.g. you can present your users with a splash screen, and only once they click a button on it do you start fetching your audio resources:

// wait for user gesture
btn.onclick = e => {
  // now we can load our resources
  var queue = new createjs.LoadQueue();
  queue.installPlugin(createjs.Sound);
  queue.on("complete", handleComplete, this);
  queue.loadFile({
    id: "sound",
    src: "https://dl.dropboxusercontent.com/s/1cdwpm3gca9mlo0/kick.mp3"
  });
  btn.disabled = true;
  // and show our user we are loading things
  btn.textContent = 'loading';
};

function handleComplete() {
  // now we can play our sounds without issue
  btn.disabled = false;
  btn.textContent = 'play';
  btn.onclick = e =>
    createjs.Sound.play("sound");
  // we don't need a user gesture anymore
  setTimeout(btn.onclick, 2000);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/SoundJS/1.0.2/soundjs.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/PreloadJS/1.0.1/preloadjs.min.js"></script>

<!-- make it part of a splash-screen -->
<button id="btn">Let's start the awesome</button>
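
Note that this works because an AudioContext that is created inside a user-gesture handler is allowed to start directly in the "running" state, so once the files are decoded you can play them freely without any further gesture (that's what the final setTimeout call demonstrates).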
Kaiido
  • I haven't tried this solution out yet, as it may affect the functionality of the unit I'm working on, but I was hoping there was a way to instead resume the audio context upon user interaction, like in PhaserJS here: https://github.com/photonstorm/phaser/issues/2913 SoundJS does seem to have audio context variables for "advanced users" according to the documentation – I'm hoping that someone here knows how to use them and make the context resume after user interaction (a sketch of that idea follows this thread)... – object404 May 06 '18 at 13:49
  • @object404 Well, once again I don't know SoundJS, but I can still tell what happens here: 1) PreloadJS fetches the audio file as an ArrayBuffer (through AJAX); 2) it uses the SoundJS plugin to decode this file into an AudioBuffer, thanks to the decodeAudioData method of SoundJS's AudioContext; 3) once this AudioBuffer is created, it is stored somewhere so that SoundJS can retrieve it in its `play` method and create an AudioBufferSourceNode from the previously created buffer. So as you can see, the AudioContext is actually needed from step 2) on, which is an asynchronous step. – Kaiido May 06 '18 at 14:44
  • This means that even if you had an option to trigger steps 2) and 3) separately, SoundJS would still be unable to `play` the sound synchronously. So all you would have gained is the fetching time, which you can already save by making a first request so that the files are stored in the browser's cache (the pipeline is sketched below). – Kaiido May 06 '18 at 14:44
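
Regarding the resume idea from the first comment: a rough, untested sketch, assuming SoundJS is running on its WebAudio plugin and that the plugin exposes the underlying context as createjs.WebAudioPlugin.context (the "advanced users" property the comment refers to):

// Untested sketch: resume SoundJS's underlying AudioContext on the first
// user gesture. Assumes the WebAudio plugin is active and exposes its
// context as createjs.WebAudioPlugin.context.
document.addEventListener("click", function unlock() {
  var ctx = createjs.WebAudioPlugin.context;
  if (ctx && ctx.state === "suspended") {
    ctx.resume();
  }
}, { once: true });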
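
To make those three steps concrete, here is a rough plain Web Audio illustration of that pipeline (an assumption about its general shape, not SoundJS's actual code):

// 0) the context is needed before any decoding can happen
var ctx = new AudioContext();
var buffers = {};

// 1) fetch the audio file as an ArrayBuffer
fetch("sndScore.mp3")
  .then(function (resp) { return resp.arrayBuffer(); })
  // 2) decode it: this already requires the AudioContext, asynchronously
  .then(function (data) { return ctx.decodeAudioData(data); })
  // 3) store the resulting AudioBuffer for later playback
  .then(function (audioBuffer) { buffers.sndScore = audioBuffer; });

function play(id) {
  // each playback needs a fresh AudioBufferSourceNode over the stored buffer
  var source = ctx.createBufferSource();
  source.buffer = buffers[id];
  source.connect(ctx.destination);
  source.start(0);
}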