12

I am trying to save the output from the Web Audio API for future use. So far I think getting the PCM data and saving it as a file will meet my needs. I am wondering if Web Audio or mozAudio already supports saving the output stream, and if not, how I can get the PCM data from the output stream.

ShrekOverflow

4 Answers

6

The requirements aren't entirely clear beyond wanting to capture web audio in some programmatic way. The presumption here is that you want to do this from JavaScript executing on the page currently being browsed, but even that isn't certain.

As Incognito points out, you can do this in Chrome by using the callback hanging off decodeAudioData(). But this may be overly complicated if you're simply trying to capture, for example, the output of a single web stream and decode it into PCM for use in your sound tools of choice.

Another strategy you might consider, for cases when the media URL is obscured or otherwise difficult to decode using your current tools, is capture from your underlying sound card. This gives you the decoding for free, at the potential expense of a lower sampling rate if (and only if) your sound card isn't able to sample the stream effectively.

Since you're after PCM, you're already converting the signal to a digital encoding anyway. Obviously, only do this if you have the legal right to use the files being sampled.

Regardless of the route you choose, best of luck to you. Be it programmatic stream dissection or spot sampling, you should now have more than enough information to proceed.


Edit: Based on additional information from the OP, this seems like the needed solution (merged from here and here, using Node.js's implementation of fs):

var fs = require('fs');

function saveAudio(data, saveLocation) {
    // AudioContext is a browser API; this assumes an environment where
    // both `window` and Node's `fs` are available (see comments below).
    var context = new (window.AudioContext || window.webkitAudioContext)();

    if (context.decodeAudioData) {
        context.decodeAudioData(data, function (buffer) {
            // decodeAudioData yields an AudioBuffer; extract the raw PCM
            // samples first, since fs.writeFile can't serialize an
            // AudioBuffer object directly.
            var pcm = Buffer.from(buffer.getChannelData(0).buffer);
            fs.writeFile(saveLocation, pcm, function (err) {
                if (err) throw err;
                console.log('It\'s saved!');
            });
        }, function (e) {
            console.log(e);
        });
    } else {
        // Fallback for older engines without decodeAudioData.
        var buffer = context.createBuffer(data, false /*mixToMono*/);
        var pcm = Buffer.from(buffer.getChannelData(0).buffer);
        fs.writeFile(saveLocation, pcm, function (err) {
            if (err) throw err;
            console.log('It\'s saved!');
        });
    }
}

(Warning: untested code. If this doesn't work, edits are welcome.)

This effectively wraps decodeAudioData from the Web Audio API: it decodes PCM from the supplied data, then attempts to save it to the target saveLocation. Simple enough, really.
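
For context, a hypothetical call site might look like this (the URL and save path are placeholders, not part of the original answer):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'audio/mix.mp3', true); // placeholder URL
xhr.responseType = 'arraybuffer';       // ArrayBuffer, as decodeAudioData expects
xhr.onload = function () {
    saveAudio(xhr.response, '/tmp/mix.pcm'); // placeholder save path
};
xhr.send();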

MrGomez
  • Consider this: I make DJ software and some person makes a mix with it; now the person wants to record that mix. How is that going to happen? The only way to do so is to capture the data that's processed at the end of AudioProcessing. Makes sense? – ShrekOverflow Apr 09 '12 at 13:14
  • @Abhishek I see: you want to be able to have them download their mix directly from the output of the tool as a feature integrated with the tool itself. That makes sense. I'll look into this and see what I can find. :) – MrGomez Apr 09 '12 at 18:00
  • Yes, exactly, that's what I want to implement :-) – ShrekOverflow Apr 09 '12 at 18:23
  • @Abhishek Spec example provided. In short, you'll want to render your web audio to a buffer, then save that buffer to disk. I leave it to you to handle prompting your users for the location; all this does is take your supplied data source and a save location, and write your buffer to disk. It should meet your needs, after a short code review to make sure I didn't screw anything up. – MrGomez Apr 09 '12 at 20:39
  • How are you accomplishing all of this in node (i.e. where are you getting the `window` variable in node)? AFAIK, the Web Audio API has not been implemented in node.js. – ampersand Apr 10 '12 at 06:05
  • @ampersand I'm not. The rendered example uses its `fs` module and attempts to save a buffer object through interface stapling. This admittedly isn't the cleanest way to go about it, but fortunately, [alternatives exist](http://stackoverflow.com/questions/2897619/using-html5-javascript-to-generate-and-save-a-file) if the OP wishes to use them instead. If you have thoughts on improving this (or providing your own answer), I encourage you to do so. :) – MrGomez Apr 10 '12 at 06:16
  • @MrGomez you are again getting me wrong; the simple question is "how do I get Web Audio to render to a buffer?" To the best of my knowledge, I haven't seen anything that renders web audio to a buffer. – ShrekOverflow Apr 10 '12 at 11:51
  • But your link led to the JsAudio node; I am going to see if that does what I actually need :-) And by the way, to get audio the JavaScriptNode is used :-( They should have put an example of such usage in the spec instead of sine wave generators :-( – ShrekOverflow Apr 10 '12 at 12:00
  • Kindly edit your answer; the actual answer is to use the javaScriptNode and its .inputBuffer. Now I need to figure out where the javaScriptNode is actually attaching itself :-) – ShrekOverflow Apr 10 '12 at 12:01
  • @Abhishek Oh, that's easy. And again, I believe @Incognito had the right solution here. The secret is that `decodeAudioData()` _invokes a [callback](https://en.wikipedia.org/wiki/Callback_(computer_programming)) with the decoded PCM audio buffer,_ allowing you to do whatever you wish with it _once you define a function to handle the callback._ I believe this was the primary source of your confusion, and the above example tersely illustrates how this might be done, by providing an [anonymous function](https://en.wikipedia.org/wiki/Callback_(computer_programming)) as the callback. – MrGomez Apr 10 '12 at 21:25
  • Well, decodeAudioData gives me raw PCM of the source stream and NOT the output after mixing :P – ShrekOverflow Apr 11 '12 at 16:11
  • @Abhishek Ah. As to audio mixing, have you looked at tools like [WebAL](https://github.com/benvanik/WebAL)? HTML5 doesn't support a native mixer at the standards level, to my understanding. – MrGomez Apr 11 '12 at 20:52
  • 3
    As there was no native cross-browser solution due to the changing API, I ended up doing this: use C++ to write a custom module for Node.js using libLame and libMpg123, and mix with Audiolib.js :-) which runs fairly nicely server-side in Node.js. What I do is use the libmpg123 bridge to get samples, Audiolib.js to mix them, and libLame to re-encode them and save the output or stream it :-) After some finishing I will push this to git. – ShrekOverflow Jun 13 '12 at 09:05
  • @Abhishek Very nice. Thank you for posting the follow-up! :) – MrGomez Jun 13 '12 at 09:52
  • 2
    The code shown here doesn't work! There is no AudioContext in Node.js. Node.js is just V8; it is not a full browser, and therefore no HTML5 API is available in Node.js. – chrisweb Jul 19 '13 at 17:38
  • @chrisweb +1. This is a super-stale entry, so edits are welcome. That said, it looks like the answer just below this one (http://stackoverflow.com/a/14372537/517815) is on the right track. – MrGomez Jul 20 '13 at 01:05
4

The latest Web Audio API draft introduced the OfflineAudioContext exactly for this purpose.

You use it exactly the same way as a regular AudioContext, but with an additional startRendering() method to trigger offline rendering, as well as an oncomplete callback so that you can act once rendering finishes.
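
As a rough sketch of the shape this takes (assuming a browser that implements the draft; `someDecodedBuffer` is a placeholder for an AudioBuffer you already have):

// Render 10 seconds of stereo audio at 44.1 kHz, entirely offline.
var offline = new OfflineAudioContext(2, 44100 * 10, 44100);

var source = offline.createBufferSource();
source.buffer = someDecodedBuffer; // placeholder: an AudioBuffer obtained elsewhere
source.connect(offline.destination);
source.start(0);

offline.oncomplete = function (event) {
    // event.renderedBuffer is an AudioBuffer holding the mixed PCM output.
    var pcm = event.renderedBuffer.getChannelData(0);
    console.log('Rendered ' + pcm.length + ' samples');
};

offline.startRendering();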

ruidlopes
2

Chrome should support it (or at least mostly support this new feature).

decodeAudioData()

When decodeAudioData() is finished, it calls a callback function which provides the decoded PCM audio data as an AudioBuffer

It's nearly identical to the XHR2 way of doing things, so you'll likely want to make an abstraction layer for it.
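
For instance, a minimal sketch of such an abstraction (the function name `decodeToPcm` and its parameters are illustrative, not part of the API):

function decodeToPcm(context, arrayBuffer, onDecoded, onError) {
    context.decodeAudioData(arrayBuffer, function (audioBuffer) {
        // getChannelData(0) is the decoded PCM for the first channel,
        // as a Float32Array of samples in the range [-1, 1].
        onDecoded(audioBuffer.getChannelData(0), audioBuffer);
    }, onError);
}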

Note: I haven't tested that it works, but there appears to be only one bug in Chromium regarding this, which suggests it works but fails for some files.

Incognito
0

I think what you are looking for can be achieved with the startRendering function in Web Audio. I don't know if the answers above did the trick, but if they didn't, here's a little something to get you going:

https://bugs.webkit.org/show_bug.cgi?id=57676 (scroll down to comment three)

This part is still undocumented, so it's nowhere to be seen in the spec, but you can console.log the audio context to confirm that it's actually there. I've only done some preliminary tests with it, but I think it should be the answer to your question.

Oskar Eriksson