0

I am trying to record and edit my voice in JavaScript. Specifically, I am trying to record it into an array that looks like this for my boss: [0, 102, 301, ...], where the values are samples of my voice.

When I record my voice in JavaScript, I get a Blob. Is there any way to transform a Blob into such an [x, y, z, ...] array? Or how is signal processing normally done in JavaScript?

This is code from a Medium article that matches how we are doing things; I just can't share the actual company code.

const recordAudio = () =>
    new Promise(async resolve => {
        const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
        const mediaRecorder = new MediaRecorder(stream);
        const audioChunks = [];

        mediaRecorder.addEventListener("dataavailable", event => {
            audioChunks.push(event.data);
        });

        const start = () => mediaRecorder.start();

        const stop = () =>
            new Promise(resolve => {
                mediaRecorder.addEventListener("stop", () => {
                    console.log(audioChunks);
                    const audioBlob = new Blob(audioChunks);
                    const audioURL = URL.createObjectURL(audioBlob);
                    const audio = new Audio(audioURL);
                    const play = () => audio.play();
                    resolve({ audioBlob, audioURL, play });
                });

                mediaRecorder.stop();
            });

        resolve({ start, stop });
    });

const sleep = time => new Promise(resolve => setTimeout(resolve, time));

const handleAction = async () => {
    const recorder = await recordAudio();
    const actionButton = document.getElementById('action');
    actionButton.disabled = true;
    recorder.start();
    await sleep(3000);
    const audio = await recorder.stop();
    audio.play();
    await sleep(3000);
    actionButton.disabled = false;
};
  • It's completely unclear how `[0,102, 301,...]` "*are samples of your voice*". What is it that your boss wants? – Bergi Jul 11 '20 at 20:57
  • Btw, [never pass an `async function` as the executor to `new Promise`](https://stackoverflow.com/q/43036229/1048572)! Make `recordAudio` an `async` function itself, and drop the `new Promise` around its body. – Bergi Jul 11 '20 at 20:58
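
To illustrate Bergi's second comment: a minimal sketch of `recordAudio` rewritten as an `async` function itself, with the outer `new Promise` dropped (same behavior as the snippet above, just without the async-executor antipattern):

const recordAudio = async () => {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const mediaRecorder = new MediaRecorder(stream);
    const audioChunks = [];

    mediaRecorder.addEventListener("dataavailable", event => {
        audioChunks.push(event.data);
    });

    const start = () => mediaRecorder.start();

    // the inner Promise is fine: it wraps a one-off "stop" event
    const stop = () =>
        new Promise(resolve => {
            mediaRecorder.addEventListener("stop", () => {
                const audioBlob = new Blob(audioChunks);
                const audioURL = URL.createObjectURL(audioBlob);
                const audio = new Audio(audioURL);
                const play = () => audio.play();
                resolve({ audioBlob, audioURL, play });
            });
            mediaRecorder.stop();
        });

    return { start, stop };
};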

1 Answer

0

You can use an AudioContext and feed the user media stream into it; from an AnalyserNode you can then pick up the Uint8Array you want, containing either the raw time-domain signal or the already-transformed frequency-domain signal.

You can check more details here:

https://developer.mozilla.org/en-US/docs/Web/API/AnalyserNode

// initialize the signal-capturing setup
let audioContext = new AudioContext();
let analyser = audioContext.createAnalyser();
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    let source = audioContext.createMediaStreamSource(stream);
    source.connect(analyser);
});

// then copy the current signal out of the analyser every millisecond
setInterval(() => {
    const bufferLength = analyser.frequencyBinCount;
    // use separate arrays; writing both signals into one array
    // would overwrite the time-domain data with the frequency data
    const timeData = new Uint8Array(bufferLength);
    const freqData = new Uint8Array(bufferLength);
    // get the time-domain signal (waveform samples)
    analyser.getByteTimeDomainData(timeData);
    // get the frequency-domain signal (FFT magnitudes)
    analyser.getByteFrequencyData(freqData);
    console.log(timeData, freqData);
}, 1);

For visualization this works fine. For recording there may be a problem: if you pick the signal up a couple of times before it changes you get repeated samples, and otherwise there will be holes in the data. I can't figure out how to read directly from the stream.
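
For the original question (turning the recorded Blob into a plain array of samples), here is a minimal sketch, assuming the Blob comes from MediaRecorder in a codec the browser can decode (e.g. webm/opus): decode it with `AudioContext.decodeAudioData` and read out the `Float32Array` channel data. `blobToSamples` is just a hypothetical helper name.

// Sketch: decode a recorded Blob into an array of raw samples.
const blobToSamples = async audioBlob => {
    const arrayBuffer = await audioBlob.arrayBuffer();
    const audioContext = new AudioContext();
    // decodeAudioData yields an AudioBuffer of float samples in [-1, 1]
    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
    // first channel: one float per sample frame
    const samples = audioBuffer.getChannelData(0);
    return Array.from(samples); // plain [x, y, z, ...] array
};

The values come out as floats in [-1, 1]; scale and round them (e.g. `Math.round(s * 32767)`) if integer samples like `[0, 102, 301, ...]` are needed.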

Kaplan