If you're not too concerned about latency, the MediaRecorder interface can be used to capture audio data from a streaming input source (such as the MediaStream you might otherwise pass to AudioContext.createMediaStreamSource). Use the dataavailable event or the MediaRecorder.requestData method to access the recorded data as a Blob. There's a fairly straightforward example of this in samdutton/simpl on GitHub. (There's also a related Q&A on StackOverflow.)
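A minimal sketch of that approach might look like the following (assuming microphone input via getUserMedia; the timeslice value and what you do with the resulting Blob are illustrative choices):

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const recorder = new MediaRecorder(stream)
  const chunks = []
  // each dataavailable event delivers a Blob of (encoded) audio data
  recorder.addEventListener("dataavailable", (event) => {
    chunks.push(event.data)
  })
  recorder.addEventListener("stop", () => {
    // stitch the chunks into a single Blob for decoding or upload
    const blob = new Blob(chunks, { type: recorder.mimeType })
    // ... do something with `blob` here ...
  })
  // ask for a dataavailable event roughly every 1000 ms
  recorder.start(1000)
})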
More generally, if you can get the audio you want to analyze into an AudioBuffer, the AudioBuffer.getChannelData method can be used to extract the raw PCM sample data associated with an audio channel.
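For example (a sketch; the file URL is hypothetical, and this assumes a context where await is available):

const audioCtx = new AudioContext()
const response = await fetch("/some-audio-file.wav")
const encoded = await response.arrayBuffer()
const audioBuffer = await audioCtx.decodeAudioData(encoded)
// a Float32Array of raw samples (nominally in [-1, 1]) for the first channel
const samples = audioBuffer.getChannelData(0)
console.log("sample count:", samples.length)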
However, if you're doing this in "real-time" - i.e., if you're trying to process the live input audio for "live" playback or visualization - then you'll probably want to look at the AudioWorklet API. Specifically, you'll want to create an AudioWorkletProcessor that examines the individual samples within its process method.
E.g., something like this:
// For this example we'll compute the average level (PCM value) of the frames
// in the block of audio we're processing.
class ExampleAudioWorkletProcessor extends AudioWorkletProcessor {
  process (inputs, outputs, parameters) {
    // grab the samples from the first channel of the first input connected to this node
    const pcmData = inputs[0][0]
    // the channel may be absent if nothing is connected yet
    if (pcmData) {
      // the running total
      let sum = 0
      // process each individual sample (frame)
      for (let i = 0; i < pcmData.length; i++) {
        sum += pcmData[i]
      }
      // write something to the log just to show it's working at all
      console.log("AVG:", (sum / pcmData.length).toFixed(5))
    }
    // be sure to return `true` to keep the worklet running
    return true
  }
}

// the module must register the processor under a name the main thread can use
registerProcessor("example-audio-worklet-processor", ExampleAudioWorkletProcessor)
but that's neither bullet-proof nor particularly efficient code. (Notably, you don't really want to use console.log here. You'll probably either (a) write something to the outputs array to send audio data to the next AudioNode in the chain, or (b) use postMessage to send data back to the main (non-audio) thread.)
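For instance, a variant of the process method above could pass the input audio through unchanged and report the computed average over the processor's built-in message port (a sketch; the shape of the posted message is an arbitrary choice):

  process (inputs, outputs, parameters) {
    const input = inputs[0]
    const output = outputs[0]
    // (a) copy each input channel to the corresponding output channel,
    // making this a simple pass-through node
    for (let channel = 0; channel < input.length; channel++) {
      output[channel].set(input[channel])
    }
    if (input[0]) {
      let sum = 0
      for (let i = 0; i < input[0].length; i++) {
        sum += input[0][i]
      }
      // (b) every AudioWorkletProcessor has a `port` (a MessagePort) it can
      // use to send data back to the main thread
      this.port.postMessage({ avg: sum / input[0].length })
    }
    return true
  }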
Note that since the AudioWorkletProcessor is executing within the "audio thread" rather than the main "UI thread" like the rest of your code, there are some hoops you must jump through to set up the worklet and communicate with the primary execution context. That's probably outside the scope of what's reasonable to describe here, but here is a complete example from MDN, and there are a large number of tutorials that can walk you through the steps if you search for keywords like "AudioWorkletProcessor", "AudioWorkletNode" or just "AudioWorklet". Here's one example.
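That said, in outline the main-thread side of the setup looks something like the following sketch (the module filename is hypothetical, the processor name must match whatever was passed to registerProcessor, and this assumes a context where await is available):

const audioCtx = new AudioContext()
// load the module containing the processor class defined above
await audioCtx.audioWorklet.addModule("example-audio-worklet-processor.js")
// create a node backed by the registered processor
const workletNode = new AudioWorkletNode(audioCtx, "example-audio-worklet-processor")
// receive any messages posted from the audio thread
workletNode.port.onmessage = (event) => {
  console.log("from the audio thread:", event.data)
}
// e.g., route live microphone input through the worklet to the speakers
const stream = await navigator.mediaDevices.getUserMedia({ audio: true })
const source = audioCtx.createMediaStreamSource(stream)
source.connect(workletNode).connect(audioCtx.destination)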