I have a plot:
self.plot = EZAudioPlot()
self.plot?.frame = CGRect(x: 0, y: 0, width: Int(self.plotWidth!), height: Int(self.plotOutlet!.bounds.height))
self.plot?.plotType = EZPlotType.buffer
self.plot?.shouldFill = true
self.plot?.shouldMirror = true
self.plot?.shouldOptimizeForRealtimePlot = true
self.plot?.color = UIColor.white
self.plot?.backgroundColor = UIColor(red: 0, green: 0, blue: 0, alpha: 0)
self.plot?.gain = 1
self.plotOutlet?.addSubview(self.plot!)
and I want to update it via the buffer copied from an AKAudioFile:
self.plot?.updateBuffer(file.pcmBuffer.floatChannelData![0], withBufferSize: file.pcmBuffer.frameLength)
This one fails with: pcmBuffer:265:error cannot readIntBuffer
I have tried another approach, converting the buffer to EZAudioFloatData first, but I can't get the type casting right:
let data = EZAudioFloatData(numberOfChannels: 2, buffers: file.pcmBuffer.floatChannelData, bufferSize: file.pcmBuffer.frameLength)
The inspector says: Cannot convert value of type 'UnsafePointer<UnsafeMutablePointer<Float>>?' to expected argument type 'UnsafeMutablePointer<UnsafeMutablePointer<Float>?>!'
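If I read the error correctly, floatChannelData gives back an immutable pointer to non-optional channel pointers, while the imported EZAudioFloatData initializer wants a mutable pointer to optional ones. One possible workaround (a sketch I haven't verified against EZAudio, with the channel count assumed to come from the buffer's format) is to copy the channel pointers into a Swift array of optionals and pass its base address:

```swift
if let channelData = file.pcmBuffer.floatChannelData {
    let channelCount = Int(file.pcmBuffer.format.channelCount)
    // Copy the per-channel pointers into an array of the optional
    // pointer type the initializer expects.
    var channels: [UnsafeMutablePointer<Float>?] = (0..<channelCount).map { channelData[$0] }
    channels.withUnsafeMutableBufferPointer { buffer in
        let data = EZAudioFloatData(numberOfChannels: Int32(channelCount),
                                    buffers: buffer.baseAddress,
                                    bufferSize: file.pcmBuffer.frameLength)
        // `data` must be consumed here, while the buffer pointer is still valid.
    }
}
```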
What am I doing wrong? By the way: I know I can load a WAV file with EZAudioFile() and then get the buffer via .getWaveformData(), but for several reasons I want to know whether the same is possible with an AKAudioFile instead.
---UPDATE:
I was able to add the waveform feature right into the AKWaveTableDSPKernel.hpp file:
EZAudioFloatData *getWaveformData(UInt32 numberOfPoints) {
    if (numCh == 0 || current_size < numberOfPoints) {
        // prevent division by zero
        return nil;
    }
    float **data = (float **)malloc(sizeof(float *) * numCh);
    for (int i = 0; i < numCh; i++) {
        data[i] = (float *)malloc(sizeof(float) * numberOfPoints);
    }
    // calculate the required number of frames per buffer
    SInt64 framesPerBuffer = (SInt64)current_size / numberOfPoints;
    // read through the table and calculate the RMS at each point
    for (SInt64 i = 0; i < numberOfPoints; i++) {
        for (int channel = 0; channel < numCh; channel++) {
            float channelData[framesPerBuffer];
            for (int frame = 0; frame < framesPerBuffer; frame++) {
                if (channel == 0) {
                    channelData[frame] = ftbl1->tbl[i * framesPerBuffer + frame];
                } else {
                    channelData[frame] = ftbl2->tbl[i * framesPerBuffer + frame];
                }
            }
            float rms = [EZAudioUtilities RMS:channelData length:(UInt32)framesPerBuffer];
            data[channel][i] = rms;
        }
    }
    EZAudioFloatData *waveformData =
        [EZAudioFloatData dataWithNumberOfChannels:numCh
                                           buffers:(float **)data
                                        bufferSize:(UInt32)numberOfPoints];
    // cleanup
    for (int i = 0; i < numCh; i++) {
        free(data[i]);
    }
    free(data);
    return waveformData;
}
It just needs the common header binding. I will try to submit a pull request with this feature soon.