I have read this interesting post Calculate FFT from audio file, and I have downloaded the audio.analysis library suggested in one of the last comments, but I am having trouble with two things:
- Dividing the .wav file into frames
- Getting 50% overlap between consecutive frames
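To be clear about what I mean by 50% overlap: each frame should start halfway through the previous one, i.e. the hop between frames is half the frame size. This is just a sketch of the idea (the `Framer` class and its method are names I made up, not part of audio.analysis):

```java
// Hypothetical helper: split a signal into fixed-size frames where each
// frame starts frameSize/2 samples after the previous one (50% overlap).
public class Framer {
    public static float[][] frames(float[] signal, int frameSize) {
        int hop = frameSize / 2;  // 50% overlap => hop is half a frame
        int count = signal.length < frameSize
                ? 0
                : 1 + (signal.length - frameSize) / hop;
        float[][] out = new float[count][frameSize];
        for (int f = 0; f < count; f++) {
            // Frame f covers samples [f*hop, f*hop + frameSize)
            System.arraycopy(signal, f * hop, out[f], 0, frameSize);
        }
        return out;
    }
}
```

So for an 8-sample signal and a frame size of 4, this would produce 3 frames starting at samples 0, 2 and 4.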
Actually, I have customized the "ReadExample" class taken from http://www.labbookpages.co.uk/audio/javaWavFiles.html#methods as follows, in order to get back a float[] containing the samples read from the .wav file:
package com.example.audioanalysis;

import java.io.File;
import java.util.ArrayList;

import android.os.Environment;

public class ReadExample
{
    public static float[] BufferFromWavFile()
    {
        final String PATH = "example";
        final String FILE_NAME = "AudioRecorder.wav";
        ArrayList<Float> samples = new ArrayList<Float>();
        try
        {
            // Open the wav file directly from external storage
            // (getResource() only works for classpath resources, not filesystem paths)
            File soundFile = new File(Environment.getExternalStorageDirectory(),
                    PATH + "/" + FILE_NAME);
            WavFile wavFile = WavFile.openWavFile(soundFile);

            // Get the number of audio channels in the wav file
            int numChannels = wavFile.getNumChannels();

            // Create a buffer of 100 frames
            double[] buffer = new double[100 * numChannels];

            int framesRead;
            do
            {
                // Read frames into buffer; readFrames() returns how many
                // frames were actually read (0 at end of file)
                framesRead = wavFile.readFrames(buffer, 100);

                // Copy the samples just read into the output list
                for (int i = 0; i < framesRead * numChannels; i++)
                {
                    samples.add((float) buffer[i]);
                }
            }
            while (framesRead != 0);

            // Close the wavFile
            wavFile.close();
        }
        catch (Exception e)
        {
            System.err.println(e);
        }

        float[] arrayOfSamples = new float[samples.size()];
        for (int i = 0; i < samples.size(); i++)
        {
            arrayOfSamples[i] = samples.get(i);
        }
        return arrayOfSamples;
    }
}
My problem could be solved by the SpectrumProvider class belonging to audio.analysis, but the constructor of SpectrumProvider requires a Decoder object, and that object does not take any frames as input. So how could I wire these two parts together?
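One idea I had (untested) is to hide my float[] behind an adapter that looks like a Decoder, so SpectrumProvider can pull samples from it. I am assuming here that the library's Decoder interface boils down to a single readSamples(float[]) method returning the number of samples written (0 at end of stream); the `ArrayDecoder` name is my own invention:

```java
// Hypothetical adapter: serves samples from a pre-loaded array through a
// Decoder-style readSamples() method. Assumes the audio.analysis Decoder
// contract is "fill the buffer, return how many samples were written,
// return 0 when exhausted" -- check the actual interface before relying
// on this.
public class ArrayDecoder /* implements Decoder */ {
    private final float[] samples;
    private int position = 0;

    public ArrayDecoder(float[] samples) {
        this.samples = samples;
    }

    public int readSamples(float[] buffer) {
        // Copy as many samples as fit (or as remain) into the caller's buffer
        int toCopy = Math.min(buffer.length, samples.length - position);
        System.arraycopy(samples, position, buffer, 0, toCopy);
        position += toCopy;
        return toCopy;
    }
}
```

If that assumption holds, SpectrumProvider could presumably be constructed with `new ArrayDecoder(ReadExample.BufferFromWavFile())` instead of a file-backed decoder.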
Could you help me? Thank you in advance.