
I am trying to make a decoder in Android for the project published at https://github.com/rraval/pied-piper They have already created a decoder in Python; it was pretty easy there using the numpy package, but in Java I am having a hard time. The Python code is:

def dominant(frame_rate, chunk):
    w = numpy.fft.fft(chunk)
    freqs = numpy.fft.fftfreq(len(chunk))
    peak_coeff = numpy.argmax(numpy.abs(w))
    peak_freq = freqs[peak_coeff]
    return abs(peak_freq * frame_rate)  # in Hz

The code above returns the dominant frequency of the audio data in chunk[].

I am trying to write Android code that implements the same logic. My work so far is given below:

public class MicReadThread3 extends Thread {

static final int HANDSHAKE_START_HZ = 8192;
static final int HANDSHAKE_END_HZ = 8192 + 512;
static final int START_HZ = 1024;
static final int STEP_HZ = 256;
static final int BITS = 4;
static final int FEC_BYTES = 4;
static final int sample_size=8;
boolean callBack_done=false;

private static final int AUDIO_SOURCE = MediaRecorder.AudioSource.MIC;
private static final int SAMPLE_RATE = 44100; // Hz
private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private static final int CHANNEL_MASK = AudioFormat.CHANNEL_IN_MONO;
private static final int BUFFER_SIZE = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_MASK, ENCODING);
private static final int blockSize=BUFFER_SIZE;



public MicReadThread3(){
    setPriority(Thread.MAX_PRIORITY);
}

@Override
public void run(){

    System.out.println("Buffer Size : "+BUFFER_SIZE);
    AudioRecord audioRecord=null;
    double dom;
    byte[] buffer=new byte[blockSize];
    short[] bufferShort =new short[blockSize];
    audioRecord = new AudioRecord(AUDIO_SOURCE, SAMPLE_RATE, CHANNEL_MASK, ENCODING, BUFFER_SIZE);
    audioRecord.startRecording();
    while(true){
        audioRecord.read(buffer, 0, blockSize);
        dom = dominant(SAMPLE_RATE, buffer);
        System.out.println("Dominant="+dom);
        if(match(dom,HANDSHAKE_START_HZ)){
            System.out.println("Found Handshake start freq :"+dom);
        }

        if(match(dom,HANDSHAKE_END_HZ)){
            System.out.println("Found Handshake end freq :"+dom);
        }
    }

}

public boolean match(double freq1, double freq2) {
    return Math.abs(freq1 - freq2) < 20;
}

public double dominant(int frame_rate, byte[] chunk){
    int len=chunk.length;
    double[] waveTransformReal=new double[len];
    double[] waveTransformImg=new double[len];
    for(int i=0;i<len;i++){
        waveTransformReal[i]=chunk[i];
    }

    Fft.transform(waveTransformReal,waveTransformImg);

    //Calculating abs
    double[] abs=new double[len];

    for(int i=0;i<len;i++) {
        abs[i] = (Math.sqrt(waveTransformReal[i] * waveTransformReal[i] + waveTransformImg[i] * waveTransformImg[i]));

    }
    int maxIndex=0;
    for(int i=0;i<len;i++) {
        if (abs[i] > abs[maxIndex])
            maxIndex = i;
    }
    //frame_rate is sampling freq and len is no. of datapoints
    double dominantFrequency=(maxIndex*frame_rate)/len;
    return dominantFrequency;
}

}

The class I am using to get Fft can be found in the link given below: https://www.nayuki.io/res/free-small-fft-in-multiple-languages/Fft.java

I have to print the dominant frequency when it equals one of the handshake frequencies.

But when I print the values, all I get is junk frequency values like 1000, 42050, 2000, ...

In Python the code worked just fine, but on Android it is getting harder... Please help; my project submission is due next week. This is only a part of my project, and we are lagging behind because of this issue! Thanks in advance.

AKHIL KUMAR
  • Hello Akhil, just checking, were you able to make the decoding work? I have been trying to make it work on Android and am not able to do so. Which FFT library did you use? – Arun Nov 15 '18 at 14:49
  • The simple answer is NO, I couldn't. We completed the project in a different way. – AKHIL KUMAR Nov 16 '18 at 16:04

1 Answer


I was too quick with my original answer regarding

    double dominantFrequency=(maxIndex*frame_rate)/len;

In reference to your comment, I looked again and see a difference between the GitHub code and the one you posted: GitHub requests 8-bit audio, while here it is ENCODING_PCM_16BIT.

So each value in waveTransformReal[] would be only partial, because it is taken from the chunk[] byte data, where 2 bytes make up one full value. As a quick test, try using ENCODING_PCM_8BIT and see if you get the correct result.
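To illustrate the byte-pairing issue, here is a minimal pure-Java sketch (no Android dependencies, so it can be tried on a desktop). It combines each little-endian byte pair from a 16-bit PCM buffer into one sample, then finds the dominant frequency. The class name PcmDominant is made up for the example, and the naive DFT is an illustrative stand-in for Fft.transform from the Nayuki class — the peak-scanning logic is the same either way:

```java
// Hypothetical sketch: convert 16-bit little-endian PCM bytes to samples,
// then find the dominant frequency. The naive DFT below is a stand-in for
// Fft.transform(real, imag); only the byte pairing is the point here.
public class PcmDominant {

    // Combine byte pairs (little-endian, as AudioRecord delivers PCM_16BIT)
    // into signed 16-bit samples. Note: half as many samples as bytes.
    static double[] bytesToSamples(byte[] buffer) {
        double[] samples = new double[buffer.length / 2];
        for (int i = 0; i < samples.length; i++) {
            int lo = buffer[2 * i] & 0xFF;   // low byte, treated as unsigned
            int hi = buffer[2 * i + 1];      // high byte carries the sign
            samples[i] = (short) ((hi << 8) | lo);
        }
        return samples;
    }

    // Naive DFT magnitude peak over the positive-frequency bins.
    static double dominant(int frameRate, double[] samples) {
        int n = samples.length;
        int maxIndex = 0;
        double maxMag = -1;
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            double mag = re * re + im * im;
            if (mag > maxMag) { maxMag = mag; maxIndex = k; }
        }
        // Cast before dividing to avoid the integer-division truncation
        // present in the posted code.
        return (double) maxIndex * frameRate / n;
    }

    public static void main(String[] args) {
        // Synthesize a 1000 Hz sine at an 8000 Hz sample rate,
        // encoded as 16-bit little-endian PCM, and decode it back.
        int rate = 8000, n = 256;
        byte[] buffer = new byte[2 * n];
        for (int i = 0; i < n; i++) {
            short s = (short) (10000 * Math.sin(2 * Math.PI * 1000 * i / rate));
            buffer[2 * i] = (byte) (s & 0xFF);
            buffer[2 * i + 1] = (byte) ((s >> 8) & 0xFF);
        }
        double dom = dominant(rate, bytesToSamples(buffer));
        System.out.println("dominant = " + dom + " Hz");
    }
}
```

With the synthetic 1000 Hz tone falling exactly on a DFT bin (8000 / 256 = 31.25 Hz resolution, bin 32), the sketch recovers 1000 Hz; on Android you would fill `buffer` from `audioRecord.read(...)` instead of synthesizing it.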

hg123
  • Thanks for your reply @hg123. I got the equation from this thread: http://stackoverflow.com/questions/7674877/how-to-get-frequency-from-fft-result Also, I have decoded the frequency correctly in a Java program that runs on my PC, using the same logic and the same FFT class; the only difference was the class used to read from the mic. You can find my Java program here: https://github.com/akvishnuta/MainProject/blob/master/getDominant – AKHIL KUMAR Apr 15 '17 at 03:46
  • I have already tried that, but ENCODING_PCM_8BIT is not supported on Android. If used, AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_MASK, ENCODING) returns -2, which eventually causes a NegativeArraySizeException. – AKHIL KUMAR Apr 15 '17 at 17:22
  • You will have to take 2 bytes for each value then, i.e., chunk[0] and chunk[1] make up waveTransformReal[0]. Note your number of samples will be halved if you are using 16 bits. – hg123 Apr 15 '17 at 17:43
  • Oh thanks, I have changed the datatype of chunk and buffer to short. Now it detects the frequencies, but only from another phone, so my problem is partially solved. The Python code I am using to encode on the PC is here: https://github.com/akvishnuta/MainProject/tree/master – AKHIL KUMAR Apr 16 '17 at 08:05
  • I think you might try debugging your code on both phones (or display some values to the screen). Looks like your initial problem is identified (if not completely solved). Good luck! – hg123 Apr 16 '17 at 20:37