
I'm new to Android and to anything related to audio processing.

So I will need step-by-step guidance on where to start.

I have already used the Android AudioRecord class to capture sound from the microphone and the AudioTrack class to output it through the speakers in real time, which is working fine.

What I'm trying to do is change the amplitude/frequency of the input signal and add a distortion effect before outputting it.

This is what I've done so far:

    // Calculate minimum buffer size
    int bufferSize = AudioRecord.getMinBufferSize(44100,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    // AudioRecord Lib
    final AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100,
            AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            bufferSize);
    // AudioTrack Lib
    final AudioTrack audioPlayer = new AudioTrack(
            AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, // playback uses CHANNEL_OUT_*
            bufferSize, AudioTrack.MODE_STREAM);
    record_button = (Button) findViewById(R.id.start_record);
    stop_recording = (Button) findViewById(R.id.stop_record);
    stop_recording.setOnClickListener(new OnClickListener() {
        @Override
        public void onClick(View v) {
            System.out.println("Stopped record");
            stopped = true;
            record.stop();
            record.release();
            audioPlayer.stop();
            audioPlayer.release();

        }
    });

    record_button.setOnClickListener(new OnClickListener() {

        @Override
        public void onClick(View v) {
            System.out.println("Recording");
            record.startRecording();
            audioPlayer.play();
            new AsyncBuffer(record, audioPlayer).execute();
            }
    });

    }
    class AsyncBuffer extends AsyncTask<Void, Void, Void> {
    AudioRecord record;
    AudioTrack audioPlayer;

    public AsyncBuffer(AudioRecord record, AudioTrack audioPlayer) {
        this.record = record;
        this.audioPlayer = audioPlayer;
    }

    @Override
    protected Void doInBackground(Void... params) {
        System.out.println("asynch task");

        short[] buffer = new short[160];
        while (!stopped) {
            Log.i("Map", "Writing new data to buffer");
            int n = record.read(buffer, 0, buffer.length);
            audioPlayer.write(buffer, 0, n);

        }
        return null;
    }   
    }

I'm already reading the input signal; I just need to know how to process it in the AsyncBuffer class.

Is it possible to add distortion in the above-mentioned code by manipulating the signal in the Java code itself? If so, how?

If not, are there any Java libraries that you can recommend?

Also, can this be achieved with the plain Android SDK using the included classes, or will I have to go deeper and use the NDK (KissFFT, etc.)?

pensono
boyfromnorth
  • The problem here is that you don't seem to know specifically what types of transformations you wish to make, or what types of computational operations those will entail. You may want to play with audio effects software on your PC for a bit to specifically refine your requirement. – Chris Stratton Mar 07 '14 at 16:37
  • I am also a musician, and I produce my own music, so if you are talking about production software like Logic and Pro Tools, I'm pretty familiar with those, and with guitar plugins like AmpliTube and JamUp Pro; I use them every day. But they don't really provide any specific in-depth knowledge of audio processing; it's all in the background, if you know what I mean. That's why I posted this here. I need a start. – boyfromnorth Mar 07 '14 at 17:29
  • Again, you must identify a specific effect of interest, and research the applicable transformation algorithm which causes it. You can't write a program to vague generalities, nor do we handle non-specific questions here. – Chris Stratton Mar 07 '14 at 19:05
  • That is as clear as I can get. I asked for guidance on where to start; if research were helping, then I wouldn't be posting this question here. I clearly mentioned what I want to do and what I have done so far. So if you can't help, please stop pretending to. No offence. Peace. – boyfromnorth Mar 08 '14 at 07:41
  • Questions which do not state a specific goal not only do not receive good answers, but also typically end up closed as "off topic". You can improve your question by picking, and precisely describing, a specific effect as a goal to implement. – Chris Stratton Mar 08 '14 at 14:19
  • Edited the question. I hope it's more clear? – boyfromnorth Mar 08 '14 at 16:01
  • No. You still lack answerable clarity as to what **exactly** you wish to accomplish. For example, "distortion" typically means things like amplitude compression. This is primarily an amplitude transformation, performed in the time domain (the higher the level, the lower the gain you apply). There is an effect on the frequency content of course, but it's relatively indirect. FFT/inverse-FFT type transformations would not usually be seen as an effective way of implementing that, unless you mean to distort some frequencies only. – Chris Stratton Mar 08 '14 at 16:06 (a minimal code sketch of this level-dependent gain idea follows this comment thread)
  • let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/49312/discussion-between-ashishashen-and-chris-stratton) – boyfromnorth Mar 08 '14 at 16:12
  • You need to clarify your question by editing it, not in the chat room – Chris Stratton Mar 08 '14 at 16:12
  • If I knew exactly what I wanted, I wouldn't really need help, right? I want to distort the whole signal, probably remove some highs and lows. I have no idea about amplitude compression/transformation, etc.; that's why I asked this question in the first place. If you are so sure that FFT-type transformations would not be an effective way of implementing this, can you please tell me what I should study and which approach will be best, instead of trying to prove that my question is vague!! – boyfromnorth Mar 08 '14 at 16:30
  • As I said way back in the first comment, if you do not yet know what types of transformations you wish to perform, you should experiment with effects software to identify those of interest, then research how they are applied. **We cannot guess what effects are interesting to you**. Nor can we help you develop software **when you do not have a specific requirement**. Play with audacity. Read up on digital guitar effects. Decide what **exactly** you want to do to your signal, and **only then can we help**. – Chris Stratton Mar 08 '14 at 16:32
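
To make the level-dependent gain idea from the comment above concrete, here is a minimal sketch (an illustration added here, not code from the discussion): samples above a threshold get progressively less gain, which is one crude form of the amplitude compression described. The `compressSamples` name and the threshold/ratio values are arbitrary placeholders.

    // Minimal sketch of level-dependent gain on 16-bit PCM samples:
    // samples above the threshold are scaled down by 'ratio' (a crude compressor).
    // Threshold and ratio values are arbitrary starting points, not recommendations.
    static void compressSamples(short[] buffer, int samplesRead) {
        final float threshold = 10000f; // level above which gain is reduced
        final float ratio = 4.0f;       // 4:1 compression above the threshold
        for (int i = 0; i < samplesRead; i++) {
            float sample = buffer[i];
            float magnitude = Math.abs(sample);
            if (magnitude > threshold) {
                // Keep the portion below the threshold, reduce only the overshoot.
                float compressed = threshold + (magnitude - threshold) / ratio;
                sample = Math.signum(sample) * compressed;
            }
            buffer[i] = (short) sample;
        }
    }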

2 Answers


There are two general approaches to audio programming on Android. You have found the first one, which is to stay in the SDK, in Java, with AudioTrack.

The downside of this approach is that your audio processing also remains in Java code, which can be slower than compiled C code. The audio latency, without processing, is practically the same, though.
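
As a rough illustration of what that in-Java processing can look like, here is a minimal sketch (not from the original answer) of a hard-clipping distortion applied to the `short[]` buffer; the `applyDistortion` name and the gain/threshold values are placeholders, not recommendations.

    // Minimal sketch: time-domain distortion on 16-bit PCM samples.
    // Boost the signal with a pre-gain, then hard-clip anything above a threshold.
    static void applyDistortion(short[] buffer, int samplesRead) {
        final float gain = 4.0f;        // pre-gain pushes the signal into clipping
        final float threshold = 12000f; // clip level, below Short.MAX_VALUE (32767)
        for (int i = 0; i < samplesRead; i++) {
            float boosted = buffer[i] * gain;
            if (boosted > threshold) {
                boosted = threshold;
            } else if (boosted < -threshold) {
                boosted = -threshold;
            }
            buffer[i] = (short) boosted;
        }
    }

In the question's `doInBackground` loop this would be called on the buffer after `record.read(...)` and before `audioPlayer.write(...)`.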

If you stick with Java, you will probably need a solid FFT library with support for Java (through a wrapper), such as this.

If you choose the NDK approach (C with OpenSL ES) over the SDK (Java with AudioTrack), the setup will be more complex than what you have right now. There is a very good example, with step-by-step instructions, here. That should give you the basic setup for playback of recorded audio and a starting point for adding your processing. When implementing the OpenSL approach, you will still benefit from FFT libraries such as the one linked above, as it can be hard to write and debug audio processing code yourself.

Bartol Karuza
  • Thanks for clearing the main part up. I'll definitely go with the NDK approach, but let's say I want to add a little distortion to the signal in my Java code. Can I do that without an FFT library? I guess I saw a few examples in which the guys were just manipulating the buffer and increasing the gain of the output signal. – boyfromnorth Mar 08 '14 at 15:52
  • @ashishashen - You should not accept an answer that does not answer your question. Your question is still too vague to be answerable, but there's little reason at the moment to believe that an FFT would figure in the solution. – Chris Stratton Mar 08 '14 at 16:04
  • I accepted this as an answer because it did give me a start and cleared some things up. I'll probably repost the question when I know exactly what it is that I want. Like you said, the question is too vague to be answerable. – boyfromnorth Mar 08 '14 at 16:32
  • To get better intuition about *Audio Processing*, you can visit this reference: [Android-Audio-Processing-Using-WebRTC](https://github.com/mail2chromium/Android-Audio-Processing-Using-WebRTC) – Muhammad Usman Bashir May 22 '20 at 05:34

If you need to do real-time audio analysis on Android, the NDK is the way to go.

You can read a little bit more here: https://stackoverflow.com/a/22156746/1367571

Sebastiano