
I have created, with JavaFX, a desktop game that works fine (about 20,000 lines of Java). As it is a game, the real-time constraint is important (response time to the player's actions).

The final aim is to run this application on Android. I have almost finished porting the Java code from PC to Android, even though I have run into some real-time trouble. I think almost all of those issues are solved now.

For instance, I have minimized the CPU time of the Shape.intersect(node1, node2) calls used for detecting impacts between two mobiles. As a result, the processing time has been divided by 3. Great!
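For reference, one common way to cut that cost (an assumption about the optimization, not necessarily the exact change made here) is a cheap bounds test before the expensive Shape.intersect() call; shape1 and shape2 are placeholder names:

    // Cheap axis-aligned bounds pre-check before the expensive shape intersection.
    // shape1 and shape2 are placeholders for the two mobiles' Shape nodes.
    if (shape1.getBoundsInParent().intersects(shape2.getBoundsInParent())) {
        Shape intersection = Shape.intersect(shape1, shape2);
        boolean impact = intersection.getBoundsInLocal().getWidth() != -1;
        // handle the impact only when 'impact' is true
    }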

For testing this Android version, I use Eclipse Neon.2, JavaFX, JavaFXPorts + Gluon, and my phone (an Archos Diamond S).


But on Android phones I had a real-time problem related to the sounds generated with MediaPlayer and NativeAudioService.

I have already followed this advice, which suggests the synchronous mode: javafxports how to call android native Media Player

1st question:

Is there an asynchronous mode with this MediaPlayer class? I thought that would solve this latency problem. In practice, I have tried the asynchronous solution ... without success: the real-time problem due to audio generation with MediaPlayer remains. One audio generation costs 50 ms to 80 ms, whereas the main cyclic processing runs every 110 ms, so each audio generation can interfere with the execution of the main processing.

Moreover, in each periodic task (period: 110 ms) I can play several sounds like that. In one trace there were up to six sound activations that together took about 300 ms (against the 110 ms of the main cyclic task).
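For reference, MediaPlayer's asynchronous mode is prepareAsync() combined with an OnPreparedListener; a minimal sketch (the asset name is only an example, and this is not necessarily the exact code that was tested):

    // Minimal sketch of MediaPlayer's asynchronous preparation ("beep.mp3" is an example name).
    try {
        MediaPlayer mediaPlayer = new MediaPlayer();
        AssetFileDescriptor afd = FXActivity.getInstance().getAssets().openFd("beep.mp3");
        mediaPlayer.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
        mediaPlayer.setOnPreparedListener(mp -> mp.start()); // invoked once preparation completes
        mediaPlayer.prepareAsync(); // returns immediately; preparation runs in the background
    } catch (IOException e) {
        e.printStackTrace();
    }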

QUESTION:

How can the performance of the NativeAudio class be improved (especially the play() method, whose calls create the real-time problem: setDataSource(...), prepare() and start())?


THE SOLUTION

The main processing must be a synchronized method, to be sure that the complete processing runs without any audio interruption.

Moreover, each complete sound-generation processing runs in a dedicated thread created with Thread.MIN_PRIORITY priority.

Now the main processing runs every 110 ms and, once it begins, it cannot be disturbed by any audio generation. The display is very smooth (no more jerky movement).

There is just a minor problem: when an audio setDataSource(), start() or prepare() call has already begun, it seems that the next main processing has to wait for the end of that call before starting (to be confirmed).

I hope this solution can help other people. It is applicable to any case of audio generation with MediaPlayer.


JAVA code of the solution

The main processing is defined as follows:

public static synchronized void mainProcessing() {
    // The method handles the impacts, explosions, sounds, movements, ...
    // in other words almost the entire game, in a CRITICAL SECTION.
}
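For context, a minimal sketch of how this method can be driven every 110 ms with a JavaFX Timeline + KeyFrame (the mechanism mentioned in the comments below); the period is the one quoted above:

    // Periodic driver (sketch): calls mainProcessing() every 110 ms on the JavaFX Application Thread.
    // Uses javafx.animation.Timeline, KeyFrame, Animation and javafx.util.Duration.
    Timeline mainLoop = new Timeline(
            new KeyFrame(Duration.millis(110), event -> mainProcessing()));
    mainLoop.setCycleCount(Animation.INDEFINITE);
    mainLoop.play();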

/****************************************************/

In the NativeAudio class that implements "NativeAudioService":

@Override
public void play() {
    if (bSon) { // bSon: presumably the sound-enabled flag
        // Run the whole sound generation in a low-priority background task,
        // so it does not disturb the main cyclic processing.
        Task<Void> taskSound = new Task<Void>() {
            @Override
            protected Void call() throws Exception {
                generateSound();
                return null;
            }
        };

        Thread threadSound = new Thread(taskSound);
        threadSound.setPriority(Thread.MIN_PRIORITY);
        threadSound.start();
    }
}


  /****************************************************/    
private void generateSound() {
    currentPosition = 0;
    nbTask++;
    noTask = nbTask;
    try {
        if (mediaPlayer != null) {
            stop();
        }
        mediaPlayer = new MediaPlayer();

        AssetFileDescriptor afd = FXActivity.getInstance().getAssets().openFd(audioFileName);
        mediaPlayer.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());

        mediaPlayer.setAudioStreamType(AudioManager.STREAM_RING);

        float floatLevel = (float) audioLevel;
        mediaPlayer.setVolume(floatLevel, floatLevel);

        mediaPlayer.setOnCompletionListener(new OnCompletionListener() {
            @Override
            public void onCompletion(MediaPlayer mp) { // parameter renamed to mp to avoid shadowing the field
                if (nbCyclesAudio >= 1) {
                    mp.start();
                    nbCyclesAudio--;
                } else {
                    mp.stop();
                    mp.release(); // frees the resource - useful for the phone codec
                    mediaPlayer = null; // clears the class field, not the callback argument
                }
            }
        });

        mediaPlayer.prepare();

        mediaPlayer.start();
        nbCyclesAudio--;

    } catch (IOException e) {
        // the IOException is silently ignored here
    }
}

1 Answer


I've changed the implementation you mentioned a little, given that you have a bunch of short audio files to play, and that you want a very short time to play them on demand. Basically, I create the AssetFileDescriptor for all the files once, and I also use the same single MediaPlayer instance all the time.

The design follows the pattern of the Charm Down library, so you need to keep the package names below.

EDIT

After the OP's feedback, I've changed the implementation to have one MediaPlayer for each audio file, so you can play any of them at any time.

  1. Source Packages/Java:

package: com.gluonhq.charm.down.plugins

AudioService interface

public interface AudioService {
    void addAudioName(String audioName);
    void play(String audioName, double volume);
    void stop(String audioName);
    void pause(String audioName);
    void resume(String audioName);
    void release();
}

AudioServiceFactory class

public class AudioServiceFactory extends DefaultServiceFactory<AudioService> {

    public AudioServiceFactory() {
        super(AudioService.class);
    }

}
  2. Android/Java Packages

package: com.gluonhq.charm.down.plugins.android

AndroidAudioService class

public class AndroidAudioService implements AudioService {

    private final Map<String, MediaPlayer> playList;
    private final Map<String, Integer> positionList;

    public AndroidAudioService() {
        playList = new HashMap<>();
        positionList = new HashMap<>();
    }

    @Override
    public void addAudioName(String audioName) {
        MediaPlayer mediaPlayer = new MediaPlayer();
        mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
        mediaPlayer.setOnCompletionListener(m -> pause(audioName)); // don't call stop, allows reuse
        try {
            mediaPlayer.setDataSource(FXActivity.getInstance().getAssets().openFd(audioName));
            mediaPlayer.setOnPreparedListener(mp -> {
                System.out.println("Adding  audio resource " + audioName);
                playList.put(audioName, mp);
                positionList.put(audioName, 0);
            });
            mediaPlayer.prepareAsync();
        } catch (IOException ex) {
            System.out.println("Error retrieving audio resource " + audioName + " " + ex);
        }

    }

    @Override
    public void play(String audioName, double volume) {
        MediaPlayer mp = playList.get(audioName);
        if (mp != null) {
            if (positionList.get(audioName) > 0) {
                positionList.put(audioName, 0);
                mp.pause();
                mp.seekTo(0);
            }
            mp.start();
        }

    }

    @Override
    public void stop(String audioName) {
        MediaPlayer mp = playList.get(audioName);
        if (mp != null) {
            mp.stop();
        }
    }

    @Override
    public void pause(String audioName) {
        MediaPlayer mp = playList.get(audioName);
        if (mp != null) {
            mp.pause();
            positionList.put(audioName, mp.getCurrentPosition());
        }
    }

    @Override
    public void resume(String audioName) {
        MediaPlayer mp = playList.get(audioName);
        if (mp != null) {
            mp.start();
            mp.seekTo(positionList.get(audioName));
        }
    }

    @Override
    public void release() {
        for (MediaPlayer mp : playList.values()) {
            if (mp != null) {
                mp.stop();
                mp.release();
            }
        }

    }

}
  3. Sample

I've added five short audio files (from here), and added five buttons to my main view:

@Override
public void start(Stage primaryStage) throws Exception {

    Button play1 = new Button("p1");
    Button play2 = new Button("p2");
    Button play3 = new Button("p3");
    Button play4 = new Button("p4");
    Button play5 = new Button("p5");
    HBox hBox = new HBox(10, play1, play2, play3, play4, play5);
    hBox.setAlignment(Pos.CENTER);

    Services.get(AudioService.class).ifPresent(audio -> {

        audio.addAudioName("beep28.mp3");
        audio.addAudioName("beep36.mp3");
        audio.addAudioName("beep37.mp3");
        audio.addAudioName("beep39.mp3");
        audio.addAudioName("beep50.mp3");

        play1.setOnAction(e -> audio.play("beep28.mp3", 5));
        play2.setOnAction(e -> audio.play("beep36.mp3", 5));
        play3.setOnAction(e -> audio.play("beep37.mp3", 5));
        play4.setOnAction(e -> audio.play("beep39.mp3", 5));
        play5.setOnAction(e -> audio.play("beep50.mp3", 5));
    });

    Scene scene = new Scene(new StackPane(hBox), Screen.getPrimary().getVisualBounds().getWidth(), 
                        Screen.getPrimary().getVisualBounds().getHeight());
    primaryStage.setScene(scene);
    primaryStage.show();
}

@Override
public void stop() throws Exception {
    Services.get(AudioService.class).ifPresent(AudioService::release);
}

The prepare step takes place when the app is launched and the service is instantiated, so there won't be any delay later on when playing any of the audio files.

I haven't checked if there could be any memory issues when adding several media players with big audio files, as that wasn't the initial scenario. Maybe a cache strategy will help in this case (see CacheService in Gluon Charm Down).
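If that ever became an issue, one possible sketch (an assumption, not the CacheService API) would be to bound the map of prepared players and release the least recently used one when the limit is exceeded:

    // Hypothetical sketch: LRU-style bound on the number of prepared MediaPlayers.
    // MAX_PLAYERS is an assumed limit; evicted players are released to free codec resources.
    private static final int MAX_PLAYERS = 20;

    private final Map<String, MediaPlayer> playList =
            new LinkedHashMap<String, MediaPlayer>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, MediaPlayer> eldest) {
                    if (size() > MAX_PLAYERS) {
                        eldest.getValue().release();
                        return true;
                    }
                    return false;
                }
            };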

  • José: thanks a lot for your answer and your code. I always enjoy your help. I tried your solution but the real-time improvement is not significant, because creating a new MediaPlayer and a new AssetFileDescriptor takes only 1 ms, against setDataSource, prepare and start (60 ms for all of them). Moreover, even if my sounds are short, they can be mixed together, which is not compatible with a single MediaPlayer. To significantly improve the real time (even if the sound is heard a little bit later, ~100-200 ms), the solution could be to provide an *asynchronous* MediaPlayer, couldn't it? – Pascal DUTOIT Mar 22 '17 at 10:37
  • Ok, I've edited my answer with a different approach based on your comment – José Pereda Mar 22 '17 at 14:32
  • José: To generate sounds asynchronously and avoid disturbing the application's real time, I have modified the play() method of the AndroidNativeAudio class (which implements NativeAudioService): now this method only creates a new thread running a Task instance, and the task contains all the Java code of the previous play(): new MediaPlayer, setDataSource, ... prepare(), start(). Thus, my application running every 110 ms is not really disturbed by the sound creation, and sounds are always concurrent (i.e. mixed). Do you think this solution is correct? The first tests (with traces) seem to be fine. – Pascal DUTOIT Mar 22 '17 at 14:40
  • Without knowing your real scenario, I'm providing a general solution for loading several audio files, and being able to play them on demand, at any time. You will create them once (maybe you could do that in a thread, sure), and play them in any order, all at once, and multiple times. You can try both solutions, and then use the one that best suits you. The implementation details are up to you in the end. – José Pereda Mar 22 '17 at 14:45
  • I quite agree with you and I am sure that your solution will be more reusable by others. I am waiting for your solution with interest, because my solution takes the same CPU time as your initial play() solution, even if the generation of the sound is often done once the application has finished its processing, within the 110 ms. – Pascal DUTOIT Mar 22 '17 at 14:55
  • You can try it already, I've edited my answer with it, so you can adapt it to your case and compare with your current solution. – José Pereda Mar 22 '17 at 14:58
  • OK. I am going to implement your solution, now. Thanks in advance! – Pascal DUTOIT Mar 22 '17 at 15:03
  • I have implemented your solution with success: it is appropriate for a few sounds, but not for many sounds (~50). In my case, it is not easy to call addAudioName() 50 times. On the other hand, your solution divides the CPU time for generating a sound by 2. That is great! Thank you. As you say, the solution depends on the architecture. In my case, since I have the main processing every 110 ms, moving the sound processing into a thread (with a lower priority) distributes the sounds to the moments when there is no main activity. But it would be better to use the prepareAsync method, and it does not work. Any idea? – Pascal DUTOIT Mar 23 '17 at 08:12
  • About the addAudioName method, you can add another method to the service so you can pass a list, for instance `void addAudioNames(List<String> playList);`, and then implement it (see the sketch after these comments). As for the async method, it worked in my case. If you can share your code (GitHub, gist, ...) I can have a look. – José Pereda Mar 23 '17 at 08:17
  • I have prepared a native audio Java class that lets you test, in your environment, the asynchronous mode and/or the sound thread. I have also added some results. But so far I have not found a real-time solution for the sounds. In fact, the solution would be to play with the relative priorities of the threads: one for the cyclic application (defined with Timeline + KeyFrame), and the others for the sounds (with lower priority). https://github.com/dutpas/HG/issues/1 – Pascal DUTOIT Mar 24 '17 at 09:56
  • José: I have found the solution! The main processing is in a synchronized method, to be sure that the complete processing runs without any audio interruption. Moreover, the complete processing for generating a sound runs in a dedicated thread defined with Thread.MIN_PRIORITY priority. Now the main processing runs every 110 ms and, once it begins, it cannot be disturbed by any audio generation. There is a minor problem: when an audio setDataSource(), start() or prepare() call has already begun, the next main processing has to wait for the end of that call before starting (to be confirmed). – Pascal DUTOIT Mar 25 '17 at 09:14
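As a follow-up to the addAudioNames suggestion in the comments above, a possible sketch of that convenience method (the signature would also be added to the AudioService interface; the implementation simply reuses addAudioName):

    // Possible implementation of the addAudioNames convenience method suggested above.
    @Override
    public void addAudioNames(List<String> audioNames) {
        for (String audioName : audioNames) {
            addAudioName(audioName); // reuse the existing per-file setup (prepareAsync included)
        }
    }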