I have an Android application that simulates a drum kit and lets the user "record" their performances. Currently, a recording is stored as a .json file that lists which audio sample was played and the time elapsed between the previous note and the current one, something like this:
[
  {
    "drumkit": "snare",
    "time": "0"
  },
  {
    "drumkit": "kick",
    "time": "5"
  },
  {
    "drumkit": "splash",
    "time": "3"
  },
  {
    "drumkit": "snare",
    "time": "0"
  },
  {
    "drumkit": "kick",
    "time": "2"
  },
  {
    "drumkit": "crash",
    "time": "0"
  }
]
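To mix the samples later, I assume these relative times first have to be turned into absolute offsets from the start of the recording. A rough sketch of that step in Kotlin (treating the time values as milliseconds, with made-up names for the model class and helper):

// Hypothetical model of one recorded event; fields mirror the JSON above.
data class DrumEvent(val drumkit: String, val time: Long)

// Running sum: turns "delay since the previous note" into
// "offset from the start of the recording".
fun toAbsoluteOffsets(events: List<DrumEvent>): List<Pair<String, Long>> {
    var elapsed = 0L
    return events.map { event ->
        elapsed += event.time
        event.drumkit to elapsed
    }
}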
That said, I'm now looking for a way to generate an audio file (an mp3, for example) from that recording, since I already have the audio samples in the app as well as the logical sequence of the recording.
I tried to follow the approach shown in this answer (https://stackoverflow.com/a/656302). Merging the audio files worked, but with that approach each sample can only start after the previous one ends, which makes it impossible for two (or more) samples to play at the same time. It is also not possible to control the amount of time that separates them.
The ffmpeg library seems well suited to this kind of problem, and there is even a build of it for Android, but I don't know how to do this with it.
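My current guess from the ffmpeg documentation is that the adelay filter (to push each sample to its absolute offset) combined with amix (to overlay all of them) might be the right direction. A rough sketch of how I imagine building such a command from the recording, reusing the DrumEvent/toAbsoluteOffsets sketch above (the sample file names and the millisecond unit are assumptions on my part):

// Builds an ffmpeg command that delays every sample to its offset and
// mixes everything into a single output file.
fun buildFfmpegCommand(events: List<DrumEvent>, outputPath: String): String {
    val offsets = toAbsoluteOffsets(events)

    // One "-i <sample>.wav" input per recorded note (file names are placeholders).
    val inputs = offsets.joinToString(" ") { (name, _) -> "-i ${name}.wav" }

    // adelay pads each (assumed stereo) input with <offset> ms of silence per channel.
    // (If a 0 ms delay is rejected by the ffmpeg build, that input could skip adelay.)
    val delays = offsets
        .mapIndexed { i, (_, ms) -> "[$i:a]adelay=$ms|$ms[s$i]" }
        .joinToString(";")

    // amix overlays all delayed streams; by default it lowers the volume of each
    // input, so some gain correction may be needed afterwards.
    val labels = offsets.indices.joinToString("") { "[s$it]" }
    val mix = "${labels}amix=inputs=${offsets.size}:duration=longest[out]"

    return "$inputs -filter_complex \"$delays;$mix\" -map \"[out]\" $outputPath"
}

If that is roughly right, I suppose the resulting command could be run through one of the Android ffmpeg wrappers (FFmpegKit / mobile-ffmpeg), but I don't know whether this is the correct way to wire it up.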
Does anyone have a suggestion on how to solve this problem (with or without ffmpeg)?