In general, MediaCodec is the one I'd recommend.
The OpenMAX AL API was added as a stopgap measure in Android 4.0, before MediaCodec became available in Android 4.1. It is practically deprecated (even though I'm not sure if there's any official statement saying that).
They operate on slightly different levels of abstraction, and for most cases, MediaCodec is less work.
With OpenMAX AL, you need to provide an MPEG TS stream of the data to decode and play back. It does not support other container formats. It does not give you direct access to the decoded data either; the stream is played back directly. It does, however, take care of audio/video synchronization for you.
Pros of OpenMAX AL:
- If your input is MPEG TS, you avoid a whole extra step
- Handles audio/video sync automatically
Cons of OpenMAX AL:
- Only accepts MPEG TS as input; any other container must be repacked into it
- No access to the decoded data; playback only
- Practically deprecated
With MediaCodec, you need to provide individual packets of data to decode. It does not support any container format at all on its own, but you as a caller are supposed to take care of that. It does give you direct access to the decoded output data, but to present it, you need to handle sync manually. (In Android 6.0, there's a new class MediaSync, which can help with this though.)
Pros of MediaCodec:
- Generic, flexible
- Works equally well with any container (doesn't require repacking into MPEG TS)
Cons of MediaCodec:
- Requires you to handle sync manually
- Quite low level, requires you to do a lot of work
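To give a feel for how much of this work falls on the caller, here is a minimal sketch of a MediaCodec decode loop. The `format` and `packetSource` names are assumptions standing in for your own code (e.g. a MediaFormat and packets obtained from MediaExtractor or libavformat); this is not a complete player.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import java.nio.ByteBuffer;

class DecodeLoopSketch {
    // packetSource is a hypothetical abstraction of your demuxing code.
    void decode(MediaFormat format, PacketSource packetSource) throws Exception {
        MediaCodec codec = MediaCodec.createDecoderByType(
                format.getString(MediaFormat.KEY_MIME));
        codec.configure(format, /* surface */ null, /* crypto */ null, /* flags */ 0);
        codec.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean outputDone = false;
        while (!outputDone) {
            // Feed one compressed packet, if an input buffer is free.
            int inIndex = codec.dequeueInputBuffer(10_000);
            if (inIndex >= 0) {
                ByteBuffer input = codec.getInputBuffer(inIndex);
                int size = packetSource.read(input);          // your own code
                long pts = packetSource.presentationTimeUs(); // your own code
                codec.queueInputBuffer(inIndex, 0, Math.max(size, 0), pts,
                        size < 0 ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0);
            }
            // Drain decoded output. Presenting it at the right time
            // (against the audio clock) is entirely up to you.
            int outIndex = codec.dequeueOutputBuffer(info, 10_000);
            if (outIndex >= 0) {
                codec.releaseOutputBuffer(outIndex, /* render */ false);
                outputDone =
                        (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
            }
        }
        codec.stop();
        codec.release();
    }

    interface PacketSource {
        int read(ByteBuffer dst);       // returns -1 at end of stream
        long presentationTimeUs();
    }
}
```

The same loop shape works for audio and video; for real playback you'd run one such loop per track and schedule the video frames against the audio position.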
For extracting individual packets of data, there's the MediaExtractor class, which is useful for some common file formats when reading static files. I don't think it is usable for streaming input, e.g. fragmented MP4, though.
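For the static-file case, using MediaExtractor to pull out the compressed packets looks roughly like this (the file path is a placeholder, and the buffer size is an arbitrary assumption):

```java
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.nio.ByteBuffer;

class ExtractSketch {
    void dumpVideoPackets() throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource("/sdcard/clip.mp4"); // placeholder path

        // Find and select the first video track.
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat fmt = extractor.getTrackFormat(i);
            String mime = fmt.getString(MediaFormat.KEY_MIME);
            if (mime != null && mime.startsWith("video/")) {
                extractor.selectTrack(i);
                break;
            }
        }

        ByteBuffer buffer = ByteBuffer.allocate(1 << 20); // assumed max packet size
        while (true) {
            int size = extractor.readSampleData(buffer, 0);
            if (size < 0) break;                  // end of stream
            long pts = extractor.getSampleTime(); // microseconds
            // Here you would queue (buffer, pts) into a MediaCodec input buffer.
            extractor.advance();
        }
        extractor.release();
    }
}
```

The MediaFormat returned by getTrackFormat is what you'd pass to MediaCodec.configure, so the two classes are designed to be used together.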
So if you want to do streaming playback of a format other than MPEG TS, you need to handle packet extraction yourself (or use some other library, such as libavformat, for that task). If you use OpenMAX AL, you then need to repackage the individual packets into MPEG TS (again, e.g. with libavformat). If you use MediaCodec, you need to handle audio/video sync yourself during playback.
If you need to do processing of the decoded frames, MediaCodec is probably the only way to go. You can either get the decoded image data as raw YUV, or get it in a GL surface that you can modify using shaders. (The latter might be possible using OpenMAX AL as well, though.)
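For the GL route, the usual pattern is to configure the decoder with a Surface backed by a SurfaceTexture, then sample the frames in a fragment shader as an external texture. A sketch, assuming `texId` is a texture name you already generated on your GL thread and bound as GL_TEXTURE_EXTERNAL_OES, and `format` comes from your extractor:

```java
import android.graphics.SurfaceTexture;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;

class SurfaceOutputSketch {
    MediaCodec configureForGl(MediaFormat format, int texId) throws Exception {
        // Wrap the GL texture in a Surface the decoder can render into.
        SurfaceTexture surfaceTexture = new SurfaceTexture(texId);
        Surface surface = new Surface(surfaceTexture);

        MediaCodec codec = MediaCodec.createDecoderByType(
                format.getString(MediaFormat.KEY_MIME));
        codec.configure(format, surface, null, 0);
        codec.start();

        // Later, after dequeuing an output buffer at index "outIndex":
        //   codec.releaseOutputBuffer(outIndex, true); // true = render to surface
        //   surfaceTexture.updateTexImage();           // latch frame into texId
        // The frame can then be sampled via samplerExternalOES in your shader.
        return codec;
    }
}
```

If you instead configure with a null Surface, the decoder gives you raw YUV in the output ByteBuffers, which is the path to take for CPU-side processing.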