
I would like to record user interaction in a video that people can then upload to their social media sites.

For example, the Talking Tom Cat Android app has a little camcorder icon. The user can press the icon, interact with the app, and press the icon again to stop recording; the video is then processed/converted, ready for upload.

I think I can use setDrawingCacheEnabled(true) to grab each frame as an image, but I don't know how to add audio or turn the frames into a video.
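
Here is roughly what I have in mind for grabbing a single frame (untested; rootView is just whatever view hierarchy I want to capture):

View rootView = findViewById(android.R.id.content);
rootView.setDrawingCacheEnabled(true);
rootView.buildDrawingCache();
Bitmap frame = Bitmap.createBitmap(rootView.getDrawingCache());
rootView.setDrawingCacheEnabled(false);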

Update: After further reading, I think I will need to use the NDK and ffmpeg. I'd prefer not to, but if there are no other options, does anyone know how to do it that way?

Does anyone know how to do this in Android?

Relevant links:

Android Screen capturing or make video from images

how to record screen video as like Talking Tomcat application does in iphone?

– Mel

1 Answer


Use the MediaCodec API with CONFIGURE_FLAG_ENCODE to set it up as an encoder. No ffmpeg required :)

You've already found how to grab the screen in the other question you linked to; now you just need to feed each captured frame to MediaCodec, setting the appropriate format flags, timestamp, and so on.

EDIT: Sample code for this was hard to find, but here it is, hat tip to Martin Storsjö. Quick API walkthrough:

MediaFormat inputFormat = MediaFormat.createVideoFormat("video/avc", width, height);
inputFormat.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
inputFormat.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate);
inputFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
inputFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 75); // I-frame interval, in seconds
inputFormat.setInteger("stride", stride);
inputFormat.setInteger("slice-height", sliceHeight);

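// The encoder name must be looked up in the system codec list and is
// chipset-specific. A sketch of one way to pick the first H.264 encoder
// (MediaCodecList.getCodecCount/getCodecInfoAt are available since API 16):
MediaCodecInfo selectedCodec = null;
for (int i = 0; i < MediaCodecList.getCodecCount() && selectedCodec == null; i++) {
    MediaCodecInfo candidate = MediaCodecList.getCodecInfoAt(i);
    if (!candidate.isEncoder()) continue;
    for (String type : candidate.getSupportedTypes()) {
        if (type.equalsIgnoreCase("video/avc")) {
            selectedCodec = candidate;
            break;
        }
    }
}
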
encoder = MediaCodec.createByCodecName(selectedCodec.getName()); // e.g. "OMX.TI.DUCATI1.VIDEO.H264E" on an OMAP chipset

encoder.configure(inputFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();
encoderInputBuffers = encoder.getInputBuffers();
encoderOutputBuffers = encoder.getOutputBuffers();

byte[] inputFrame = new byte[frameSize]; // for I420: frameSize = stride * sliceHeight * 3 / 2
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo(); // filled in by dequeueOutputBuffer below

while ( ... have data ... ) {
    int inputBufIndex = encoder.dequeueInputBuffer(timeout);

    if (inputBufIndex >= 0) {
        ByteBuffer inputBuf = encoderInputBuffers[inputBufIndex];
        inputBuf.clear();

        // HERE: fill in the input frame in the correct color format,
        // taking stride and sliceHeight into account.
        // Example for I420 (planar YUV, chroma subsampled 2x2):
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                inputFrame[row * stride + col] = ...; // Y[row][col]
                if ((row & 1) == 0 && (col & 1) == 0) {
                    int chroma = (row / 2) * (stride / 2) + (col / 2);
                    inputFrame[stride * sliceHeight + chroma] = ...;         // U[row/2][col/2]
                    inputFrame[stride * sliceHeight * 5 / 4 + chroma] = ...; // V[row/2][col/2]
                }
            }
        }

        inputBuf.put(inputFrame);

        encoder.queueInputBuffer(
            inputBufIndex,
            0 /* offset */,
            inputFrame.length /* size */,
            presentationTimeUs, // e.g. frameIndex * 1000000L / frameRate
            0 /* flags */);
    }

    int outputBufIndex = encoder.dequeueOutputBuffer(info, timeout);

    if (outputBufIndex >= 0) {
        ByteBuffer outputBuf = encoderOutputBuffers[outputBufIndex];

        // HERE: read the encoded data out of outputBuf
        // (info.offset / info.size give its position and length)

        encoder.releaseOutputBuffer(
            outputBufIndex, 
            false);
    }
    else {
        // outputBufIndex is one of INFO_TRY_AGAIN_LATER,
        // INFO_OUTPUT_FORMAT_CHANGED or INFO_OUTPUT_BUFFERS_CHANGED;
        // handle the change of buffers, format, etc.
    }
}
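
When you are done feeding frames, shut the encoder down (stop() and release() are standard MediaCodec calls; end-of-stream signalling via BUFFER_FLAG_END_OF_STREAM is omitted above):

encoder.stop();
encoder.release();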

There are also some open issues.

EDIT: You'd feed the data in as a byte buffer in one of the supported pixel formats, for example I420 or NV12. Unfortunately there is no perfect way of determining which formats will work on a particular device, but the formats you can get from the camera typically also work with the encoder. You can at least ask the codec what it advertises, as sketched below.
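
A sketch of querying the advertised color formats (getCapabilitiesForType is a standard API-16 method; selectedCodec is the MediaCodecInfo picked in the lookup sketch above), bearing in mind that what a device advertises does not always match what actually works:

MediaCodecInfo.CodecCapabilities caps = selectedCodec.getCapabilitiesForType("video/avc");
for (int colorFormat : caps.colorFormats) {
    // e.g. CodecCapabilities.COLOR_FormatYUV420Planar (I420)
    // or CodecCapabilities.COLOR_FormatYUV420SemiPlanar (NV12-style)
    Log.d("ScreenRecorder", "advertised color format: " + colorFormat);
}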

– Alex I
  • Hi, been researching a bit but there aren't many examples out there... how would I actually feed each captured frame into MediaCodec? – Mel Jan 04 '13 at 11:32
  • Hi Alex, Thanks for all the code. Say I had a series of images, say image01.png, image02.png,... etc, each of these would be assigned to the variable "inputFrame" and then eventually put into the ByteBuffer variable "inputBuf"? – Mel Jan 05 '13 at 22:23
  • @Mel: Yes, but the images need to be converted to an array of pixels in the right colorspace; you can't just assign them to inputFrame without a conversion. I showed above how to pack arrays of pixels in Y,U,V colorspace into an inputFrame; to go from a png you need to (a) decode the png to an array of RGB pixels, and (b) convert each pixel from RGB to YUV, then pack as shown above (see the sketch after these comments). – Alex I Jan 06 '13 at 01:00
  • Is there any way to do the same on Android? – Umesh Sharma Nov 13 '14 at 10:08
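
Following up on the @Mel exchange above, a minimal sketch of the png-to-I420 path. argbToI420 is a hypothetical helper, not part of any Android API; the integer constants are the standard fixed-point BT.601 RGB-to-YUV coefficients:

// Hypothetical helper: converts an ARGB Bitmap (e.g. decoded from
// image01.png with BitmapFactory.decodeFile) into an I420 byte array
// laid out with the encoder's stride and sliceHeight.
static byte[] argbToI420(Bitmap bmp, int stride, int sliceHeight) {
    int width = bmp.getWidth();
    int height = bmp.getHeight();
    int[] argb = new int[width * height];
    bmp.getPixels(argb, 0, width, 0, 0, width, height);

    byte[] yuv = new byte[stride * sliceHeight * 3 / 2];
    int uBase = stride * sliceHeight;                     // start of U plane
    int vBase = uBase + (stride / 2) * (sliceHeight / 2); // start of V plane

    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int c = argb[row * width + col];
            int r = (c >> 16) & 0xff, g = (c >> 8) & 0xff, b = c & 0xff;

            // BT.601 luma, studio swing (Y in [16, 235])
            yuv[row * stride + col] =
                (byte) (((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);

            // Chroma is subsampled 2x2: one U and one V per 2x2 pixel block
            if ((row & 1) == 0 && (col & 1) == 0) {
                int chroma = (row / 2) * (stride / 2) + (col / 2);
                yuv[uBase + chroma] =
                    (byte) (((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128);
                yuv[vBase + chroma] =
                    (byte) (((112 * r - 94 * g - 18 * b + 128) >> 8) + 128);
            }
        }
    }
    return yuv;
}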