
I'm working on a project that needs to open an .mp4 file, read its frames one by one, decode them, re-encode them with a better type of lossless compression, and save them into a file.

Please correct me if I'm wrong about the order of operations, because I'm not 100% sure how this particular thing should be done. From my understanding it should go like this:

1. Open the input .mp4 file
2. Find stream info -> find the video stream index
3. Copy the codec pointer of the found video stream into an AVCodecContext pointer
4. Find the decoder -> allocate the codec context -> open the codec
5. Read frame by frame -> decode the frame -> re-encode the frame -> save it into a file

So far I've encountered a couple of problems. For example, if I want to save a frame using the av_interleaved_write_frame() function, I can't open the input .mp4 file using avformat_open_input(), since it's going to populate the filename part of the AVFormatContext structure with the input file name, and therefore I can't "write" into that file. I've tried a different solution using av_guess_format(), but when I dump the format using dump_format() I get nothing, so I can't find stream information about which codec it is using.

So if anyone has any suggestions, I would really appreciate them. Thank you in advance.

Sir DrinksCoffeeALot
  • Are you trying to convert a .mp4 to a series of lossless still images? – Eugene Mar 03 '16 at 12:11
  • Well, I'm trying to split an .mp4 file into frames, compress them and send them through a network. On the other end those frames will get concatenated back into an .mp4 file. – Sir DrinksCoffeeALot Mar 03 '16 at 12:20
  • What's the point of that? And compressing lossless images is pointless. The mp4 format is already designed as an efficient container. This process is slow, processor-intensive, bandwidth-inefficient, etc. – Eugene Mar 03 '16 at 12:28
  • It's because I need to find out the differences between multiple codecs; the main goal is to split 4K video into frames, compress them and send them through a 1 Gbit connection. So I need to find out which codec would be the best compression-wise. And as an example I have an HD file on which I need to do experiments. – Sir DrinksCoffeeALot Mar 03 '16 at 12:47

2 Answers


See the "detailed description" in the muxing docs. You:

  1. set ctx->oformat using av_guess_format
  2. set ctx->pb using avio_open2
  3. call avformat_new_stream for each stream in the output file. If you're re-encoding, this means adding a stream to the output file for each stream of the input file.
  4. call avformat_write_header
  5. call av_interleaved_write_frame in a loop
  6. call av_write_trailer
  7. close the file (avio_close) and clean up all allocated memory
Ronald S. Bultje
  • Do you know how I can copy the content of the `AVFrame` structure to a `char* buffer`, so I can send that buffer using winsock2's `send()` function to a different process? I'm trying to create local server-client communication to measure the time needed to send a frame using different compression methods. – Sir DrinksCoffeeALot Mar 07 '16 at 11:33
  • `for (y = 0; y < height; y++) memcpy(ptr + y * width, frame->data[0] + y * frame->linesize[0], width);`. You can also use `avcodec_encode_video2()` to compress frames (instead of gzip or so, which is what you're probably trying to do). – Ronald S. Bultje Mar 07 '16 at 12:18
  • Hmm, if I do it like that rather than copying the whole `AVFrame` structure, considering I'm copying raw video frames, would I be able to encode that frame (based only on `frame->data[0]` and `frame->linesize[0]`) on the recipient side using the `avcodec_encode_video2()` function? – Sir DrinksCoffeeALot Mar 08 '16 at 08:17
  • About compression, I was planning to use LZW. I will take a closer look at `avcodec_encode_video2()` compression-wise. Btw, thank you for your responses, I really appreciate it. – Sir DrinksCoffeeALot Mar 08 '16 at 08:19
  • You would do the same for data[1-2] with linesize[1-2] (assuming some planar YUV format), sorry, forgot to mention that; or rather, it depends on the number of planes for your pixfmt (see pixdesc). – Ronald S. Bultje Mar 08 '16 at 11:40
  • I succeeded in sending a single decoded frame from one process to another using send/recv functions. I determined the size of the `char* buffer` needed to store a frame of type `PIX_FMT_RGB24` using `avpicture_get_size()`; copying `frame->data[0]` was done like this: `memcpy(buffer + y * frame->linesize[0], frame->data[0] + y*frame->linesize[0], width * 3);` and writing to a `.ppm` file on the client side was done using: `fwrite(buffer + y * width * 3, 1, width * 3, pf);`. Basically I just needed to multiply `width` from your code by 3 (since it's RGB24) or use the exact data stored in `frame->linesize[0]`. – Sir DrinksCoffeeALot Mar 09 '16 at 10:45
  • I have one more question before I hopefully stop bugging you about FFmpeg: if I want to use the LZW compression method which is already implemented in FFmpeg, how would I do it using the `avcodec_encode_video2()` function? I guess I need to populate the `AVCodecContext` variable with different parameters before calling the encode function? – Sir DrinksCoffeeALot Mar 09 '16 at 10:48
  • ffmpeg has no vanilla LZW encoder. There's a TIFF and a GIF encoder, which use LZW internally... – Ronald S. Bultje Mar 09 '16 at 13:05
  • Yeah, I saw TIFF uses LZW internally; I was hoping I could somehow incorporate their source into mine. I'll try to do that today. – Sir DrinksCoffeeALot Mar 10 '16 at 07:07

You can convert a video to a sequence of lossless images with:

ffmpeg -i video.mp4 image-%05d.png

and then from a series of images back to a video with:

ffmpeg -i image-%05d.png video.mp4

The functionality is also available via wrappers.

You can see a similar question at: Extracting frames from MP4/FLV?

Eugene
  • I'm not using a static build of ffmpeg, I'm using their API in my own project. – Sir DrinksCoffeeALot Mar 03 '16 at 12:48
  • FFmpeg is the command line utility. Which library and language are you using? Libavcodec? – Eugene Mar 03 '16 at 13:03
  • I'm using libavcodec/format/device/filter/util/postproc/swscale. I'm writing in C. Basically it's an old dev build of FFmpeg. – Sir DrinksCoffeeALot Mar 03 '16 at 13:09
  • FFmpeg is a project containing a bunch of libraries for A/V manipulation and an optional set of binaries such as ffmpeg, ffplay, etc., which allow you to use the functions available in the libraries. – Gyan Mar 04 '16 at 05:35