
I am comparing JPEG to JPEG in a constant 'video stream'. I am using EMGU/OpenCV to compare each pixel at the byte level. There are three channels per image (RGB). I have heard that it is common practice to store only the pixels that have changed between frames as a way of conserving storage space. But if, for example, I say EVERY pixel has changed (note that I am using an exaggerated example to make my point, and I would normally discard such large changes), then the resultant bytes saved are three times larger than the original JPEG.

How can I store such motion changes efficiently?

thanks

Andrew Simpson
  • Might be a better fit for programmers.stackexchange.com – Toby Allen Oct 20 '13 at 07:35
  • @TobyAllen Probably, but I do not want to have to pay a monthly subscription. I thought this was also a programmers' forum? – Andrew Simpson Oct 20 '13 at 07:40
  • Why would you have to pay a subscription to use programmers.stackexchange.com? It's part of the Stack Exchange network (which Stack Overflow is too). It's all free. You might be thinking of Experts Exchange, which it is not. Programmers.stackexchange.com is a site to discuss this kind of question rather than code. Follow this link: http://programmers.stackexchange.com/ – Toby Allen Oct 20 '13 at 07:43
  • @TobyAllen lol - you were right, I was getting them confused. Sorry, +1 – Andrew Simpson Oct 20 '13 at 07:48

1 Answer


While consecutive images are being captured, the camera itself may or may not move. If the camera is fixed, only the objects in the view move, so only a portion of the image changes from frame to frame. If the camera moves as well, the image changes significantly even when the objects stand still; there are algorithms to discard the effect of camera motion. The main idea is that, compared with the sampling frequency of the camera (e.g. 25 frames per second), most objects are nearly standing still.

Because most of the image is unchanged between frames, it becomes feasible to store only the difference between images, which provides some compression. However, after some amount of time the newly received image differs too much from the reference image, so it becomes better to take a fresh reference image. This is called a "reference frame".
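The idea in the two paragraphs above can be sketched in a few lines. The asker works in C#/EMGU, but the logic is language-agnostic; this is a minimal stdlib-Python sketch that treats a decoded frame as a flat array of channel bytes. The threshold of 5 and the 50% reset ratio are illustrative assumptions, not values given in this answer:

```python
def diff_against_reference(reference: bytes, frame: bytes, threshold: int = 5):
    """Return (index, new_value) pairs for bytes that changed by more than threshold."""
    return [(i, b) for i, (a, b) in enumerate(zip(reference, frame))
            if abs(a - b) > threshold]

def needs_new_reference(changes, frame_len: int, ratio: float = 0.5) -> bool:
    """Take a fresh reference frame once too much of the image has changed."""
    return len(changes) > frame_len * ratio

# Two RGB pixels: one channel changes slightly (below threshold), one greatly.
reference = bytes([10, 10, 10, 200, 200, 200])
frame     = bytes([12, 10, 10, 90, 200, 200])

changes = diff_against_reference(reference, frame)
# only the byte that moved by more than the threshold is recorded: [(3, 90)]
```

When `needs_new_reference` fires, you would store the incoming frame whole and start diffing against it instead, which is exactly the reference-frame reset described above.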

In fact, modern video compression algorithms use advanced techniques to detect objects and follow them between frames, which yields better compression ratios.

  • Wikipedia - Different compression techniques
  • Check This - OpenCV should handle the storing of consecutive images in different video formats.
phoad
  • Thanks for your reply. I do use absdiff between frames. I cannot use compression techniques because JPEGs do not have inter-frame compression. At the moment I enumerate through the image that stores just the changes. I wanted to get the bytes that had changed and store the x, y coordinates; I could then use that to recreate the image when required. How should I store this binary data efficiently? – Andrew Simpson Oct 20 '13 at 09:09
  • Check out this class: you give an image to the VideoWriter, then another, then another, and it handles the encoding and writing by itself. The image might be in JPEG format also; use cvLoadImage or read from a webcam. http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html?highlight=videowriter#videowriter – phoad Oct 20 '13 at 09:23
  • If the image is not big, instead of storing the x, y coordinates in an int variable, use an unsigned char / unsigned short, or use a struct that specifies the variables in bits. You may also separate the RGB channels and apply this operation on each of them separately. Put some threshold on the difference so insignificant changes are omitted, like a difference of 5 out of 255. – phoad Oct 20 '13 at 09:29
  • Hi, thanks for the link. To clarify what I am doing: I am receiving JPEGs. I use EMGU to get the absolute differences between the reference frame and the new frame, and I output the differences to an Image file. I enumerate through the binary data array; there are 3 channels for each 'pixel'. I then want to save the changed pixels, which means storing the x, y position and the 3 channel values to a binary file. I just wanted to know the most efficient way of storing that information? – Andrew Simpson Oct 20 '13 at 09:30
  • Yes, that is the threshold I use. So I should create a struct (for each pixel change) and write that to a binary file? – Andrew Simpson Oct 20 '13 at 09:32
  • As I said, even though all of the objects in the view are fixed, if the camera is moving all of the pixels will change. It might be a better option to find a library to encode it: http://www.ffmpeg.org/ You might use bit fields: http://stackoverflow.com/questions/4129961/how-is-the-size-of-a-struct-with-bit-fields-determined-measured. Even better, just subtract the two images to get one image with lots of zeros, and apply a compression algorithm to the resultant image. – phoad Oct 20 '13 at 09:52
  • Hi, the camera is static. So (and forgive my ignorance here) carry on creating the image which holds the changes and then zip that image? It's just that I thought you cannot zip up JPEGs. Or are you suggesting I save the image that holds the changes, convert it to a memory stream, and zip that stream up using something like Ionic.Zip? +1 for the link. I am using C#, though, not C++. – Andrew Simpson Oct 20 '13 at 09:56
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/39591/discussion-between-phoad-and-andrew-simpson) – phoad Oct 20 '13 at 10:55
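The storage scheme discussed in the comments (a fixed-size record per changed pixel: x, y plus the three channel values) can be sketched with Python's `struct` module; in C# the asker would do the same with a `BinaryWriter`, but the byte layout is identical. The `"<HHBBB"` format (unsigned shorts for the coordinates, as suggested above for small images) is an assumed layout, not something specified in the thread:

```python
import struct

# Assumed record layout: x, y as little-endian unsigned shorts (enough for
# images up to 65535 pixels wide/high), then three channel bytes.
RECORD = struct.Struct("<HHBBB")  # 7 bytes per changed pixel

def pack_changes(changes):
    """Serialize an iterable of (x, y, r, g, b) tuples into a compact blob."""
    return b"".join(RECORD.pack(*c) for c in changes)

def unpack_changes(blob):
    """Recover the (x, y, r, g, b) tuples, e.g. to repaint the reference frame."""
    return [RECORD.unpack_from(blob, off)
            for off in range(0, len(blob), RECORD.size)]

blob = pack_changes([(10, 20, 255, 0, 128), (11, 20, 3, 3, 3)])
# two changed pixels cost 14 bytes instead of re-storing the whole frame
```

Writing `blob` to disk (optionally gzip-compressed, since runs of similar records compress well) gives the binary file the asker describes, and `unpack_changes` is all that is needed to reapply the changes to the stored reference frame.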