
I am aware that JPEG compression is lossy. I have 2 questions:

Given an operation T:
1. Take a JPEG-80 image
2. Decode it to a byte buffer
3. Encode given byte buffer as JPEG-80

Is T an idempotent operation in terms of visual quality? Or will the quality of the image keep degrading as I repeat T? Does the same hold true for the JPEG-XR codec?

Thank you!

Edit: Since there have been conflicting answers, it would be great if you could provide references!

Sau
  • I don't know for sure, but I wouldn't count on it, especially between different engines. Even with a single engine, the approximations that take place may not produce the same result when applied twice. – Sten Petrov Feb 12 '13 at 21:11
  • I would say no. After each encoding to JPEG, there will be more 'loss'. – leppie Feb 13 '13 at 20:28

2 Answers


It's not guaranteed, but it may happen. In particular, if you repeat the encode -> decode -> encode -> decode process enough times, it will eventually settle on a fixpoint and stop degrading further (as long as you stick to the same quality setting and the same encoder).
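A minimal sketch of that settling behavior, using a toy stand-in for JPEG rather than a real codec (a rounded color transform plus uniform quantization; the constants below are made up, not real JPEG tables):

```python
# Toy model (NOT real JPEG): one lossy round trip built from the same
# ingredients -- a rounded color transform plus quantization -- iterated
# until the output stops changing. QUANT and SCALE are illustrative only.

QUANT = 7          # hypothetical quantization step
SCALE = 0.7        # hypothetical color-transform gain

def roundtrip(pixels):
    """One encode -> decode cycle over a list of 8-bit values."""
    out = []
    for v in pixels:
        yuv = round(v * SCALE)       # lossy "color transform" (rounding)
        q = round(yuv / QUANT)       # quantization (the big lossy step)
        yuv2 = q * QUANT             # dequantize
        out.append(max(0, min(255, round(yuv2 / SCALE))))  # inverse transform
    return out

pixels = list(range(0, 256, 17))     # a toy "image"
steps = 0
for _ in range(100):                 # bound the loop in case of a cycle
    nxt = roundtrip(pixels)
    steps += 1
    if nxt == pixels:                # reached the fixpoint
        break
    pixels = nxt
print("stable after", steps, "round trips")
```

In this toy model the image degrades on the first pass and is then reproduced exactly on every later pass; real encoders take longer to settle (and can even fall into short cycles), but the mechanism is the same.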

JPEG encoding is done in several steps:

  1. RGB to YUV conversion
  2. DCT (change into frequency domain)
  3. Quantization (throwing away bits of the DCT)
  4. Lossless compression

And decoding is the same process backwards.

Steps 1 and 2 have rounding errors (especially in speed-optimized encoders using integer math), so for idempotent re-encoding you need the encoding and decoding rounding errors to be small enough, or to cancel each other out.
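As a concrete illustration of the step-1 problem, here are the standard JFIF RGB/YCbCr conversion formulas with each channel rounded to an integer, as real codecs do. This is a sketch of the color step alone, not a full codec, and pure green already fails to round-trip:

```python
# The JFIF RGB <-> YCbCr formulas, rounded to integers per channel.
# Rounding makes the pair of conversions slightly lossy on its own.

def clamp(x):
    return max(0, min(255, round(x)))

def rgb_to_ycbcr(r, g, b):
    y  = clamp( 0.299    * r + 0.587    * g + 0.114    * b)
    cb = clamp(-0.168736 * r - 0.331264 * g + 0.5      * b + 128)
    cr = clamp( 0.5      * r - 0.418688 * g - 0.081312 * b + 128)
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = clamp(y + 1.402    * (cr - 128))
    g = clamp(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128))
    b = clamp(y + 1.772    * (cb - 128))
    return r, g, b

# Pure green does not survive the round trip:
print(ycbcr_to_rgb(*rgb_to_ycbcr(0, 255, 0)))  # not (0, 255, 0)
```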

Step 3, the major lossy step, is actually idempotent: if your decoded pixels convert to similar-enough DCT coefficients, they will quantize to the same data again!
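Step 3's idempotence is easy to see in isolation: uniform quantization is a projection, so applying it twice gives the same result as applying it once (`Q` below is a made-up step size for a single coefficient, not a value from a real quantization table):

```python
# Uniform quantization q(x) = round(x / Q) * Q is idempotent: the first
# pass lands every value exactly on a quantization level, so a second
# pass changes nothing.

Q = 16  # hypothetical quantization step for one DCT coefficient

def quantize(x):
    return round(x / Q) * Q

coeffs = [-130, -7, 3, 40, 97, 200]
once  = [quantize(c) for c in coeffs]
twice = [quantize(c) for c in once]
print(once == twice)  # True: quantization is a projection
```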

JPEG XR also uses a YUV-like color space, so it may suffer similar rounding errors, but instead of the DCT it uses a different transform that can be computed without rounding errors, so it should be easier to round-trip JPEG XR than other formats.
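A sketch of why an integer lifting-based transform can round-trip exactly. The two-point S-transform below is an illustrative stand-in, not JPEG XR's actual Photo Core Transform, but it shows the principle: every operation is an integer operation, so the inverse recovers the input bit-for-bit:

```python
# A reversible integer transform (the 2-point S-transform): sum/difference
# computed with integer shifts only, so no rounding error is ever introduced
# and the inverse is exact.

def forward(a, b):
    d = a - b
    s = b + (d >> 1)   # integer "average"; the low bit lives on in d
    return s, d

def inverse(s, d):
    b = s - (d >> 1)
    a = b + d
    return a, b

# Exact reconstruction for every pair of 8-bit values:
for a in range(256):
    for b in range(256):
        assert inverse(*forward(a, b)) == (a, b)
print("exact round trip for all 8-bit pairs")
```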

Kornel
  • [I did some experiments](https://stackoverflow.com/questions/52729431), in which the fixpoint was reached after 20 to 50 steps. In one case, the fixpoint cycle had length 2. – Roland Illig Oct 10 '18 at 06:33

By definition, a lossy operation discards data by simplifying the representation in a way that (ideally) isn't noticeable to the end user. However, the encoder has no magic method for determining which pixels are important and which aren't, so it encodes all pixels equally, even if they are artifacts!

In other words, the encoder will treat the lossily-compressed image the same as a lossless image. The lossy image will be further simplified, discarding additional data in the process, because for all the encoder knows, the user intends to represent the artifacts.

Here are some examples of JPEG generation loss:

http://vimeo.com/3750507

http://en.wikipedia.org/wiki/File:JPEG_Generarion_Loss_rotating_90_(stitch_of_0,100,200,500,900,2000_times).png

StockB
  • It's not that simple. JPEG doesn't always throw away information, it quantizes it. If the input data happens to match quantized data perfectly, it won't change at all. It's somewhat similar to reducing the number of colors in the image - if the input image has few colors already then it won't change (except that JPEG works on DCT frequencies rather than colors). – Kornel Jul 27 '14 at 21:04
  • That would explain why Wikipedia's example rotated the image by 90º. You should put that into an answer, or edit mine. – StockB Jul 29 '14 at 16:24