using System.Drawing;
using System.IO;

Bitmap clip = new Bitmap((int)(8.5 * 72), (int)(11 * 72)); // blank 8.5" x 11" bitmap at 72 DPI
MemoryStream stream = new MemoryStream();
clip.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
byte[] bytes = stream.ToArray();

I ran it on my machine and bytes.Length was 8587; on my fellow developers' machines it was 2009. Supposedly in .NET there's no way to influence the quality (or rather, in this case, the compression ratio) of the PNG encoding. This particular image is a blank one, and I have other tests which work with images that have content, and they confirm that the compression is lossless (I encountered some debates where that was questioned).
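For reference, a minimal sketch of such a round-trip check, in the spirit of those tests rather than the exact code (it assumes using System.Drawing, System.IO and NUnit.Framework are in scope, and uses a slow but simple GetPixel comparison):

// Sketch: decode the saved PNG again and compare every pixel with the source.
// GetPixel is slow, but fine for small unit-test images.
using (Bitmap original = new Bitmap(100, 100))
using (MemoryStream ms = new MemoryStream())
{
    original.SetPixel(10, 10, Color.Red);   // give the image some content
    original.Save(ms, System.Drawing.Imaging.ImageFormat.Png);
    ms.Position = 0;
    using (Bitmap decoded = new Bitmap(ms))
    {
        for (int y = 0; y < original.Height; y++)
            for (int x = 0; x < original.Width; x++)
                Assert.AreEqual(original.GetPixel(x, y), decoded.GetPixel(x, y));
    }
}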

But even if the compression is lossless, there is a tradeoff between the compression algorithm's runtime and CPU utilization versus the compression ratio. I wonder how System.Drawing.Imaging determines this, because the case above clearly shows that there can be differences. How can I be sure that on the client's machine it won't choose "100% quality" (which would yield a 1,457,337-byte file)?

Additional info:

  • Checked out another developer's machine and it's consistent with my other colleagues' results, so my machine is the outlier.
  • Each machine has Windows 7 Professional 64-bit installed; the tests in question are NUnit tests and we are using .NET 4.
  • Could any of my installed software override the behavior of .NET in this respect? For example, I have IrfanView installed; can it replace any system-wide "filters" or DLLs? (BTW I checked in the Modules debug view and I don't see any unusual DLL loaded; a sketch for dumping the loaded modules outside the debugger follows this list.)
  • Can it be influenced by some Windows OS desktop quality setting or something?
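For completeness, a minimal sketch of how the modules loaded into the test process could be dumped without the debugger, so the machines can be diffed (this is just an idea, not something we run today):

using System;
using System.Diagnostics;

// Sketch: list every native module loaded into the current (NUnit) process,
// with its file version, so the output can be compared across machines.
foreach (ProcessModule module in Process.GetCurrentProcess().Modules)
{
    Console.WriteLine("{0}  {1}", module.FileName, module.FileVersionInfo.FileVersion);
}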
Csaba Toth
  • You said you saved a PNG but your code is using a BMP... – Trevor Elliott Oct 10 '13 at 19:39
  • @TrevorElliott I copied the wrong test code, corrected it. – Csaba Toth Oct 10 '13 at 19:53
  • 1
    What's this about a tradeoff between anything and quality if you switch to lossless compression? The whole point of lossless compression is that the pre-compression and post-decompression images are bit-for-bit identical. There is no quality lost, unless the algorithm is broken or you're intentionally discarding it. – cHao Oct 10 '13 at 21:50
  • @cHao Well, usually quality and compression ratio are the same setting (they work against each other). People often refer to quality; in our case we should rather speak about compression ratio. Even in the case of lossless compression, it's up to the codec's encoder how hard it tries to find the best parameters throughout the process. I'll correct the question. – Csaba Toth Oct 10 '13 at 21:59

2 Answers


I've been chasing exactly this issue and get exactly the same results as you on two of my machines. I believe I have tracked it down to different versions of System.IO.Compression.DeflateStream on the two machines: PNG uses deflate as its compression method, and this class seems to be what gets used for it.

When I run the following:

using System.IO;
using System.IO.Compression;

// Compress one million zero bytes through DeflateStream with default settings
byte[] blank = new byte[1000000];
MemoryStream uncstream = new MemoryStream(blank);
MemoryStream compstream = new MemoryStream();
DeflateStream defstream = new DeflateStream(compstream, CompressionMode.Compress);
uncstream.CopyTo(defstream);
defstream.Close();          // flushes and finalizes the deflate stream
byte[] bytes = compstream.ToArray();
System.Console.WriteLine(bytes.Length);

I get 985 bytes on one machine, 8806 bytes on the other.

If I change the constructor to:

DeflateStream defstream = new DeflateStream(compstream, CompressionLevel.Optimal);

I get the same result on the first machine, and a "not implemented" exception on the second, indicating that it is using an earlier version of the compression library. When I search for System.IO.Compression.dll on the second machine, I can't find it at all, even though .NET 4 is supposedly installed. I'm guessing that it's hidden somewhere in .NET 2.0. I know that MS claim to have improved DeflateStream between versions 2 and 4 of .NET; see here for a discussion:

http://www.virtualdub.org/blog/pivot/entry.php?id=335

I have also seen it said that the separate compression DLL started life in .NET 4.5, although I don't know if this is correct. My next step is to get .NET 4.5 installed on the second machine to see whether it makes a difference, but that will have to wait until I'm back in the office in January.
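In the meantime, here is a small probe I've sketched. It assumes that, when present, the 4.5-era CompressionLevel enum lives in the same assembly as DeflateStream; that's an assumption rather than a documented contract, so treat it only as a rough check:

using System;
using System.IO.Compression;

// Sketch: look the CompressionLevel type up by name so this compiles on .NET 4,
// and report whether the newer (4.5-era) deflate API appears to be available.
Type compressionLevel = typeof(DeflateStream).Assembly
    .GetType("System.IO.Compression.CompressionLevel");
Console.WriteLine(compressionLevel != null
    ? "Newer DeflateStream overloads available"
    : "Older DeflateStream only");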

Andrew
  • Kudos to you for tracking this down so deeply. I didn't have time for that. I'll try to double-check your findings, but I believe your answer is the right one. From my perspective this finding is good news, because it means that the default compression, even if it uses the "older" method, won't balloon to a surprisingly large size (I believe). – Csaba Toth Jan 01 '14 at 04:24
  • Also note that .NET 4.5 is the version which started to provide the ZipArchive class, so there's a good chance there was development under the hood in the compression area, which could affect this. I don't have time to verify this right now, but I'll mark your answer correct. – Csaba Toth Jan 13 '14 at 19:42
  • .Net 4.5 (finally) started using zLib for compression / deflate, resulting in much better compression ratios. – Pygmy Jul 08 '14 at 10:40

You may have different updates of the .NET Framework installed.

Check it according to this: http://msdn.microsoft.com/en-us/library/hh925567.aspx

You may also have different "distributions" of the .NET Framework 4 installed:

  • Client profile
  • Extended
  • Multi-targeting pack

You can check it in the same way.
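For example, a minimal sketch based on the registry layout described on that MSDN page (the 378389 threshold for 4.5 comes from there):

using Microsoft.Win32;

// Sketch: read the "Release" value documented on the MSDN page above.
// It only exists once .NET 4.5 or later is installed; 378389 corresponds to 4.5.
using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
    @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"))
{
    object release = key != null ? key.GetValue("Release") : null;
    System.Console.WriteLine(release != null
        ? ".NET 4.5 or later (Release = " + release + ")"
        : "Only the original .NET 4 detected");
}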

Diego Mazzaro
  • I also have `Microsoft Windows SDK for Windows 7`, `Debugging Tools for Windows (x64)`, and a whole bunch of symbols installed, while I think my colleagues don't. Hopefully we'll take some time soon to check the versions. – Csaba Toth Oct 10 '13 at 22:05
  • I gathered some version info, but it's not consistent. Specifically, I debugged the software, set a breakpoint, and in the Modules window you can see the exact version (including build number) of all DLLs. System.Drawing is the same on all three machines. One of the other developers has almost the same builds as me; the other differs more. I need to investigate more, and even if the version is the indicator, the question is why, or how? (A way to log this from the tests themselves is sketched below.) – Csaba Toth Oct 12 '13 at 22:40
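A minimal sketch for logging the versions from the NUnit run itself, so the comparison doesn't depend on attaching a debugger (the choice of assemblies is only illustrative: System.Drawing for the PNG encoder entry point, System for DeflateStream):

using System;
using System.Diagnostics;

// Sketch: print the on-disk file version of the assemblies most likely involved.
foreach (Type type in new[] { typeof(System.Drawing.Bitmap),
                              typeof(System.IO.Compression.DeflateStream) })
{
    string path = type.Assembly.Location;
    Console.WriteLine("{0}: {1}", path, FileVersionInfo.GetVersionInfo(path).FileVersion);
}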