
I'm using the following code to write a sequence of 16-bit grayscale images (an empty array for the purposes of this question) to a multi-page TIFF:

using System.Diagnostics;
using BitMiracle.LibTiff.Classic;

int numberOfPages = 1000;
int width = 256;
int height = 256;
string fileName = "test.tif";

ushort[] image = new ushort[width * height];
byte[] buffer = new byte[width * height * sizeof(ushort)];

Stopwatch stopWatch = new Stopwatch();

using (Tiff output = Tiff.Open(fileName, "w"))
{
    if (output == null)
    {
        return;
    }
    stopWatch.Start();
    for (int i = 0; i < numberOfPages; i++)
    {
        Buffer.BlockCopy(image, 0, buffer, 0, buffer.Length);

        output.SetField(TiffTag.IMAGEWIDTH, width);
        output.SetField(TiffTag.IMAGELENGTH, height);
        output.SetField(TiffTag.SAMPLESPERPIXEL, 1);
        output.SetField(TiffTag.BITSPERSAMPLE, 16);
        output.SetField(TiffTag.ORIENTATION, Orientation.TOPLEFT);
        output.SetField(TiffTag.XRESOLUTION, 96);
        output.SetField(TiffTag.YRESOLUTION, 96);
        output.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);
        output.SetField(TiffTag.PHOTOMETRIC, Photometric.MINISBLACK);
        output.SetField(TiffTag.COMPRESSION, Compression.NONE);
        output.SetField(TiffTag.FILLORDER, FillOrder.MSB2LSB);
        output.SetField(TiffTag.SUBFILETYPE, FileType.PAGE);
        output.SetField(TiffTag.PAGENUMBER, i + 1, numberOfPages);

        output.WriteEncodedStrip(0, buffer, buffer.Length);

        output.WriteDirectory();
    }
    stopWatch.Stop();
}

Debug.WriteLine(stopWatch.ElapsedMilliseconds);

It works fine up to a few hundred pages, but the execution time does not seem to scale linearly as the number of pages increases. For example:

1000 pages --- 3130 ms

2000 pages --- 11778 ms

3000 pages --- 25830 ms

I also tried using append mode inside the loop but got similar results.

Am I doing this wrong or should I expect this kind of overhead?

Souad
getter1
  • You're very likely running up against IO/memory bottlenecks. Have you profiled memory and disk usage? – Dan Field Feb 10 '17 at 03:10
  • @DanField I'm not sure how to do a proper profiling but if I replace the Tiff output with a FileStream, the data rate averages at 250 MB/s and the write times appear to be linear. 1,000 frames or 10,000 frames does not make a big difference like TIFF. – getter1 Feb 10 '17 at 05:12

1 Answer


I profiled your code in Visual Studio (Analyze -> Performance Profiler) using the CPU Usage tool, and here are my findings:

For 5000 pages, about 91% of the time is spent writing TIFF directories: not the pixel data, but the structure that describes each directory. This looked suspicious, so I investigated what WriteDirectory was doing for so long.

WriteDirectory links the newly created directory to the previous one. To find the previous directory, it always walks the directory chain starting from the first one. The more directories the TIFF already contains, the longer it takes to append each new one, so the total time grows roughly quadratically with the number of pages.
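You can see this trend in your own run by timing WriteDirectory in batches. This is just a sketch, assuming the same loop and variables as the code in the question (the batch size of 500 is arbitrary):

    // Sketch: report elapsed time per batch of 500 pages. If directory
    // linking were constant-time, every batch would take about as long
    // as the first; instead, each batch takes longer than the last.
    Stopwatch sw = Stopwatch.StartNew();
    long previous = 0;
    for (int i = 0; i < numberOfPages; i++)
    {
        // ... same SetField calls and WriteEncodedStrip as in the question ...
        output.WriteDirectory();

        if ((i + 1) % 500 == 0)
        {
            long now = sw.ElapsedMilliseconds;
            Debug.WriteLine($"pages {i - 498}..{i + 1}: {now - previous} ms");
            previous = now;
        }
    }

Steadily growing batch times confirm that the per-page cost depends on how many directories the file already holds.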

There is no way to change this behaviour without modifying the library's code, I'm afraid.
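If your downstream tooling can cope with it, one practical workaround is to split the stack across several smaller files, so the directory chain in each file stays short. A sketch, reusing the per-page code from the question (file naming and chunk size are arbitrary choices here, not part of the original code):

    // Sketch of a workaround: write chunks of pages to separate TIFFs
    // so no single file accumulates a long directory chain.
    int pagesPerFile = 500;
    int fileCount = (numberOfPages + pagesPerFile - 1) / pagesPerFile;

    for (int f = 0; f < fileCount; f++)
    {
        using (Tiff output = Tiff.Open($"test_{f:D3}.tif", "w"))
        {
            if (output == null)
                continue;

            int pagesInThisFile = Math.Min(pagesPerFile, numberOfPages - f * pagesPerFile);
            for (int p = 0; p < pagesInThisFile; p++)
            {
                // ... same SetField / WriteEncodedStrip / WriteDirectory
                //     calls as in the question ...
            }
        }
    }

Total time then scales roughly linearly with the page count, at the cost of producing multiple files.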

Bobrovsky