
I have an application where I am using an Image control to display a large image (~8000x8000 pixels). I have the image control bound to an ImageSource like this:

<Image x:Name="_ImageSource" Source="{Binding ViewModel.Image, Mode=OneWay}" RenderOptions.BitmapScalingMode="Linear"/>

I'm using the LEADTOOLS imaging library to create the ImageSource, which is a WriteableBitmap:

this.Image = RasterImageConverter.ConvertToSource(displayImage, ConvertToSourceOptions.None) as BitmapSource;

Everything works, but there is a significant delay between the time the Image property is set on the view model and the time the Image control actually updates the display (on the order of 10 seconds for a large image). I've done some profiling/logging and I know it is not the call to RasterImageConverter.ConvertToSource(); instead, it seems to be something that the Image control itself is doing.

So far I have been unable to find out much about what could be causing this delay. At the very least I'd like to be able to get notified when the control actually does update so I can display some kind of busy notification, but there doesn't seem to be any event that fires at the right time.
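
The closest workaround I've come up with so far is a sketch like the following (untested; `IsBusy` is a hypothetical property on my view model that drives a busy indicator, and `newSource` stands in for the converted bitmap): queue a Dispatcher callback at a priority below Render right after setting the property, so it runs once the pending data-binding and render passes have been processed.

```csharp
using System;
using System.Windows;
using System.Windows.Threading;

// ...
IsBusy = true;                 // hypothetical flag bound to a busy indicator
this.Image = newSource;        // the BitmapSource the Image control binds to

// ContextIdle is lower priority than Render, so this callback runs only
// after the pending binding/layout/render work has been dispatched. It is
// an approximation: the final rasterization still happens on the
// composition thread.
Application.Current.Dispatcher.BeginInvoke(
    DispatcherPriority.ContextIdle,
    new Action(() => IsBusy = false));
```

Even this doesn't seem to line up reliably with the moment the pixels actually appear.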

Any ideas or assistance is appreciated.

  • This is more of a last-ditch suggestion to work around the problem, so if a more direct solution exists it's certainly going to be better (at least from a productivity perspective). But a 64-million-pixel image with 8000-pixel-wide scanlines is going to be very bottlenecky for any image processing function which requires anything beyond straight sequential access of the image's pixels. You'll likely see a considerable boost if you, say, broke it down into multiple tiled images (e.g. 10x10 image tiles that are 800x800 pixels each). It's something to try if you don't find a more direct answer that can tweak some parameter passed to the API, or something like that, and get back better spatial locality in the functions processing the image. – Ike Nov 19 '15 at 04:27
  • Another thing to try -- `NearestNeighbor` interpolation for `BitmapScalingMode` instead of `Linear`. That's a really pessimistic suggestion since, unless you are actually scaling the image (even slightly), it shouldn't incur any cost no matter what mode you choose. But in the remote chance that there is some sub-pixel sampling going on for whatever reason, nearest neighbor would avoid those accesses of vertical pixel neighbors (which are really expensive for an image that is 8k pixels wide). – Ike Nov 19 '15 at 08:28
  • `NearestNeighbor` doesn't seem to help. I'm thinking it is maybe just way too much data, and the image needs to be scaled down before setting the binding. Thanks for the suggestions though. – WildCrustacean Nov 19 '15 at 15:05
  • In that case, if you don't get any better answers and feel a bit desperate, "way too much data" doesn't have to be the problem so much as the way such data tends to lose spatial locality, e.g., by having such a wide scanline and contiguous page size for the whole image. It can actually be considerably faster to process 80x80 pixels 10,000 times (depending on the image processing memory access patterns) than to process 8000x8000 pixels one time, as counter-intuitive as that seems. Breaking the image up into smaller images and stitching them together should provide quite a benefit. – Ike Nov 19 '15 at 15:10
  • It might be worth trying a quick test like just breaking your image up into two 4000x8000 pieces (not 8000x4000; 4000x8000 -- it's important) and seeing if that helps even remotely. If it helps at all, you can potentially keep improving the times by using smaller and smaller images stitched together (up to a point) to form a total 8000x8000 Frankenstein image; a rough sketch of this tiling idea follows these comments. The hardware can easily process hundreds of millions of pixels a second (it often has to, just to drive our displays), but the memory access pattern needs to exploit locality of reference. You can exploit that forcefully, if the library doesn't do so, by using smaller image tiles stitched together, and here, assuming the library accesses pixels in a vertical fashion somewhere, a smaller width is more important than a smaller height. I'm approaching this purely from an image processing background with little knowledge of the inner workings of these APIs -- but 10 seconds is excessive for 64M pixels, and is most likely the result of memory access bottlenecks. – Ike Nov 19 '15 at 15:11
  • It's a little tricky if you're scaling down multiple image pieces, but you really only need to tile in one (horizontal) direction and make sure the pieces line up properly. – Ike Nov 19 '15 at 15:24
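
A minimal sketch of the tiling idea from these comments, assuming the split is done on the already-converted `BitmapSource` (the `SplitIntoTiles` helper and the tile size are illustrative, not part of any LEADTOOLS API). Note that `CroppedBitmap` shares the source's pixel buffer, so a real test might instead decode or copy each tile into its own bitmap to actually shorten the scanlines:

```csharp
using System;
using System.Collections.Generic;
using System.Windows;
using System.Windows.Media.Imaging;

// Hypothetical helper: split one big BitmapSource into tileSize x tileSize
// pieces, e.g. for display in a WrapPanel or UniformGrid.
static IEnumerable<BitmapSource> SplitIntoTiles(BitmapSource source, int tileSize)
{
    for (int y = 0; y < source.PixelHeight; y += tileSize)
    {
        for (int x = 0; x < source.PixelWidth; x += tileSize)
        {
            int w = Math.Min(tileSize, source.PixelWidth - x);
            int h = Math.Min(tileSize, source.PixelHeight - y);
            var tile = new CroppedBitmap(source, new Int32Rect(x, y, w, h));
            if (tile.CanFreeze)
                tile.Freeze();   // frozen tiles are cheaper to render
            yield return tile;
        }
    }
}
```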

1 Answer


Displaying an 8000x8000-pixel image should take between one and two seconds on most computers, but there's a catch: such an image uses up to 0.25 GB of memory (8000 x 8000 pixels at 4 bytes per pixel = 256 MB), and you'll usually need double that amount of free RAM, because the conversion between a LEADTOOLS RasterImage and a WPF BitmapSource keeps both copies in memory at once.

This means that unless you have plenty of free contiguous memory, you could consider splitting the image into tiles, or resizing it if you want to display the whole image at once, since no monitor can show it at full resolution without zooming anyway. Part of this is similar to Ike's suggestions in the comments.
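
As a rough illustration of the resizing approach (a sketch only, not taken from the linked project; the 1920-pixel target width and the `Image` property name are assumptions based on the question), the converted bitmap can be wrapped in a `TransformedBitmap` before it is handed to the binding:

```csharp
// Sketch: downscale to roughly screen width before binding, so WPF never
// has to render the full 256 MB frame. Namespaces: System.Windows.Media,
// System.Windows.Media.Imaging, plus the LEADTOOLS namespace that
// provides RasterImageConverter.
var full = RasterImageConverter.ConvertToSource(
    displayImage, ConvertToSourceOptions.None) as BitmapSource;

double scale = 1920.0 / full.PixelWidth;            // assumed target width
var scaled = new TransformedBitmap(full, new ScaleTransform(scale, scale));
if (scaled.CanFreeze)
    scaled.Freeze();   // frozen bitmaps avoid per-frame change tracking

this.Image = scaled;   // bind the small bitmap; keep the original for zooming
```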

A small test project that displays a similar image, along with some details about the test, can be found here: http://support.leadtools.com/SupportPortal/CS/forums/45002/ShowPost.aspx

LEADTOOLS Support