
I have a question about reducing the overall size of a RAW image without going into linear space. The reason is that I want to edit a very large image (60+ megapixels), but I don't need the full resolution while editing on something like an iPad or iPhone screen. Once the edit is done, I do want to save it out at the original resolution. The speed of saving isn't a concern; what matters is the editing done on the "working" image that I'm previewing the edits on.

I want to preserve the RAW data because I'd like to leverage the new CoreImage RAW abilities and write some of my own RAW CIFilters, but I don't need to work on a gigantic RAW image the whole time.

It would be a plus if this can be done in Swift, or in any language I can bridge to. The actual resizing doesn't have to be fast; it would probably be a one-time operation before editing even starts.

From reading this post, I believe there might be two approaches:

  1. De-Bayer the RAW image into a linear space, resize it, then convert it back to Bayer-format RAW. But I don't know whether the data can be preserved through that kind of downsampling.
  2. Somehow manipulate the dimensions by some factor directly on the mosaiced data to get it smaller. This is the part I need help understanding.

Thank you!


1 Answer

I'm not intimately familiar with CoreImage or image processing in Swift/iOS in general, but let me try to give you at least a starting point.

A raw image (mosaiced image) is essentially a one-channel image, where different pixels correspond to different colors. A common layout may look like:

R G R G 
G B G B
R G R G
G B G B

Note that in this arrangement (which is common to most mosaiced files, with notable exceptions), each 2x2 pixel group repeats across the image. For the purposes of resizing, these 2x2 regions can be treated as superpixels.
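For concreteness, here's a minimal Swift sketch (the names are mine, purely illustrative) that maps a pixel coordinate to its color under the RGGB layout above:

    // Hypothetical helper, assuming the RGGB layout shown above.
    // The pattern repeats every 2x2 superpixel, so a pixel's channel is
    // determined entirely by the parity of its coordinates.
    enum BayerChannel { case red, green, blue }

    func bayerChannel(x: Int, y: Int) -> BayerChannel {
        switch (y % 2, x % 2) {
        case (0, 0): return .red    // top-left corner of the superpixel
        case (1, 1): return .blue   // bottom-right corner
        default:     return .green  // the two remaining positions
        }
    }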

I'm assuming you have access to the pixel data of your image, either from a file or from memory.

The simplest way to efficiently generate a lower-resolution RAW image is to downsample by an integer factor: simply take every nth superpixel along rows and columns to form the new RAW image.
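As a rough illustration, assuming a 16-bit, row-major buffer whose dimensions are even and divisible by the factor (all names here are made up for the example), the decimation could look like:

    // Keep every nth 2x2 superpixel along rows and columns.
    func decimateBayer(_ src: [UInt16], width: Int, height: Int,
                       factor n: Int) -> (pixels: [UInt16], width: Int, height: Int) {
        let outSuperW = (width / 2) / n
        let outSuperH = (height / 2) / n
        let outW = outSuperW * 2, outH = outSuperH * 2
        var dst = [UInt16](repeating: 0, count: outW * outH)

        for sy in 0..<outSuperH {
            for sx in 0..<outSuperW {
                // Top-left pixel of the source superpixel being kept.
                let srcX = sx * n * 2, srcY = sy * n * 2
                let dstX = sx * 2, dstY = sy * 2
                // Copy the whole 2x2 group so the Bayer pattern survives.
                for dy in 0..<2 {
                    for dx in 0..<2 {
                        dst[(dstY + dy) * outW + (dstX + dx)] =
                            src[(srcY + dy) * width + (srcX + dx)]
                    }
                }
            }
        }
        return (dst, outW, outH)
    }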

However, this operation can introduce image artifacts such as aliasing, since you may be lowering the Nyquist frequency of the new RAW image below the highest-frequency content present in the original.

To avoid this, you would want to apply an anti-aliasing (low-pass) filter before downsampling. However, because you have not yet demosaiced the image and the different color channels (R, G, B) are not necessarily correlated, you would need to apply this filtering per color channel.

This is easily accomplished for the R and B channels, which form rectangular grids, but the G channel is significantly more difficult. Perhaps the easiest way to overcome this difficulty would be to treat the two G samples in each superpixel as two separate rectangular grids and filter each one independently.
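To sketch the indexing: same-color samples always sit an even number of pixels apart, so a filter that only visits stride-2 neighbors never mixes channels, and for G it automatically stays on one of the two rectangular G grids. The kernel below is just a crude box average to show the idea, not a properly designed low-pass filter:

    // Crude per-channel smoothing over the mosaic, applied before
    // decimation. A real anti-aliasing filter would use a low-pass
    // kernel sized to the downsampling factor.
    func lowPassSameColor(_ src: [UInt16], width: Int, height: Int) -> [UInt16] {
        var dst = src
        for y in 0..<height {
            for x in 0..<width {
                var sum = 0, count = 0
                // Stepping by 2 in x and y keeps us on same-color sites.
                for dy in stride(from: -2, through: 2, by: 2) {
                    for dx in stride(from: -2, through: 2, by: 2) {
                        let nx = x + dx, ny = y + dy
                        guard nx >= 0, nx < width, ny >= 0, ny < height else { continue }
                        sum += Int(src[ny * width + nx])
                        count += 1
                    }
                }
                dst[y * width + x] = UInt16(sum / count)
            }
        }
        return dst
    }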

Now, I'm assuming most of this functionality would need to be implemented from scratch, as anti-aliased downsampling of RAW images is not a common library function. You may find that you save significant time by simply demosaicing the original RAW, using the provided resampling functions to generate a low-resolution preview, and allowing image adjustment on that demosaiced preview. Then, when you are ready to save out a full-resolution edit, go back and apply the previewed changes to the full-resolution, mosaiced RAW image.
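If you go that route, Core Image may already do the heavy lifting: as far as I can tell from the iOS 10-era RAW API (I haven't verified this myself), passing kCIInputScaleFactorKey when creating the RAW filter asks the pipeline to demosaic at a reduced size. An unverified sketch:

    import CoreImage

    // Placeholder path; option and initializer names are per the
    // 2017-era SDK, so double-check against the current headers.
    let rawURL = URL(fileURLWithPath: "/path/to/photo.dng")

    // Low-resolution working image for interactive editing; chain your
    // own CIFilters onto this output.
    let previewFilter = CIFilter(imageURL: rawURL,
                                 options: [kCIInputScaleFactorKey: 0.25])
    let workingImage: CIImage? = previewFilter.outputImage

    // At save time, rebuild at full scale and re-apply the recorded
    // adjustments to produce the full-resolution result.
    let fullFilter = CIFilter(imageURL: rawURL,
                              options: [kCIInputScaleFactorKey: 1.0])
    let fullImage: CIImage? = fullFilter.outputImage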

Hope this can provide some starting points.

Glenn
  • Hey Glenn, thanks for the response! This all makes sense, but I'm unsure how to get access to a "superpixel." Most documentation talks about images in linear space, using CoreGraphics, but the data you get isn't really a RAW image's superpixel. Also, what is the nth superpixel? How do I know what n should be? – Art C Mar 05 '17 at 01:24
  • Can you get access to a buffer of pixel data? If so, superpixels will be 2x2 groups of pixels in that data buffer. What is the name of the CoreGraphics object that you are interfacing to? – Glenn Mar 06 '17 at 18:53
  • The ultimate goal is to use CIImage to initialize a RAW image (https://developer.apple.com/reference/coreimage/cifilter/raw_image_options), but when editing a large RAW image, the process is pretty slow. Downsampling for editing would be ideal, and then when the photo is saved out, it would actually use the full-sized image. That being said, I can use anything as long as I can preserve the RAW data. CoreGraphics does get pixel data, but I'm unsure how to work with it. Would this be a way to get the data: http://stackoverflow.com/a/10412292/236480. Though even with that data, I'm not sure how to proceed. – Art C Mar 07 '17 at 19:36
  • The biggest problems now are: 1. how do I get the non-demosaiced raw data on iOS, 2. how do I anti-alias the downsampled image, and 3. how do I get all the metadata of the RAW image back in so CIFilter understands it? – Art C May 12 '17 at 20:24