I have a question about reducing the overall pixel dimensions of a RAW image without going into linear (demosaiced) space. The reason: I want to edit a very large image (60+ megapixels), but I don't need the full resolution while editing on something like an iPad or iPhone screen. Once the edit is done, I do want to save out the full original. Saving speed isn't a concern; what matters is the interactive editing on the smaller "working" image where I preview the edits.
I want to preserve the RAW data because I want to leverage the new Core Image RAW abilities and write some of my own RAW CIFilters, but I don't need to be working on a gigantic RAW image the whole time.
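For context, here's roughly the Core Image piece I mean, as a minimal sketch (assuming the `CIRAWFilter` API from iOS 15 / macOS 12; the file path is just a placeholder). As far as I can tell, `scaleFactor` only shrinks the demosaiced output for rendering, not the underlying RAW data, which is exactly what I'm trying to get around:

```swift
import Foundation
import CoreImage

// Minimal sketch, assuming the CIRAWFilter API (iOS 15 / macOS 12).
// The file path is hypothetical.
let url = URL(fileURLWithPath: "/path/to/photo.dng")
if let rawFilter = CIRAWFilter(imageURL: url) {
    // Renders the demosaiced output at quarter size, but the RAW buffer
    // the filter operates on is still the full 60+ MP mosaic.
    rawFilter.scaleFactor = 0.25
    let preview: CIImage? = rawFilter.outputImage
    _ = preview // hand this to the preview view / editing pipeline
}
```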
It would be a plus if this can be done in Swift, or any language I can bridge to. The actual resizing doesn't have to be fast; it would probably be a one-time operation before editing even starts.
From reading this post, I believe there might be two approaches:
- De-Bayer the RAW image into linear space, resize it, then convert back to Bayer-format RAW, though I don't know whether I can preserve the data through downsampling that way.
- Somehow scale the mosaic's dimensions down directly by some factor; this is the part I need help understanding (see the sketch below for what I imagine).
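To make the second approach concrete, here's the kind of thing I imagine: decimating the mosaic in whole 2x2 CFA cells so the RGGB pattern survives. This is a rough sketch under big assumptions (the samples are already unpacked into a flat `UInt16` buffer, and `decimateBayer` is my own hypothetical helper; a real file would need unpacking and repacking via something like LibRaw or a DNG reader):

```swift
import Foundation

/// Hypothetical sketch: shrink a Bayer mosaic by an integer factor while
/// keeping the 2x2 CFA pattern (e.g. RGGB) intact, by copying every
/// `factor`-th 2x2 cell. Assumes `input` is a flat, already-unpacked
/// UInt16 buffer of width * height samples.
func decimateBayer(_ input: [UInt16],
                   width: Int,
                   height: Int,
                   factor: Int) -> (pixels: [UInt16], width: Int, height: Int) {
    // Work in units of 2x2 CFA cells so the color pattern survives.
    let cellsX = (width / 2) / factor
    let cellsY = (height / 2) / factor
    let outWidth = cellsX * 2
    let outHeight = cellsY * 2
    var output = [UInt16](repeating: 0, count: outWidth * outHeight)

    for cy in 0..<cellsY {
        for cx in 0..<cellsX {
            // Source cell origin: every `factor`-th 2x2 cell in each axis.
            let sx = cx * factor * 2
            let sy = cy * factor * 2
            // Destination cell origin in the smaller mosaic.
            let dx = cx * 2
            let dy = cy * 2
            for j in 0..<2 {
                for i in 0..<2 {
                    output[(dy + j) * outWidth + (dx + i)] =
                        input[(sy + j) * width + (sx + i)]
                }
            }
        }
    }
    return (output, outWidth, outHeight)
}
```

I realize plain decimation like this can alias badly; averaging the same-color sites within each factor-by-factor block of cells might behave better, but I don't know whether either result still counts as valid RAW data for Core Image's purposes.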
Thank you!