I would like to detect and mark the brightest and the darkest spot in an image.
For example, I am creating an AVCaptureSession and showing the video frames on screen using an AVCaptureVideoPreviewLayer. On this camera output view I would like to mark the current darkest and brightest points.
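My current setup is roughly the following minimal sketch (the class name and configuration details are just illustrative, not my exact code):

```swift
import AVFoundation
import UIKit

final class CameraViewController: UIViewController {
    private let session = AVCaptureSession()
    private var previewLayer: AVCaptureVideoPreviewLayer!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Configure the capture session with the default video camera.
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // Show the live camera feed on screen.
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        session.startRunning()
    }
}
```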
Would I have to read the image's pixel data? If so, how can I do that?
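From what I've read so far, I'm guessing the approach is to add an AVCaptureVideoDataOutput to the same session and scan each pixel buffer in the sample buffer delegate. Below is a rough, untested sketch of what I have in mind; the `FrameAnalyzer` name, the queue label, and the luminance formula are just placeholders, and I haven't yet dealt with converting the buffer coordinates back to preview-layer coordinates for drawing the markers.

```swift
import AVFoundation

final class FrameAnalyzer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Attach a data output to the existing session so raw frames are
    // delivered alongside the preview layer.
    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frame.analysis"))
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let buffer = base.assumingMemoryBound(to: UInt8.self)

        var darkest = (x: 0, y: 0, luma: Int.max)
        var brightest = (x: 0, y: 0, luma: Int.min)

        // Walk every pixel; with 32BGRA the byte order is B, G, R, A.
        for y in 0..<height {
            for x in 0..<width {
                let i = y * bytesPerRow + x * 4
                let b = Int(buffer[i]), g = Int(buffer[i + 1]), r = Int(buffer[i + 2])
                // Approximate luminance using integer Rec. 601 weights.
                let luma = (299 * r + 587 * g + 114 * b) / 1000
                if luma < darkest.luma { darkest = (x, y, luma) }
                if luma > brightest.luma { brightest = (x, y, luma) }
            }
        }

        // darkest/brightest are in buffer coordinates; they would still need
        // to be mapped to the preview layer before drawing any markers.
        print("darkest at \(darkest), brightest at \(brightest)")
    }
}
```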