
I am trying to figure out how to load an image into my iOS app, position a quadrangle over it, and then extract and undistort the area it bounds. The use case is extracting texture-containing parts of the image to then use in a 3D application for 3D material generation. Since perspective skew or other factors often mean the desired object is not a perfect rectangle, I need to find out how to compensate for its non-rectangular shape and undistort the image snippet so that it has 90° angles again. I managed to figure out how to create a freeform quadrangle like this:

import SwiftUI

struct Texture_Crop: View {
    // Positions of the four draggable corner handles.
    @State private var topLeftDotPosition = CGPoint(x: 50, y: 50)
    @State private var topRightDotPosition = CGPoint(x: 200, y: 50)
    @State private var bottomLeftDotPosition = CGPoint(x: 50, y: 200)
    @State private var bottomRightDotPosition = CGPoint(x: 200, y: 200)

    var body: some View {
        ZStack {
            // Outline of the quadrangle spanned by the four handles.
            Path { path in
                path.move(to: topLeftDotPosition)
                path.addLine(to: topRightDotPosition)
                path.addLine(to: bottomRightDotPosition)
                path.addLine(to: bottomLeftDotPosition)
                path.closeSubpath()
            }
            .stroke(Color.black, lineWidth: 2)

            // Draggable corner handles, one per quadrangle corner.
            Circle()
                .fill(Color.red)
                .frame(width: 50, height: 50)
                .position(topLeftDotPosition)
                .gesture(DragGesture()
                    .onChanged { value in
                        topLeftDotPosition = value.location
                    }
                )

            Circle()
                .fill(Color.blue)
                .frame(width: 50, height: 50)
                .position(topRightDotPosition)
                .gesture(DragGesture()
                    .onChanged { value in
                        topRightDotPosition = value.location
                    }
                )

            Circle()
                .fill(Color.green)
                .frame(width: 50, height: 50)
                .position(bottomLeftDotPosition)
                .gesture(DragGesture()
                    .onChanged { value in
                        bottomLeftDotPosition = value.location
                    }
                )

            Circle()
                .fill(Color.yellow)
                .frame(width: 50, height: 50)
                .position(bottomRightDotPosition)
                .gesture(DragGesture()
                    .onChanged { value in
                        bottomRightDotPosition = value.location
                    }
                )
        }
    }
}
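One way this quadrangle editor might be layered over a loaded image is a plain ZStack with an Image behind it. This is only a minimal sketch: "texture_photo" is a placeholder asset name, and the mapping between the handles' view coordinates and the image's pixel coordinates would still need to be handled separately.

import SwiftUI

// Minimal sketch: the photo is drawn first, the draggable quadrangle on top.
struct Texture_Crop_Screen: View {
    var body: some View {
        ZStack {
            Image("texture_photo")   // placeholder asset name
                .resizable()
                .scaledToFit()
            Texture_Crop()           // the corner-handle overlay from above
        }
    }
}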

But I don't know what to do from here on. Any help would be greatly appreciated!

Herbert
  • You can use a canvas to do it: https://stackoverflow.com/questions/4097688/draw-distorted-image-on-html5s-canvas – Pete Apr 14 '23 at 18:10
  • Apple has a framework called CoreImage that handles image transformations in a hardware-accelerated way. It needs a bit of work to get into, though. For perspective correction, CoreImage has, for instance, this filter: https://developer.apple.com/documentation/coreimage/cifilter/3228380-perspectivecorrectionfilter (see the sketch after these comments) – Baglan Apr 16 '23 at 15:12
  • Apple also has ARKit and RealityKit, which have some functionality that you might find interesting. For instance, here's a question that might be in the right general direction: https://stackoverflow.com/questions/63793918/lidar-and-realitykit-capture-a-real-world-texture-for-a-scanned-model – Baglan Apr 16 '23 at 15:20
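As a rough, untested sketch of the CoreImage direction mentioned in the comments: once the four handle positions have been converted from SwiftUI view coordinates into image pixel coordinates (that conversion is not shown here), the built-in CIPerspectiveCorrection filter could be used to extract and undistort the selected region. The function name and parameters below are placeholders for illustration only.

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit

// Untested sketch: crops and perspective-corrects the region spanned by
// four corner points given in image pixel coordinates (top-left origin).
func undistortedCrop(from image: UIImage,
                     topLeft: CGPoint,
                     topRight: CGPoint,
                     bottomLeft: CGPoint,
                     bottomRight: CGPoint) -> UIImage? {
    guard let ciImage = CIImage(image: image) else { return nil }

    // Core Image uses a bottom-left origin, so flip the y coordinates.
    let height = ciImage.extent.height
    func flipped(_ p: CGPoint) -> CGPoint { CGPoint(x: p.x, y: height - p.y) }

    let filter = CIFilter.perspectiveCorrection()
    filter.inputImage = ciImage
    filter.topLeft = flipped(topLeft)
    filter.topRight = flipped(topRight)
    filter.bottomLeft = flipped(bottomLeft)
    filter.bottomRight = flipped(bottomRight)

    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}

The result would be a rectangular image whose content corresponds to the selected quadrangle, which could then be saved or handed to the 3D material pipeline.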

0 Answers