
I'm trying to create a share button with SwiftUI that, when pressed, can share a generated image. I've found some tutorials that can screenshot the currently displayed view and convert it to a UIImage. But I want to create a view programmatically off screen, and then save that to a UIImage that users can share with a share sheet.

import SwiftUI
import SwiftyJSON
import MapKit


struct ShareRentalView : View {
    @State private var region = MKCoordinateRegion(center: CLLocationCoordinate2D(latitude: 32.786038, longitude: -117.237324) , span: MKCoordinateSpan(latitudeDelta: 0.025, longitudeDelta: 0.025))
    @State var coordinates: [JSON] = []
    @State var origin: CGPoint? = nil
    @State var size: CGSize? = nil
        
    var body: some View {
        GeometryReader{ geometry in
            VStack(spacing: 0) {
                ZStack{
                    HistoryMapView(region: region, pointsArray: $coordinates)
                        .frame(height: 300)
                }.frame(height: 300)
            }.onAppear {
                self.origin = geometry.frame(in: .global).origin
                self.size =  geometry.size
            }
        }
    }
    func returnScreenShot() -> UIImage{
        return takeScreenshot(origin: self.origin.unsafelyUnwrapped, size: self.size.unsafelyUnwrapped)
    }
}


extension UIView {
    var renderedImage: UIImage {
        // rect of capture
        let rect = self.bounds
        // create the context of bitmap
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let context: CGContext = UIGraphicsGetCurrentContext()!
        self.layer.render(in: context)
        // get a image from current context bitmap
        let capturedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return capturedImage
    }
}

extension View {
    func takeScreenshot(origin: CGPoint, size: CGSize) -> UIImage {

        let window = UIWindow(frame: CGRect(origin: origin, size: size))
        let hosting = UIHostingController(rootView: self)
        hosting.view.frame = window.frame
        window.addSubview(hosting.view)
        window.makeKeyAndVisible()
        return hosting.view.renderedImage
    }
}

This is kind of my code idea at the moment. I have a view I've built that, on appear, sets the CGPoint and CGSize of the screen capture, plus an attached method that can then take the screenshot of the view. The problem right now is that this view never renders, because I never add it to a parent view; I don't want this view to appear to the user. In the parent view I have

struct HistoryCell: View {
    ...
    private var shareRental : ShareRentalView? = nil
    private var uiimage: UIImage? = nil
    ...
    init(){
     ...
        self.shareRental = ShareRentalView()
    }

    var body: some View {
            ...
            Button(action: { self.uiimage = self.shareRental?.returnScreenShot() }) { ... }
            ...
     }
}

This doesn't work because the view I want to screenshot is never rendered. Is there a way to render it in memory or off screen and then create an image from it? Or do I need to think of another way of doing this?

GoBig06
  • Does this answer your question? https://stackoverflow.com/a/59333377/12299030 – Asperi Feb 15 '21 at 19:33
  • I've switched to use the code in the first answer and it's returning just a blank UIImage. It does return a UIImage and it does hit the render portion of the code, just no image is detected. – GoBig06 Feb 15 '21 at 20:04

1 Answer


This ended up working to take a screenshot of a view that was never presented on screen and save it as a UIImage:

extension UIView {
    func asImage() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        return UIGraphicsImageRenderer(size: self.layer.frame.size, format: format).image { _ in
            self.drawHierarchy(in: self.layer.bounds, afterScreenUpdates: true)
        }
    }
}

extension View {
    func asImage() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let size = controller.sizeThatFits(in: UIScreen.main.bounds.size)
        controller.view.bounds = CGRect(origin: .zero, size: size)
        let image = controller.view.asImage()
        return image
    }
}

And then in my parent view

    var shareRental: ShareRentalView?

    init() {
        ....
        self.shareRental = ShareRentalView()
    }

    var body: some View {
        Button(action: {
            let shareImage = self.shareRental?.asImage()
            ...
        }) {
            ...
        }
    }
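With a UIImage in hand, the last step from the original question is the share sheet itself, which the post never shows. Here is a minimal sketch; the `makeShareSheet` helper and the way the root view controller is located are assumptions, not part of the original code:

```swift
import UIKit

// Hypothetical helper: wrap the rendered image in a standard share sheet.
func makeShareSheet(for image: UIImage) -> UIActivityViewController {
    UIActivityViewController(activityItems: [image], applicationActivities: nil)
}

// Presenting from SwiftUI needs a UIKit bridge; one assumed approach is to
// present from the key window's root view controller.
func presentShareSheet(for image: UIImage) {
    let sheet = makeShareSheet(for: image)
    UIApplication.shared.windows.first?.rootViewController?
        .present(sheet, animated: true)
}
```

Inside the button action this would be something like `presentShareSheet(for: shareImage)`.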

This gets me almost there. MKMapSnapshotter has a delay while the map tiles load, but the image creation happens immediately, so there is no map in the UIImage when it is created.

To get around the delay in the map loading, I created a class that builds all the UIImages and stores them in an array.

class MyUser: ObservableObject {
   ...
    public func buildHistoryRental() {
        self.historyRentals.removeAll()
        MapSnapshot().generateSnapshot(completion: self.snapShotRsp)
    }

    private func snapShotRsp(image: UIImage) {
        self.historyRentals.append(image)
    }
}
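Stripped of the map specifics, the class above is just an accumulator fed by completion callbacks. A self-contained sketch of that pattern (the names, and the use of String in place of UIImage, are illustrative assumptions):

```swift
import Foundation

// Stand-in results; in the app these would be UIImages from MKMapSnapshotter.
final class SnapshotCollector {
    private let queue = DispatchQueue(label: "snapshot.collector")
    private var storage: [String] = []

    // Called from a snapshotter's completion handler, possibly on a
    // background thread, so access is funneled through a serial queue.
    func append(_ result: String) {
        queue.sync { storage.append(result) }
    }

    var results: [String] {
        queue.sync { storage }
    }
}
```

Each snapshot completion simply calls `append`, and the collected array grows as results arrive.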

And then I made a class to create snapshot images like this:

func generateSnapshot(completion: @escaping (UIImage) -> Void) {
    let mapSnapshotOptions = MKMapSnapshotter.Options()

    // Set the region of the map that is rendered (derived from the polyline).
    let polyLine = MKPolyline(coordinates: &yourCoordinates, count: yourCoordinates.count)
    mapSnapshotOptions.region = MKCoordinateRegion(polyLine.boundingMapRect)

    // Set the scale of the image. We'll just use the scale of the current device, which is 2x scale on Retina screens.
    mapSnapshotOptions.scale = UIScreen.main.scale

    // Set the size of the image output.
    mapSnapshotOptions.size = CGSize(width: IMAGE_VIEW_WIDTH, height: IMAGE_VIEW_HEIGHT)

    // Show buildings and points of interest on the snapshot.
    mapSnapshotOptions.showsBuildings = true
    mapSnapshotOptions.showsPointsOfInterest = true

    let snapshotter = MKMapSnapshotter(options: mapSnapshotOptions)

    snapshotter.start { snapshot, error in
        if let error = error {
            print("\(error)")
            return
        }
        completion(self.drawLineOnImage(snapshot: snapshot!))
    }
}

func drawLineOnImage(snapshot: MKMapSnapshotter.Snapshot) -> UIImage {
    let image = snapshot.image

    // For Retina screens.
    UIGraphicsBeginImageContextWithOptions(image.size, true, 0)

    // Draw the original snapshot into the context.
    image.draw(at: CGPoint.zero)

    // Get the context for CoreGraphics.
    let context = UIGraphicsGetCurrentContext()!

    // Set the stroke width and color of the context.
    context.setLineWidth(2.0)
    context.setStrokeColor(UIColor.orange.cgColor)

    // Here is the trick:
    // We use move() and addLine() to draw the line, which should be easy to understand.
    // The difficult part is that they both take CGPoint parameters, and it would be
    // way too complex for us to calculate those by ourselves.
    // Thus we use snapshot.point(for:) to save the pain.
    context.move(to: snapshot.point(for: yourCoordinates[0]))
    for coordinate in yourCoordinates.dropFirst() {
        context.addLine(to: snapshot.point(for: coordinate))
    }

    // Apply the stroke to the context.
    context.strokePath()

    // Get the image from the graphics context.
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()

    // End the graphics context.
    UIGraphicsEndImageContext()

    return resultImage!
}

It's important to return the image asynchronously through the callback. Trying to return the image directly from the func call yielded a blank map.
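That point can be demonstrated with a tiny stand-in for the snapshotter; `generateValue` below is a hypothetical replacement for `MKMapSnapshotter.start`, delivering its result on a background queue after a delay:

```swift
import Foundation

// Hypothetical stand-in for the snapshotter: delivers its result on a
// background queue after a short delay, like MKMapSnapshotter.start does.
func generateValue(completion: @escaping (String) -> Void) {
    DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
        completion("rendered map")
    }
}

// Wrong: the function returns before the callback fires, so the caller
// gets the placeholder (the "blank map" symptom described above).
func fetchSynchronously() -> String {
    var result = "blank"
    generateValue { result = $0 }
    return result
}

// Right: pass the result on to the caller from inside the callback.
func fetchAsynchronously(deliver: @escaping (String) -> Void) {
    generateValue { deliver($0) }
}
```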

GoBig06