
I am working on a video editing app in Swift. Currently my output video looks like the screenshots below, with black areas around the video:

[screenshots of current output]

I am trying to fill the black portion with a blur effect, exactly like this:

[screenshot of desired result]

I have searched but couldn't find a working solution. Any assistance would be a great help.

pigeon_39

4 Answers


Swift 4 - Adding a blur background to video

Maybe I'm late with this answer, but I still couldn't find any existing solution for this requirement, so I'm sharing my work:

Download Sample Code Here

Features

  1. Single video support
  2. Multiple-video merging support
  3. Supports any canvas ratio
  4. Saves the final video to the camera roll
  5. Handles all video orientations

Steps to add a blur background to videos

  1. Merge all videos without audio
    a) A render area (canvas) size is needed.
    b) Calculate the scale and position of each video within this area, using aspect-fill behaviour.
  2. Add a blur effect to the merged video
  3. Place each video, one by one, at the center of the blurred video (a sketch chaining these three steps follows)
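
Roughly, the three steps chain together like this. This is only a sketch: it assumes `makeBlurBackedVideo` sits on the same class as the three methods shown below, that `videoAssets` and `canvasSize` are supplied by you, and that each step exports to its own distinct output URL (the snippets below all reuse `self.videoOutputURL`, so in practice you would vary that per step):

func makeBlurBackedVideo(from videoAssets: [AVURLAsset],
                         canvasSize: CGSize,
                         completion: @escaping (Error?, URL?) -> Void) {
    // 1. Merge all videos (without audio) onto the canvas.
    mergeVideos(videoAssets, inArea: canvasSize) { error, mergedURL in
        guard error == nil, let mergedURL = mergedURL else { return completion(error, nil) }

        // 2. Blur the merged video.
        self.addBlurEffect(toVideo: AVURLAsset(url: mergedURL)) { error, blurredURL in
            guard error == nil, let blurredURL = blurredURL else { return completion(error, nil) }

            // 3. Overlay the original videos (with audio) on the blurred background.
            self.addAllVideosAtCenterOfBlur(videos: videoAssets,
                                            blurVideo: AVURLAsset(url: blurredURL),
                                            completion: completion)
        }
    }
}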

Merge Videos

func mergeVideos(_ videos: Array<AVURLAsset>, inArea area:CGSize, completion: @escaping (_ error: Error?, _ url:URL?) -> Swift.Void) {

    // Create an AVMutableComposition object. This will hold our multiple AVMutableCompositionTracks.
    let mixComposition = AVMutableComposition()

    var instructionLayers : Array<AVMutableVideoCompositionLayerInstruction> = []
    
    for asset in videos {
        
        // Here we are creating the AVMutableCompositionTrack. See how we are adding a new track to our AVMutableComposition.
        let track = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
        
        // Set the length of the track equal to the length of the asset, and insert the asset at the composition's current duration so the clips play one after another.
        if let videoTrack = asset.tracks(withMediaType: AVMediaType.video).first {
            
            
            /// Hide time for this video's layer
            let opacityStartTime: CMTime = CMTimeMakeWithSeconds(0, asset.duration.timescale)
            let opacityEndTime: CMTime = CMTimeAdd(mixComposition.duration, asset.duration)
            let hideAfter: CMTime = CMTimeAdd(opacityStartTime, opacityEndTime)
            
            
            let timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
            try? track?.insertTimeRange(timeRange, of: videoTrack, at: mixComposition.duration)
            
            
            /// Layer instruction
            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track!)
            layerInstruction.setOpacity(0.0, at: hideAfter)

            /// Scale and position for aspect-fill within the given area
            let properties = scaleAndPositionInAspectFillMode(forTrack: videoTrack, inArea: area)
            
            
            /// Checking for orientation
            let videoOrientation: UIImageOrientation = self.getVideoOrientation(forTrack: videoTrack)
            let assetSize = self.assetSize(forTrack: videoTrack)

            if (videoOrientation == .down) {
                /// Rotate
                let defaultTransfrom = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi/2.0))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = assetSize.height
                var xtranslation: CGFloat = 0
                if properties.position.y == 0 {
                    xtranslation = -(assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = assetSize.height - (assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - concatenation
                let finalTransform = defaultTransfrom.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else if (videoOrientation == .left) {

                /// Rotate
                let defaultTransfrom = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)

                // Translate
                var ytranslation: CGFloat = assetSize.height
                var xtranslation: CGFloat = assetSize.width
                if properties.position.y == 0 {
                    xtranslation = assetSize.width - (assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = assetSize.height - (assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - concatenation
                let finalTransform = defaultTransfrom.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else if (videoOrientation == .right) {
                /// No need to rotate
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                let translationTransform = CGAffineTransform(translationX: properties.position.x, y: properties.position.y)
                
                let finalTransform  = scaleTransform.concatenating(translationTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else {
                /// Rotate
                let defaultTransfrom = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: CGFloat(Double.pi/2.0))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = 0
                var xtranslation: CGFloat = assetSize.width
                if properties.position.y == 0 {
                    xtranslation = assetSize.width - (assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = -(assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - concatenation
                let finalTransform = defaultTransfrom.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }

            instructionLayers.append(layerInstruction)
        }
    }
    
    
    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
    mainInstruction.layerInstructions = instructionLayers

    let mainCompositionInst = AVMutableVideoComposition()
    mainCompositionInst.instructions = [mainInstruction]
    mainCompositionInst.frameDuration = CMTimeMake(1, 30)
    mainCompositionInst.renderSize = area
    
    //let url = URL(fileURLWithPath: "/Users/enacteservices/Desktop/final_video.mov")
    let url = self.videoOutputURL
    try? FileManager.default.removeItem(at: url)
    
    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter?.outputURL = url
    exporter?.outputFileType = .mp4
    exporter?.videoComposition = mainCompositionInst
    exporter?.shouldOptimizeForNetworkUse = true
    exporter?.exportAsynchronously(completionHandler: {
        if let anError = exporter?.error {
            completion(anError, nil)
        }
        else if exporter?.status == AVAssetExportSessionStatus.completed {
            completion(nil, url)
        }
    })
}

Adding Blur Effect

func addBlurEffect(toVideo asset:AVURLAsset, completion: @escaping (_ error: Error?, _ url:URL?) -> Swift.Void) {
        
        let filter = CIFilter(name: "CIGaussianBlur")
        let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
            // Clamp to avoid blurring transparent pixels at the image edges
            let source: CIImage? = request.sourceImage.clampedToExtent()
            filter?.setValue(source, forKey: kCIInputImageKey)
            
            filter?.setValue(10.0, forKey: kCIInputRadiusKey)
            
            // Crop the blurred output to the bounds of the original image
            let output: CIImage? = filter?.outputImage?.cropped(to: request.sourceImage.extent)
            
            // Provide the filter output to the composition
            if let anOutput = output {
                request.finish(with: anOutput, context: nil)
            }
        })
        
        //let url = URL(fileURLWithPath: "/Users/enacteservices/Desktop/final_video.mov")
        let url = self.videoOutputURL
        // Remove any previous video at that path
        try? FileManager.default.removeItem(at: url)
        
        let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
        
        // Assign all instructions for the video processing (in this case, the blur filter composition)
        exporter?.videoComposition = composition
        exporter?.outputFileType = .mp4
        exporter?.outputURL = url
        exporter?.exportAsynchronously(completionHandler: {
            if let anError = exporter?.error {
                completion(anError, nil)
            }
            else if exporter?.status == AVAssetExportSessionStatus.completed {
                completion(nil, url)
            }
        })
}

Place each video, one by one, at the center of the blurred video
This will produce your final video URL.

func addAllVideosAtCenterOfBlur(videos: Array<AVURLAsset>, blurVideo: AVURLAsset, completion: @escaping (_ error: Error?, _ url:URL?) -> Swift.Void) {
    
    
    // Create an AVMutableComposition object. This will hold our multiple AVMutableCompositionTracks.
    let mixComposition = AVMutableComposition()
    
    var instructionLayers : Array<AVMutableVideoCompositionLayerInstruction> = []
    
    
    // Add blur video first
    let blurVideoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
    // Blur layer instruction
    if let videoTrack = blurVideo.tracks(withMediaType: AVMediaType.video).first {
        let timeRange = CMTimeRangeMake(kCMTimeZero, blurVideo.duration)
        try? blurVideoTrack?.insertTimeRange(timeRange, of: videoTrack, at: kCMTimeZero)
    }
    
    /// Add other videos at center of the blur video
    var startAt = kCMTimeZero
    for asset in videos {
        
        /// Time Range of asset
        let timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
        
        // Here we are creating the AVMutableCompositionTrack. See how we are adding a new track to our AVMutableComposition.
        let track = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
        
        // Set the length of the track equal to the length of the asset, and insert the asset at `startAt` so the clips play one after another.
        if let videoTrack = asset.tracks(withMediaType: AVMediaType.video).first {
            
            /// Hide time for this video's layer
            let opacityStartTime: CMTime = CMTimeMakeWithSeconds(0, asset.duration.timescale)
            let opacityEndTime: CMTime = CMTimeAdd(startAt, asset.duration)
            let hideAfter: CMTime = CMTimeAdd(opacityStartTime, opacityEndTime)
            
            /// Adding video track
            try? track?.insertTimeRange(timeRange, of: videoTrack, at: startAt)
            
            /// Layer instruction
            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track!)
            layerInstruction.setOpacity(0.0, at: hideAfter)
            
            /// Scale and position for aspect-fit within the given area
            let properties = scaleAndPositionInAspectFitMode(forTrack: videoTrack, inArea: size)
            
            /// Checking for orientation
            let videoOrientation: UIImageOrientation = self.getVideoOrientation(forTrack: videoTrack)
            let assetSize = self.assetSize(forTrack: videoTrack)
            
            if (videoOrientation == .down) {
                /// Rotate
                let defaultTransfrom = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi/2.0))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = assetSize.height
                var xtranslation: CGFloat = 0
                if properties.position.y == 0 {
                    xtranslation = -(assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = assetSize.height - (assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - concatenation
                let finalTransform = defaultTransfrom.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else if (videoOrientation == .left) {
                
                /// Rotate
                let defaultTransfrom = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = assetSize.height
                var xtranslation: CGFloat = assetSize.width
                if properties.position.y == 0 {
                    xtranslation = assetSize.width - (assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = assetSize.height - (assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - concatenation
                let finalTransform = defaultTransfrom.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else if (videoOrientation == .right) {
                /// No need to rotate
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                let translationTransform = CGAffineTransform(translationX: properties.position.x, y: properties.position.y)
                
                let finalTransform  = scaleTransform.concatenating(translationTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else {
                /// Rotate
                let defaultTransfrom = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: CGFloat(Double.pi/2.0))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = 0
                var xtranslation: CGFloat = assetSize.width
                if properties.position.y == 0 {
                    xtranslation = assetSize.width - (assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = -(assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - concatenation
                let finalTransform = defaultTransfrom.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            
            instructionLayers.append(layerInstruction)
        }
        
        /// Adding audio
        if let audioTrack = asset.tracks(withMediaType: AVMediaType.audio).first {
            let aTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)
            try? aTrack?.insertTimeRange(timeRange, of: audioTrack, at: startAt)
        }
        
        // Increase the startAt time
        startAt = CMTimeAdd(startAt, asset.duration)
    }

    
    /// Blur layer instruction
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: blurVideoTrack!)
    instructionLayers.append(layerInstruction)
    
    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, blurVideo.duration)
    mainInstruction.layerInstructions = instructionLayers
    
    let mainCompositionInst = AVMutableVideoComposition()
    mainCompositionInst.instructions = [mainInstruction]
    mainCompositionInst.frameDuration = CMTimeMake(1, 30)
    mainCompositionInst.renderSize = size
    
    //let url = URL(fileURLWithPath: "/Users/enacteservices/Desktop/final_video.mov")
    let url = self.videoOutputURL
    try? FileManager.default.removeItem(at: url)
    
    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter?.outputURL = url
    exporter?.outputFileType = .mp4
    exporter?.videoComposition = mainCompositionInst
    exporter?.shouldOptimizeForNetworkUse = true
    exporter?.exportAsynchronously(completionHandler: {
        if let anError = exporter?.error {
            completion(anError, nil)
        }
        else if exporter?.status == AVAssetExportSessionStatus.completed {
            completion(nil, url)
        }
    })
}

For the helper methods used in the above code, please download the attached sample code.
I'm also open to suggestions for a shorter way to do this, because currently I have to export the video three times to achieve the result.
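
Roughly, the aspect-fit helper works out a uniform scale factor and a centering offset, along these lines (a sketch only; the sample project's exact implementation may differ):

import AVFoundation
import UIKit

/// Sketch of the aspect-fit calculation: a uniform scale that fits the
/// (orientation-corrected) track size inside `area`, plus a centering offset.
func scaleAndPositionInAspectFitMode(forTrack track: AVAssetTrack, inArea area: CGSize) -> (scale: CGSize, position: CGPoint) {
    // Orientation-corrected natural size of the track
    let transformed = track.naturalSize.applying(track.preferredTransform)
    let assetSize = CGSize(width: abs(transformed.width), height: abs(transformed.height))

    // Uniform scale so the whole frame fits inside the area
    let ratio = min(area.width / assetSize.width, area.height / assetSize.height)
    let scaledSize = CGSize(width: assetSize.width * ratio, height: assetSize.height * ratio)

    // Offset that centers the scaled frame inside the area
    let position = CGPoint(x: (area.width - scaledSize.width) / 2.0,
                           y: (area.height - scaledSize.height) / 2.0)

    return (scale: CGSize(width: ratio, height: ratio), position: position)
}

The aspect-fill variant is the same idea with max(...) instead of min(...), so the scaled frame covers the whole area.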

TheTiger
  • @Anita Could you plz be specific on what is not working? – TheTiger Oct 30 '18 at 04:43
  • When i run your code and select video i am getting following error :[discovery] errors encountered while discovering extensions: Error Domain=PlugInKit Code=13 "query cancelled" UserInfo={NSLocalizedDescription=query cancelled} and then Operation stopped alert displayed – Anita Nagori Oct 30 '18 at 05:57
  • Will you please guide me how i can fix this at my side and test your demo ? – Anita Nagori Oct 30 '18 at 05:58
  • This is something related to picker controller. Have you googled for it? I found [similar error here](https://stackoverflow.com/questions/44465904/photopicker-discovery-error-error-domain-pluginkit-code-13). – TheTiger Oct 30 '18 at 06:01
  • Hey, I have solved the issue and am able to get exactly what I needed. Thanks :) – Anita Nagori Oct 30 '18 at 10:28
  • I should donate you all my reputation points. Thanks man @TheTiger – ayon Nov 09 '18 at 04:25
  • @TheTiger Hi I have tried these things in objective c and I have a problem with CGAffineTransformMakeScale.So, can you please help me on this topic. I have seen your swift code and convert it to objective c but I think there some problem with scaling the layer. – Birendra Dec 27 '18 at 14:57
  • @Birendra Why don't you use the Swift bridging header? It will save your time. – TheTiger Dec 28 '18 at 04:57
  • @TheTiger OK. I will try this way and thanks for the reply. – Birendra Dec 28 '18 at 05:14
  • @TheTiger Hi, thanks for your suggestion. I have applied the Swift bridging header and your code work fine. But Right now I have tried to set background image instead of video. Is this possible with your current code? – Birendra Dec 31 '18 at 07:27
  • @Birendra You don't need to do this if you want a background only. You will have to add a layer then place video at center of this layer. – TheTiger Jan 02 '19 at 04:49
  • @TheTiger I have tried to add CALayer but a video is not played in the center of CALayer and also getting a black layer in a video. Can you please help me to short out this issue. – Birendra Jan 02 '19 at 04:58
  • I would strongly recommend using AVVideoComposition with processing of AVAsynchronousCIImageFilteringRequest or writing custom AVVideoCompositing object for editing. – Alexey Savchenko May 01 '19 at 09:10
  • @TheTiger your code is for exporting such a video. In your example, you first export blur video and then overlay regular video on it to create the final video. What if I would like to play blur and regular video in AVplayer like Inshot? – iCoderz Developers Jun 27 '19 at 11:24
  • @iCoderzDevelopers You can make two players for that. – TheTiger Jun 27 '19 at 12:42
  • @TheTiger..Thanks for Answer. Can't we do it with a simple player as I have to do some transition between videos. And One more Q. Is it possible to apply CIFilter to any particular video from AVVideoComposition. bcos I have multipple videos in composition. – iCoderz Developers Jun 27 '19 at 13:30
  • Possible to apply `CIFilter` at runtime to `AVPlayer` but `AVPlayer` will handle one video at a time. And there is not single filter which will make the same effect as you want... so two players will be needed. – TheTiger Jun 27 '19 at 13:41
  • @TheTiger i want to use the above code without export is it possible?. Just want a AVVideoComposition and an AVMutableComposition as a final result – Ashish Gupta Aug 19 '19 at 10:00
  • @AshishGupta I don't understand your question. You mean you want to play only? – TheTiger Aug 20 '19 at 04:52
  • I want to play only yes. but want to play with composition – Ashish Gupta Aug 20 '19 at 06:12
  • damn @TheTiger, great answer . No real comment just wanted to say your code was super helpful. Thanks – Alan S Sep 19 '19 at 13:35
  • @TheTiger Sorry you've had a lot of questions on this topic, but I wanted to ask about how would I be able to change the size of the blurred video in the background? You can't specify a size for the composition being made with the CIFilter – Alan S Oct 01 '19 at 14:15
  • @AlanS In `mergeVideos` method there is a parameter `area` ... pass the video size you needed OR you can take help from this code if you need something else. – TheTiger Oct 02 '19 at 05:10
  • Hi @TheTiger, I've been using your code and it is working. I wanted to add a scale aspect to the scaleAndPositionInAspectFitMode function, so that the video can be in the center of the blur, but not take up the whole width or height of the blurry background. The Code does work but i notice that when I record a video with the camera app and run the code, the video is never centered but in the top left corner. If i use a video I've downloaded or one from WhatsApp then it works correctly. Do you have any thoughts why this might be the case? Obligatory sorry for extra questions. – Alan S Oct 24 '19 at 11:01
  • @AlanS Not sure but you can debug `addAllVideosAtCenterOfBlur ` this function. It calculates each video asset size and set the origin by setting `xtranslation` and `ytranslation`. Also check `scaleAndPositionInAspectFitMode`. – TheTiger Oct 24 '19 at 11:10
  • @TheTiger The scaleAndPositionInAspectFitMode works perfectly. I guess it will be something with the translations. Would you mind if I posted a question about it using your code if I don't get anywhere? (has been somewhat refactored by me). – Alan S Oct 24 '19 at 11:31
  • @TheTiger Don't worry, you've helped a lot already! Just thought to ask cause you're very familiar with the code – Alan S Oct 25 '19 at 07:44

Starting from iOS 9.0 you can use AVAsynchronousCIImageFilteringRequest.

See the docs for more info.

Or you can use AVVideoCompositing; see an example of usage.
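
For example, the same handler-based API can drive a live blurred preview in AVPlayer without exporting at all. A minimal sketch (the helper name `blurredPlayerItem` and the radius of 10 are arbitrary choices here):

import AVFoundation
import CoreImage

// Blur every frame during playback by attaching a filtering video composition
// to the player item; no export is required for preview.
func blurredPlayerItem(for asset: AVAsset) -> AVPlayerItem {
    let filter = CIFilter(name: "CIGaussianBlur")!
    let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        // Clamp so the blur does not darken the frame edges
        filter.setValue(request.sourceImage.clampedToExtent(), forKey: kCIInputImageKey)
        filter.setValue(10.0, forKey: kCIInputRadiusKey) // arbitrary radius
        // Crop back to the original extent and hand the frame to the composition
        let output = filter.outputImage?.cropped(to: request.sourceImage.extent) ?? request.sourceImage
        request.finish(with: output, context: nil)
    })
    let item = AVPlayerItem(asset: asset)
    item.videoComposition = composition
    return item
}

// Usage: let player = AVPlayer(playerItem: blurredPlayerItem(for: someAsset))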

Tiko

You can add blur to a video using AVVideoComposition; this is tested.

-(void)applyBlurOnAsset:(AVAsset *)asset Completion:(void(^)(BOOL success, NSError* error, NSURL* videoUrl))completion{
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
AVVideoComposition *composition = [AVVideoComposition videoCompositionWithAsset: asset
                                                   applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *request){
                                                       // Clamp to avoid blurring transparent pixels at the image edges
                                                       CIImage *source = [request.sourceImage imageByClampingToExtent];
                                                       [filter setValue:source forKey:kCIInputImageKey];

                                                       [filter setValue:[NSNumber numberWithDouble:10.0] forKey:kCIInputRadiusKey];

                                                       // Crop the blurred output to the bounds of the original image
                                                       CIImage *output = [filter.outputImage imageByCroppingToRect:request.sourceImage.extent];

                                                       // Provide the filter output to the composition
                                                       [request finishWithImage:output context:nil];
                                                   }];


NSURL *outputUrl = [[NSURL alloc] initWithString:@"Your Output path"];

// Remove any previous video at that path
[[NSFileManager defaultManager]  removeItemAtURL:outputUrl error:nil];

AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPreset960x540] ;

// Assign all instructions for the video processing (in this case, the blur filter composition)
exporter.videoComposition = composition;
exporter.outputFileType = AVFileTypeMPEG4;

if (outputUrl){

    exporter.outputURL = outputUrl;
    [exporter exportAsynchronouslyWithCompletionHandler:^{

        switch ([exporter status]) {
            case AVAssetExportSessionStatusFailed:
                NSLog(@"crop Export failed: %@", [[exporter error] localizedDescription]);
                if (completion){
                    dispatch_async(dispatch_get_main_queue(), ^{
                        completion(NO,[exporter error],nil);
                    });
                    return;
                }
                break;
            case AVAssetExportSessionStatusCancelled:
                NSLog(@"crop Export canceled");
                if (completion){
                    dispatch_async(dispatch_get_main_queue(), ^{
                        completion(NO,nil,nil);
                    });
                    return;
                }
                break;
            default:
                break;
        }

        if (completion){
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(YES,nil,outputUrl);
            });
        }

    }];
}

}


Have you tried this?

let blurEffect = UIBlurEffect(style: .light)
var blurredView = UIVisualEffectView(effect: blurEffect)
blurredView.frame = view.bounds // Set the frame to cover the area you want blurred
// add the blurredView on top of what you want blurred
view.addSubview(blurredView)

Here's a link to the documentation

pesch