
If I want two views with the same width and height, both centered vertically in the middle of the screen, I use the code below, which works fine. Both views sit side by side in the middle of the screen with exactly the same width and height.

let width = view.frame.width
let insideRect = CGRect(x: 0, y: 0, width: width / 2, height: .infinity)
let rect = AVMakeRect(aspectRatio: CGSize(width: 9, height: 16), insideRect: insideRect)

// blue
leftView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
leftView.leadingAnchor.constraint(equalTo: view.leadingAnchor).isActive = true
leftView.widthAnchor.constraint(equalToConstant: rect.width).isActive = true
leftView.heightAnchor.constraint(equalToConstant: rect.height).isActive = true

// purple
rightView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
rightView.trailingAnchor.constraint(equalTo: view.trailingAnchor).isActive = true
rightView.widthAnchor.constraint(equalTo: leftView.widthAnchor).isActive = true
rightView.heightAnchor.constraint(equalTo: leftView.heightAnchor).isActive = true


How can I do the same thing using CGAffineTransform? I tried to find a way to make the rightView the same size as the leftView but couldn't. The top of the leftView's frame ends up in the middle of the screen instead of its center, and the rightView is completely misplaced.

let width = view.frame.width
let insideRect = CGRect(x: 0, y: 0, width: width / 2, height: .infinity)
let rect = AVMakeRect(aspectRatio: CGSize(width: 9, height: 16), insideRect: insideRect)

leftView.transform = CGAffineTransform(scaleX: 0.5, y: 0.5)
leftView.transform = CGAffineTransform(translationX: 0, y: view.frame.height / 2)

rightView.transform = leftView.transform
rightView.transform = CGAffineTransform(translationX: rect.width, y: view.frame.height / 2)
Lance Samaria
  • Not quite clear... First, you can get the desired results with constraints ***without*** any of the rect calculations. As to trying to use `CGAffineTransform`... how are you creating / adding `leftView` and `rightView` to `view`? What are their frames to begin with? – DonMag Jan 18 '21 at 17:04
  • I'm using the rect calculations because the views are based on a video and I need to make sure the proportions stay intact. The left view and right view are the same size as the screen's width and height. – Lance Samaria Jan 18 '21 at 17:07
  • Still confusing... is there a reason you **don't** want to use constraints? And, are you sure you want to use transformed views rather than two layers of a single view? – DonMag Jan 18 '21 at 17:16
  • See this question: https://stackoverflow.com/questions/34682816/is-it-possible-to-merge-two-video-files-to-one-file-one-screen-in-ios. I figured out how to do the whole thing. The only problem I'm having is getting the 2 images to be side by side in the middle of the screen. I have to set the transform on each instruction. It's a lot of code. I figured it would just be easier to ask how to set the transform to be equal w/h and in the center of the screen rather than get into all of the AVFoundation code. I have both images on screen, they're just not positioned correctly or the same w/h – Lance Samaria Jan 18 '21 at 17:22
  • OK - your question makes a lot more sense in that context. Do you want `video1` on the left or right? – DonMag Jan 18 '21 at 18:31
  • on the left side – Lance Samaria Jan 18 '21 at 18:34
  • Your post is about Swift... Have you re-written the Obj-C code from the question you linked to in Swift? I believe I have an answer for you, but it's currently in Obj-C – DonMag Jan 18 '21 at 19:24
  • Yep, rewrote it in Swift and surprisingly I got everything to work correctly except this 1 issue. I'm not native Obj-C but I'll have to try and figure it out. Anything will help, thanks! The issue occurs in the instructions with the scale, move, and setTransfrom. I think this question would resolve the issue because it's the same thing -scale and move – Lance Samaria Jan 18 '21 at 19:27

1 Answer


You need to build your transforms based on the composited video's output size, i.e. its .renderSize.

Based on your other question...

So, if you have two 1280.0 x 720.0 videos, and you want them side-by-side in a 640 x 480 rendered frame, you need to:

  • get the size of the first video
  • scale it to 320 x 480
  • move it to 0, 0

then:

  • get the size of the second video
  • scale it to 320 x 480
  • move it to 320, 0

So your scale transform will be:

let targetWidth = renderSize.width / 2.0
let targetHeight = renderSize.height
let widthScale = targetWidth / sourceVideoSize.width
let heightScale = targetHeight / sourceVideoSize.height

let scale = CGAffineTransform(scaleX: widthScale, y: heightScale)
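As a concrete check (a sketch using the hypothetical numbers from above: two 1280 x 720 sources into a 640 x 480 render frame), the scale factors and the full per-video transforms work out to:

```swift
import Foundation

// Hypothetical sizes for illustration: two 1280x720 sources, 640x480 output.
let renderSize = CGSize(width: 640, height: 480)
let sourceVideoSize = CGSize(width: 1280, height: 720)

let widthScale = (renderSize.width / 2.0) / sourceVideoSize.width   // 320 / 1280 = 0.25
let heightScale = renderSize.height / sourceVideoSize.height        // 480 / 720 ≈ 0.667

let scale = CGAffineTransform(scaleX: widthScale, y: heightScale)

// Left video stays at x = 0; right video shifts right by half the render width.
let moveLeft = CGAffineTransform(translationX: 0, y: 0)
let moveRight = CGAffineTransform(translationX: renderSize.width / 2.0, y: 0)

let leftTransform = scale.concatenating(moveLeft)
let rightTransform = scale.concatenating(moveRight)
```

Each resulting transform scales the source down to 320 x 480 and then translates it into its half of the frame, which is exactly what the layer instructions below do.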

That should get you there... except:

In my testing, I took four 8-second videos in landscape orientation.

For reasons unbeknownst to me, the "native" preferredTransforms are:

Videos 1 & 3
[-1, 0, 0, -1, 1280, 720]

Videos 2 & 4
[1, 0, 0, 1, 0, 0]

So, the sizes returned by the recommended track.naturalSize.applying(track.preferredTransform) end up being:

Videos 1 & 3
-1280 x -720

Videos 2 & 4
1280 x 720

which messes with the transforms.
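To see why (a sketch using the matrix values above): applying a transform to a size only runs it through the linear part (a, b, c, d) and ignores the translation, so a [-1, 0, 0, -1, 1280, 720] transform negates both dimensions:

```swift
import Foundation

// The linear part of the transform is what naturalSize.applying(...) uses
// for a size; tx/ty are ignored, so both dimensions come out negative.
let t = CGAffineTransform(a: -1, b: 0, c: 0, d: -1, tx: 1280, ty: 720)
let natural = CGSize(width: 1280, height: 720)

let transformedWidth  = t.a * natural.width + t.c * natural.height   // -1280
let transformedHeight = t.b * natural.width + t.d * natural.height   // -720
```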

After a little experimentation, if the size is negative, we need to:

  • rotate the transform
  • scale the transform (making sure to use positive widths/heights)
  • translate the transform adjusted for the change in orientation

Here is a complete implementation (without the save-to-disk at the end):

import UIKit
import AVFoundation

class VideoViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        view.backgroundColor = .systemYellow
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        guard let originalVideoURL1 = Bundle.main.url(forResource: "video1", withExtension: "mov"),
              let originalVideoURL2 = Bundle.main.url(forResource: "video2", withExtension: "mov")
        else { return }

        let firstAsset = AVURLAsset(url: originalVideoURL1)
        let secondAsset = AVURLAsset(url: originalVideoURL2)

        let mixComposition = AVMutableComposition()
        
        guard let firstTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
        let timeRange1 = CMTimeRangeMake(start: .zero, duration: firstAsset.duration)

        do {
            try firstTrack.insertTimeRange(timeRange1, of: firstAsset.tracks(withMediaType: .video)[0], at: .zero)
        } catch {
            return
        }

        guard let secondTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
        let timeRange2 = CMTimeRangeMake(start: .zero, duration: secondAsset.duration)

        do {
            try secondTrack.insertTimeRange(timeRange2, of: secondAsset.tracks(withMediaType: .video)[0], at: .zero)
        } catch {
            return
        }
        
        let mainInstruction = AVMutableVideoCompositionInstruction()
        
        mainInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: CMTimeMaximum(firstAsset.duration, secondAsset.duration))
        
        var track: AVAssetTrack!
        
        track = firstAsset.tracks(withMediaType: .video).first
        
        let firstSize = track.naturalSize.applying(track.preferredTransform)

        track = secondAsset.tracks(withMediaType: .video).first

        let secondSize = track.naturalSize.applying(track.preferredTransform)

        // debugging
        print("firstSize:", firstSize)
        print("secondSize:", secondSize)

        let renderSize = CGSize(width: 640, height: 480)
        
        var scale: CGAffineTransform!
        var move: CGAffineTransform!

        let firstLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
        
        scale = .identity
        move = .identity
        
        if (firstSize.width < 0) {
            scale = CGAffineTransform(rotationAngle: .pi)
        }
        scale = scale.scaledBy(x: abs(renderSize.width / 2.0 / firstSize.width), y: abs(renderSize.height / firstSize.height))
        move = CGAffineTransform(translationX: 0, y: 0)
        if (firstSize.width < 0) {
            move = CGAffineTransform(translationX: renderSize.width / 2.0, y: renderSize.height)
        }

        firstLayerInstruction.setTransform(scale.concatenating(move), at: .zero)

        let secondLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)
        
        scale = .identity
        move = .identity
        
        if (secondSize.width < 0) {
            scale = CGAffineTransform(rotationAngle: .pi)
        }
        scale = scale.scaledBy(x: abs(renderSize.width / 2.0 / secondSize.width), y: abs(renderSize.height / secondSize.height))
        move = CGAffineTransform(translationX: renderSize.width / 2.0, y: 0)
        if (secondSize.width < 0) {
            move = CGAffineTransform(translationX: renderSize.width, y: renderSize.height)
        }
        
        secondLayerInstruction.setTransform(scale.concatenating(move), at: .zero)
        
        mainInstruction.layerInstructions = [firstLayerInstruction, secondLayerInstruction]
        
        let mainCompositionInst = AVMutableVideoComposition()
        mainCompositionInst.instructions = [mainInstruction]
        mainCompositionInst.frameDuration = CMTime(value: 1, timescale: 30)
        mainCompositionInst.renderSize = renderSize

        let newPlayerItem = AVPlayerItem(asset: mixComposition)
        newPlayerItem.videoComposition = mainCompositionInst
        
        let player = AVPlayer(playerItem: newPlayerItem)
        let playerLayer = AVPlayerLayer(player: player)

        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)
        player.seek(to: .zero)
        player.play()
        
        // video export code goes here...

    }

}

It's possible that the preferredTransforms could also be different for front / back camera, mirrored, etc. But I'll leave that up to you to work out.
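For what it's worth, the per-track logic above can be factored into one helper. This is a sketch only: the function name and parameters are my own, and it covers just the fully-negative (180°-rotated) case handled in the answer, not mirrored or mixed-sign sizes:

```swift
import Foundation

// Sketch: build the side-by-side transform for one track, given the size
// returned by naturalSize.applying(preferredTransform). Hypothetical helper,
// handling only the case where both dimensions are negative (180° rotation).
func sideBySideTransform(sourceSize: CGSize,
                         renderSize: CGSize,
                         isLeft: Bool) -> CGAffineTransform {
    let targetW = renderSize.width / 2.0
    var t = CGAffineTransform.identity
    // A fully negative size means the track is rotated 180 degrees.
    if sourceSize.width < 0 && sourceSize.height < 0 {
        t = CGAffineTransform(rotationAngle: .pi)
    }
    // Always scale with positive factors.
    t = t.scaledBy(x: abs(targetW / sourceSize.width),
                   y: abs(renderSize.height / sourceSize.height))
    // Translate into the correct half, adjusted for the rotation.
    var tx = isLeft ? 0 : targetW
    var ty: CGFloat = 0
    if sourceSize.width < 0 && sourceSize.height < 0 {
        tx += targetW
        ty = renderSize.height
    }
    return t.concatenating(CGAffineTransform(translationX: tx, y: ty))
}
```

With this in place, each layer instruction reduces to one call, e.g. `firstLayerInstruction.setTransform(sideBySideTransform(sourceSize: firstSize, renderSize: renderSize, isLeft: true), at: .zero)`.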

Edit

Sample project at: https://github.com/DonMag/VideoTest

Produces (using two 720 x 1280 video clips):

[screenshot: the two clips rendered side by side in the 640 x 480 frame]

DonMag
  • THANKS THE HELP!!! Give me about 1/2 to go through this. – Lance Samaria Jan 18 '21 at 22:06
  • I just c+p the code. They are side-side but they are full size meaning they take up the whole screen. So the left side is half way on the screen and half way off the screen. The right side is the same. – Lance Samaria Jan 18 '21 at 22:19
  • @LanceSamaria - hmmm... I created a new project and copy/pasted the code from my answer. Added a screen-cap to my answer. You can grab the project to try for yourself here: https://github.com/DonMag/VideoTest ... (I'm sure it's *possible* you will see different results) – DonMag Jan 18 '21 at 23:16
  • oh wow, you updated just as I was sending you a message. The **-1280 x -720** I got **-1080 x 1920**. The way I sorta fixed it was `scale = CGAffineTransform(rotationAngle: .pi / 2)` and then `move = CGAffineTransform(translationX: renderSize.width, y: 0)`. Without setting the `y` to zero the composite image was at the bottom of the screen. I dunno. Thanks for this though. This is enough for me to dig in further. I would buy you lunch if I could. I really appreciate this! Good looking dogs btw!!! – Lance Samaria Jan 18 '21 at 23:19
  • @LanceSamaria - yeah... my last original answer line: *"It's possible that the preferredTransforms could also be different..."* reared its ugly head. Without testing (I don't currently have a video that would size at `-` on one and `+` on the other), I'd say the direction to take would be to evaluate both the width and height `< 0` independently and adjust the `x` and `y` scale/translate values appropriately. – DonMag Jan 18 '21 at 23:43
  • I figured I would have to adjust by each video’s width. I have to play around with several videos in different sizes. Once I get a handle on this I’ll send you a message and let you know how things are going. Thanks again for the help! – Lance Samaria Jan 18 '21 at 23:49
  • I found out why the video was full sized. I've been playing around with this for hrs and it turns out I was making one very simple mistake. I push on another vc that plays the combined videos. In that vc I had `playerLayer.videoGravity = .resizeAspectFill` and that was the issue. Smh. Anyhow your code works perfectly and both videos are side to side. I still had to change the rotation like in my above comment and the rotated video is off centered but I'll figure that out. Thanks again :) – Lance Samaria Jan 19 '21 at 08:08
  • @LanceSamaria - ah, yep, `.resizeAspectFill` would do that. I have no idea why the videos end up with different `preferredTransform`s ... and I haven't been able to produce one that ends up with a mix of `+ / -` sizes. I know how I would approach the transforms... If you can put one up for me somewhere, I'll be glad to take a look. – DonMag Jan 19 '21 at 12:07
  • I notice the video returned from my camera is landscape left then when I create an AVMutableComposition, I transform the image: `CGAffineTransform(rotationAngle: .pi/2)` to portrait. Whenever I send it by itself to the avplayer the video is upright. The problem is occurring somewhere when the video composite images are conjoined. Possibly in that first mixComposition before adding the instructions (or after). Very strange. Quick question. Would it be easier to set the composite images equal w/h and centered like the views from my question? – Lance Samaria Jan 19 '21 at 12:21
  • I was actually in the process of drawing up a project to send to Apple with one of my TSIs. I was going to ask them what the issue is. Do you have an email? I can send it to you. It has a camera, 2 imageViews, your code, and the avplayer. – Lance Samaria Jan 19 '21 at 12:24
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/227532/discussion-between-donmag-and-lance-samaria). – DonMag Jan 19 '21 at 12:26
  • hey what's up, hope all is well. I have a question, i ran into a situation where the code isn't combining and rotating the videos. I didn't do anything to your donAttempt method and that is why I can't figure out what the issue is. Can you take a look at it? The video is question is in portrait, from my phone, and it has no sound. – Lance Samaria Feb 15 '21 at 15:45
  • @LanceSamaria - if you push the changes to your GitHub repo I'll try to take a look. – DonMag Feb 15 '21 at 16:21
  • Ok thanks, give me like an hour+, when I get back to my cpu I’ll push them up. – Lance Samaria Feb 15 '21 at 16:22
  • I pushed the project up and added comments in the Issue section. It's an odd problem because the original video works fine. Please let me know your thoughts. Thanks! – Lance Samaria Feb 15 '21 at 18:04
  • The instructions works fine. It turns out the instructions are ignored when using AVAssetExportPresetPassthrough https://stackoverflow.com/a/15666724/4833705, but when there is no sound you have to use Passthrough. The answer was simple. if there is no sound, provide a silent sound. I found it here on line 66: https://gist.github.com/yashthaker7/89d153fe9b1e10505237a2994a73ac33. Thanks for your help!!! If you ever need anything please reach out. I'll do my best. Take care – Lance Samaria Feb 24 '21 at 17:36
  • Am I missing something obvious here? How is this taking care of maintaining the aspect ratio? – damd Apr 19 '22 at 01:27
  • @damd what’s the issue? I’ve had zero problems with this code in a live app. It works perfectly fine. If there are any issues, please comment about it. Thanks. – Lance Samaria May 03 '22 at 17:37
  • @DonMag hey what’s up, would you be able to take a look at this q&a (both from me)? It works so far but I’m not 100% about it https://stackoverflow.com/a/72090169/4833705 – Lance Samaria May 03 '22 at 17:39
  • @damd I believe setting the renderSize the same for both videos is what maintains it – Lance Samaria May 03 '22 at 17:55