
I am recording the screen and I want to combine the mic audio and the app's audio into a video with ONE stereo audio track. With the AVAssetWriter setup I have, it creates a video file with TWO separate audio tracks: one stereo track for the app audio and one mono track for the mic audio. This is no good.

I've also tried taking the resulting video file and reconstructing a NEW video file with the separate audio AVAssetTracks merged into one, using AVMutableCompositionTrack's insertTimeRange(_:of:at:) function, as you will see below. But this does NOT merge the tracks, no matter what I try; it just concatenates them (in sequence, not overlaid over one another).

Can someone please tell me how I can either record the tracks merged in the first place with AVAssetWriter, or merge them over each other later? There is nothing online that discusses this and gets it done. Many articles refer to the use of insertTimeRange(_:of:at:), but this function CONCATENATES the tracks. Please help.

The code I'm using so far:

func startRecording(withFileName fileName: String, recordingHandler: @escaping (Error?) -> Void) {

    let sharedRecorder = RPScreenRecorder.shared()
    currentlyRecordingURL = URL(fileURLWithPath: CaptureArchiver.filePath(fileName))
    desiredMicEnabled = sharedRecorder.isMicrophoneEnabled
    assetWriter = try! AVAssetWriter(outputURL: currentlyRecordingURL!, fileType: AVFileType.mp4)

    let appAudioOutputSettings = [
        AVFormatIDKey : kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey : 2,
        AVSampleRateKey : 44100.0,
        AVEncoderBitRateKey: 192000
    ] as [String : Any]

    let micAudioOutputSettings = [
        AVFormatIDKey : kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey : 1,
        AVSampleRateKey : 44100.0,
        AVEncoderBitRateKey: 192000
    ] as [String : Any]

    let adjustedWidth = ceil(UIScreen.main.bounds.size.width/4)*4
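    // The width above is rounded up to a multiple of 4; some H.264 encoder
    // configurations misbehave with frame dimensions that are not.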

    let videoOutputSettings: Dictionary<String, Any> = [
        AVVideoCodecKey : AVVideoCodecType.h264,
        AVVideoWidthKey : adjustedWidth,
        AVVideoHeightKey : UIScreen.main.bounds.size.height
    ]

    let audioInput_app = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: appAudioOutputSettings)
    audioInput_app.expectsMediaDataInRealTime = true
    if assetWriter.canAdd(audioInput_app) { assetWriter.add(audioInput_app) }
    self.audioInput_app = audioInput_app

    let audioInput_mic = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: micAudioOutputSettings)
    audioInput_mic.expectsMediaDataInRealTime = true
    if assetWriter.canAdd(audioInput_mic) { assetWriter.add(audioInput_mic) }
    self.audioInput_mic = audioInput_mic

    let videoInput  = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
    videoInput.expectsMediaDataInRealTime = true
    if assetWriter.canAdd(videoInput) { assetWriter.add(videoInput) }
    self.videoInput = videoInput

    sharedRecorder.startCapture(handler: { [unowned self] (sample, bufferType, error) in

        if CMSampleBufferDataIsReady(sample) {

            DispatchQueue.main.async { [unowned self] in

                if self.assetWriter.status == AVAssetWriter.Status.unknown {

                    self.assetWriter.startWriting()

                    #if DEBUG
                    let status = self.assetWriter.status
                    log(self, message: "LAUNCH assetWriter.status[\(status.rawValue)]:\(String(describing: self.readable(status)))")
                    #endif

                    self.assetWriter.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sample))

                } else if self.assetWriter.status == AVAssetWriter.Status.failed {

                    recordingHandler(error)
                    return

                } else {

                    switch bufferType {
                    case .audioApp:
                        if let audioInput_app = self.audioInput_app {
                            if audioInput_app.isReadyForMoreMediaData { audioInput_app.append(sample) }
                        }
                    case .audioMic:
                        if let audioInput_mic = self.audioInput_mic {
                            if audioInput_mic.isReadyForMoreMediaData { audioInput_mic.append(sample) }
                        }
                    case .video:
                        if let videoInput = self.videoInput {
                            if videoInput.isReadyForMoreMediaData { videoInput.append(sample) }
                        }
                    @unknown default:
                        fatalError("Unknown RPSampleBufferType:\(bufferType)")

                    }

                }

            }
        }

    }) { [unowned self] (error) in

        recordingHandler(error)

        let micDenied = error == nil && self.desiredMicEnabled && !RPScreenRecorder.shared().isMicrophoneEnabled
        self.viewController.mic_cap_denied = micDenied

    }

}


func mergeAudioTracksInVideo(_ videoURL: URL, completion: @escaping ((Bool) -> Void)) {

    let sourceAsset = AVURLAsset(url: videoURL)

    let sourceVideoTrack: AVAssetTrack = sourceAsset.tracks(withMediaType: AVMediaType.video)[0]
    let sourceAudioTrackApp: AVAssetTrack = sourceAsset.tracks(withMediaType: AVMediaType.audio)[0]
    let sourceAudioTrackMic: AVAssetTrack = sourceAsset.tracks(withMediaType: AVMediaType.audio)[1]

    let comp: AVMutableComposition = AVMutableComposition()

    guard let newVideoTrack: AVMutableCompositionTrack = comp.addMutableTrack(withMediaType: AVMediaType.video,
                                                                           preferredTrackID: kCMPersistentTrackID_Invalid) else {
        completion(false)
        return

    }

    newVideoTrack.preferredTransform = sourceVideoTrack.preferredTransform

    guard let newAudioTrack: AVMutableCompositionTrack = comp.addMutableTrack(withMediaType: AVMediaType.audio,
                                                                           preferredTrackID: kCMPersistentTrackID_Invalid) else {
        completion(false)
        return

    }

    // THE MIXING
    // This still results in two separate audio tracks;
    // AVMutableAudioMix looks like it is more about volume levels.
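    // Note: AVAudioMix matches inputParameters by trackID against the asset being
    // exported (the composition here), so pointing these at the SOURCE asset's
    // track IDs likely has no effect on the exported file.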

    let mix = AVMutableAudioMix()

    let audioMixInputParamsMic = AVMutableAudioMixInputParameters()
    audioMixInputParamsMic.trackID = sourceAudioTrackMic.trackID
    audioMixInputParamsMic.setVolume(1.0, at: CMTime.zero)

    let audioMixInputParamsApp = AVMutableAudioMixInputParameters()
    audioMixInputParamsApp.trackID = sourceAudioTrackApp.trackID
    audioMixInputParamsApp.setVolume(1.0, at: CMTime.zero)

    mix.inputParameters.append(audioMixInputParamsMic)
    mix.inputParameters.append(audioMixInputParamsApp)

    ///////

    let timeRange: CMTimeRange = CMTimeRangeMake(start: CMTime.zero, duration: sourceAsset.duration)

    do {

        try newVideoTrack.insertTimeRange(timeRange, of: sourceVideoTrack, at: CMTime.zero)
        try newAudioTrack.insertTimeRange(timeRange, of: sourceAudioTrackMic, at: CMTime.zero)
        try newAudioTrack.insertTimeRange(timeRange, of: sourceAudioTrackApp, at: CMTime.zero)

    } catch {
        completion(false)
        return
    }

    let exporter: AVAssetExportSession = AVAssetExportSession(asset: comp, presetName: AVAssetExportPresetHighestQuality)!
    exporter.audioMix = mix
    exporter.outputFileType = AVFileType.mp4
    exporter.outputURL = videoURL
    removeFileAtURLIfExists(url: videoURL)
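    // Caution: videoURL is both the export source and destination; deleting the source
    // file just before exporting from the same asset can make the export fail.
    // A distinct output URL would be safer.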

    exporter.exportAsynchronously(completionHandler: {

        switch exporter.status {
        case .failed:
            #if DEBUG
            log(self, message: "failed \(String(describing: exporter.error))")
            #endif
        case .cancelled:
            #if DEBUG
            log(self, message: "cancelled \(String(describing: exporter.error))")
            #endif
        case .unknown:
            #if DEBUG
            log(self, message: "unknown \(String(describing: exporter.error))")
            #endif
        case .waiting:
            #if DEBUG
            log(self, message: "waiting \(String(describing: exporter.error))")
            #endif
        case .exporting:
            #if DEBUG
            log(self, message: "exporting \(String(describing: exporter.error))")
            #endif
        default:
            #if DEBUG
            log(self, message: "Mutable video export complete.")
            #endif
        }

        // Report success only when the export actually completed.
        completion(exporter.status == .completed)

    })

}
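For reference, here is a sketch of the overlay variant I am experimenting with: each source audio track gets its OWN composition track, so the two insertions at .zero play in parallel instead of back to back, and the export goes to a separate URL. The function name overlayAudioTracks and the outputURL parameter are mine, and I am assuming (not yet verified) that a non-passthrough preset like AVAssetExportPresetHighestQuality mixes the parallel audio tracks down to a single stereo AAC track in the output:

func overlayAudioTracks(from videoURL: URL, to outputURL: URL, completion: @escaping (Bool) -> Void) {

    let sourceAsset = AVURLAsset(url: videoURL)
    let timeRange = CMTimeRangeMake(start: CMTime.zero, duration: sourceAsset.duration)
    let comp = AVMutableComposition()

    do {
        // Copy the video track across unchanged.
        let sourceVideoTrack = sourceAsset.tracks(withMediaType: AVMediaType.video)[0]
        let videoTrack = comp.addMutableTrack(withMediaType: AVMediaType.video,
                                              preferredTrackID: kCMPersistentTrackID_Invalid)!
        videoTrack.preferredTransform = sourceVideoTrack.preferredTransform
        try videoTrack.insertTimeRange(timeRange, of: sourceVideoTrack, at: CMTime.zero)

        // One composition track PER source audio track: inserting both at .zero into
        // separate tracks overlays them in time instead of concatenating them.
        for sourceAudioTrack in sourceAsset.tracks(withMediaType: AVMediaType.audio) {
            let audioTrack = comp.addMutableTrack(withMediaType: AVMediaType.audio,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid)!
            try audioTrack.insertTimeRange(timeRange, of: sourceAudioTrack, at: CMTime.zero)
        }
    } catch {
        completion(false)
        return
    }

    guard let exporter = AVAssetExportSession(asset: comp, presetName: AVAssetExportPresetHighestQuality) else {
        completion(false)
        return
    }
    exporter.outputFileType = AVFileType.mp4
    exporter.outputURL = outputURL // distinct from videoURL, so the source stays readable

    exporter.exportAsynchronously {
        completion(exporter.status == AVAssetExportSession.Status.completed)
    }
}

If that assumption holds, the AVMutableAudioMix machinery would not be needed at all for a plain 1:1 mixdown.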
Geoff H

1 Answer


Use the RPScreenWriter class from the gist below, then call the following functions to start and stop recording:

https://gist.github.com/mspvirajpatel/f7e1e258f3c1fff96917d82fa9c4c137

import AVFoundation
import ReplayKit

var rpScreenRecorder = RPScreenRecorder.shared()
var rpScreenWriter = RPScreenWriter()

func startRecord() {
    rpScreenRecorder.isMicrophoneEnabled = true
    rpScreenRecorder.startCapture(handler: { cmSampleBuffer, rpSampleBufferType, error in
        if let error = error {
            print("Capture error: \(error)")
        } else {
            self.rpScreenWriter.writeBuffer(cmSampleBuffer, rpSampleType: rpSampleBufferType)
        }
    }) { error in
        if let error = error {
            print("startCapture error: \(error)")
        }
    }
}


func stopRecording() {
    rpScreenRecorder.stopCapture { error in
        if let error = error {
            print("stopCapture error: \(error)")
        } else {
            self.rpScreenWriter.finishWriting(completionHandler: { url, error in
                if let url = url {
                    print("Finished writing to \(url)")
                }
            })
        }
    }
}
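For completeness, a rough usage sketch, assuming the two functions above live in a view controller; recordButtonTapped and isRecording are placeholder names of mine, not part of the gist:

var isRecording = false

@objc func recordButtonTapped() {
    if isRecording {
        // finishWriting's completion handler above receives the merged file URL.
        stopRecording()
    } else {
        startRecord()
    }
    isRecording.toggle()
}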
Virajkumar Patel
  • Audio isn't getting recorded alongside the screen. – Naman Vaishnav Aug 29 '19 at 06:30
  • Hello @NamanVaishnav, did you find any solution? I am facing the same issue: no audio is recorded while recording the screen. – Diken Shah Mar 11 '20 at 10:41
  • Yes @DikenShah, make sure your device isn't in silent mode. Also, it won't work on iOS 12. – Naman Vaishnav Mar 12 '20 at 06:30
  • Thank you so much @NamanVaishnav, you made my day. It is working perfectly. Sometimes I get a "Recording interrupted by multitasking and content resizing" error, and it isn't resolved without restarting the device. Did you face such an issue? – Diken Shah Mar 12 '20 at 12:06
  • @DikenShah Yes, make sure that once you end a recording session, the next session starts after 4 to 5 seconds, and that it records for at least 4-5 seconds; avoid immediate actions. – Naman Vaishnav Mar 12 '20 at 15:28