
I am trying to record a video of a camera while using MediaFrameReader. For my application I need the MediaFrameReader instance to process each frame individually. I additionally tried to use LowLagMediaRecording to simply save the camera stream to a file, but it seems you cannot use both methods simultaneously. That means I am stuck with MediaFrameReader alone, where I can access each frame in the FrameArrived handler.

I have tried several approaches and found two working solutions based on the MediaComposition class and MediaClip objects. You can either save each frame as a JPEG file and finally render all images into a video file; this is very slow since you constantly need to access the hard drive. Or you can create a MediaStreamSample object from the Direct3DSurface of a frame. This keeps the data in RAM (first in GPU memory, then system RAM once that is full) instead of on the hard drive, which in theory is much faster. The problem is that the RenderToFileAsync method of the MediaComposition class requires all MediaClips to already be in the internal list, which exhausts the RAM after a fairly short recording. After collecting roughly 5 minutes of data, Windows had already created a 70 GB swap file, which defeats the purpose of choosing this path in the first place.

I also tried the third-party library OpenCvSharp to save the processed frames as a video. I have done that previously in Python without any problems. In UWP, though, I cannot interact with the file system without a StorageFile object, so all I get from OpenCvSharp is an UnauthorizedAccessException when I try to save the rendered video to the file system.
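One idea I have not fully verified in this exact setup: UWP apps can reportedly use plain file paths inside their own ApplicationData folders without brokered file access, so pointing OpenCvSharp's VideoWriter at ApplicationData.Current.TemporaryFolder.Path might sidestep the UnauthorizedAccessException. The file name, frame size and FPS below are placeholders:

```csharp
// using System.IO; using OpenCvSharp; using Windows.Storage;

// Sketch only: write inside the app's own data folder, where plain
// Win32 file APIs are permitted, instead of an arbitrary path.
string path = Path.Combine(ApplicationData.Current.TemporaryFolder.Path, "capture.avi");

// Frame size and FPS must match the actual camera format.
using (var writer = new VideoWriter(path, FourCC.MJPG, 30.0, new OpenCvSharp.Size(1920, 1080)))
{
    // For each incoming frame, convert it to a Mat and append it:
    // writer.Write(frameMat);
}

// The finished file could then be copied to a user-visible location
// via the StorageFile APIs if needed.
```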

So, to summarize: what I need is a way to render my camera frames to a video while the data is still coming in, so that I can dispose of every frame after it has been processed, the way a Python OpenCV implementation would. I am very thankful for every hint. Here are parts of my code to better understand the context:

private void ColorFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    MediaFrameReference colorFrame = sender.TryAcquireLatestFrame();
    if (colorFrame != null)
    {
        if (currentMode == StreamingMode)
        {
            colorRenderer.Draw(colorFrame, true);
        }

        if (currentMode == RecordingMode)
        {
            // Each clip gets a fixed duration of 33 ms (~30 fps).
            MediaStreamSample sample = MediaStreamSample.CreateFromDirect3D11Surface(
                colorFrame.VideoMediaFrame.Direct3DSurface, new TimeSpan(0, 0, 0, 0, 33));
            ColorComposition.Clips.Add(MediaClip.CreateFromSurface(
                sample.Direct3D11Surface, new TimeSpan(0, 0, 0, 0, 33)));
        }
    }
}

private async Task CreateVideo(MediaComposition composition, string outputFileName)
{
    try
    {
        // Stop the reader first so no more clips are added while rendering.
        await mediaFrameReaderColor.StopAsync();
        mediaFrameReaderColor.FrameArrived -= ColorFrameArrived;
        mediaFrameReaderColor.Dispose();

        StorageFolder folder = await documentsFolder.GetFolderAsync(directory);
        StorageFile vid = await folder.CreateFileAsync(outputFileName + ".mp4", CreationCollisionOption.GenerateUniqueName);

        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        await composition.RenderToFileAsync(vid, MediaTrimmingPreference.Precise);
        stopwatch.Stop();
        Debug.WriteLine("Video rendered: " + stopwatch.ElapsedMilliseconds);

        composition.Clips.Clear();
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}
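For reference, here is a sketch of the kind of streaming pipeline I am looking for: a MediaStreamSource fed from a small bounded queue by the frame handler, encoded on the fly by MediaTranscoder, so only a few frames are ever held in memory. The resolution, pixel format, queue size and output profile are placeholder assumptions, and I have not confirmed this end to end:

```csharp
// using System.Collections.Concurrent; using System.Threading;
// using Windows.Media.Core; using Windows.Media.MediaProperties;
// using Windows.Media.Transcoding; using Windows.Storage;
// (to be placed inside an async method)

var sampleQueue = new BlockingCollection<MediaStreamSample>(boundedCapacity: 4);

// Placeholder format: uncompressed BGRA8 at 1920x1080.
var videoProps = VideoEncodingProperties.CreateUncompressed(MediaEncodingSubtypes.Bgra8, 1920, 1080);
var source = new MediaStreamSource(new VideoStreamDescriptor(videoProps));

source.SampleRequested += (s, e) =>
{
    // Blocks until the frame handler enqueues the next sample.
    // After CompleteAdding(), TryTake returns false and the null
    // sample ends the stream.
    e.Request.Sample = sampleQueue.TryTake(out MediaStreamSample sample, Timeout.Infinite)
        ? sample
        : null;
};

// In ColorFrameArrived, instead of adding a MediaClip:
//   sampleQueue.Add(MediaStreamSample.CreateFromDirect3D11Surface(surface, timestamp));
// and when stopping the recording:
//   sampleQueue.CompleteAdding();

StorageFile file = await ApplicationData.Current.LocalFolder.CreateFileAsync(
    "capture.mp4", CreationCollisionOption.GenerateUniqueName);
var transcoder = new MediaTranscoder { HardwareAccelerationEnabled = true };
var prep = await transcoder.PrepareMediaStreamSourceTranscodeAsync(
    source,
    await file.OpenAsync(FileAccessMode.ReadWrite),
    MediaEncodingProfile.CreateMp4(VideoEncodingQuality.HD1080p));
await prep.TranscodeAsync();
```

Unlike MediaComposition, this would consume each frame as it arrives, so the frame can be disposed once the encoder has taken it.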
  • It looks like that is the only way to convert a SoftwareBitmap to a video file on the UWP platform. But MediaComposition is typically used to trim videos; it cannot handle real-time frames. I am confused about why you need to use MediaFrameReader to capture the video. – Nico Zhu Mar 03 '20 at 07:21
  • Hi and thanks for your reply. I need to call the `MediaFrameReader` to do several tasks like getting a timestamp of each frame. – NicholasUrfe Mar 04 '20 at 07:22
  • You mentioned below that you solved it in the end. I am very interested in learning how you ended up doing it, any chance you can share your solution or point to a github? – chris Apr 05 '23 at 01:29

1 Answer


I am trying to record a video of a camera while using MediaFrameReader. For my application I need the MediaFrameReader instance to process each frame individually.

Please create a custom video effect and add it to the MediaCapture object to implement your requirement. Using a custom video effect allows you to process the frames in the context of the MediaCapture object without using the MediaFrameReader; this way, all of the MediaFrameReader limitations you mentioned go away.

Besides, there are also a number of built-in effects that allow you to analyze camera frames. For more information, please check the following articles:

#MediaCapture.AddVideoEffectAsync

https://learn.microsoft.com/en-us/uwp/api/windows.media.capture.mediacapture.addvideoeffectasync

#Custom video effects

https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-video-effects

#Effects for analyzing camera frames

https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/scene-analysis-for-media-capture
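As a rough illustration of the pattern described in those articles (class name and property values here are placeholders, not a complete implementation), a minimal effect skeleton looks like this. Note that the effect must live in a separate Windows Runtime Component project:

```csharp
// using System.Collections.Generic;
// using Windows.Foundation.Collections;
// using Windows.Graphics.DirectX.Direct3D11;
// using Windows.Media.Effects;
// using Windows.Media.MediaProperties;

public sealed class FrameProcessingEffect : IBasicVideoEffect
{
    public void ProcessFrame(ProcessVideoFrameContext context)
    {
        // Per-frame processing goes here: context.InputFrame carries
        // the camera frame, context.OutputFrame receives the result.
    }

    public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device) { }
    public void Close(MediaEffectClosedReason reason) { }
    public void DiscardQueuedFrames() { }
    public void SetProperties(IPropertySet configuration) { }

    public bool IsReadOnly => false;
    public MediaMemoryTypes SupportedMemoryTypes => MediaMemoryTypes.Gpu;
    public bool TimeIndependent => false;
    public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties =>
        new List<VideoEncodingProperties>
        {
            VideoEncodingProperties.CreateUncompressed(MediaEncodingSubtypes.Argb32, 0, 0)
        };
}
```

The effect is then registered on the capture pipeline by its full class name, e.g. `await mediaCapture.AddVideoEffectAsync(new VideoEffectDefinition(typeof(FrameProcessingEffect).FullName), MediaStreamType.VideoRecord);`.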

Thanks.

Amy Peng - MSFT
  • Thanks for your reply. I tried your approach and faced two problems: 1. Using the AddVideoEffectAsync function slowed the app considerably. 2. I was not able to extract the timestamps of the frames as I could with the MediaFrameReader. I was able to find a solution using the MediaFrameReader in the end. I will update my post when I have the time for it. Nonetheless, thanks for your suggestion. – NicholasUrfe Apr 14 '20 at 06:51