
I am trying to access pixel data and save images from an in-game camera to disk. Initially, the simple approach was to use a render target and then call RenderTarget->ReadPixels(). However, since the native implementation of ReadPixels() contains a call to FlushRenderingCommands(), it blocks the game thread until the image is saved. As that is a computationally intensive operation, it was lowering my FPS way too much.

To solve this problem, I am trying to create a dedicated thread that can access the camera as a CaptureComponent and then follow a similar approach. But since FlushRenderingCommands() can only be called from the game thread, I had to rewrite ReadPixels() without that call, in a non-blocking way of sorts (inspired by the tutorial at https://wiki.unrealengine.com/Render_Target_Lookup). Even then, my in-game FPS is jerky whenever an image is saved. I have confirmed this is not because of the actual saving-to-disk operation, but because of the pixel data access itself. My rewritten ReadPixels() function looks as below; I was hoping to get some suggestions as to what could be going wrong here. I am also not sure whether ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER can be called from a non-game thread, and whether that is part of my problem.

APIPCamera* cam = GameThread->CameraDirector->getCamera(0);
USceneCaptureComponent2D* capture = cam->getCaptureComponent(EPIPCameraType::PIP_CAMERA_TYPE_SCENE, true);
if (capture != nullptr) {
    if (capture->TextureTarget != nullptr) {
        FTextureRenderTargetResource* RenderResource = capture->TextureTarget->GetRenderTargetResource();
        if (RenderResource != nullptr) {
            width = capture->TextureTarget->GetSurfaceWidth();
            height = capture->TextureTarget->GetSurfaceHeight();
            // Read the render target surface data back.    
            struct FReadSurfaceContext
            {
                FRenderTarget* SrcRenderTarget;
                TArray<FColor>* OutData;
                FIntRect Rect;
                FReadSurfaceDataFlags Flags;
            };

            bmp.Reset();
            FReadSurfaceContext ReadSurfaceContext =
            {
                RenderResource,
                &bmp,
                FIntRect(0, 0, RenderResource->GetSizeXY().X, RenderResource->GetSizeXY().Y),
                FReadSurfaceDataFlags(RCM_UNorm, CubeFace_MAX)
            };
            ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER(
                ReadSurfaceCommand,
                FReadSurfaceContext, Context, ReadSurfaceContext,
                {
                    RHICmdList.ReadSurfaceData(
                        Context.SrcRenderTarget->GetRenderTargetTexture(),
                        Context.Rect,
                        *Context.OutData,
                        Context.Flags
                    );
                });
        }
    }
}

EDIT: One more thing I have noticed is that the stuttering goes away if I disable HDR in my render target settings (but this results in low-quality images). So it seems plausible that the sheer size of the image data is still blocking one of the core threads because of the way I am implementing this.

HighVoltage

1 Answer


It should be possible to call ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER from any thread, since there is an underlying call into the Task Graph. You can see this when you analyze the code this macro generates:

if(ShouldExecuteOnRenderThread()) 
{ 
    CheckNotBlockedOnRenderThread(); 
    TGraphTask<EURCMacro_##TypeName>::CreateTask().ConstructAndDispatchWhenReady(ParamValue1); 
} 

You should be cautious about accessing UObjects (like USceneCaptureComponent2D) from other threads, because they are managed by the Garbage Collector and owned by the game thread.

(...) but even then I am facing a problem with my in-game FPS being jerky whenever an image is saved

Did you check which thread is causing the FPS drop with the stat unit or stat unitgraph command? You could also use the profiling tools to get a more detailed view and make sure there are no other causes of the lag.

Edit: I've found yet another method of accessing pixel data. Try it without actually copying the data in a for loop and check whether there is any improvement in FPS. It could be a bit faster, since there is no per-pixel manipulation/conversion in between.

JKovalsky
  • I also tried looking at it (single camera HDR case) through the profiling tools: most of the events that were the biggest hits in terms of time taken had "CPU stall: waiting for event", "CPU stall: sleep" written on them, which I guess indicates they were waiting for the GPU to catch up? – HighVoltage Apr 15 '17 at 18:30
  • Thanks for the comment: When I run stat unitgraph, this is what I find: No HDR: All threads at a decent rate. HDR: Render thread spikes up a lot, causing spikes in frame timing. Occasional GPU thread lag as well. i.imgur.com/S1YVGaz.png – HighVoltage Apr 15 '17 at 21:19
  • It looks like the `ReadSurfaceData` method is taking too much time even on the render thread. Assuming you didn't make any other mistakes (like calling the read function multiple times), that makes multithreading optimization impossible with this particular method of reading pixel data. Have you thought about using raw DirectX? You can access it directly from the RHI objects, although that makes your project highly platform-dependent. – JKovalsky Apr 16 '17 at 22:18