
I want to write a screencasting program for the Windows platform, but am unsure of how to capture the screen. The only method I'm aware of is to use GDI, but I'm curious whether there are other ways to go about this, and, if there are, which incurs the least overhead? Speed is a priority.

The screencasting program will be for recording game footage, although, if this does narrow down the options, I'm still open to any other suggestions that fall outside this scope. Knowledge isn't bad, after all.

Edit: I came across this article: Various methods for capturing the screen. It has introduced me to the Windows Media API way of doing it and the DirectX way of doing it. It mentions in the Conclusion that disabling hardware acceleration could drastically improve the performance of the capture application. I'm curious as to why this is. Could anyone fill in the missing blanks for me?

Edit: I read that screencasting programs such as Camtasia use their own capture driver. Could someone give me an in-depth explanation on how it works, and why it is faster? I may also need guidance on implementing something like that, but I'm sure there is existing documentation anyway.

Also, I now know how FRAPS records the screen. It hooks the underlying graphics API to read from the back buffer. From what I understand, this is faster than reading from the front buffer, because you are reading from system RAM, rather than video RAM. You can read the article here.

someguy
  • Have you considered, rather than graphically recording the contents of the screen, using a [replay system](http://stackoverflow.com/questions/3064317/conceptually-how-does-replay-work-in-a-game)? – Benjamin Lindley Feb 21 '11 at 18:20
  • @PigBen That was an interesting read, but I don't think it would work. I would have to somehow hook the events, which isn't feasible using a generic application, and it sounds like I would have to do a bit of hacking. Same goes for rendering. – someguy Feb 21 '11 at 18:34
  • 2
    You don't have to hook anything. You just have to write your input events so that they don't control the game directly, but instead call other functions. For example, if the player pushes the left key, you don't simply decrement the players x position. Instead, you call a function, like `MovePlayerLeft()`. And you also record the time and duration of key presses and other input. Then, when you're in playback mode, you simply ignore the input, and instead read the recorded data. If, in the data, you see a left key press, you call `MovePlayerLeft()`. – Benjamin Lindley Feb 21 '11 at 18:45
  • 2
    @PigBen This will be a generic application for recording game footage. It's not for a specific game. Someone pressing the left key could mean move right, for all I know. Also, you haven't considered events that aren't influenced by the user. 'And what about rendering? – someguy Feb 21 '11 at 18:57
  • Oh, okay. I didn't understand that part(about this being an external application). But as for events that aren't influenced by the user, those would be recorded too. Anything in your game that is not deterministic would have to be recorded. And rendering would be handled by the game engine the same as if someone is playing. (this, of course, doesn't apply to your situation as I understand it now) – Benjamin Lindley Feb 21 '11 at 19:01
  • Have you tested the performance of `CreateOffscreenPlainSurface` and `GetFrontBufferData` in DirectX? I can't imagine this could be slower than GDI+, .NET, Windows API, or the other available methods. – AJG85 Feb 24 '11 at 22:40
  • @AJG85 I haven't done any tests, but other people have, with results that back my claim. Also, I quote from MSDN's documentation: "This function is very slow, by design, and should not be used in any performance-critical path." This is because you have to read from video RAM, which is slow because of the CPU-GPU latency. .NET's API is simply a wrapper for GDI, as far as I know. – someguy Feb 25 '11 at 10:57
  • 1
    @someguy Ok I guess you are doing something much more intense, I had added a routine using the above methods to save off replay AVIs in a game at around 30fps without a hitch. I made a multiple monitor screen recorder using windows API for "work force optimization" but that performed badly even at targeted 4fps. – AJG85 Feb 25 '11 at 15:26
  • @AJG85 By Windows API, are you talking about GDI? Hmm, that's surprising that it performed so badly compared to DirectX's `GetFrontBufferData`. – someguy Feb 25 '11 at 17:02
  • @someguy honestly I think the poor performance had more to do with the other implementation details mainly streaming the frames of many monitored machines to a single windows service for archiving. – AJG85 Feb 25 '11 at 17:54
  • are you sure the WME stuff is faster than GDI? It's possible they just use GDI underneath... – rogerdpack Apr 30 '12 at 10:26
  • @rogerdpack: Looking back at the codeprojects link, it doesn't actually mention that WME is faster :/. I misread, sorry. As for it using GDI, I'm not sure anymore. – someguy Apr 30 '12 at 16:37
  • 1
    There is an open source mirror driver for windows on UltraVNC's repository site here http://ultravnc.svn.sourceforge.net/viewvc/ultravnc/UltraVNC%20Project%20Root/UltraVNC/winvnc/winvnc/ – Beached Jun 02 '12 at 03:49
  • Did you write your program? I'm curious. – bodacydo Aug 10 '13 at 22:46
  • @bodacydo: I started something a couple of years ago, but it was nowhere near complete. I didn't feel comfortable writing it because I felt there were a lot of gaps in my knowledge, so I lost motivation. If I ever have time to read a couple of books on the subject, I might start something again. – someguy Aug 11 '13 at 12:25
  • I noticed you have not accepted any answer. Did you find what you were looking for? What is the conclusion (back vs front buffer, directx vs something else)? – Snackoverflow Mar 09 '20 at 17:37
  • @anddero: I did initially accept Brandrew's answer many years ago, because it *seemed* like it could be the best solution, but there is no evidence for this. I would accept an answer if they either (1) showed that their solution is the fastest compared to some common methods or (2) point to an authoritative source. Unfortunately, I dropped this project before it really got anywhere substantial. – someguy Mar 10 '20 at 01:10
  • 1
    @someguy Are you saying it is still an open question? You did not try or benchmark the various different methods yourself yet? I am researching on the exact same thing at the moment and can give it a shot, because authoritative sources on this subject are not easy to find actually. – Snackoverflow Mar 11 '20 at 08:19
  • 1
    @anddero: That is correct. I did not get the chance to benchmark any of the methods. I doubt I will be trying any time soon, as I dropped the project. It would be great if you could try yourself and report back some findings. – someguy Mar 12 '20 at 18:19

16 Answers


This is what I use to collect single frames, but if you modify this and keep the two targets open all the time, you can "stream" it to disk using a static counter for the file name (a sketch of that variant follows the code below). I can't recall where I found this, but it has been modified; thanks to whoever wrote the original!

void dump_buffer()
{
    // Assumes two globals set up elsewhere in your program: 'Device' (your IDirect3DDevice9*)
    // and 'DisplayMde' (a D3DDISPLAYMODE filled in earlier via GetAdapterDisplayMode).
    IDirect3DSurface9* pRenderTarget = NULL;
    IDirect3DSurface9* pDestTarget = NULL;
    const char file[] = "Picture.bmp";

    // sanity check
    if (Device == NULL)
        return;

    // get the render target surface
    HRESULT hr = Device->GetRenderTarget(0, &pRenderTarget);

    // create a destination surface in system memory
    hr = Device->CreateOffscreenPlainSurface(DisplayMde.Width,
                                             DisplayMde.Height,
                                             DisplayMde.Format,
                                             D3DPOOL_SYSTEMMEM,
                                             &pDestTarget,
                                             NULL);

    // copy the render target to the destination surface
    hr = Device->GetRenderTargetData(pRenderTarget, pDestTarget);

    // save its contents to a bitmap file
    hr = D3DXSaveSurfaceToFile(file,
                               D3DXIFF_BMP,
                               pDestTarget,
                               NULL,
                               NULL);

    // clean up
    pRenderTarget->Release();
    pDestTarget->Release();
}
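
A minimal sketch of that "streaming" variant (my addition, not the original author's code): the system-memory surface is created once and kept open, and a static counter names each frame. It assumes the same Device/DisplayMde globals and d3d9/d3dx9 headers as above, plus <cstdio>; dump_frame is a made-up name.

void dump_frame()
{
    static IDirect3DSurface9* pDestTarget = NULL;   // kept open across calls
    static int frameNumber = 0;                     // static counter for the file name

    if (Device == NULL)
        return;

    // create the system-memory target once and reuse it every frame
    if (pDestTarget == NULL)
        Device->CreateOffscreenPlainSurface(DisplayMde.Width, DisplayMde.Height,
                                            DisplayMde.Format, D3DPOOL_SYSTEMMEM,
                                            &pDestTarget, NULL);

    IDirect3DSurface9* pRenderTarget = NULL;
    if (SUCCEEDED(Device->GetRenderTarget(0, &pRenderTarget)))
    {
        // copy the current render target into system memory and write it out
        Device->GetRenderTargetData(pRenderTarget, pDestTarget);
        pRenderTarget->Release();

        char file[MAX_PATH];
        sprintf(file, "frame%06d.bmp", frameNumber++);
        D3DXSaveSurfaceToFileA(file, D3DXIFF_BMP, pDestTarget, NULL, NULL);
    }
}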
Brandrew
  • Thanks. I heard about this method a while ago that was said to be faster than reading from the front buffer. Do you honestly do it that way and does it work properly? – someguy Feb 28 '11 at 16:54
  • The problem with front buffer is one of access, that is, trying to copy a plain that currently being rendered "interrupts" the copy. It works well enough for me and eats my hard drive! – Brandrew Feb 28 '11 at 17:21
  • @bobobobo I don't know how it would work out exactly, but I was thinking of using something like Huffyuv. Edit: Or perhaps let the user choose from available directshow filters. – someguy Jul 20 '11 at 18:09
  • use `pRenderTarget->GetDesc` to get info to `CreateOffscreenPlainSurface` – kwjsksai Mar 22 '13 at 20:31
  • I just can't manage to make that work... DirectX spits some invalid call on the GetRenderTargetData part. Obviously the way you create your device must have a lot of importance. – LightStriker Apr 11 '13 at 17:19
  • The API Calls work fine for me. However, all i get is a complete black image. Am I missing out on something? Does this approach really work? – Hrishikesh_Pardeshi Jul 24 '13 at 04:26
  • downvote, it only works for your own application, so can't be used to record generic programs – user3125280 Dec 27 '13 at 00:33
  • get blank image on win 8.1. – Maria Jul 19 '16 at 15:36
  • get blank image on win 7, did I missed some steps? – sflee Aug 15 '17 at 04:58
  • Does this code only work for DirectX 9? Because GetRenderTarget looks like a DirectX 11 API. – Friendly Genius Nov 20 '17 at 05:33
  • This method doesn't do screen capture. – mofo77 Mar 20 '19 at 17:34
  • @Brandrew Could you please tell me what are the header files you include to get it done? I tried your sample, but not sure where do both 'Device' and 'DisplayMde' come from. Your assistance is much appreciated. Perhaps, you could share the entire code snippet. – Karthick Feb 17 '20 at 02:33

EDIT: I can see that this is listed under your first edit link as "the GDI way". This is still a decent way to go even with the performance advisory on that site; I would think you can get to 30 fps easily.

From this comment (I have no experience doing this, I'm just referencing someone who does):

HDC hdc = GetDC(NULL); // get the desktop device context
HDC hDest = CreateCompatibleDC(hdc); // create a device context to use yourself

// get the height and width of the screen
int height = GetSystemMetrics(SM_CYVIRTUALSCREEN);
int width = GetSystemMetrics(SM_CXVIRTUALSCREEN);

// create a bitmap
HBITMAP hbDesktop = CreateCompatibleBitmap( hdc, width, height);

// use the previously created device context with the bitmap
SelectObject(hDest, hbDesktop);

// copy from the desktop device context to the bitmap device context
// call this once per 'frame'
BitBlt(hDest, 0,0, width, height, hdc, 0, 0, SRCCOPY);

// after the recording is done, release the desktop context you got..
ReleaseDC(NULL, hdc);

// ..delete the bitmap you were using to capture frames..
DeleteObject(hbDesktop);

// ..and delete the context you created
DeleteDC(hDest);

I'm not saying this is the fastest, but the BitBlt operation is generally very fast if you're copying between compatible device contexts.

For reference, Open Broadcaster Software implements something like this as part of their "dc_capture" method, although rather than creating the destination context hDest using CreateCompatibleDC they use an IDXGISurface1, which works with DirectX 10+. If there is no support for this they fall back to CreateCompatibleDC.

To change it to use a specific application, you need to change the first line to GetDC(game) where game is the handle of the game's window, and then set the right height and width of the game's window too.

Once you have the pixels in hDest/hbDesktop, you still need to save them to a file. If you're doing screen capture, though, you would probably want to buffer a certain number of frames in memory and write them to the video file in chunks, so I will not point to code for saving a static image to disk.
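
As a rough sketch of that idea (my addition, not from the referenced comment): pull the pixels out of hbDesktop into a reusable buffer each frame, and hand that buffer to whatever queues or encodes the video. It reuses the hdc, hDest, hbDesktop, width and height from the snippet above and assumes <vector> is included.

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height;              // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

std::vector<BYTE> frame(width * height * 4);   // allocated once, reused for every frame

// per frame:
BitBlt(hDest, 0, 0, width, height, hdc, 0, 0, SRCCOPY);
GetDIBits(hDest, hbDesktop, 0, height, frame.data(), &bmi, DIB_RGB_COLORS);
// ...append 'frame' to an in-memory queue; a worker thread writes/encodes it in chunks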

  • I thought creating a memory device context was to reduce flickering (i.e. via double buffering). If you don't use CreateCompatibleDC, will it really have to convert between contexts (what is this other context anyway)? This is just something trivial I want to know. – someguy Mar 02 '11 at 16:13
  • http://msdn.microsoft.com/en-us/library/dd183370%28VS.85%29.aspx Excerpt: *If the color formats of the source and destination device contexts do not match, the BitBlt function converts the source color format to match the destination format.* –  Mar 04 '11 at 02:08
  • Some evidence to substantiate that would be good. Have you published a performance comparison somewhere, or seen an accurate one? –  Jun 04 '12 at 00:22
  • I found this code fragment today, and I wonder whether it leaks memory? – bodacydo Aug 10 '13 at 21:54
  • Try profiling it. As I said in the post, I'm referencing someone who has experience with GDI. If it leaks memory, and you know how to fix it, edit the post to eliminate the leak. –  Aug 12 '13 at 00:48
  • I got about 5 fps with this method. This was on a laptop with integrated graphics, so I would expect better from a desktop with a real graphics card, but still, it is very slow. – Timmmm Sep 18 '13 at 18:21
  • @Timmmm I've added a reference to the way OBS implements this. Hopefully that might speed things up a bit for you. –  Sep 19 '13 at 02:10
  • There is a leak when you do not release `hbDesktop`. Make sure to call `DeleteObject(hbDesktop);` when you no longer need the HBITMAP. – abm Nov 20 '18 at 16:35
  • Doesn't this leave the mouse cursor out? – NetMage May 19 '20 at 23:23
  • You can only get at most ~30 fps with GDI BitBlt. (You need at least two threads: one calling BitBlt and one moving the data elsewhere; doing it on a single thread will lower the fps.) It also does not capture the mouse cursor. – bronze man Aug 20 '21 at 05:51

I wrote video capture software, similar to FRAPS, for DirectX applications. The source code is available, and my article explains the general technique. Look at http://blog.nektra.com/main/2013/07/23/instrumenting-direct3d-applications-to-capture-video-and-calculate-frames-per-second/

With respect to your questions about performance:

  • DirectX should be faster than GDI, except when you are reading from the front buffer, which is very slow. My approach is similar to FRAPS (reading from the back buffer): I intercept a set of methods on the Direct3D interfaces (a rough sketch of the interception idea appears after this list).

  • For video recording in real time (with minimal impact on the application), a fast codec is essential. FRAPS uses its own lossless video codec. Lagarith and HUFFYUV are generic lossless video codecs designed for real-time applications. You should look at them if you want to output video files.

  • Another approach to recording screencasts could be to write a Mirror Driver. According to Wikipedia: When video mirroring is active, each time the system draws to the primary video device at a location inside the mirrored area, a copy of the draw operation is executed on the mirrored video device in real-time. See mirror drivers at MSDN: http://msdn.microsoft.com/en-us/library/windows/hardware/ff568315(v=vs.85).aspx.
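
To make the first bullet concrete, here is a rough sketch of the vtable-patching style of interception (my illustration, not the code from the linked article). It assumes it runs inside the target process (e.g. from an injected DLL), that device is the game's IDirect3DDevice9*, and that 42 is the usual EndScene slot in the d3d9 vtable; hookedEndScene and installHook are made-up names.

#include <d3d9.h>
#include <windows.h>

typedef HRESULT (WINAPI *EndScene_t)(IDirect3DDevice9*);
static EndScene_t realEndScene = NULL;

static HRESULT WINAPI hookedEndScene(IDirect3DDevice9* device)
{
    IDirect3DSurface9* backBuffer = NULL;
    if (SUCCEEDED(device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer)))
    {
        // Copy the frame here, e.g. GetRenderTargetData into a D3DPOOL_SYSTEMMEM surface,
        // then push it to an encoder thread.
        backBuffer->Release();
    }
    return realEndScene(device);   // let the game draw and present as usual
}

void installHook(IDirect3DDevice9* device)
{
    void** vtable = *reinterpret_cast<void***>(device);
    const int endSceneSlot = 42;   // commonly cited index of EndScene in the IDirect3DDevice9 vtable
    DWORD oldProtect;
    VirtualProtect(&vtable[endSceneSlot], sizeof(void*), PAGE_READWRITE, &oldProtect);
    realEndScene = reinterpret_cast<EndScene_t>(vtable[endSceneSlot]);
    vtable[endSceneSlot] = reinterpret_cast<void*>(&hookedEndScene);
    VirtualProtect(&vtable[endSceneSlot], sizeof(void*), oldProtect, &oldProtect);
}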

Hernán

I use d3d9 to get the backbuffer, and save that to a png file using the d3dx library:

    IDirect3DSurface9 *surface ;

    // GetBackBuffer
    idirect3ddevice9->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &surface ) ;

    // save the surface
    D3DXSaveSurfaceToFileA( "filename.png", D3DXIFF_PNG, surface, NULL, NULL ) ;

    SAFE_RELEASE( surface ) ; // the usual macro: if( surface ) { surface->Release() ; surface = NULL ; }

To do this you should create your swap chain with

d3dpps.SwapEffect = D3DSWAPEFFECT_COPY ; // for screenshots.

(So you guarantee the back buffer isn't mangled before you take the screenshot.)
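
For context, a minimal sketch of the present-parameters setup this implies (hwnd is whatever window handle you render to, an assumption here; other fields are left at sensible defaults):

D3DPRESENT_PARAMETERS d3dpps = {};
d3dpps.Windowed         = TRUE;
d3dpps.hDeviceWindow    = hwnd;                  // your render window (assumed)
d3dpps.BackBufferFormat = D3DFMT_UNKNOWN;        // take the current display format (windowed mode)
d3dpps.BackBufferCount  = 1;                     // D3DSWAPEFFECT_COPY requires a single back buffer
d3dpps.SwapEffect       = D3DSWAPEFFECT_COPY;    // back buffer preserved across Present, safe to read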

bobobobo
  • Makes sense, thanks. Do you know what the difference between this and `GetRenderTarget` is? – someguy Jul 20 '11 at 18:08
  • That just gets the current render target (could be another offscreen surface if someone is rendering to texture at the moment you call). – bobobobo Jul 20 '11 at 19:16

In my impression, the GDI approach and the DX approach are different in nature. Painting with GDI uses a flush model: the frame is drawn, cleared, and then redrawn in the same buffer, which results in flickering in games that require a high frame rate.

  1. Why is DX quicker? In DX (and in graphics generally), a more mature method called double-buffered rendering is used: two buffers are present, and while the front buffer is being presented to the hardware you can render into the other buffer. When a frame finishes rendering, the system swaps the buffers (locking one for presentation to the hardware and releasing the previous one), so rendering efficiency is greatly improved.
  2. Why does turning off hardware acceleration make capture quicker? Even with double-buffered rendering, the time available per frame is limited. Modern graphics hardware applies a lot of optimization during rendering, such as anti-aliasing, which is computation-intensive; if you don't need that level of quality, you can disable it and save some time.

I think what you really need is a replay system, which is what others have discussed.

zinking
  • See the discussion as to why a replay system isn't feasible. The screencasting program isn't for any specific game. – someguy Jun 05 '12 at 11:42

I wrote a class that implemented the GDI method for screen capture. I too wanted extra speed so, after discovering the DirectX method (via GetFrontBuffer) I tried that, expecting it to be faster.

I was dismayed to find that GDI performs about 2.5x faster. After 100 trials capturing my dual monitor display, the GDI implementation averaged 0.65s per screen capture, while the DirectX method averaged 1.72s. So GDI is definitely faster than GetFrontBuffer, according to my tests.

I was unable to get Brandrew's code working to test DirectX via GetRenderTargetData. The screen copy came out purely black. However, it could copy that blank screen super fast! I'll keep tinkering with that and hope to get a working version to see real results from it.

rotanimod
  • Thank you for the information. I haven't tested out Brandrew's code, but I know that taking the `GetRenderTargetData` approach works. Maybe I'll write my own answer when I finish my application. Or, you could update yours once you've gotten everything working. – someguy Mar 20 '11 at 10:42
  • 0.65 per Screencapture?! A good GDI implementation (keeping devices around, etc.) should do 30fps in 1920x1200 easily on a modern computer. – Christopher Oezbek May 08 '12 at 14:54
  • I assume the image quality rendered by GDI is definitely poorer than DX – zinking Jun 06 '12 at 01:53
  • I have performed this test in C# with SlimDX and, surprisingly, found the same results. Perhaps this may have to do with the fact that, using SlimDX, one has to create a new stream and a new bitmap for every frame update, instead of creating it once, rewinding and keep overwriting the same location. – Cesar Jan 06 '13 at 02:21
  • Just a clarification: actually, the "same results" was referring to GDI being faster - as @Christopher mentioned, 30fps+ was very doable and still left plenty of spare CPU. – Cesar Jan 06 '13 at 02:45
  • Looking back, I feel I had to have been mistakenly including some post-processing following the screen-grab in my benchmarking. However, if so, I'm sure that the same post-processing was being included for both GDI and DX. So I still feel this is evidence supporting GDI as the faster method. – rotanimod Jan 08 '13 at 14:02

You want the Desktop Duplication API (available since Windows 8). That is the officially recommended way of doing it, and it's also the most CPU efficient.

One nice feature it has for screencasting is that it detects window movement, so you can transmit block deltas when windows get moved around, instead of raw pixels. Also, it tells you which rectangles have changed, from one frame to the next.

The Microsoft example code is quite complex, but the API is actually simple and easy to use. I've put together an example project that is much simpler:

Simplified Sample Code

WindowsDesktopDuplicationSample

Microsoft References

Desktop Duplication API

Official example code (my example above is a stripped down version of this)
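
Not taken from either of those samples; just a rough sketch of the core calls in the duplication loop so you can see the shape of the API (device and output selection hard-coded, error recovery simplified):

#include <d3d11.h>
#include <dxgi1_2.h>

void captureLoop()
{
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

    // Walk from the D3D device to the output (monitor) it lives on, then duplicate it.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIOutput* output = nullptr;
    adapter->EnumOutputs(0, &output);              // 0 = first monitor
    IDXGIOutput1* output1 = nullptr;
    output->QueryInterface(__uuidof(IDXGIOutput1), (void**)&output1);
    IDXGIOutputDuplication* duplication = nullptr;
    output1->DuplicateOutput(device, &duplication);

    for (;;)
    {
        DXGI_OUTDUPL_FRAME_INFO frameInfo;
        IDXGIResource* resource = nullptr;
        HRESULT hr = duplication->AcquireNextFrame(500, &frameInfo, &resource);
        if (hr == DXGI_ERROR_WAIT_TIMEOUT)
            continue;                              // nothing on screen changed
        if (FAILED(hr))
            break;                                 // e.g. DXGI_ERROR_ACCESS_LOST: recreate the duplication

        ID3D11Texture2D* frame = nullptr;
        resource->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&frame);
        // Copy 'frame' to a staging texture and Map() it, or feed it to a GPU encoder here.
        // frameInfo plus GetFrameDirtyRects()/GetFrameMoveRects() tell you which regions changed.
        frame->Release();
        resource->Release();
        duplication->ReleaseFrame();
    }
}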

Ben Harper
  • The Github project works perfectly on Windows 10, tested with web video and Resident Evil 7. The best thing is the CPU load scales with the graphics upgrade rate. – jw_ Apr 17 '20 at 09:31
  • as soon as you launch GPU-Z this program stops grabbing the screen. – Marino Šimić May 25 '20 at 20:04

For C++ you can use: http://www.pinvoke.net/default.aspx/gdi32/BitBlt.html
This may however not work for all types of 3D applications/video apps. Then this link may be more useful, as it describes 3 different methods you can use.

Old answer (C#):
You can use System.Drawing.Graphics.CopyFromScreen, but it is not very fast.

A sample project I wrote doing exactly this: http://blog.tedd.no/index.php/2010/08/16/c-image-analysis-auto-gaming-with-source/

I'm planning to update this sample using a faster method like Direct3D: http://spazzarama.com/2009/02/07/screencapture-with-direct3d/

And here is a link for capturing to video: How to capture screen to be video using C# .Net?

Tedd Hansen
  • Ah, I forgot to mention I'm programming in C (possibly C++) and am not planning to use .NET. Terribly sorry :/. – someguy Feb 21 '11 at 17:35
  • I was already aware of BitBlt (GDI). I'll look into Direct3D, though. Thanks! – someguy Feb 21 '11 at 17:41
  • I was looking into this a few weeks back, but haven't gotten around to implementing it yet. Direct3D is !!way!! faster than the C# builtin method which is using GDI+. – Tedd Hansen Feb 21 '11 at 17:47
  • I've updated my original post. According to the link, DirectX is slow when having to call GetFrontBufferData(). Is this anything to consider when recording game footage? Could you contextualise this for me? – someguy Feb 21 '11 at 17:56
  • I haven't tested it yet, so I can only speak of what I remember reading (and finally deciding on), which was DirectX. I came across a good blog post on it, but I can't find it when I try to search for it now. Sorry. :/ – Tedd Hansen Feb 21 '11 at 18:18
  • GDI is slow, so it's not suitable to the problem domain; DirectX or OpenGL would be the only sensible recommendation. –  Jun 03 '12 at 00:41

A few things I've been able to glean: apparently using a "mirror driver" is fast though I'm not aware of an OSS one.

Why is RDP so fast compared to other remote control software?

Also, apparently some convoluted uses of StretchRect are faster than BitBlt:

http://betterlogic.com/roger/2010/07/fast-screen-capture/comment-page-1/#comment-5193

And the one you mentioned (FRAPS hooking into the D3D DLLs) is probably the only way for D3D applications, but it won't work for Windows XP desktop capture. So now I just wish there were a FRAPS equivalent, speed-wise, for normal desktop windows... anybody?

(I think with aero you might be able to use fraps-like hooks, but XP users would be out of luck).

Also apparently changing screen bit depths and/or disabling hardware accel. might help (and/or disabling aero).

https://github.com/rdp/screen-capture-recorder-program includes a reasonably fast BitBlt based capture utility, and a benchmarker as part of its install, which can let you benchmark BitBlt speeds to optimize them.

VirtualDub also has an "opengl" screen capture module that is said to be fast and do things like change detection http://www.virtualdub.org/blog/pivot/entry.php?id=290

rogerdpack
  • I wonder, will it be faster to use your "screen-capture-recorder" or use BitBlt myself? Is there some optimization in your project? – blez Oct 22 '15 at 14:39
  • If you use BitBlt yourself you might avoid an extra memcpy (memcpy typically isn't the largest bottleneck, though it does add some time). I was only mentioning it here for its benchmark utility, but if you need something for DirectShow then it's nice. – rogerdpack Oct 22 '15 at 15:26

You can try the C++ open-source project WinRobot @git, a powerful screen capturer:

CComPtr<IWinRobotService> pService;
HRESULT hr = pService.CoCreateInstance(__uuidof(ServiceHost) );

//get active console session
CComPtr<IUnknown> pUnk;
hr = pService->GetActiveConsoleSession(&pUnk);
CComQIPtr<IWinRobotSession> pSession = pUnk;

// capture screen
pUnk = 0;
hr = pSession->CreateScreenCapture(0,0,1280,800,&pUnk);

// get screen image data(with file mapping)
CComQIPtr<IScreenBufferStream> pBuffer = pUnk;

Supports:

  • UAC Window
  • Winlogon
  • DirectShowOverlay
Cayman
  • That's... a lot of low level hooks. The capture speed is amazing if you have admin rights tho. – toster-cx May 18 '16 at 20:39
  • I found the winrobot very powerful and smooth. – ashishgupta_mca Feb 22 '17 at 06:49
  • I studied WinRobot code and saw nothing groundbreaking wrt screen capture: it is using same CreateCompatibleDC..BitBlt. Unless there is some magic when this is performed in service context? – shekh May 25 '18 at 11:14
  • @shekh: Your study was too superficial. The code uses IDirectDrawSurface7->BltFast() to copy the screen from the screen drawing surface to a copy DD surface, then it uses a Filemapping to copy the image. It is quite complex because the code is running in a service where you cannot easily access the desktops. – Elmue Mar 02 '20 at 22:39

Screen recording can be done in C# using the VLC API. I wrote a sample program to demonstrate this. It uses the LibVLCSharp and VideoLAN.LibVLC.Windows libraries. You can achieve many more video-rendering features using this cross-platform API.

For API documentation see: LibVLCSharp API Github

using System;
using System.IO;
using System.Reflection;
using System.Threading;
using LibVLCSharp.Shared;

namespace ScreenRecorderNetApp
{
    class Program
    {
        static void Main(string[] args)
        {
            Core.Initialize();

            using (var libVlc = new LibVLC())
            using (var mediaPlayer = new MediaPlayer(libVlc))
            {
                var media = new Media(libVlc, "screen://", FromType.FromLocation);
                media.AddOption(":screen-fps=24");
                media.AddOption(":sout=#transcode{vcodec=h264,vb=0,scale=0,acodec=mp4a,ab=128,channels=2,samplerate=44100}:file{dst=testvlc.mp4}");
                media.AddOption(":sout-keep");

                mediaPlayer.Play(media);
                Thread.Sleep(10*1000);
                mediaPlayer.Stop();
            }
        }
    }
}
Karthick
  • To whom it may concern: Please note that libvlc is released under GPL. If you don't intend to release your code under GPL then don't use libvlc. – Sebastian Cabot Mar 08 '20 at 16:09
  • That's not quite accurate, LibVLC is LGPL - it was relicensed in 2011. The VLC application itself remains GPL: https://www.videolan.org/press/lgpl-libvlc.html – Andrew Apr 29 '20 at 04:48

This might not be the fastest method, but it is lightweight and easy to use. The image is returned as an integer array containing the RGB colors.

#define WIN32_LEAN_AND_MEAN
#define VC_EXTRALEAN
#include <Windows.h>
int* screenshot(int& width, int& height) {
    HDC hdc = GetDC(NULL); // get the desktop device context
    HDC cdc = CreateCompatibleDC(hdc); // create a device context to use yourself
    height = (int)GetSystemMetrics(SM_CYVIRTUALSCREEN); // get the width and height of the screen
    width  = 16*height/9; // only capture left monitor for dual screen setups, for both screens use (int)GetSystemMetrics(SM_CXVIRTUALSCREEN);
    HBITMAP hbitmap = CreateCompatibleBitmap(hdc, width, height); // create a bitmap
    SelectObject(cdc, hbitmap); // use the previously created device context with the bitmap
    BITMAPINFOHEADER bmi = { 0 };
    bmi.biSize = sizeof(BITMAPINFOHEADER);
    bmi.biPlanes = 1;
    bmi.biBitCount = 32;
    bmi.biWidth = width;
    bmi.biHeight = -height; // flip image upright
    bmi.biCompression = BI_RGB;
    bmi.biSizeImage = 4*width*height; // 4 bytes per pixel at 32 bpp
    BitBlt(cdc, 0, 0, width, height, hdc, 0, 0, SRCCOPY); // copy from desktop device context to bitmap device context
    ReleaseDC(NULL, hdc);
    int* image = new int[width*height];
    GetDIBits(cdc, hbitmap, 0, height, image, (BITMAPINFO*)&bmi, DIB_RGB_COLORS);
    DeleteObject(hbitmap);
    DeleteDC(cdc);
    return image;
}

The above code combines this answer and this answer.

Example on how to use it:

int main() {
    int width=0, height=0;
    int* image = screenshot(width, height);

    // access pixel colors for position (x|y)
    const int x=0, y=0;
    const int color = image[x+y*width];
    const int red   = (color>>16)&255;
    const int green = (color>> 8)&255;
    const int blue  =  color     &255;

    delete[] image;
}
ProjectPhysX

I do it myself with DirectX and think it's as fast as you would want it to be. I don't have a quick code sample, but I found this, which should be useful. The DirectX 11 version should not differ a lot; DirectX 9 maybe a little more, but that's the way to go.

cppanda

DXGI Desktop Capture

A project that captures the desktop image with DXGI duplication and saves it to a file in different image formats (*.bmp; *.jpg; *.tif).

This sample is written in C++. You also need some experience with DirectX (D3D11, D2D1).

What the Application Can Do

  • If you have more than one desktop monitor, you can choose which one to capture.
  • Resize the captured desktop image.
  • Choose different scaling modes.
  • You can show or hide the mouse icon in the output image.
  • You can rotate the image for the output picture, or leave it as default.
gerdogdu

I realize the following suggestion doesn't answer your question, but the simplest method I have found to capture a rapidly-changing DirectX view, is to plug a video camera into the S-video port of the video card, and record the images as a movie. Then transfer the video from the camera back to an MPG, WMV, AVI etc. file on the computer.

Pierre

Windows.Graphics.Capture

Enables apps to capture environments, application windows, and displays in a secure, easy to use way with the use of a system picker UI control.

https://blogs.windows.com/windowsdeveloper/2019/09/16/new-ways-to-do-screen-capture/
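
For orientation, a rough C++/WinRT sketch of the flow that page describes (my addition; assumes a C++/WinRT project, that the GraphicsCaptureItem comes from the system picker or the interop APIs, and that the Direct3D device interop setup is done elsewhere; startCapture is a made-up name):

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Graphics.Capture.h>
#include <winrt/Windows.Graphics.DirectX.h>
#include <winrt/Windows.Graphics.DirectX.Direct3D11.h>

using namespace winrt;
using namespace winrt::Windows::Graphics::Capture;
using namespace winrt::Windows::Graphics::DirectX;

void startCapture(Direct3D11::IDirect3DDevice const& device, GraphicsCaptureItem const& item)
{
    // Frame pool with two BGRA8 buffers the size of the captured window/monitor.
    auto framePool = Direct3D11CaptureFramePool::Create(
        device, DirectXPixelFormat::B8G8R8A8UIntNormalized, 2, item.Size());

    framePool.FrameArrived([](auto const& pool, auto const&)
    {
        auto frame = pool.TryGetNextFrame();
        // frame.Surface() is a Direct3D surface you can copy or hand to an encoder.
    });

    auto session = framePool.CreateCaptureSession(item);
    session.StartCapture();
    // Note: keep framePool and session alive for as long as you want frames;
    // they are shown as locals here only for brevity.
}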

Rex L