1 is very hard; there are too many different APIs involved.
2 The hard part is making a fake monitor. If you instead buy a $10 device called an "HDMI dummy plug", it becomes relatively simple, with a 100% documented API. Use the Desktop Duplication API to grab the texture of monitor 1, apply whatever effect you want, and show the result on monitor 2. For good performance, implement the processing entirely on the GPU, e.g. render a full-screen quad with a custom pixel shader.
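A minimal sketch of the capture side, assuming monitor 1 is output 0 on the default adapter; error handling and the render pass for monitor 2 are omitted:

```cpp
// Capture the desktop of one monitor with the Desktop Duplication API.
#include <d3d11.h>
#include <dxgi1_2.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CaptureLoop()
{
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> context;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, nullptr, &context);

    // Walk from the D3D device to the output we want to duplicate.
    ComPtr<IDXGIDevice> dxgiDevice;
    device.As(&dxgiDevice);
    ComPtr<IDXGIAdapter> adapter;
    dxgiDevice->GetAdapter(&adapter);
    ComPtr<IDXGIOutput> output;
    adapter->EnumOutputs(0, &output);  // output 0 = monitor 1 (assumption)
    ComPtr<IDXGIOutput1> output1;
    output.As(&output1);

    ComPtr<IDXGIOutputDuplication> duplication;
    output1->DuplicateOutput(device.Get(), &duplication);

    for (;;)
    {
        DXGI_OUTDUPL_FRAME_INFO info;
        ComPtr<IDXGIResource> resource;
        if (FAILED(duplication->AcquireNextFrame(16, &info, &resource)))
            continue;  // timeout: no new frame yet

        ComPtr<ID3D11Texture2D> frame;
        resource.As(&frame);
        // `frame` now holds the desktop image of monitor 1. Bind it as a
        // shader resource, render a full-screen quad with your effect into
        // the swap chain of a window on monitor 2, then Present().

        duplication->ReleaseFrame();
    }
}
```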
3 will work, but it's very hard to do.
There’s another way. It’s tricky to implement and uses undocumented APIs, but it’s quite reliable in my experience, at least on Windows 7. Write a DLL that injects itself into dwm.exe, the Windows “Desktop Window Manager” process that composes everything visible on the desktop. After the DLL is injected, create a new thread, call D3D11CreateDeviceAndSwapChain in that thread, then use e.g. MinHook to intercept the Present (and ideally also ResizeBuffers) methods of the IDXGISwapChain interface. If that succeeds, dwm.exe will call functions from your DLL every time it presents a frame, or when the desktop resolution changes. In your Present hook you can then do your stuff, e.g. add another render pass implementing your effect, then call the original implementation to actually present the result to the desktop.
This is easy in theory but quite tricky in practice. E.g. it's hard to debug dwm.exe; you'll have to rely on logging, or maybe use a virtual machine with a remote debugger. It's also not portable across Windows versions. Another limitation: it won't work for full-screen apps like video games, which bypass dwm.exe. For games, it will only work with the "borderless full-screen window" in-game setting.
Update: another approach, much simpler. Create a topmost full-screen window with per-pixel transparency. The OS has supported these for decades; set the WS_EX_LAYERED and WS_EX_TRANSPARENT extended style bits. You won't be able to do grayscale, because you can only overlay your own content on top of the desktop, not read what's underneath, but edges, scanlines, and glitches are totally doable. For best performance, use a GPU-centric API to render that window, e.g. Direct2D or D3D in C++. It's also much easier to debug: either use 2 monitors, or position the transparent window so it occupies a rectangle in the corner, leaving enough screen space for the IDE. Here's an example (not mine).