I dug around a bit on the web and came across the Magnification API.
I discovered that a callback can be registered on the magnification thread using the MagSetImageScalingCallback API.
As far as I understand, this callback is called whenever a new frame needs to be drawn into the magnifier window registered with the MagSetWindowSource API.
The raw screen bitmap and all related information are passed to the callback, whose job is to transform the bitmap that will be drawn to the window once the callback returns.
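The registration itself can be sketched roughly like this (a hedged sketch, not the full sample: error handling is omitted, `hwndMag` is assumed to be an already-created magnifier window of class `WC_MAGNIFIER`, and `MagImageScaling` is my own name for the callback):

```cpp
// Sketch: register an image scaling callback for a magnifier window.
// Requires <windows.h> and <magnification.h>; link against Magnification.lib.
#include <windows.h>
#include <magnification.h>

// Forward declaration; the body is discussed below.
BOOL WINAPI MagImageScaling(HWND hwnd, void *srcdata, MAGIMAGEHEADER srcheader,
                            void *destdata, MAGIMAGEHEADER destheader,
                            RECT unclipped, RECT clipped, HRGN dirty);

void SetupMagnifier(HWND hwndMag)
{
    MagInitialize();

    // Called on the magnification thread each time a new frame is produced.
    MagSetImageScalingCallback(hwndMag, MagImageScaling);

    // Tell the magnifier which portion of the screen to capture
    // (here: the whole primary monitor).
    RECT sourceRect = { 0, 0,
                        GetSystemMetrics(SM_CXSCREEN),
                        GetSystemMetrics(SM_CYSCREEN) };
    MagSetWindowSource(hwndMag, sourceRect);
}
```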
In my opinion the name "ImageScalingCallback" is misleading about its real usage.
Anyway, I finally realized how this can be used in my application:
1) The magnifier window is created and set as fullscreen-topmost.
2) The callback is invoked as soon as the first frame needs to be drawn.
3) The original bitmap is copied to another buffer.
4) The original bitmap content is replaced with a flat black bitmap.
5) The callback returns and the modified bitmap is drawn to the magnifier window.
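Steps 3) to 5) live in the callback body. A minimal sketch, assuming a 32-bpp pixel format where an all-zero buffer renders as flat black, and using a hypothetical `SaveFrameCopy` helper for step 3):

```cpp
// Sketch of the callback for steps 3)-5).
// Assumes <windows.h>, <magnification.h> and <string.h> are included.
#include <windows.h>
#include <magnification.h>
#include <string.h>

// Hypothetical helper: stash the raw capture for later processing
// (e.g. dump it to a file, as in the modified CodeProject sample).
void SaveFrameCopy(const void *data, size_t size);

BOOL WINAPI MagImageScaling(HWND hwnd, void *srcdata, MAGIMAGEHEADER srcheader,
                            void *destdata, MAGIMAGEHEADER destheader,
                            RECT unclipped, RECT clipped, HRGN dirty)
{
    // 3) Copy the raw screen capture to another buffer.
    //    stride is the byte width of one row, so height * stride
    //    covers the whole image.
    SaveFrameCopy(srcdata, (size_t)srcheader.height * srcheader.stride);

    // 4) Replace the outgoing bitmap with flat black: whatever is in
    //    destdata when the callback returns is drawn to the window.
    memset(destdata, 0, (size_t)destheader.height * destheader.stride);

    // 5) Returning TRUE lets the (now black) frame be drawn.
    return TRUE;
}
```

Because the magnifier window itself is excluded from the capture, `srcdata` keeps receiving the real screen content on every frame even while the window shows only black.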
These steps can be repeated without losing the "capture" capability.
In fact, even if the screen is covered with a black image, this doesn't prevent the Magnification API from capturing the screen.
That's because the window registered as a magnifier window is never included in the capture (even when it is fullscreen).
This is exactly the behavior I was looking for.
I slightly modified the sample "Screenshot using the Magnification library" on the CodeProject web site to implement this behavior. The captured images contained in the srcdata pointer are dumped to a set of files to demonstrate that the capture keeps working and that each image contains an updated capture.
Unfortunately these APIs are deprecated and no replacement has been provided yet.