Any ideas/workarounds on how to improve the reliability of my BitBlt() calls when running under VMware? I have a timer that measures the time spent:
SmartTimer debugTimer;
debugTimer.restart();
int ms1 = debugTimer.elapsedTimeMs();
POINT origin = { rect.left, rect.top };
int ms2 = debugTimer.elapsedTimeMs();
ClientToScreen(win_hwnd, &origin);
int ms3 = debugTimer.elapsedTimeMs();
BitBlt(hMemoryDC, 0, 0,
       (rect.right - rect.left) + 1, (rect.bottom - rect.top) + 1,
       hScreenDC, origin.x, origin.y, SRCCOPY);
int ms4 = debugTimer.elapsedTimeMs();
// ms1..ms4 are cumulative elapsed times since restart(),
// so ms4 - ms3 is the cost of the BitBlt() call itself.
if (ms1 > 150 || ms2 > 150 || ms3 > 150 || ms4 > 150) {
    logs(std::format("getPixelData took very long: {} {} {} {}", ms1, ms2, ms3, ms4));
}
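(For context: SmartTimer isn't shown in the question, so here is a minimal stand-in sketched with std::chrono::steady_clock; the class name and both method names are taken from the snippet above, but this implementation is an assumption, not the author's actual code. It makes explicit that elapsedTimeMs() returns the running total since restart(), not a per-step duration.)

```cpp
#include <cassert>
#include <chrono>
#include <thread>

// Hypothetical equivalent of the SmartTimer used above.
// elapsedTimeMs() returns cumulative milliseconds since the last restart(),
// which is why ms1..ms4 in the question are running totals.
class SmartTimer {
public:
    void restart() { start_ = std::chrono::steady_clock::now(); }

    int elapsedTimeMs() const {
        using namespace std::chrono;
        return static_cast<int>(
            duration_cast<milliseconds>(steady_clock::now() - start_).count());
    }

private:
    std::chrono::steady_clock::time_point start_ =
        std::chrono::steady_clock::now();
};
```

steady_clock is the right choice here because it is monotonic: unlike system_clock it cannot jump backwards, so the measured intervals stay meaningful even if the host adjusts the guest's wall clock (which VMware can do).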
In the logs I often see something like "0 0 0 420", indicating that the BitBlt() call itself took a very long time, even in cases where I only BitBlt() a single pixel.
I know that GetPixel() exists for reading a single pixel, but for some reason it was very buggy for me when reading from a game: it would sometimes return 255, 255, 255 for everything. When I switched to BitBlt(), that bug stopped happening.
It also seems like BitBlt() is usually very fast, but occasionally it is very slow.