You're most likely reading from uninitialized memory. To aid debugging this kind of issue, in debug builds the runtime library (and, depending on the compiler toolchain, also the compiler) inserts code that fills all memory allocations with canary value patterns. These patterns may show up in reads that are out of bounds or from uninitialized memory. Similarly, sanity-checking code tests the integrity of those patterns to detect out-of-bounds writes.
On currently widespread computer architectures, hardware memory protection only works at a certain granularity, namely the page size, which in most cases is 4096 bytes¹. That is far too coarse to catch a read or write that strays just a few bytes past an allocation, which is why those canary values are used to detect memory corruption instead.
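To illustrate, here's a minimal sketch (assuming the MSVC toolchain, whose debug heap fills fresh allocations with the byte pattern 0xCD):

    #include <iostream>

    int main() {
        int *p = new int;   // allocated, but never written to
        // Reading *p is undefined behavior. In an MSVC debug build the
        // debug heap fills fresh allocations with 0xCD bytes, so this
        // will likely print "cdcdcdcd"; in a release build you get
        // whatever garbage happened to be at that address.
        std::cout << std::hex << *p << '\n';
        delete p;
    }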
Anyway, the reason you're seeing different values in debug vs. release builds is that memory is actually initialized differently in each case, and that shows up. What's important for you is that because a difference shows up at all, your code does something wrong, i.e. you have a bug that must be fixed!
If you were building this for Linux, my recommendation would be to run your program through the Valgrind memory debugger. Valgrind is a tool specifically designed for debugging these kinds of errors. It roughly works by running your code on an emulated CPU, tracking each and every memory allocation and access, and telling you, down to the source code line, where illegal accesses happen.
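A typical invocation looks something like this (buggy is a placeholder for your program; -g keeps the debug information Valgrind needs to report source lines):

    g++ -g -O0 buggy.cpp -o buggy
    # --track-origins=yes also reports where an uninitialised value came from
    valgrind --track-origins=yes ./buggy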
There's an SO Q&A on Valgrind substitutes for Windows development: Is there a good Valgrind substitute for Windows?
¹: huge pages (typically 2 MiB) and gigapages (typically 1 GiB) are often available as well, but unless you request those explicitly, it's just the standard page size.