tl;dr: Compilers are complex, and undefined behaviour allows them to do all sorts of things.
int* test;
std::cout << test << std::endl;
Using test (even just to evaluate its own value!) in this manner when it hasn't been given a value is not permitted, so your program has undefined behaviour.
Your compiler apparently uses this fact to take some particular path. Perhaps it's assuming a zero value, or it's prepared to optimise away the variable and leave you with only some hardcoded thing. It has arbitrarily picked zero for that thing, because why not? The value isn't specified by the standard, so that's fine.
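Purely as speculation about the kind of transformation involved (not a claim about what your specific compiler actually emits), the optimiser might effectively reduce the program to something like this:

#include <iostream>

int main()
{
    // Hypothetical result of the optimiser deciding the uninitialised read can
    // never be meaningful and folding it to an arbitrary hardcoded constant:
    std::cout << static_cast<int*>(nullptr) << std::endl;  // prints 0
}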
&test;
This is a separate matter. It is perfectly legal to take the address of an uninitialised object, so this aspect of your program is well-defined. It appears that this triggers a path in the compiler that prepares to create actual, honest-to-god storage for the pointer. This odr-use effectively prevents any of the optimise-it-out machinery from applying. Somehow, that takes it down a road that doesn't trigger the "pretend it's zero" case, and you end up with (possibly) an actual memory read instead; that memory read yields the unspecified value you have come to expect from outputting uninitialised things.
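Again only as a sketch of the scenario being described (the behaviour is still undefined, so none of this is guaranteed), the address-of variant looks like this, and the printed value is now whatever happens to be sitting in the pointer's storage:

#include <iostream>

int main()
{
    int* test;
    &test;                           // legal: odr-uses test, so it must get real storage
    std::cout << test << std::endl;  // still UB, but now (possibly) a genuine read of
                                     // uninitialised memory, printing whatever is there
}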
That value is still "garbage", though. You indicate that you "can" dereference it, that you "can" memmove it, that you "can" work with it without triggering a segmentation fault. But this is all an illusion! Do not "expect" segmentation faults from the use of invalid pointers. That is only one possible result. The operating system doesn't detect all bad accesses (unless you use some debug tool to make it do so), usually only those that stray into unmapped or protected pages.
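To be clear, the following is exactly the kind of code you must not write; it is shown only to illustrate that an invalid access is not obliged to fault:

#include <cstring>
#include <iostream>

int main()
{
    int* p;                          // indeterminate: every use of p below is undefined behaviour
    int x = 0;
    std::memmove(&x, p, sizeof x);   // may appear to "work", may crash, may do anything at all
    std::cout << x << std::endl;     // whatever got copied (if anything) is garbage either way
}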
Anyway, the specifics of the above are complete speculation, but they show the sort of factors that can go into different outcomes of programs with undefined behaviour. Ultimately there is not a lot of point in trying to reason about this sort of code, and there is certainly no point in writing it!