It depends upon the instruction set architecture and the ABI. They both influence the organization (and growth direction) of the call stack.
Usually the call stack grows downwards, but there have been ISAs where that is not the case (see this). And some ISAs (e.g. IBM Z series mainframes) don't have any hardware call stack; there, the call stack is just an ABI convention about register usage.
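For a rough illustration, here is a small C sketch that compares the address of a local variable in a caller and in a callee; on most mainstream ABIs the callee's local sits at a lower address, i.e. the stack grows downwards. Such pointer comparisons are not guaranteed by the C standard, and inlining or optimization may perturb the result, so treat this only as an experiment, not as a portable test.

    #include <stdio.h>
    #include <stdint.h>

    /* Non-portable sketch: compare the address of a local in the caller
       with the address of a local in the callee. On most common ABIs the
       callee's local is at a lower address (stack grows downwards). */
    static void callee(uintptr_t caller_local_addr) {
        int callee_local = 0;
        if ((uintptr_t)&callee_local < caller_local_addr)
            printf("stack appears to grow downwards\n");
        else
            printf("stack appears to grow upwards\n");
    }

    int main(void) {
        int caller_local = 0;
        callee((uintptr_t)&caller_local);
        return 0;
    }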
Most application software (e.g. your game, word processor, compiler, ...) runs on top of some operating system, in a process that has its own virtual address space (so in virtual memory).
Read some book on OSes, e.g. Operating Systems: Three Easy Pieces.
In practice (unless you are coding an operating system kernel that manages virtual memory) you care mostly about the virtual address space, which is often made of numerous discontinuous segments. On Linux, use /proc/ (see proc(5)) to explore it, e.g. try cat /proc/$$/maps in your terminal. And notice that for a multi-threaded application, each thread has its own call stack. So the "top" or "bottom" of the virtual address space doesn't really matter and doesn't make much sense.
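Here is a minimal, Linux-specific C sketch that dumps the process's own memory map by reading /proc/self/maps; each output line describes one mapping (segment) of the virtual address space. Look for the [heap] and [stack] lines, and for one stack mapping per thread:

    #include <stdio.h>

    /* Copy /proc/self/maps (Linux-specific) to stdout, one line per
       mapping of this process's virtual address space. */
    int main(void) {
        FILE *maps = fopen("/proc/self/maps", "r");
        if (!maps) {
            perror("fopen /proc/self/maps");
            return 1;
        }
        int c;
        while ((c = fgetc(maps)) != EOF)
            putchar(c);
        fclose(maps);
        return 0;
    }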
If (as most people) you are writing some application software above an OS, in any programming language other than assembler, then as a developer you don't care about real memory, but about virtual memory and the resident set size. You don't care about the stack growth direction (it is managed by the OS, the compiler, and the ISA for the automatic variables of your code); you only need to avoid stack overflow. It often happens that some pages (e.g. of your code segments), perhaps those containing never-executed code, never get into RAM and stay paged out.

In practice, most of the (virtual) memory of a process is not for its call stack: you usually allocate memory in the heap. My Firefox browser (on my Linux desktop) has a virtual address space of 2.3 gigabytes (in more than a thousand segments), but only 124 kilobytes of stack. Read about memory management. The call stack is often limited (e.g. to a few megabytes).
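As an illustration, here is a minimal POSIX C sketch that queries the soft limit on the stack size with getrlimit(RLIMIT_STACK), then allocates a large buffer on the heap with malloc instead of declaring it as a big automatic (stack) array, which could overflow the stack. The 64 MiB size is just an arbitrary example, chosen to exceed a typical 8 MiB default stack limit.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void) {
        /* Query the soft limit on the stack size (often a few MiB). */
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY)
                printf("stack soft limit: unlimited\n");
            else
                printf("stack soft limit: %llu bytes\n",
                       (unsigned long long)rl.rlim_cur);
        }

        /* 64 MiB as an automatic array would likely overflow a default
           stack; allocated on the heap it is fine (if memory allows). */
        size_t big = 64 * 1024 * 1024;
        char *buf = malloc(big);
        if (!buf) {
            perror("malloc");
            return 1;
        }
        buf[0] = 'x';
        free(buf);
        return 0;
    }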