For most systems, the maximum size of virtual memory is determined by the number of virtual address bits supported by the MMU.
For example, for a typical 64-bit 80x86 CPU a virtual address is 64 bits, but only the lowest 48 bits are supported by the MMU, so the size of the virtual address space is 1 << 48 bytes = 256 TiB. Everything else (amount of RAM, swap space, etc.) doesn't matter.
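As a quick sanity check of that arithmetic, here's a minimal C sketch (assuming 48 usable virtual address bits, as on typical 64-bit 80x86 CPUs):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Assumed 48 usable virtual address bits (typical 64-bit 80x86) */
    const unsigned va_bits = 48;
    uint64_t bytes = 1ULL << va_bits;

    printf("%u-bit virtual addresses => %llu bytes (%llu TiB)\n",
           va_bits,
           (unsigned long long)bytes,
           (unsigned long long)(bytes >> 40));   /* 1 TiB = 2^40 bytes */
    return 0;
}
```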
In theory, you can (e.g.) fill the entire virtual address space by mapping the same page of RAM everywhere; and (for 64-bit 80x86) it would only cost 4 KiB of RAM (for the data that's mapped everywhere) plus another 16 KiB of RAM (for the MMU's own data - page tables, etc.). In other words, a measly 20 KiB of physical RAM is enough to fill a whopping 256 TiB of virtual space.
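To show where those numbers come from, here's a sketch that models the trick in ordinary C (it builds the tables in normal memory rather than programming a real MMU, and assumes 4 KiB pages with the usual 4-level layout of 512 entries per table):

```c
#include <stdio.h>
#include <stdint.h>

#define ENTRIES_PER_TABLE 512          /* 512 x 8-byte entries = 4 KiB per table */
#define PAGE_SIZE         4096

/* A minimal model of one 4 KiB page table (any level: PML4, PDPT, PD or PT). */
typedef struct { uint64_t entry[ENTRIES_PER_TABLE]; } page_table_t;

int main(void) {
    /* The single 4 KiB page of data that will appear everywhere. */
    static uint8_t data_page[PAGE_SIZE];

    /* One table per level; in real page tables each entry would hold the
       physical address of the next level plus flag bits. Here we just store
       an address to show the structure. */
    static page_table_t pml4, pdpt, pd, pt;

    for (int i = 0; i < ENTRIES_PER_TABLE; i++) {
        pml4.entry[i] = (uint64_t)(uintptr_t)&pdpt;      /* every PML4 entry -> same PDPT */
        pdpt.entry[i] = (uint64_t)(uintptr_t)&pd;        /* every PDPT entry -> same PD   */
        pd.entry[i]   = (uint64_t)(uintptr_t)&pt;        /* every PD entry   -> same PT   */
        pt.entry[i]   = (uint64_t)(uintptr_t)data_page;  /* every PT entry   -> same page */
    }

    uint64_t mapped   = (uint64_t)ENTRIES_PER_TABLE * ENTRIES_PER_TABLE *
                        ENTRIES_PER_TABLE * ENTRIES_PER_TABLE * PAGE_SIZE;
    uint64_t ram_used = 4 * sizeof(page_table_t) + sizeof(data_page);

    printf("virtual space mapped: %llu TiB\n", (unsigned long long)(mapped >> 40));
    printf("physical RAM used:    %llu KiB\n", (unsigned long long)(ram_used >> 10));
    return 0;
}
```

Because every entry at every level points at the same next-level table, you only need one table per level (4 x 4 KiB) plus the one data page, which is where the 20 KiB figure comes from.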
Of course for practical purposes the kernel often reserves some of the virtual address space (e.g. half of it), so a process can only use the remainder (e.g. 128 TiB). And (for modern 64-bit CPUs), unless you're mapping the same data at different places in the virtual address space (which would be silly/pointless), it's likely that you'll run out of things to put in a virtual address space long before you run out of virtual address space.
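As an illustration of that split, here's a small sketch using the typical higher-half layout on 64-bit 80x86 with 48-bit canonical addresses (the exact split is an OS design choice, so treat the constants as an assumed example):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Assumed higher-half split: user gets the lower canonical half,
       the kernel gets the upper canonical half. */
    uint64_t user_base   = 0x0000000000000000ULL;
    uint64_t user_limit  = 0x00007FFFFFFFFFFFULL;  /* top of the lower canonical half    */
    uint64_t kernel_base = 0xFFFF800000000000ULL;  /* bottom of the upper canonical half */

    printf("user space:   %llu TiB\n",
           (unsigned long long)((user_limit - user_base + 1) >> 40));
    printf("kernel space: %llu TiB\n",
           (unsigned long long)((UINT64_MAX - kernel_base + 1) >> 40));
    return 0;
}
```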
This isn't the case for older 32-bit CPUs, where the virtual address space is a lot smaller (maybe a process can only use 2 GiB out of a 4 GiB total space) and it's a lot easier to run out of space.