Well, it is a strange question, since the immediate answer is straightforward: as you lose memory to memory leaks, you will eventually run out of memory. How big a problem this is for a specific program depends on how large each leak is, how often the leaks occur, and for how long the program keeps running. That's all there is to it.
A program that allocates relatively little memory and/or is not run continuously might not suffer any problems from memory leaks at all. But a program that runs continuously will eventually run out of memory, even if it leaks very slowly.
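Just as a minimal illustration (not taken from any particular program), here is a sketch in C of what such a slow, continuous leak looks like: each iteration allocates a block and deliberately loses the pointer, so the process's memory use only ever grows until an allocation finally fails or the OS steps in.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t leaked_mib = 0;

    for (;;) {
        /* Allocate 1 MiB and deliberately lose the pointer: a classic leak. */
        char *block = malloc(1024 * 1024);
        if (block == NULL) {
            /* The allocation finally failed: the process has run out of
             * memory (or address space, depending on the platform). */
            fprintf(stderr, "malloc failed after leaking %zu MiB\n", leaked_mib);
            return 1;
        }
        memset(block, 0, 1024 * 1024);  /* touch the pages so they are really backed */
        leaked_mib += 1;
        /* no free(block) anywhere -- the block is leaked */
    }
}
```

How quickly this becomes a problem is exactly the point above: with a smaller block size or fewer iterations per second, the same program could run for days before anything goes wrong.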
Now, if one looks at it more closely, every block of memory has two sides to it: it occupies a region of addresses in the address space of the process, and it occupies a portion of the actual physical storage.
On a platform without virtual memory, both sides work against you. Once a memory block is leaked, you lose both the address space and the storage.
On a platform with virtual memory the actual storage is a practically unlimited resource. You can leak as much memory as you want and you will never run out of the actual storage (within practically reasonable limits, of course). A leaked memory block will eventually be pushed out to external storage and forgotten for good, so it will not directly affect the program in any negative way. However, it still holds on to its region of address space. And the address space remains a limited resource, which you can run out of.
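A rough way to see the address-space side in isolation is a sketch like the one below (again just an illustration, and platform-dependent): it leaks blocks without ever touching them, so on many systems the untouched pages need little or no physical storage, yet each allocation still carves a region out of the process's address space. Whether the eventual failure comes from address-space exhaustion or from the OS's commit limits depends on the platform; it is most visible in a 32-bit process, which tops out at a few GiB no matter how much RAM and swap the machine has.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t leaked_mib = 0;

    for (;;) {
        /* Reserve 1 MiB of address space and never touch or free it. */
        void *block = malloc(1024 * 1024);
        if (block == NULL) {
            /* What ran out here is primarily the process's address space
             * (or the allocator's/OS's limits), not necessarily RAM. */
            fprintf(stderr, "allocation failed after reserving %zu MiB\n",
                    leaked_mib);
            return 1;
        }
        leaked_mib += 1;
    }
}
```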
One can say that if we take an imaginary virtual-memory platform with an address space overwhelmingly larger than anything our process could ever consume (say, a 2048-bit platform running a typical text editor), then memory leaks would have no consequence for the program. But in real life memory leaks typically constitute a serious problem.