First, SVN has two different repository backends: BDB (Berkeley DB) and FSFS (a filesystem-based store). How the repository is laid out on disk depends on this choice, with BDB typically being a bit larger. Which one are you using?
If you use FSFS, then you should read up on sharding: every revision you commit, however small, is stored in its own file, and that file occupies at least one filesystem block on disk, typically 2 KB to 16 KB. Multiply that minimum by thousands of revisions and you can get very big numbers. The good news is that you can run a command that condenses each shard of revision files into a single pack file:
svnadmin pack /path/to/repository
This can greatly reduce the repository's on-disk size.
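As a minimal sketch of the before/after check, assuming `svnadmin` is on your PATH (the repository path is a placeholder, substitute your own):

```shell
# Measure on-disk size, pack the FSFS shards, then measure again.
du -sh /path/to/repository
svnadmin pack /path/to/repository
du -sh /path/to/repository
```

Packing is safe to run on a live repository and is a no-op on shards that are already packed, so it can be scheduled regularly.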
Or the space problem might be the massive-number-of-files-per-commit problem you mention.
In any case, you ask why the dump file is so much smaller than the repository. The dump file is a single file in a format that essentially records every commit ever made to the repository; this is a very terse representation (especially if --deltas is used, which stores each revision as a diff against the previous one). Because everything lives in one file, the per-revision file overhead described above is avoided.
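For reference, producing such a dump is a one-liner (the path and output filename here are examples):

```shell
# Dump the entire history into one file; --deltas stores each
# revision as a diff against its predecessor, keeping it compact.
svnadmin dump --deltas /path/to/repository > repository.dump
```

The resulting file also compresses well with gzip if you want to archive it.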
I used to use and champion SVN in a previous organisation. Recently I moved to the Mercurial DVCS (also known as Hg, and similar to Git). Once you have made the switch, it's difficult to imagine going back. Anyway, here is a quote from Softpedia about repository size:
Disk space: When the Mozilla project was ported from SVN to Mercurial (very similar to Git in performance), disk space usage went down from 12GB to 420MB, 30 times smaller than the original size. Git is supposed to use the same storage algorithms, so file size should be around the same value.
You might want to investigate what would happen in your case if you switched to Hg or Git. If the saving is anywhere near as dramatic as Softpedia's example, you could recommend Hg or Git to your management.
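A low-risk way to measure this is a one-off trial conversion with git-svn, assuming it is installed and your repository uses the standard trunk/branches/tags layout (the URL below is a placeholder):

```shell
# Trial conversion to Git; --stdlayout assumes trunk/branches/tags.
git svn clone --stdlayout http://svn.example.com/repo repo-git
# Compare this figure against your SVN repository's on-disk size.
du -sh repo-git/.git
```

The clone is read-only from SVN's point of view, so it does not disturb the existing repository while you evaluate the numbers.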