2000+ nested objects are not that many, and serializing them won't be that slow, even with RTTI (disk access or compression will be much slower). With a direct manual SaveToStream serialization, it is very fast, if you use FastMM4 (the default memory manager since Delphi 2006).
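For instance, a minimal manual SaveToStream could look like the sketch below (TPerson, fName and fAge are hypothetical field names; nested objects would simply call their own SaveToStream recursively):

    uses Classes; // System.Classes in unit-scoped Delphi versions

    type
      TPerson = class
      private
        fName: string;
        fAge: Integer;
      public
        procedure SaveToStream(aStream: TStream);
        procedure LoadFromStream(aStream: TStream);
      end;

    procedure TPerson.SaveToStream(aStream: TStream);
    var
      len: Integer;
    begin
      len := Length(fName);
      aStream.WriteBuffer(len, SizeOf(len));    // length-prefix the string
      if len > 0 then
        aStream.WriteBuffer(Pointer(fName)^, len * SizeOf(Char));
      aStream.WriteBuffer(fAge, SizeOf(fAge));  // plain fields as raw binary
    end;

    procedure TPerson.LoadFromStream(aStream: TStream);
    var
      len: Integer;
    begin
      aStream.ReadBuffer(len, SizeOf(len));
      SetLength(fName, len);
      if len > 0 then
        aStream.ReadBuffer(Pointer(fName)^, len * SizeOf(Char));
      aStream.ReadBuffer(fAge, SizeOf(fAge));
    end;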
Perhaps you could change your algorithm and use dynamic arrays instead (there is an open source serializer here). But your data may not fit this kind of record.
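As a rough sketch of why dynamic arrays are so fast: an array of records with no managed fields (no string, interface, or nested dynamic array) is one flat memory block, so it can be saved and loaded in a single buffer operation (TPoint3D is a hypothetical record type):

    uses Classes;

    type
      TPoint3D = packed record
        X, Y, Z: Double; // no managed fields, so the array is one flat block
      end;
      TPoint3DArray = array of TPoint3D;

    procedure SavePoints(const aPoints: TPoint3DArray; aStream: TStream);
    var
      count: Integer;
    begin
      count := Length(aPoints);
      aStream.WriteBuffer(count, SizeOf(count));
      if count > 0 then
        aStream.WriteBuffer(aPoints[0], count * SizeOf(TPoint3D));
    end;

    procedure LoadPoints(var aPoints: TPoint3DArray; aStream: TStream);
    var
      count: Integer;
    begin
      aStream.ReadBuffer(count, SizeOf(count));
      SetLength(aPoints, count);
      if count > 0 then
        aStream.ReadBuffer(aPoints[0], count * SizeOf(TPoint3D));
    end;

If your records do contain strings or nested arrays, each managed field must be serialized explicitly, which is exactly why your data may not fit this approach.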
Or never/seldom release memory, and only use references to objects or records. You can keep an in-memory pool of objects, with some kind of manual garbage collection, and only handle references to objects (or records).
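A minimal (non-thread-safe) pool could look like this; TMyObject and the pool class name are hypothetical:

    uses Classes;

    type
      TMyObject = class // hypothetical payload class
      public
        Data: Integer;
      end;

      TObjectPool = class
      private
        fFree: TList; // instances kept alive for reuse
      public
        constructor Create;
        destructor Destroy; override;
        function Acquire: TMyObject;
        procedure Release(aInstance: TMyObject);
      end;

    constructor TObjectPool.Create;
    begin
      inherited Create;
      fFree := TList.Create;
    end;

    destructor TObjectPool.Destroy;
    var
      i: Integer;
    begin
      for i := 0 to fFree.Count - 1 do
        TMyObject(fFree[i]).Free; // the pool owns released instances
      fFree.Free;
      inherited;
    end;

    function TObjectPool.Acquire: TMyObject;
    begin
      if fFree.Count > 0 then
      begin
        Result := TMyObject(fFree[fFree.Count - 1]);
        fFree.Delete(fFree.Count - 1); // reuse instead of allocating
      end
      else
        Result := TMyObject.Create;   // allocate only when the pool is empty
    end;

    procedure TObjectPool.Release(aInstance: TMyObject);
    begin
      fFree.Add(aInstance); // no Free here: keep the memory for next Acquire
    end;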
If you have strings inside the objects, you could avoid reallocating them and instead maintain a global list of used strings: reference counting and copy-on-write will make this much faster than standard serialization and allocation (even with FastMM4).
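A sketch of such a global string pool, written here as a small interning unit built on a sorted TStringList (the unit and function names are hypothetical, and this version is not thread-safe):

    unit StringInterning;

    interface

    function Intern(const aText: string): string;

    implementation

    uses Classes;

    var
      Pool: TStringList;

    function Intern(const aText: string): string;
    var
      i: Integer;
    begin
      if Pool.Find(aText, i) then
        Result := Pool[i] // share the pooled buffer: only the refcount changes
      else
      begin
        Pool.Add(aText);
        Result := aText;  // first occurrence becomes the pooled instance
      end;
    end;

    initialization
      Pool := TStringList.Create;
      Pool.Sorted := True; // Find() uses binary search on a sorted list
    finalization
      Pool.Free;

    end.

Every string returned by Intern for the same text shares one heap buffer, so assigning it around is just a reference-count increment until someone actually modifies it.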
In all cases, it is worth profiling your real application. Human guesses about performance bottlenecks are wrong most of the time. Only trust a profiler and a wall clock. Perhaps your current implementation is not so slow, and the real bottleneck is not the object processing but somewhere else.
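Even without a full profiler, a plain wall-clock measurement already gives hard numbers. A sketch using TStopwatch (available since Delphi 2010; on older versions GetTickCount does the job), where ProcessAllObjects is a hypothetical stand-in for the code path you suspect:

    program TimingDemo;
    {$APPTYPE CONSOLE}
    uses Diagnostics; // System.Diagnostics in unit-scoped versions

    procedure ProcessAllObjects;
    begin
      // hypothetical placeholder for the code path under suspicion
    end;

    var
      Timer: TStopwatch;
    begin
      Timer := TStopwatch.StartNew;
      ProcessAllObjects;
      Timer.Stop;
      Writeln('Elapsed: ', Timer.ElapsedMilliseconds, ' ms');
    end.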
Do not optimize too early. "Make it right before you make it fast. Make it clear before you make it faster. Keep it right when you make it faster." (Kernighan and Plauger, The Elements of Programming Style)