I have a very large data model (C# objects taking GBs of memory) which I want to send efficiently to a Node.js process.
Today I serialize it to a JSON file/buffer and read it from Node.js.
Since many of the data model's sub-objects are referenced from many places, loading the JSON on the Node side makes memory usage balloon, because each shared object is duplicated.
I'm looking for a serialization engine that takes duplicate references into account and is cross-language.
In addition, since I have hundreds of classes, I'd prefer an engine that doesn't require changes to the data-model classes or any code generation.
Any other alternatives are welcome as well.

Avner Levy
-
Generally, if you are serializing an arbitrary graph of objects, you need to create a stable "Object ID" that represents a reference to an object - you can't just use a standard language-supplied object reference (which is effectively a pointer). When you deserialize the graph, you deserialize the Object IDs, and then rebuild the object-ID-to-object-reference relationship (either right after the deserialization, or in *lazy* fashion). Doing it between languages would make this even more *fun* – Flydog57 May 13 '20 at 15:20
-
Found one solution at https://stackoverflow.com/q/15312529/927477 – Avner Levy May 14 '20 at 04:39
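For context, the approach in the linked question relies on Json.NET's `PreserveReferencesHandling`, which emits `$id`/`$ref` markers instead of duplicating shared objects in the JSON. A minimal sketch of how such output could be re-linked into a true shared-reference graph on the Node side (the `resolveReferences` helper below is an illustration, not part of the linked answer):

```javascript
// Re-link a JSON graph serialized with $id/$ref reference markers
// (the format produced by Json.NET's PreserveReferencesHandling).
function resolveReferences(root) {
  const byId = new Map();

  // First pass: index every object that carries an $id.
  function index(node) {
    if (node === null || typeof node !== 'object') return;
    if (node.$id !== undefined) byId.set(node.$id, node);
    for (const value of Object.values(node)) index(value);
  }

  // Second pass: replace {$ref: id} placeholders with the shared object.
  function link(node) {
    if (node === null || typeof node !== 'object') return node;
    if (node.$ref !== undefined) return byId.get(node.$ref);
    for (const key of Object.keys(node)) node[key] = link(node[key]);
    return node;
  }

  index(root);
  return link(root);
}

// Example: two properties that point at the same sub-object.
const parsed = JSON.parse(
  '{"a": {"$id": "1", "name": "shared"}, "b": {"$ref": "1"}}'
);
const graph = resolveReferences(parsed);
console.log(graph.a === graph.b); // true: one instance, not a copy
```

After re-linking, a sub-object referenced from N places occupies memory once instead of N times, which is the duplication the question describes.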