To begin, it's important to understand something of how the CLR handles memory. The CLR uses a generational garbage collector, where an object is moved to a higher-order generation each time it survives a collection. Additionally, there is a special generation, called the "Large Object Heap" (LOH), for objects over a certain size (85,000 bytes). Your JSON string is almost certainly ending up here. The important thing to know is that the LOH is rarely collected; worse, it's almost never compacted. This means that even after an object has been collected and removed from memory, the virtual address space it occupied within your process is still tied up... and remember that a 32-bit process has only 2GB of address space by default.
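As a quick illustration (a sketch; 85,000 bytes is the documented LOH threshold), you can watch a large allocation skip the young generations entirely:

```vb.net
Module LohDemo
    Sub Main()
        ' A small, freshly-allocated array starts life in generation 0...
        Dim smallArray(99) As Byte
        Console.WriteLine(GC.GetGeneration(smallArray))   ' 0

        ' ...but an array at or over the 85,000-byte threshold goes
        ' straight to the Large Object Heap, which the runtime reports
        ' as the oldest generation.
        Dim largeArray(89999) As Byte
        Console.WriteLine(GC.GetGeneration(largeArray))   ' 2
    End Sub
End Module
```

The LOH is only ever collected alongside generation 2, which is why `GC.GetGeneration()` reports large objects as already being in the oldest generation.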
That out of the way, we can see things in your code that may cause a problem. If (and that "if" will be important in a moment) you're doing straight-up string concatenation, this excerpt:

    "{""root"":" + JsonContents + "}"

would end up creating two additional copies of your string: one for the `"{""root"":" + JsonContents` portion, and a second new copy for the trailing `+ "}"`. Thankfully, the compiler will typically rewrite that into something a little more efficient, and you'll only end up with one additional copy. While I'm here, let's not forget all the CPU work of moving those bytes from the old strings into the new string.
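For the curious, that rewrite amounts to something like this (illustrative; `JsonContents` stands in for the variable from your code):

```vb.net
' The compiler typically lowers a + b + c into a single String.Concat
' call, which allocates the final string once instead of building an
' intermediate copy for each +.
Dim wrapped As String = String.Concat("{""root"":", JsonContents, "}")
```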
One or two additional copies normally isn't too much of a problem, though. I normally see people get into trouble with code that uses a doubling algorithm: it reads through a stream of data and, every time its buffer fills up, allocates a new buffer twice the size of the old one. Most of the collection types in .Net work this way. I'm not familiar with the internals of the `JSONConvert` type, but it's possible the `DeserializeXmlNode()` method works this way. If that's the case, you'll need to find another way to create your XML document.
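To make that failure mode concrete, here's a minimal sketch of the doubling pattern (hypothetical code; I'm not claiming this is what `JSONConvert` actually does internally):

```vb.net
Module DoublingDemo
    Private buffer(4095) As Char
    Private length As Integer = 0

    Sub Append(c As Char)
        If length = buffer.Length Then
            ' Buffer is full: allocate a new one twice the size and copy
            ' everything over. A Char is 2 bytes, so once the buffer passes
            ' ~42,500 characters, every one of these temporary copies lands
            ' on the Large Object Heap, and the abandoned old buffers leave
            ' holes in the address space that are never compacted away.
            Dim bigger(buffer.Length * 2 - 1) As Char
            Array.Copy(buffer, bigger, length)
            buffer = bigger
        End If
        buffer(length) = c
        length += 1
    End Sub
End Module
```

Reading a large document this way can briefly require roughly three times the document's size in address space: the old buffer, the new buffer, and the data still to come.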
We don't have to look that far, though, to find a process creating new string objects over and over. Take a look at the documentation for `File.ReadAllText()`:

> This method opens a file, reads each line of the file, and then adds each line as an element of a string.

Uh oh. That sounds like it's concatenating lines to a string over and over. If this were really the cause of your error, I'd expect the `File.ReadAllText()` call to be where the process breaks. However, it's possible the extra copies created by the string concatenation I showed in the beginning, plus any memory used for the results of the `DeserializeXmlNode()` call and the `CType()` cast (remember: if `CType()` is actually doing any work, those will be two separate large objects), are the straws that broke the camel's back.
You know how big the file will be, so we can do a lot better by reading from a `StreamReader` and writing each line into a `StringBuilder` object created using the constructor overload that pre-allocates space for the buffer. That way, none of the intermediate steps forces the buffer to grow and re-copy itself on the large object heap. But I suspect it won't be enough, and you'll need to find an alternative to the `JSONConvert` type for creating your `XmlDocument` object.
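A sketch of that approach (the file path is hypothetical, and I'm using the file length as a rough capacity hint):

```vb.net
Imports System.IO
Imports System.Text

Module PreSizedRead
    Sub Main()
        Dim path As String = "C:\data\input.json"   ' hypothetical path

        ' Pre-size the StringBuilder from the file length so its buffer
        ' never has to grow and re-copy. (CInt overflows past 2GB, but at
        ' that point no single string could hold the file anyway.)
        Dim capacity As Integer = CInt(New FileInfo(path).Length) + 16
        Dim sb As New StringBuilder("{""root"":", capacity)

        ' Each ReadLine allocation is small enough to stay off the LOH.
        Using reader As New StreamReader(path)
            While Not reader.EndOfStream
                sb.AppendLine(reader.ReadLine())
            End While
        End Using
        sb.Append("}"c)

        Dim wrappedJson As String = sb.ToString()
    End Sub
End Module
```

The key design choice is the `StringBuilder(String, Integer)` overload: one up-front allocation for the final buffer instead of a chain of doubling copies.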
Of course, it's possible that your JSON file is just that large. In that case, you may need to split this into multiple calls into the database, reading from the file a section at a time. No single object in .Net is allowed to exceed 2GB, even on 64-bit systems (the process can grow larger, but individual objects cannot, unless you opt in to the `gcAllowVeryLargeObjects` setting, which only relaxes the limit for arrays).
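If you do go the chunked route, the shape looks something like this (a sketch; the 64K chunk size and the `ProcessSection` helper are just illustrative choices):

```vb.net
Imports System.IO

Module ChunkedRead
    Sub Main()
        Dim path As String = "C:\data\input.json"   ' hypothetical path
        Dim chunk(65535) As Char                    ' 64K chars per read

        Using reader As New StreamReader(path)
            Dim read As Integer = reader.ReadBlock(chunk, 0, chunk.Length)
            While read > 0
                ' Hand each section to the database (or a streaming
                ' parser) instead of holding the whole file in memory.
                ProcessSection(New String(chunk, 0, read))
                read = reader.ReadBlock(chunk, 0, chunk.Length)
            End While
        End Using
    End Sub

    Sub ProcessSection(section As String)
        ' Placeholder for the per-section database call.
    End Sub
End Module
```

Only the 64K chunk is ever alive at once, so nothing here comes anywhere near the LOH threshold.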