To a certain extent, performance depends on the .NET version your application is running on. Another quick reference is the Microsoft Patterns and Practices article.
There are four options: XmlDocument, XPathNavigator, XmlTextReader, and LINQ to XML. I think the differences between them are worth understanding.
XmlDocument:
It represents the contents of an XML file. When loading from a file, you read the entire file into memory. Generally speaking, XML parsing is much slower with XmlDocument because it is geared towards loading the whole DOM into RAM, so your application's memory consumption can balloon with the size of the document.
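A minimal sketch of the DOM approach (the sample XML, element names, and the `GetTitle` helper here are invented for illustration):

```csharp
using System;
using System.Xml;

class XmlDocumentDemo
{
    public static string GetTitle()
    {
        // LoadXml parses from a string; XmlDocument.Load("books.xml") would
        // likewise pull an entire file into memory before you can query it.
        var doc = new XmlDocument();
        doc.LoadXml("<books><book id=\"1\"><title>CLR via C#</title></book></books>");

        // The whole DOM now sits in RAM and supports random access via XPath.
        XmlNode title = doc.SelectSingleNode("/books/book[@id='1']/title");
        return title.InnerText;
    }

    static void Main()
    {
        Console.WriteLine(GetTitle()); // prints "CLR via C#"
    }
}
```

The convenience of random access and in-place editing is exactly what costs you the memory.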
Using the DOM model and the XmlDocument or XPathDocument classes to parse large XML documents can place significant demands on memory. These demands may severely limit the scalability of server-side Web applications.
XPath or LINQ-To-XML:
If performance is your main concern, I would personally not recommend XPath or LINQ-to-XML queries. XPathNavigator provides a cursor model for navigating and editing XML data.
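A small sketch of the cursor model, again with invented sample XML and a hypothetical `GetTitle` helper:

```csharp
using System;
using System.IO;
using System.Xml.XPath;

class XPathNavigatorDemo
{
    public static string GetTitle()
    {
        // XPathDocument is a read-only, XPath-optimized in-memory store.
        var doc = new XPathDocument(new StringReader(
            "<books><book><title>Effective C#</title></book></books>"));

        // The navigator acts as a movable cursor over the document.
        XPathNavigator nav = doc.CreateNavigator();
        return nav.SelectSingleNode("/books/book/title").Value;
    }

    static void Main()
    {
        Console.WriteLine(GetTitle()); // prints "Effective C#"
    }
}
```

Note that the whole document is still loaded into memory first; the cursor only changes how you traverse it.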
XmlReader:
It can help achieve better performance than XmlDocument, as others have already suggested. XmlReader is an abstract class that provides an API for fast, forward-only, read-only parsing of an XML data stream. It can read from a file, from an internet location, or from any other stream of data. When reading from a file, you don't load the entire document at once, and that is where it shines.
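A streaming sketch with invented sample data and a hypothetical `ReadFirstTitle` helper; the key point is that only the current node is ever materialized:

```csharp
using System;
using System.IO;
using System.Xml;

class XmlReaderDemo
{
    public static string ReadFirstTitle(TextReader source)
    {
        // The reader pulls one node at a time, so memory use stays flat
        // no matter how large the input stream is.
        using (XmlReader reader = XmlReader.Create(source))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "title")
                    return reader.ReadElementContentAsString();
            }
        }
        return null;
    }

    static void Main()
    {
        var xml = new StringReader("<books><book><title>C# in Depth</title></book></books>");
        Console.WriteLine(ReadFirstTitle(xml)); // prints "C# in Depth"
    }
}
```

The trade-off is that you cannot go backwards: if you need a node you already passed, you must re-read the stream.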
XmlTextReader:
XmlTextReader is an implementation of XmlReader. Use XmlTextReader to process XML data quickly in a forward-only, read-only manner without validation, XPath, or XSLT services.
EOL normalization is always on in readers created with XmlReader.Create, which affects XDocument. Normalization is off by default on XmlTextReader, which affects XmlDocument and XmlNodeReader; it can be turned on via the Normalization property.
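A small sketch of that difference, assuming the input contains a literal CRLF (the `ReadText` helper is invented for illustration); the Normalization flag decides whether the line break reaches your code as CR LF or as a single LF:

```csharp
using System;
using System.IO;
using System.Xml;

class NormalizationDemo
{
    public static string ReadText(bool normalize)
    {
        // XmlTextReader leaves EOL normalization off unless you opt in.
        var reader = new XmlTextReader(new StringReader("<a>x\r\ny</a>"));
        reader.Normalization = normalize;
        reader.ReadToFollowing("a");
        return reader.ReadElementContentAsString();
    }

    static void Main()
    {
        // With normalization off, the CR survives; with it on, CRLF becomes LF.
        Console.WriteLine(ReadText(false).Contains("\r")); // True
        Console.WriteLine(ReadText(true).Contains("\r"));  // False
    }
}
```

This is worth keeping in mind when comparing hashes or lengths of text content read through the two APIs.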
Design Considerations
Benchmark Test