For ASCII, CP-437, CP-1252, ISO-8859-1, or similar single-byte code pages, the number of characters will be equal to the number of bytes.
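For example, a minimal C# sketch (assuming a hypothetical file named input.txt containing ISO-8859-1 text) that confirms the one-byte-per-character relationship:

```csharp
// Sketch: for a single-byte encoding, character count equals byte count.
// "input.txt" is a hypothetical file assumed to contain ISO-8859-1 text.
using System;
using System.IO;
using System.Text;

class SingleByteCount
{
    static void Main()
    {
        byte[] bytes = File.ReadAllBytes("input.txt");
        string text = Encoding.GetEncoding("ISO-8859-1").GetString(bytes);

        // Every byte maps to exactly one character, so these two numbers match.
        Console.WriteLine($"Bytes: {bytes.Length}, Characters: {text.Length}");
    }
}
```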
If the file is in UTF-16, you cannot know the exact number of characters from the number of bytes (characters outside the Basic Multilingual Plane take four bytes instead of two), but it will be close to the number of bytes divided by 2. You can, however, calculate exactly how much memory is needed to hold the file in a .NET string: since .NET uses UTF-16 internally, it will be the size of the file plus a constant overhead. The Length of such a string will be the number of bytes divided by 2 (minus one for the BOM, if the file has one).
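A sketch of the same comparison for a UTF-16 file (again with a hypothetical input.txt, here assumed to be UTF-16LE):

```csharp
// Sketch: for a UTF-16 file, the .NET string Length is roughly the byte count / 2.
// "input.txt" is a hypothetical file assumed to be UTF-16LE encoded.
using System;
using System.IO;
using System.Text;

class Utf16Count
{
    static void Main()
    {
        long byteCount = new FileInfo("input.txt").Length;

        // ReadAllText decodes the file into a UTF-16 string; a leading BOM,
        // if present, is consumed and does not appear in the result.
        string text = File.ReadAllText("input.txt", Encoding.Unicode);

        Console.WriteLine($"File size: {byteCount} bytes");
        Console.WriteLine($"String length: {text.Length} UTF-16 code units (~bytes / 2)");
    }
}
```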
If the file is in UTF-8 (or any other variable-width encoding), then the number of characters cannot be determined from the number of bytes alone. For UTF-8 it can be anywhere from one character per byte (pure ASCII) down to one character per four bytes, since each character takes between one and four bytes. It just depends on the data.
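A small illustration of how the byte count per character varies in UTF-8 (the sample strings are just examples):

```csharp
// Sketch illustrating how the byte count per character varies in UTF-8.
using System;
using System.Text;

class Utf8Widths
{
    static void Main()
    {
        string[] samples = { "hello", "héllo", "こんにちは" };

        foreach (string s in samples)
        {
            int bytes = Encoding.UTF8.GetByteCount(s);
            Console.WriteLine($"\"{s}\": {s.Length} chars, {bytes} UTF-8 bytes");
        }
        // "hello":      5 chars,  5 bytes (ASCII: 1 byte per character)
        // "héllo":      5 chars,  6 bytes (é takes 2 bytes)
        // "こんにちは": 5 chars, 15 bytes (each kana takes 3 bytes)
    }
}
```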
If the file is in UTF-32 (which is extremely unlikely), then the number of characters will be exactly the length of the file in bytes divided by four. But even though this is the exact number of characters, it does not tell you the length of the .NET string created from this file, because characters outside the Basic Multilingual Plane are stored as surrogate pairs (two UTF-16 code units each), so the answer still depends on what you intend to do with the information.
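A sketch showing the difference between the UTF-32 code point count and the .NET string Length (the emoji U+1F600 is just an arbitrary example of a character outside the Basic Multilingual Plane):

```csharp
// Sketch: in UTF-32 every code point is exactly 4 bytes, but the .NET string
// Length can still be larger because characters outside the BMP become
// surrogate pairs in UTF-16.
using System;
using System.Text;

class Utf32Count
{
    static void Main()
    {
        string text = "A\U0001F600";                // 'A' plus one emoji (U+1F600)

        byte[] utf32 = Encoding.UTF32.GetBytes(text);
        int codePoints = utf32.Length / 4;          // exact character count

        Console.WriteLine($"UTF-32 bytes:  {utf32.Length}");  // 8
        Console.WriteLine($"Code points:   {codePoints}");    // 2
        Console.WriteLine($"string.Length: {text.Length}");   // 3 (emoji is a surrogate pair)
    }
}
```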