Concerning this passage from Chapter 1 (A Tutorial Introduction) of Kernighan and Ritchie's The C Programming Language (I've bolded the specific part that I need clarification on and elaborated on it down below):
Given getchar and putchar, you can write a surprising amount of useful code without knowing anything more about input and output. The simplest example is a program that copies its input to its output one character at a time:

read a character
while (character is not end-of-file indicator)
    output the character just read
    read a character

Converting this into C gives:
#include <stdio.h>
/* copy input to output; 1st version */
main()
{
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}
The relational operator != means "not equal to". What appears to be a character on the keyboard or screen is of course, like everything else, stored internally just as a bit pattern. The type char is specifically meant for storing such character data, but any integer type can be used. We used int for a subtle but important reason.
The problem is distinguishing the end of input from valid data. The solution is that getchar returns a distinctive value when there is no more input, a value that cannot be confused with any real character. This value is called EOF, for "end of file". We must declare c to be a type big enough to hold any value that getchar returns. We can't use char since c must be big enough to hold EOF in addition to any possible char. Therefore we use int.
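To check my reading of this paragraph, here is my own sketch of what I think goes wrong when c is declared as char instead of int (this is not from the book, and the exact failure depends on whether char is signed or unsigned on a given platform):

#include <stdio.h>

/* copy input to output, but with c wrongly declared as char */
int main(void)
{
    char c;              /* too narrow to hold EOF as a distinct value */

    c = getchar();       /* getchar()'s int result is truncated to a char */
    while (c != EOF) {   /* if char is unsigned, this is always true and the
                            loop never ends; if char is signed, a real input
                            byte of 0xFF becomes -1 and falsely matches EOF,
                            so the copy stops early */
        putchar(c);
        c = getchar();
    }
    return 0;
}

Either way the program can no longer tell "no more input" apart from a legitimate character, which I take to be the point of the passage.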
My understanding is that char is an integer type that is simply smaller than int (in the same way that Int16, Int32, and Int64 in other languages are all integer types, just able to represent different ranges of numbers).

I get that every character can be represented by an integer of type char, so why can't the EOF value also be represented as a char? Is it because every single value in the char type is already accounted for by a real character, so even one more distinct value is too many for the data type?
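To make the ranges concrete for myself, I put together this small check (I'm assuming the common setup where char is 8 bits and EOF is a negative int, typically -1; the exact numbers are implementation-defined):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* getchar() returns each character as an int in 0..UCHAR_MAX (bytes are
       read as unsigned char), plus one extra out-of-band value, EOF, which is
       negative. That is one more distinct value than a char-sized type can
       hold. */
    printf("char can hold     %d .. %d\n", CHAR_MIN, CHAR_MAX);
    printf("getchar() yields  0 .. %d, or EOF (%d)\n", UCHAR_MAX, EOF);
    printf("int can hold      %d .. %d\n", INT_MIN, INT_MAX);
    return 0;
}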
Any explanation or corrections to my knowledge would be appreciated.