7

The prototypes for getchar() and putchar() are:

int getchar(void);

int putchar(int c);

As its prototype shows, the getchar() function is declared as returning an int. However, you can assign this value to a char variable, as is usually done, because the character is contained in the low-order byte. (The high-order byte is normally zero.)

Similarly, even though putchar() is declared as taking an int parameter, you will generally call it with a character argument. Only the low-order byte of its parameter is actually output to the screen.

What do you mean by high-order and low-order bytes?

rooni
  • An int data type is either 16 or 32 bits - either 2 or 4 bytes. – OldProgrammer Nov 05 '17 at 02:15
  • Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. – Rob Nov 05 '17 at 02:17
  • Burn the book... – wildplasser Nov 05 '17 at 02:18
  • Out of curiosity, what source is this from? – templatetypedef Nov 05 '17 at 02:20
  • Please dont confuse C and C++. Is this one of the HS books, BTW? – wildplasser Nov 05 '17 at 02:23
  • You should *not* normally assign `getchar()` to a `char` variable, because `getchar()` will eventually return `EOF`, which has lots of high-order 1s. (Also because some input characters will be returned as positive integers outside of the range of a signed `char`, which is Undefined Behaviour.) My recommendation: find a better book. – rici Nov 05 '17 at 02:31
  • I'll join the crowd here and suggest getting a better C++ book. There's a great list on the site here if you search for it. – templatetypedef Nov 05 '17 at 02:58

2 Answers

14

In C, the size of an int is implementation-defined, but it is usually 2 or 4 bytes. The high-order byte is the byte that contains the most significant portion of the value, and the low-order byte is the byte that contains the least significant portion. For example, if you have a 16-bit int with the value 5,243, you'd write that in hex as 0x147B. The high-order byte is 0x14, and the low-order byte is 0x7B. A char is only 1 byte, so it is always contained within the low-order byte. When the value is written in hex (left to right), the low-order byte is always the right-most two digits, and the high-order byte is the left-most two digits (assuming all the bytes are written out, including leading 0s).

user1118321
4

I think the best analogy for this would be to look at decimal numbers.

Although this isn't literally how things work, for the purposes of this analogy let's pretend that a char represents a single decimal digit and that an int represents four decimal digits. If you have a char with some numeric value, you could store that char inside of an int by writing it as the last digit of the integer, padding the front with three zeros. For example, the value 7 would be represented as 0007. Numerically, the char value 7 and the int value 0007 are identical to one another, since we padded the int with zeros. The "low-order digit" of the int would be the one on the far right, which has value 7, and the "high-order digits" of the int would be the other three values, which are all zeros.

In actuality, on most systems a char represents a single byte (8 bits), and an int is represented by four bytes (32 bits). You can stuff the value of a char into an int by having the three higher-order bytes all hold the value 0 and the low-order byte hold the char's value. The low-order byte of the int is kinda sorta like the one's place in our above analogy, and the higher-order bytes of the int are kinda sorta like the tens, hundreds, and thousands place in the above analogy.

templatetypedef
  • @templatetypedef Can I infer that when an integer or any other numeric type gets cast to a char variable, that char variable contains the low-order byte of the numeric type? – rooni Nov 05 '17 at 02:44
  • @rimiro, no you can't. If you convert an integer to an `unsigned char`, you will get the low-order bits (at least 8 of them). But converting a number outside of the range to a signed `char` is not defined by the standard, so you have to search the documentation for the compiler(s) you use (which you also need to do to see whether `char` is signed or unsigned). – rici Nov 05 '17 at 04:42
  • Actually, the length of the data types is dependent on the used architecture and the compiler. While `char` is one byte long (8 bits) in most cases, `int` may be either 16 bits long (two bytes) on 8/16-bit systems, or 32 bits long (four bytes) on 16/32-bit systems. Make use of the `sizeof(int)` statement to make sure what length your `int` datatype occupies in memory. – Dare The Darkness Dec 15 '22 at 01:41