
Given the following code:

    byte x, xmin, xmax, xstep;
    x = (x + xstep < xmax ? x + xstep : xmax);

the compiler tells me

Cannot implicitly convert type 'int' to 'byte'. An explicit conversion exists (are you missing a cast?) 

Where does the conversion from byte to int happen? And why?

Vertigo

3 Answers


Break it down. We have

sum = expression

Sum is of type byte. What is the type of expression? Break it down. Expression is

summand1 + summand2

Summand1 is of type byte. What type is summand2? Break it down. It is:

test ? consequence : alternative

Test is of type bool. Alternative is of type byte. What type is consequence? Break it down! It is:

summand3 + summand4

That's byte + byte. Byte + byte is int, so consequence is of type int.

Now we have enough information to work out the type of summand2. Consequence is int, alternative is byte, and int is the more general of those two types. (Because every byte is convertible to int but not every int is convertible to byte.)

Therefore the type of summand2 is int. So we have sum equal to a byte plus an int. Byte plus int is int, and therefore we have int assigned to byte. Which is an explicit conversion, not an implicit conversion.
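
To see the same breakdown in code, here is a small sketch (with made-up values and hypothetical names) that pulls each step into its own variable so the compiler-inferred types can be inspected:

    byte b1 = 1, b2 = 2;                  // hypothetical values

    var sum = b1 + b2;                    // byte + byte is computed as int, so sum is int
    var pick = b1 < b2 ? b1 + b2 : b2;    // int vs. byte: the conditional is int
    var total = b1 + pick;                // byte + int is int

    byte result = (byte)total;            // int back to byte needs an explicit cast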

Eric Lippert

Adding a byte to a byte results in an int, according to the MSDN:

Consider, for example, the following two byte variables x and y:

    byte x = 10, y = 20;

The following assignment statement will produce a compilation error, because the arithmetic expression on the right-hand side of the assignment operator evaluates to int by default.

    // Error: conversion from int to byte:
    byte z = x + y;

To fix this problem, use a cast:

    // OK: explicit conversion:
    byte z = (byte)(x + y);
T.J. Crowder
  • Thank you, I didn't realize that they defined a different behaviour for byte, since int ia, ib = int.MaxValue, ic = int.MaxValue; ia = ib + ic; compiles and produces the expected overflow. – Vertigo Feb 24 '12 at 22:33
  • 3
    @Vertigo: The difference is: people are *highly likely* to be working with byte values that overflow a byte when added, and *highly unlikely* to be working with int values that overflow when added. If you are working with ints that big then you should be using longs in the first place. – Eric Lippert Feb 24 '12 at 22:42
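
To make the point in the comments above concrete, here is a small sketch with made-up values: because byte + byte is computed as int, the sum itself is never lost; it is only the explicit cast back to byte that truncates it:

    byte a = 200, b = 100;          // hypothetical values whose sum exceeds a byte

    int asInt = a + b;              // 300: the addition is done in int, nothing is lost
    byte wrapped = (byte)(a + b);   // 44: the explicit cast keeps only the low 8 bits

    Console.WriteLine(asInt);       // 300
    Console.WriteLine(wrapped);     // 44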

Would this work?

byte x, xmin, xmax, xstep;
// assign your variables x, xmin, xmax, and xstep,
// then...
x = ((x + xstep) < xmax) ? (byte)(x + xstep) : xmax;

You only need to cast that one part, probably because it could exceed the range of a byte, but I don't know.

...or really care why. :)

According to T.J. up there, adding a byte to a byte produces an int, so I guess that's the real answer.

But, my version compiles with no complaints with only the one cast.
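
For comparison, a sketch (again assuming the variables have been assigned some values first) that casts the whole conditional instead of just the one branch also compiles; which form you prefer is a matter of taste:

    byte x = 0, xmax = 10, xstep = 1;   // hypothetical values
    // The conditional expression is of type int, so one cast on the whole thing works too
    x = (byte)(x + xstep < xmax ? x + xstep : xmax);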

  • It fails on what, @DavidHeffernan? I just got it to compile in VS2010 before posting this. Not 1 complaint. –  Feb 24 '12 at 22:38
  • 1
    Fair enough, didn't see the cast. But I don't see how this answers the Q. I mean the Q was not, how do I cast, but why do I need to cast? – David Heffernan Feb 24 '12 at 22:40
  • It says the post by T.J. Crowder explains why it is not allowed, and shows where the one cast would be needed to make it work. That said, you may have a point. I did not run the code, so I do not know what behavior the compiler exhibits when the bytes are added together. –  Feb 24 '12 at 22:43