
I have a 32-bit integer called dim, and I'm allocating an array of unsigned chars with new that is dim * dim * dim elements big. Visual Studio warns that I should cast dim to a 64-bit integer before doing this, but I feel like it doesn't really matter.

unsigned char* cells = new unsigned char[dim * dim * dim];
C26451: Arithmetic overflow: Using operator '*' on a 4 byte value and then casting the result to a 8 byte value. Cast the value to the wider type before calling operator '*' to avoid overflow (io.2).

dim isn't going to be bigger than 1000.

paper man

2 Answers


Since dim isn't going to be bigger than 1,000, the largest your expression can be is 1,000 × 1,000 × 1,000, i.e. 1,000,000,000. That fits comfortably into a signed 32-bit integer (whose maximum is 2,147,483,647, and which your int apparently is), so your program does not actually risk an overflow.
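
For the concrete bound in the question you can even let the compiler confirm the arithmetic; a minimal sketch (the 1,000 limit is the one you stated):

#include <limits>

// 1,000 * 1,000 * 1,000 = 1,000,000,000, which is below the
// 2,147,483,647 maximum of a signed 32-bit int, so the multiplication
// cannot overflow for dim <= 1000.
static_assert(1'000'000'000LL <= std::numeric_limits<int>::max(),
              "1000^3 fits in a signed 32-bit int");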

However, the compiler doesn't know this. Sometimes it has enough information to "prove" such constraints itself; sometimes it doesn't, and apparently here it doesn't. And, since the array size you pass ends up as a 64-bit std::size_t anyway, it's telling you that, if you ever did have larger dim values, you could avoid an otherwise avoidable overflow by pre-casting to that wider type before multiplying.
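
For illustration, the pre-cast the analyzer is suggesting would look roughly like this (a sketch based on the snippet in the question; std::size_t is the type operator new[] ultimately receives):

#include <cstddef>

int main()
{
    int dim = 1000; // per the question, never larger than 1,000

    // Widening the first operand makes the whole expression evaluate as
    // std::size_t (64-bit on x64), so the analyzer no longer sees a
    // 32-bit multiply whose result is only widened afterwards.
    unsigned char* cells = new unsigned char[static_cast<std::size_t>(dim) * dim * dim];

    delete[] cells;
}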

(Of course, if your expression could end up larger than even a 64-bit unsigned integer can hold, then all bets would be off no matter what you did.)

Questions about C26451 come up from time to time because it does seem a little overzealous on occasion. Personally I'd consider disabling it, but if you subscribe to the "never disable a warning" school of thought, just do as it says. It won't really hurt you, and this particular suggestion is arguably a good habit to get into anyway.
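
If you do go the other way and silence it, MSVC can suppress a single occurrence of an analysis warning with a pragma; something along these lines (a sketch, using the warning number from the question):

int dim = 1000;

// Suppress C26451 for the next line only; project-wide analysis
// settings stay untouched.
#pragma warning(suppress : 26451)
unsigned char* cells = new unsigned char[dim * dim * dim];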

Asteroids With Wings

You should pay attention to the warning; operator new[] expects an array size of type std::size_t.
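
To see why the conversion order matters: the multiplication is carried out in int first, and only the finished (possibly already overflowed) result is converted to std::size_t. A hypothetical dim of 2,000 (larger than the bound stated in the question) makes the difference visible:

#include <cstddef>

int main()
{
    int dim = 2000; // hypothetical; the question's dim never exceeds 1,000

    // 2000^3 = 8,000,000,000 does not fit in a 32-bit int, so this
    // overflows *before* the conversion to std::size_t (and signed
    // overflow is undefined behaviour).
    std::size_t bad = dim * dim * dim;

    // Casting first widens the arithmetic itself, giving 8,000,000,000.
    std::size_t good = static_cast<std::size_t>(dim) * dim * dim;

    (void)bad;
    (void)good;
}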

Phil Brubaker