20

I am learning C++ by reading Stroustrup's "Principles and Practice Using C++".

In the section about pre- and post-conditions there is the following example function:

int area(int length, int width)
// calculate area of a rectangle;
// pre-conditions: length and width are positive
// post-condition: returns a positive value that is the area
{
    if (length<=0 || width <=0) 
        error("area() pre-condition");

    int a = length*width;

    if (a<=0) 
        error("area() post-condition");

    return a;
}

What confused me is the task about this code:

Find a pair of values so that the pre-condition of this version of area holds, but the post-condition doesn’t.

Are there values of int for which the pre-condition holds but the post-condition does not?

Pavel_K
  • No there are not, except values causing undefined behavior. Luckily, there is a way to check this: http://stackoverflow.com/questions/199333/how-to-detect-integer-overflow-in-c-c – g24l Jan 06 '16 at 01:10

9 Answers

34

Are there values of int for which the pre-condition holds but the post-condition does not?

Yes, there are input values that can cause the post-condition to fail: any pair for which

int a = length*width;

overflows the positive int range (std::numeric_limits<int>::max()) and the compiler's implementation happens to yield a negative value in that case.


As others noted in their answers, having length*width fall outside the range (0, std::numeric_limits<int>::max()] is actually undefined behavior, which renders the post-condition useless, because once overflow occurs, any value of a might result.

The key to fixing this is given in @Deduplicator's answer: the pre-condition needs to be improved.


In defense of Bjarne Stroustrup's reasons for giving that example:

I assume he wanted to point out that such undefined behavior might lead to unexpected negative values in the post-condition check, and to surprising results given the naive assumption verified by the pre-condition.

πάντα ῥεῖ
  • 2
    Needs correction. The post-condition does not hold **even** if the overflow yields a positive value. – Anonymous Coward Dec 31 '15 at 16:41
  • @JoseAntonioDuraOlmos: How is a positive value `<=0`, pray tell? – Lightness Races in Orbit Dec 31 '15 at 17:15
  • 3
    @LightnessRacesinOrbit Indeed a positive value is not <=0. But bear in mind that the post condition is "returns a positive value that **is the area**". If the multiplication overflows into a positive integer the result is **NOT** the area. – Anonymous Coward Dec 31 '15 at 17:19
  • 7
    @LightnessRacesinOrbit For example, 46000*46000 is 2116000000, this is in bounds. Bump it up to 47000*47000 and you get -2085967296, out of bounds. Keep going up to 66000 and you get 61032704, which is positive, but is not the 4356000000 that is the true area. – corsiKa Dec 31 '15 at 17:27
  • @JoseAntonioDuraOlmos THX for pointing out. The post condition might not catch all cases, but that wasn't in question primarily. – πάντα ῥεῖ Jan 01 '16 at 00:41
27

No, there aren't any values that will, within the bounds of defined behavior of standard C++, violate the post-condition. However, there are values that still can make the function behave incorrectly, namely values so large that their product does not fit within an integer. Try passing 200'000 and 15'000.

Due to the way most compilers implement C++, you might see the post-condition being violated, but what you're actually observing is undefined behavior due to integer overflow.

Sebastian Redl
12

The answer is that his pre-condition check is incomplete, even though it is at the same time too restrictive (it rejects zero).
He failed to include a check that the product can be represented, instead of resulting in UB:

#include <cassert>
#include <limits>

int area(int length, int width) {
    // calculate area of a rectangle
    assert(length >= 0 && width >= 0 && (!width
        || std::numeric_limits<int>::max() / width >= length));
    int a = length * width;
    assert(a >= 0); // Not strictly necessary - the math is easy enough
    return a;
}
Deduplicator
5

What comes to mind is signed overflow. It is undefined behavior but might yield a negative value.
Try std::numeric_limits<int>::max() and 2.

cadaniluk
4

Yes. Suppose you are using a 16-bit machine, so int is 2 bytes and the maximum value is +32767. Then in the following:

{
    length = 500, width = 100;
    if (length<=0 || width <=0) error("area() pre-condition");
    int a = length*width;   // a = 500 * 100 = 50000
    if (a<=0) error("area() post-condition");
    return a;
}

the final value will be a = -15536 (50000 - 65536), because the result wraps around into a negative value, so the post-condition check fires.

It all depends on the range.

3

INT_MAX will fail to fulfill the post-condition when used for both length and width for all conforming compilers.

One might be tempted to say that, since the standard guarantees that INT_MAX >= 32767, INT_MAX*INT_MAX will always be greater than INT_MAX and thus not representable in an int, which is defined as being able to hold a maximum value of INT_MAX.
It is a good argument, and it is actually what happens most often: with most compilers you will get an overflow.

But to cover all bases we need to be aware that the C++ standard states :

3.4.3
1 undefined behavior
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements

2 NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

3 EXAMPLE An example of undefined behavior is the behavior on integer overflow.

So it is a bit more serious than not getting the right value for the area. When INT_MAX is used for both length and width (or any other combination whose result is not representable), there is no guarantee of what the compiled program will do. Anything can happen; from the likely outcomes like overflows or crashes to unlikely ones like formatting your disk.

Anonymous Coward
  • I agree it's bad practice to code this way, but it's rather easy to predict what current compilers will do in practice. An aggressive compiler still can't optimize away the `a<0` check, because it's needed to handle the normal negative * positive = negative case. A check like `(x+1) > x` [will optimize away completely for signed int, but not for unsigned int](http://goo.gl/sD53ka). Also @JimJim2000: see http://teaching.idallen.com/dat2343/10f/notes/040_overflow.txt for carry (unsigned wraparound) vs. overflow (signed wraparound) terminology – Peter Cordes Jan 01 '16 at 13:55
  • http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html explains *why* it's helpful for compilers to do this: more efficient code for loop counters, for example. – Peter Cordes Jan 01 '16 at 13:56
3

Multiplication of values that overflow the bit representation of the value type is undefined, because more than one bit can be lost to the overflow. Thus you could end up with a positive or negative sign bit, and the number of lost bits is variable.

Example 1: INT_MAX * 2: the binary result is correct, but since the high bit is the sign bit, it is not correctly represented for its type.

Example 2: INT_MAX * 4: one bit is lost to overflow, and the sign bit is incorrect, as in the previous example.

Example 3: (INT_MAX + 1) * 2 = 0: all set bits are lost to overflow, but the sign bit is correct.

I am using an 8-bit binary representation to make it easier to read, and to show why this happens.

0111 1111              // Max positive signed value
+1
1000 0000              // Sign bit set but binary value is correct
*2
0000 0000              // Upper bit is lost due to overflow

In this case there is both "soft" overflow, where no information is lost but the representation is incorrect, and "hard" overflow, where a bit is no longer present in the result.

The difference between the two is in how the overflow can be detected. Typically hard overflows will be detected by the hardware and require very little work for the software to handle. Soft overflows, however, may require the software to test for the overflow condition explicitly, because the hardware typically does not recognize a sign bit in integer math operations.

How the run-time library handles the overflow is up to the library. Most will ignore it because that is faster, while others may raise an error. Undefined behavior does not mean it might format your disk. The result of a math operation does not alter the flow of code in any way except as the logic of the code dictates. It can ignore the overflow or try to handle it in some way. The standard does not dictate what method to employ if the code or the hardware tries to handle the problem.

Basically there are three possible things that can happen.
1. The overflow is ignored and the returned value is invalid.
2. The overflow is ignored by the run-time library, but the hardware raises an error that is also ignored, resulting in a hard failure of the running code. In this situation it is completely up to the OS to determine what happens next. Going nuts and destroying data would be a poor design decision.
3. The overflow is handled by the run-time library, which must determine the best way to proceed. Typically this means giving the code a chance to catch the error and handle it, or shutting the code down as gracefully as possible.

Mykola
3

Since C++11 there is a boolean value you can test:

std::numeric_limits<int>::is_modulo

If this value is true then signed arithmetic behaves in a wraparound fashion, and there is no undefined behaviour in the original code. A negative value could indeed be produced and so the test in the original code is meaningful.

For further discussion of is_modulo see here

M.M
0

So basically, multiplying positive values mathematically yields a positive value, but that value may not actually fit the result type.

Your pre-condition is incomplete, and your post-condition is also invalid. You can get not only negative values but also positive values that are smaller than the inputs; all you need is input values large enough that the wrap-around goes past zero again, i.e. a long wrap-around.

You can use this :

// highestOneBitPosition is assumed to return the 1-based position of
// the highest set bit of its argument (and 0 for an argument of 0).
bool multiplication_is_safe(uint32_t a, uint32_t b) {
    size_t a_bits=highestOneBitPosition(a), b_bits=highestOneBitPosition(b);
    return (a_bits+b_bits<=32);
}

to guard against overflow, but then you would want to employ additional checks for false positives (pairs flagged as unsafe whose product actually fits).

Alternatively, if performance is not that much of an issue, you can use an arbitrary-precision library such as GMP (its mpz type). If performance is an issue and you want to write assembly for a CPU that has an overflow flag, you can do just that. Your compiler may also be able to help, e.g. g++ has -fno-strict-overflow, or you can cast to unsigned int after the pre-condition check.

At any rate, most of these solutions do not actually solve the problem that the result may be wrong, i.e. that you might get a smaller area than the true one.

So your only safe choice is to allow only multiplications known to be safe, as shown herein; doing that you reject some valid cases, but not that many.

g24l