The following piece of code has been causing me some issues, specifically the evaluation of diff:

#include <iostream>

template<typename T>
class array2d
{
protected:
    T* arr;
    int w, h;

public:
    array2d() : arr(nullptr), w(0), h(0) {}

    bool init(int w, int h)
    {
        free();
        arr = new (std::nothrow) T[w*h];
        if (arr == nullptr)
        {
            std::cerr << "array2d::init(): Failed to allocate: " << w * h * sizeof(T) << " bytes.\n";
            return false;
        }
        this->w = w;
        this->h = h;
        return true;
    }

    void fill(T v)
    {
        for (int i = 0; i < w*h; ++i)
            arr[i] = v;
    }

    void free()
    {
        if (arr != nullptr)
        {
            delete[] arr;
            arr = nullptr;
            w = 0;
            h = 0;
        }
    }

    template<typename U>
    bool copy(const array2d<U>& v)
    {
        if (!v)
        {
            std::cerr << "array2d::copy(): Input array is empty.\n";
            return false;
        }
        if (w*h != v.width()*v.height())
        {
            if (!init(v.width(), v.height())) return false;
        }
        for (int i = 0; i < w*h; ++i)
            arr[i] = static_cast<T>(v(i));
        return true;
    }

    inline T operator()(int i) const { return arr[i]; }
    inline T& operator()(int i) { return arr[i]; }
    inline T operator[](int i) const { return arr[i]; }
    inline T& operator[](int i) { return arr[i]; }
    inline T operator()(int x, int y) const { return arr[x + y*w]; }
    inline T& operator()(int x, int y) { return arr[x + y*w]; }

    inline int width() const { return w; }
    inline int height() const { return h; }
    inline const T* get() const { return arr; }
    inline T* get() { return arr; }

    operator bool() const { return arr != nullptr; }

    array2d<T>& operator*=(T v)
    {
        for (int i = 0; i < w*h; ++i)
            arr[i] *= v;
        return *this;
    }

    ~array2d() { free(); }
};

typedef unsigned long long uint64;

uint64 computeErrors(const array2d<uint64>& a, const array2d<uint64>& b)
{
    uint64 MNE = 0;
    for (int i = 0; i < a.height()*a.width(); ++i)
    {
        uint64 diff = ((a[i] >= b[i]) ? a[i] - b[i] : b[i] - a[i]);
        if (a[i] != b[i])
        {
            std::cout << "Diff: " << diff << "\n";
            std::cout << "What diff should equal to: " << ((a[i] >= b[i]) ? a[i] - b[i] : b[i] - a[i]) << "\n";
        }
        if (i == 0)
        {
            std::cout << "Diff should not be 0, but it is: " << diff << "\n";
        }
        if (MNE < diff)
        {
            MNE = diff;
            std::cout << "We should reach this, but we don't.\n";
        }
    }
    return MNE;
}



int main()
{
    int w = 1;
    int h = 1;
    array2d<uint64> a;
    if (!a.init(w,h)) return 1;
    a.fill(0);
    array2d<uint64> b;
    if (!b.init(w,h)) return 1;
    b.fill(0);
    a[0] = 0ull;
    b[0] = 1;
    std::cout << a[0] << " " << b[0] << "\n";
    auto e = computeErrors(a, b);
    std::cout << "MNE: " << e << "\n";

    return 0;
}

If I compile it with /O1, /O2, or /Ox, together with /Ot and /Ob2, then diff is always 0 when it shouldn't be. If I compile without /Ot or without /Ob2, everything works as intended. Is there anything ill-formed in this code? I have attached the whole thing here: http://coliru.stacked-crooked.com/a/245f23f05df39418. However, if I change the array's [] operator to return const T& rather than T, everything works fine. Is this a compiler bug, or is it my fault for returning T rather than const T& from operator[]? (I return T by value because I read somewhere that passing small PODs by value is faster, and in my case I have some uint32 arrays where speed is very important.) I am currently using Visual C++ 2017.

EDIT: As per user4581301's advice I added the whole code here too.

EDIT2: This is my _MSC_FULL_VER: 191125547.

lightxbulb
  • OT: if you `#include <cstdint>`, you automatically get `std::uint64_t`. – Justin Feb 28 '18 at 18:51
  • I prefer not using underscores. – lightxbulb Feb 28 '18 at 18:54
  • Can you include your definition of array2d, if it's not too long? – Cubic Feb 28 '18 at 18:54
  • Check the link, it is included there. – lightxbulb Feb 28 '18 at 18:54
  • @lightxbulb That's not the point, `std::uint64_t` is defined to be exactly 64 bits wide, `unsigned long long` has no such guarantee. – Cubic Feb 28 '18 at 18:55
  • Thanks, I'll be sure to change all my typedefs to use the __int64 and __int32 types. – lightxbulb Feb 28 '18 at 18:57
  • @lightxbulb How do you see `std::uint64_t` and read `__int64`? – Cubic Feb 28 '18 at 19:01
  • uint64_t seems to be a typedef of __int64, but maybe that's only on my compiler. – lightxbulb Feb 28 '18 at 19:03
  • That's your compiler. Note: `std::uint64_t` is optional in the C++ standard. If the system does not support 64-bit unsigned integers for some reason, probably because it's a 16-bit CPU, you get a nice, pretty compiler error rather than a nasty runtime surprise. Very helpful when porting software to other platforms. – user4581301 Feb 28 '18 at 19:07
  • I've been trying your code locally with g++ and online visual studio here: http://rextester.com/l/cpp_online_compiler_visual - didn't get the issue you were talking about in either case. – Cubic Feb 28 '18 at 19:17
  • Unrelated: `Array2d` violates [the Rule of Three](https://stackoverflow.com/questions/4172722/what-is-the-rule-of-three). This is an unnecessary risk. – user4581301 Feb 28 '18 at 19:19
  • @user4581301 I put the whole code in. Yeah, I should have deleted the copy ctor, thanks for pointing it out. – lightxbulb Feb 28 '18 at 19:20
  • @Cubic: Did you enable the optimizations? I've tried it only in Visual Studio. – lightxbulb Feb 28 '18 at 19:21
  • With /O, /O2 and /O3. Voting to close as nonreproducible. – Cubic Feb 28 '18 at 19:24
  • I have VS2017, I used /Ot and /Ob2, and I cannot reproduce your error on any optimization level. – Benjamin Lindley Feb 28 '18 at 19:24
  • No VS17, but unable to reproduce in VS15 – user4581301 Feb 28 '18 at 19:25
  • @BenjaminLindley what's your _MSC_FULL_VER? – lightxbulb Feb 28 '18 at 19:32
  • @lightxbulb: 191025019 – Benjamin Lindley Feb 28 '18 at 19:35
  • @BenjaminLindley: I assume that's an earlier version than what I have: 191125547? – lightxbulb Feb 28 '18 at 19:37
  • Yes, I would assume so. – Benjamin Lindley Feb 28 '18 at 19:39
  • It is specific to x64 code generation. Applying the `__declspec(noinline)` attribute to computeErrors() is a workaround. Use Help > Send Feedback > Report a Problem to tell them about it. – Hans Passant Feb 28 '18 at 19:54
  • @HansPassant: Thanks a lot. I already sent a report. I got to the conclusion that inlining is the issue since it only happens if /Ob2 is enabled. How did you figure it was x64 specific? Did you just compile it with x86 and test it, or did you figure it out in another way? – lightxbulb Feb 28 '18 at 19:58
  • Sure, got no repro for x86 so tried x64 next. – Hans Passant Feb 28 '18 at 20:00
  • @Cubic -- re: "no such guarantee" -- well, to be absolutely clear: `unsigned long long` is guaranteed to be at least 64 bits wide. So, literally, no, it's not guaranteed to be exactly 64 bits wide. But it always exists. – Pete Becker Feb 28 '18 at 20:06
  • Pretty scary that a major compiler can have such a bug in simple integer arithmetic – M.M Feb 28 '18 at 20:20
  • `191125547` is VS 2017 (15.4). Have you tried VS 2017 (15.5 update) which would be 19.12.25835 fully updated? – Chuck Walbourn Feb 28 '18 at 22:25
  • RE "I prefer not using underscores." Why? `uint64_t` is part of both the C99 and C++11 standards and is supported on basically all compilers. – Chuck Walbourn Feb 28 '18 at 22:26
  • @ChuckWalbourn: I haven't tried the newest version, I have however reported the problem. If they have already solved it, I'll get a reply about that, otherwise I hope somebody will fix it, since I lost quite a bit of time trying to figure out why my code was producing incorrect error estimates. As for the underscores thing - I do appreciate the clarification from user4581301 about portability, so I would use a `typedef uint64_t uint64` in the future rather than `typedef unsigned long long uint64`, the fact that I don't like underscores is just a quirk. – lightxbulb Feb 28 '18 at 23:19
  • underscores are a way of life in C/C++. They get used everywhere... About the only sensible stance to take is "Don't use leading underscores on symbols because those are reserved for compiler implementers." In any case, the C++11 way would be to do: `using uint64 = uint64_t;`. Overuse of `typedef` makes your C++ code so 90s. – Chuck Walbourn Mar 01 '18 at 00:15

1 Answer


The report was submitted (I assume by @lightxbulb) to Visual Studio Developer Community on Feb 28, 2018.

On May 8, 2018, it was reported (same link) that the issue was resolved in the most recent Visual Studio Release, which was 2017 15.7 (as per the information in the post header).

Anton Menshov