
The following code:

#include <string>

struct Foo {
    operator double() {
        return 1;
    }

    int operator[](std::string x) {
        return 1;
    }
};

int main() {
    Foo()["abcd"];
}

It compiles fine with g++ but fails with clang and the Intel compiler because of an ambiguity between the declared member operator[] and the built-in operator[].

It would be clear to me if Foo had an implicit conversion to int, but here the conversion is to double. Doesn't that resolve the ambiguity?

6502
  • What happens if you remove the conversion operator to `double`? – R Sahu Aug 04 '14 at 07:37
  • Works fine on clang without the conversion operator for me. – Ray Toal Aug 04 '14 at 07:41
  • This is pretty much the same problem as http://stackoverflow.com/questions/8914986/should-this-compile-overload-resolution-and-implicit-conversions – T.C. Aug 04 '14 at 08:45
  • Does it compile with icc when you remove `Foo::operator[]`? – n. m. could be an AI Aug 04 '14 at 09:06
  • @n.m. Checked with icl (the Windows version of icc) on my computer, and it does compile. But then icl also wrongly compiles one of the examples in §13.3.1.2 [over.match.oper]/p7 that the standard says shouldn't compile, so... – T.C. Aug 04 '14 at 09:11
  • @T.C. so clang and icc have essentially the same bug and gcc has a different bug. – n. m. could be an AI Aug 04 '14 at 09:31
  • @n.m. Pretty much. The example in that paragraph has two separate errors. Clang accepts both, icc accepts one, gcc accepts neither but messes up operator overload resolution in a different way. – T.C. Aug 04 '14 at 09:34

1 Answer


§13.3.3.1.2 [over.ics.user]/p1-2:

A user-defined conversion sequence consists of an initial standard conversion sequence followed by a user-defined conversion (12.3) followed by a second standard conversion sequence. If the user-defined conversion is specified by a constructor (12.3.1), the initial standard conversion sequence converts the source type to the type required by the argument of the constructor. If the user-defined conversion is specified by a conversion function (12.3.2), the initial standard conversion sequence converts the source type to the implicit object parameter of the conversion function.

The second standard conversion sequence converts the result of the user-defined conversion to the target type for the sequence.

In particular, there's an implicit conversion from floating point to integral type (§4.9 [conv.fpint]/p1):

A prvalue of a floating point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.

For overload resolution purposes, the applicable candidates are:

Foo::operator[](std::string x)             // overload
operator[](std::ptrdiff_t, const char *)   // built-in

The argument list has types (Foo, const char [5]).

To match the first operator function, the first argument is an exact match; the second requires a user-defined conversion.

To match the second built-in function, the first argument requires a user-defined conversion sequence (the user-defined conversion to double followed by a standard conversion to std::ptrdiff_t, a floating-integral conversion). The second argument requires a standard array-to-pointer conversion (still exact match rank), which is better than a user-defined conversion.

Thus for the first argument the first function is better, while for the second argument the second function is better. Neither candidate is better than the other: we have a criss-cross situation, overload resolution fails, and the program is ill-formed.

Note that, for the purposes of operator overload resolution, a user-defined conversion sequence can have two standard conversion sequences (one before and one after the user-defined conversion), and operands of non-class type can be converted to match the candidates. However, if a built-in operator is selected, the second standard conversion sequence is not applied to operands of class type, and no conversion at all is applied to operands of non-class type, before the operator is interpreted as a built-in (§13.3.1.2 [over.match.oper]/p7):

If a built-in candidate is selected by overload resolution, the operands of class type are converted to the types of the corresponding parameters of the selected operation function, except that the second standard conversion sequence of a user-defined conversion sequence (13.3.3.1.2) is not applied. Then the operator is treated as the corresponding built-in operator and interpreted according to Clause 5.

Thus if Foo::operator[](std::string x) is removed, the compiler should report an error, though clang doesn't. This is an obvious clang bug, as it fails to reject the example given in the standard.

T.C.
  • Yet clang rejects `1.0["foo"]` as invalid. – n. m. could be an AI Aug 04 '14 at 07:48
  • @n.m. That's correct as well - there's no overload resolution in that case. Moreover, if overload resolution selects a built-in, the result's not necessarily valid. – T.C. Aug 04 '14 at 07:53
  • On my machine clang complains about ambiguity between `Foo::operator[](std::string)` and the *built-in* `operator[](int, const char *)` (no ptrdiff_t involved). When you remove `Foo::operator[]` it happily selects a built-in with no error. But the built-in requires one of the arguments to be of integral type. How do you explain this? – n. m. could be an AI Aug 04 '14 at 08:10
  • @n.m. `std::ptrdiff_t` is a typedef to a signed integer type (apparently on your machine it's `int` - I assume it's 32-bit?). The second is a clang bug. It shouldn't convert arguments of non-class type or apply the second standard conversion sequence for arguments of class type, but it does. – T.C. Aug 04 '14 at 08:15
  • There are two possible explanations. (1) The built-in is not viable, therefore it should be dropped from the overload set, and `Foo::operator[]` selected as the only remaining element. Since clang erroneously thinks the built-in is viable, this doesn't happen. (2) The standard implies the built-in is viable, and requires it to be rejected *after* the overload resolution stage. If this is the case, it's a defect in the standard (an unexpected and illogical behaviour, not warranted by anything). – n. m. could be an AI Aug 04 '14 at 08:24
  • @n.m. The built-in is definitely viable for overload resolution purposes, since there's an implicit conversion sequence from `Foo` to `int` or `long` (via `double`). Only after it's selected, will it be rejected in the next step. – T.C. Aug 04 '14 at 08:33
  • Hm, it looks like 13.6/1 says this is by design, rather than by mistake. – n. m. could be an AI Aug 04 '14 at 09:00