The standard doesn't say; as you say, it doesn't even require reference
counting. On the other hand, there is (or was) a statement in the
standard (or at least in the C standard) that exceeding implementation
limits is undefined behavior. So that's almost certainly the official
answer.
In practice, I would expect most implementations to maintain the count
as a `size_t` or a `ptrdiff_t`. On machines with flat addressing, this
pretty much means that you cannot create enough references to cause an
overflow. (On such machines, a single object could occupy all of the
memory, and both `size_t` and `ptrdiff_t` have the same size as a pointer.
Since every reference counted pointer has a distinct address, there can
never be more than would fit in a pointer.) On machines with segmented
architectures, however, overflow is quite conceivable.
As Jon points out, the standard also requires
`std::shared_ptr::use_count()` to return a `long`. I'm not sure what
the rationale is here: either `size_t` or `ptrdiff_t` would make more
sense. But if the implementation uses a different type for the
reference count, presumably the rules for conversion to `long` would
apply: "the value is unchanged if it can be represented in the
destination type (and bit-field width); otherwise, the value is
implementation-defined." (The C standard makes this somewhat clearer:
the "implementation-defined value" can be a signal.)