Given these types:
interface P {
  id: string
}
interface A extends P {
  attrA: string
}
interface B extends P {
  attrB?: string
}
type R = Exclude<A | B, B>; // never
The type R is never because A extends B is perceived by the compiler to be true:
const a: A = { id: "a", attrA: "A" };
const b: B = a; // okay
You can see that a is a valid A, but it's also a valid B. Object types in TypeScript are not exact; you can extend a type by adding properties (which is why A is assignable to P, even though A has an extra property). From the type system's point of view, every value of type A is also a value of type B, and so, Exclude<A | B, B> removes both A and B from the union and you are left with never.
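For reference, Exclude is defined in the standard library as a distributive conditional type, so it checks each member of the union against B separately. Spelling out the steps (Steps is just an illustrative alias):
// From lib.es5.d.ts: type Exclude<T, U> = T extends U ? never : T;
type Steps = Exclude<A | B, B>;
// distributes to: (A extends B ? never : A) | (B extends B ? never : B)
// which is:       never | never
// which is:       never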
Of course, it isn't actually type safe to have A assignable to B. The compiler assumes that every value of type A is also of type B, but really it's more like almost every value. There are some specific values of type A which should not be assignable to type B, namely anything with an attrB property of an incompatible type:
const aButNotB = { id: "a", attrA: "A", attrB: 123 };
const alsoA: A = aButNotB; // okay
const notB: B = aButNotB; // error, attrB is incompatible
const butWait: B = alsoA; // no error?! attrB is still incompatible, but compiler forgot!
But since TypeScript doesn't yet support negated types, there's no way in TypeScript to represent "Every A which is not a B" as a concrete type. And so, when comparing two object types, the compiler just ignores optional properties present in just one of them... leading to this blind spot.
So that's why it's happening. As for how to fix this, it depends on your use case.
Ideally if you really need to be able to discriminate values of a union of types from each other, you would use a discriminated union. This means the union should contain some property or properties which can be used to absolutely tell them apart from each other. For example, let's imagine adding a type discriminant property:
interface Aʹ extends A {
  type: "A";
}
interface Bʹ extends B {
  type: "B";
}
type Rʹ = Exclude<Aʹ | Bʹ, Bʹ>; // Aʹ
Now there is no way for a value of type Aʹ to be assignable to type Bʹ or vice versa, and now Exclude behaves as you expect.
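As a side benefit, the discriminant also exists at runtime, so ordinary control flow can tell the two types apart. A minimal sketch (describe is just a made-up example function):
function describe(value: Aʹ | Bʹ): string {
  if (value.type === "A") {
    // value is narrowed to Aʹ here
    return value.attrA;
  }
  // value is narrowed to Bʹ here
  return value.attrB ?? "(no attrB)";
}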
Or, if you're just looking at how to do the Exclude the way you expect, you can modify your A and B types before processing them, and then unmodify the result, like this:
type OptToUndef<T> = {
  [K in keyof Required<T>]: T[K] | ({} extends Pick<T, K> ? undefined : never)
};
type UndefToOpt<T> = (Partial<
  Pick<T, { [K in keyof T]: undefined extends T[K] ? K : never }[keyof T]>
> &
  Pick<
    T,
    { [K in keyof T]: undefined extends T[K] ? never : K }[keyof T]
  >) extends infer O
  ? { [K in keyof O]: O[K] }
  : never;
Basically OptToUndef<T> takes an object type T and makes all optional properties required, but widens their property types to include undefined. And UndefToOpt<T> takes an object type T and makes all properties whose types include undefined into optional properties. These are more-or-less inverse operations of each other (as long as you don't have required properties whose types include undefined). Then you can do this:
type UA = OptToUndef<A>; // {id: string; attrA: string }
type UB = OptToUndef<B>; // {id: string; attrB: string | undefined }
type UR = Exclude<UA | UB, UB>; // same as UA
type Rfixed = UndefToOpt<UR>; // same as A
Something like that might work for you, where you tweak B so it no longer has any optional properties, and then un-tweak the result when you're done.
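If you end up needing this in more than one place, one possible way to package it (ExcludeExactish is just a name I made up, not anything built in) is to compare the tweaked types but return the original, untweaked union member, which also lets you skip the UndefToOpt step:
// Distributes over T; this sketch assumes U is a single object type,
// since OptToUndef as written does not distribute over unions.
type ExcludeExactish<T, U> = T extends unknown
  ? OptToUndef<T> extends OptToUndef<U> ? never : T
  : never;
type RviaHelper = ExcludeExactish<A | B, B>; // A
Because the helper keeps the original A from the union, there's nothing to convert back afterwards.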
Okay, hope that helps; good luck!
Link to code