The minimum reproducible example for your issue is probably something like this:
function f<T extends { limit: number, page: number }>(
  t: T,
  x: Omit<T, "page"> & { page: number; }
): T {
  return x; // error!
}
The compiler is unable to verify that x is of type T. And actually, x might not be of type T, as in the following example:
interface PageSeven {
  limit: number,
  page: 7; // only seven, that's it
}
const page7: PageSeven = { limit: 123, page: 7 };
const oopsPage7 = f(page7, { limit: 456, page: 100 });
console.log(oopsPage7.page); // 7 at compile time, 100 at runtime, oops
The compiler thinks that oopsPage7.page is of the numeric literal type 7 instead of the wider number type, but at runtime oopsPage7.page will be 100. Oops.
If you have an object type V, then a type U extends V can be a subtype of it either by adding new properties or by narrowing the types of existing properties. In the latter case, you can't recover V by taking Pick<U, keyof V>; you might still have a narrower type, as with PageSeven above.
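Here's a minimal sketch of that second kind of subtyping (the names V, U, and PickedBack are just for illustration):
interface V { page: number }
interface U extends V { page: 7 } // narrows page, adds nothing new
type PickedBack = Pick<U, keyof V>; // { page: 7 }, still narrower than V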
But you're on to something... it turns out that you can represent actually valid transformations similar to this that are perfectly type safe, but the compiler still won't be able to verify it:
function g<T extends { limit: number, page: number }>(
  t: T,
  x: Omit<T, "page"> & Pick<T, "page">
): T {
  return x; // still error!
}
g(page7, { limit: 456, page: 100 }); // error at compile time
g(page7, page7); // okay
Here the compiler is still complaining about return x, even though the type Omit<T, "page"> & Pick<T, "page"> really does seem assignable to T, since even narrowed page properties will be preserved. This is a known issue in TypeScript, see microsoft/TypeScript#28884. The issue is that the compiler doesn't usually go through the effort of verifying that a non-generic object is assignable to a type that depends on an unspecified generic type parameter. It's possible that this will eventually be addressed, but it's not clear; even if it can be done correctly, it needs to be done efficiently enough so as not to significantly worsen compiler performance. I wouldn't hold my breath here if I were you.
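To see that the problem really is about the unresolved type parameter, note that the same assignment is accepted once the type is concrete (the Concrete type here is just an illustrative stand-in):
type Concrete = { limit: number, page: number, extra: string };
const y: Omit<Concrete, "page"> & Pick<Concrete, "page"> =
  { limit: 1, page: 2, extra: "x" };
const z: Concrete = y; // okay, no error once the type is fully known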
So, what can be done? If you are sure that a narrowing like the one I mentioned with PageSeven won't happen, or you don't care about it, you can use a type assertion (expr as Type). This is the usual recommendation for a situation in which you know that something you're doing is type safe and the compiler doesn't (with the caveat that you should triple-check the safety, because you've taken on the responsibility of verifying it):
function f<T extends { limit: number, page: number }>(
  t: T,
  x: Omit<T, "page"> & { page: number; }
): T {
  return x as T; // no error, technically unsafe
}

function g<T extends { limit: number, page: number }>(
  t: T,
  x: Omit<T, "page"> & Pick<T, "page">
): T {
  return x as T; // no error, probably safe
}
On the other hand, you could annotate your functions so that they deal with Omit<T, "page"> & { page: number } instead of T:
function f<T extends { limit: number, page: number }>(
  t: T,
  x: Omit<T, "page"> & { page: number; }
): Omit<T, "page"> & { page: number; } {
  return x;
}
const okayPage100 = f(page7, { limit: 456, page: 100 });
console.log(okayPage100.page); // number at compile time, 100 at runtime, okay
function g<T extends { limit: number, page: number }>(
  t: T,
  x: Omit<T, "page"> & Pick<T, "page">
): Omit<T, "page"> & Pick<T, "page"> {
  return x;
}
const ret: PageSeven = g(page7, page7); // still okay
Either way should work. The assertion is probably the most convenient, at the cost of being possibly (slightly) unsafe, while the reannotation is only slightly safer and might spread through your whole code base. It's up to you though.
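For example, here's a hypothetical caller of the reannotated f (the bumped variable is mine): the widened return type tends to flow into whatever receives it, so downstream code that expects T exactly will need its own adjustment.
const bumped = f(page7, { limit: page7.limit, page: page7.page + 1 });
// bumped is Omit<PageSeven, "page"> & { page: number }, not PageSeven
// const p7: PageSeven = bumped; // error: number is not assignable to 7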
Okay, hope that helps; good luck!
Playground link to code