Why are these generics being treated differently?

In the code below, why does the same check produce different results when the types are first given named aliases versus when the identical types are written inline as generic arguments?

type StrictEqual<A1, A2> = [A1] extends [A2] ? ([A2] extends [A1] ? true : false) : false

const typesMatch = <A, B>(match: StrictEqual<A, B>) => match

type Pins<
  Node extends {
    Error: unknown
    Output: unknown
    ResultResolverController: unknown
  },
  AccumulatedTypes extends {
    AccumulatedErrorResolverControllers: unknown
  },
> = {
  (result: Node['Output']): Node['ResultResolverController']
  error: (error: Node['Error']) => AccumulatedTypes['AccumulatedErrorResolverControllers']
}

type Foo = Pins<
    {
      Output: 'OutputN2'
      Error: 'ErrorN2'
      ResultResolverController: 'ResResolverN2'
    },
    {
      AccumulatedErrorResolverControllers: 'ErrResolverN1' | 'ErrResolverN2'
    }
  >

type Bar = Pins<
    {
      Error: unknown
      Output: unknown
      ResultResolverController: unknown
      ErrorResolverController: unknown
    } & {
      Output: 'OutputN2'
      Error: 'ErrorN2'
      ResultResolverController: 'ResResolverN2'
      ErrorResolverController: 'ErrResolverN2'
    },
    {
      AccumulatedErrors: 'ErrorN1' | 'ErrorN2'
      AccumulatedOutputs: 'OutputN1' | 'OutputN2'
      AccumulatedResultResolverControllers: 'ResResolverN1' | 'ResResolverN2'
      AccumulatedErrorResolverControllers: 'ErrResolverN1' | 'ErrResolverN2'
    }
  >

typesMatch<Foo, Bar>(true) // works

typesMatch<
  Pins<
    {
      Output: 'OutputN2'
      Error: 'ErrorN2'
      ResultResolverController: 'ResResolverN2'
    },
    {
      AccumulatedErrorResolverControllers: 'ErrResolverN1' | 'ErrResolverN2'
    }
  >,  // should be the same as Foo
  Pins<
    {
      Error: unknown
      Output: unknown
      ResultResolverController: unknown
      ErrorResolverController: unknown
    } & {
      Output: 'OutputN2'
      Error: 'ErrorN2'
      ResultResolverController: 'ResResolverN2'
      ErrorResolverController: 'ErrResolverN2'
    },
    {
      AccumulatedErrors: 'ErrorN1' | 'ErrorN2'
      AccumulatedOutputs: 'OutputN1' | 'OutputN2'
      AccumulatedResultResolverControllers: 'ResResolverN1' | 'ResResolverN2'
      AccumulatedErrorResolverControllers: 'ErrResolverN1' | 'ErrResolverN2'
    }
  > // should be the same as Bar
>(true) // doesn't work!


CodePudding user response:

TL;DR TypeScript's type checker needs to make simplifying assumptions in order to be performant, and sometimes these assumptions are incorrect. The Pins type function is incorrectly assumed to have measurable variance (probably invariant).

When the compiler notices that it is comparing Pins<A, C> to Pins<B, D>, it simplifies this to a comparison of A to B and of C to D without evaluating Pins at all. On the other hand, when it doesn't notice, it fully evaluates Pins<A, C> and Pins<B, D> and compares the results.

These are two different ways of comparing types, and they can produce different results. There is therefore a tradeoff between correctness and performance, and you've run into a situation where it manifests.

For more information, see microsoft/TypeScript#39549, microsoft/TypeScript#48116, microsoft/TypeScript#49852 and probably many others.


Here is a minimal example of what you're seeing. The following code compiles with no problem:

type Foo<T extends { x: 0 }> = () => T['x'];

type Bar = Foo<{ x: 0 }> // type Bar = () => 0
declare let bar: Bar;

type Baz = Foo<{ x: 0, y: 1 }> // type Baz = () => 0
let baz: Baz = bar; // okay

Which makes sense because both Bar and Baz are equivalent to () => 0. But the following seemingly identical code has an error:

type Foo<T extends { x: 0 }> = () => T['x'];

declare let bar: Foo<{ x: 0 }>;

let baz: Foo<{ x: 0, y: 1 }> = bar; // error?!
// Type 'Foo<{ x: 0; }>' is not assignable to type 'Foo<{ x: 0; y: 1; }>'.

Why is Foo<{x: 0}> not assignable to Foo<{x: 0, y: 1}> if Bar is assignable to Baz? Shouldn't the type checker recognize all of these types as () => 0 and therefore compatible with each other?


The answer is that it should be the same, but the TypeScript type checker can't afford to be fully consistent. Let's say the type checker needs to determine whether type X is assignable to type Y. The correct thing to do would be to perform what's known as a full structural check, where each member of Y is structurally checked against the corresponding member of X... a recursive procedure which in the general case can be very expensive (if the types are complex and deep) or even undecidable (if the types are themselves recursive). So sometimes the compiler needs to skip the full check.
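To make "structural check" concrete, here is a minimal sketch (the Target and Source names are illustrative, not from the question):

```typescript
// A structural assignability check: Source is assignable to Target
// because every member Target requires exists on Source with a
// compatible type. The extra member on Source does not matter.
type Target = { name: string };
type Source = { name: string; extra: number };

const src: Source = { name: "hi", extra: 1 };
const tgt: Target = src; // okay: checked member-by-member

console.log(tgt.name); // "hi"
```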

One way it does this is with so-called variance markers. If the compiler notices that X is of the form F<A> and Y is of the form F<B>, then it might be able to skip the full check if the type parameter in F<T> has a variance marker. For example, if F<T> is known to be covariant in T, then F<A> is assignable to F<B> if and only if A is assignable to B. Which means the compiler can just check A against B, which is potentially easier to do.
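For instance, a simple container type gets measured as covariant, so the compiler can compare the type arguments directly instead of the expanded structures (a sketch; Box is an illustrative name):

```typescript
// Box<T> is covariant in T: Box<A> is assignable to Box<B>
// exactly when A is assignable to B, so the compiler can skip
// the structural check and just compare the type arguments.
type Box<T> = { value: T };

const narrow: Box<"a"> = { value: "a" };
const wide: Box<string> = narrow; // okay: "a" is assignable to string

console.log(wide.value); // still "a" at runtime
```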

If all type functions could be accurately marked as covariant, contravariant, bivariant, or invariant, then we could improve type checking performance without losing correctness.

Unfortunately this is not possible. Many type functions are complex enough that their variance doesn't fall into those neat categories; it might be that for arbitrary A and B you have no idea if F<A> is or is not assignable to F<B> without doing a full structural check. In these cases, F<T> would need to also be marked as unreliable (you need to fall back to a structural check in cases where the variance check fails) or unmeasurable (you cannot rely on a variance check at all).
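A conditional type is an easy way to see this; in the following sketch (F is an illustrative name), widening the argument flips the result entirely, so no simple variance marker describes F:

```typescript
// F<T> is number for string inputs and string for everything else.
// "a" is assignable to unknown, but F<"a"> (number) is not
// assignable to F<unknown> (string), so a covariant marker would
// be wrong; the reverse direction fails too, so contravariance
// would be equally wrong. Only a structural check gets it right.
type F<T> = T extends string ? number : string;

const a: F<"a"> = 42;      // F<"a"> resolves to number
const b: F<unknown> = "s"; // F<unknown> resolves to string

console.log(typeof a, typeof b); // "number" "string"
```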

So now we have a trade-off. The spectrum runs from one extreme:

  • Mark all type functions as having unmeasurable variance. Every type check would be a full structural check, giving consistent behavior from a compiler with abysmal performance.

to the other:

  • Mark all type functions with a best-effort measurable variance. Full structural checks would rarely be performed, giving a compiler that produces wrong results quickly.

Everywhere in the middle we have to perform some analysis to determine which type functions should have measurable variance, and this analysis would itself have a tradeoff between performance and correctness.


It's probably not too illuminating to do much digging to find out exactly where along this spectrum TypeScript is. In the above example, Foo<T> seems to have been incorrectly marked as covariant in T, which is often a reasonable thing to do when you're using indexed access types. But in this case it leads to problems. It would be great if the compiler had marked Foo<T> as unreliable or unmeasurable. But could that happen without destroying performance either by doing too many unnecessary structural checks or by spending too much time deciding whether or not to do one? I don't know. From the GitHub issues linked above and others, it seems like messing with variance markers can easily lead to performance problems, so they tend to avoid doing that if they can.

