
Enhance user experience in face of false-positive type errors and type loss #53977

Open
magwas opened this issue Apr 24, 2023 · 6 comments

@magwas

magwas commented Apr 24, 2023

Suggestion

A common source of frustration is false-positive type errors and cases where typing is lost. There are understandable reasons for the limitations of the inference algorithm, but the sheer number of closed bug reports, feature requests, and Stack Overflow questions shows that there is a lot of frustration around it.

I am referring to this: https://twitter.com/SeaRyanC/status/1544414378383925250

As a coder,

  • I would like to be able to figure out whether I ran into a limitation of the inference algorithm or a real type bug. That could be facilitated by including in the error message the complexity limits the algorithm ran into while producing it. As a bonus, the compiler could suggest using explicit typings if we are sure that it is a false positive.

  • I would like to be able to balance the brainpower I spend chasing false positives against the computing power spent on inference. That could be facilitated by making the complexity limits configurable.

  • I would like to be able to provide utility functions that make life easier and let things be expressed more succinctly, which requires complex typing. Losing typing in those situations means that the resulting usage becomes more verbose (for example because type parameters have to be filled in by hand) than the expressions those functions were meant to replace, making the effort useless; a sketch of what this looks like follows this list. I see two kinds of help in those cases:

    • increase complexity limits for specific type functions
    • make it possible to drive the inference algorithm for a type function in much the same way we drive the solver when doing formal verification.
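
To illustrate the last point with a made-up helper (not the one from my playground links below): a loosely typed utility forces callers to restate types by hand, while a precisely typed version needs exactly the kind of complex typing this suggestion is about.

// Loosely typed helper: the return type degrades to Record<string, unknown>,
// so every caller has to re-annotate what the helper was supposed to infer.
function pick(obj: Record<string, unknown>, keys: string[]): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const k of keys) out[k] = obj[k];
  return out;
}

const user = { id: 1, name: "Ada", admin: true };
// The caller has to cast, defeating the purpose of the helper:
const summary = pick(user, ["id", "name"]) as { id: number; name: string };

// Precisely typed helper: callers need no annotation, but the typing of the
// helper itself is where the complexity (and the inference limits) live.
function pickTyped<T, K extends keyof T>(obj: T, keys: readonly K[]): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const k of keys) out[k] = obj[k];
  return out;
}

const summary2 = pickTyped(user, ["id", "name"]); // inferred as { id: number; name: string }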

I understand that this is a lot of features, and some of them might be insanely complicated to implement. I am talking more about a change of approach than about features to be implemented right now. I believe that the current situation, where coders are frustrated and the whole thing is blamed on Gödel, is unsustainable.

🔍 Search Terms

A lot of closed issues and feature requests, as well as many Stack Overflow questions, show that this topic creates a lot of frustration.

I searched for "inference" in feature requests

✅ Viability Checklist

My suggestion meets these guidelines:

  • [ ✅ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
  • [ ✅ ] This wouldn't change the runtime behavior of existing JavaScript code
  • [ ✅ ] This could be implemented without emitting different JS based on the types of the expressions
  • [ ✅ ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
  • [ ✅ ] This feature would agree with the rest of TypeScript's Design Goals.

⭐ Suggestion

  • configurable complexity limits for the inference algorithm
  • errors stating which inference limits they ran into
  • ways to drive the inference algorithm, exposed for type functions

📃 Motivating Example

When you run into a type error, the error message tells you what limitations the inference engine ran into, so it is easier to figure out whether it was a false positive or a real error.

You can set the limits of the inference engine in tsconfig.json and increase them to get rid of a false positive; it is then your choice whether to wait longer for the compiler to finish or to fall back to forced typing.

When you really need a type to be correct, you can drive the inference engine to achieve that without an explosion of the decision space.

💻 Use Cases

My last false positive when Gödel got blamed. I do not believe it is that complex.
https://tsplay.dev/WJV6rW

A utility function (to work around the this idiocy of JS) with type loss:
https://tsplay.dev/NrQZ2w

@RyanCavanaugh RyanCavanaugh added Suggestion An idea for TypeScript Needs Proposal This issue needs a plan that clarifies the finer details of how it could be implemented. labels Apr 24, 2023
@RyanCavanaugh
Member

There's really only a tiny handful of such limits, e.g. 24 for expanding literal unions. I find it really unlikely that those represent the majority of cases where you're getting false positives unless it's something super domain-specific.

I do not believe it is that complex.

This example has nothing to do with the linked tweet thread, which I think you've overinterpreted beyond its original intention.

This is complex. It requires an entirely new way of tracking types to see that field is the same variable with the same value in both locations. For example, from an analysis perspective, the code you have there is the same as

function foo(currentUser: CreateUserDto, user: CreateUserDto) {
  const field1 = allowedPatchFields[Math.floor(Math.random() * 3)];
  const field2 = allowedPatchFields[Math.floor(Math.random() * 3)];
  currentUser[field1] = user[field2];
}

which is clearly unsafe.

When you run into a type error, the error message tells you what limitations the inference engine ran into, so it is easier to figure out whether it was a false positive or a real error.

I think you've misapprehended how this works. One way to think of it is that when TS sees an S trying to be used as a T, it "tries to construct a proof" (*) that all valid Ss are valid Ts. If it can't do that, it might be because there truly are Ss that aren't Ts. It might also be that it's just not clever enough to construct such a proof. But, by definition, we don't know which of those is happening.
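
As an illustration (with made-up types, not the ones from the playground), here are two assignments that get rejected for the same underlying reason, even though only the first is genuinely wrong:

// Case 1: genuinely unsafe; a counterexample exists ({ kind: "b" }).
type Source = { kind: "a" } | { kind: "b" };
type Target = { kind: "a" };
declare const s: Source;
const t: Target = s; // error, and rightly so

// Case 2: safe at runtime (the same key is used on both sides), but the
// checker compares the union of possible types on each side independently,
// cannot construct a proof, and reports an error here as well.
interface Rec { x?: number[]; y?: string[] }
declare const src: Rec, dst: Rec;
declare const key: "x" | "y";
dst[key] = src[key]; // error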

The same result is seen in other domains: We don't know if the Collatz conjecture is true or not. The lack of a counterexample doesn't mean that it is true, and it's not correct to say "The Collatz conjecture is true because we can't construct a counterexample". The only thing we can say is "We don't know how to construct a proof for this [yet]".

* It is not literally a proof solver and please don't overindex on this use of the phrase

@fatcerberus

Devil’s advocate: We can (in theory, at least) tell the difference between “this is provably not assignable” vs. “we don’t have the tools to say whether it’s assignable or not”. Whether making this distinction in the compiler would be useful in practice is another matter (my money’s on probably not)

@magwas
Author

magwas commented May 6, 2023

There's really only a tiny handful of such limits, e.g. 24 for expanding literal unions.

By limits I also mean the cases where the algorithm does not look into something because doing so is deemed to explode the search space.

@magwas
Author

magwas commented May 6, 2023

The only thing we can say is "We don't know how to construct a proof for this [yet]".

My understanding is that there is the above case and the "this is clearly wrong" case. Distinguishing between them would be helpful.

@magwas
Author

magwas commented May 6, 2023

For example, from an analysis perspective, the code you have there is the same as

Until the "completely new way of tracking fields" is implemented. (You can take this as an example of a constraint of the nature I was talking about above.) Which obviously a programming effort. Which you understandably do not put into it if you suspect that the use cases of the new algorithm will be prohibitively resource intensive.
And it may wery well turn out that in most cases it is just a waste of effort.
But in this particular case I believe it would not be prohibitive. If you look at the problem from the perspective I offered, it is easier to implement such things, as it is sure that they won't be a complete waste of effort. Even if they have to be turned off by default, they can be turned on for the particular case they are needed.
Or alternatively you can probably shortcut it by allowing something along the lines of

function foo(currentUser: CreateUserDto, user: CreateUserDto) {
  for (const field of allowedPatchFields) {
    if (field in user) {
      const value = user[field];
      currentUser[field as FieldOf<CreateUserDto, for(typeof value)>] = value; // I do not intend to make a statement here about how the syntax should look
    }
  }
}

Which I suspect falls into the category of "driving the algorithm" that I mentioned.
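
For comparison, here is a sketch (with assumed shapes for CreateUserDto and allowedPatchFields, since I am not reproducing the playground here) of how far an explicit generic helper already gets you with today's TypeScript:

// Assumed stand-ins for the types from the linked playground.
interface CreateUserDto {
  name?: string;
  age?: number;
  isActive?: boolean;
}
const allowedPatchFields = ["name", "age", "isActive"] as const;
type PatchField = (typeof allowedPatchFields)[number];

// Funnelling the assignment through a generic function ties both sides of the
// copy to the same key K, so it type-checks without casts, whereas the direct
// currentUser[field] = user[field] inside the loop would be rejected.
function copyField<K extends PatchField>(target: CreateUserDto, source: CreateUserDto, field: K) {
  target[field] = source[field];
}

function foo(currentUser: CreateUserDto, user: CreateUserDto) {
  for (const field of allowedPatchFields) {
    copyField(currentUser, user, field);
  }
}

This does not make the checker understand correlated keys in general, but it is the kind of explicit "driving" that is already expressible.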

@darrylnoakes

My last false positive when Gödel got blamed. I do not believe it is that complex.
tsplay.dev/WJV6rW

See @jcalz's answer to "Assert a value type is the same at both sides of the assignment" on Stack Overflow for an explanation of why this code is invalid.

Relevant parts copied below for ease of reference and searchability:

Question (extract)

interface InterfaceA {
  transfer?: string[];
  deposit?: number[];
  test?: boolean[];
}

interface InterfaceB extends InterfaceA {
  moretest?: boolean[];
}

type Label = ("transfer" | "deposit" | "test")

const a: InterfaceA = {}
const b: InterfaceB = {}

const labels: Label[] = ["transfer", "deposit", "test"];

for (const label of labels){
    a[label] = b[label]
}

Type 'string' is not assignable to type 'number'.

It fails to compile because a[label] (the left side of the assignment) can be any one of string[], number[] or boolean[], while b[label] (the right side of the assignment) can also be any one of string[], number[] or boolean[].

It kind of makes sense, except that TypeScript should know that if the left side is string[], the right side will be string[] too; the same applies to the other types.

This is why replacing the assignment with a[label as "transfer"] = b[label as "transfer"] works.

~ Alvaro Flaño Larrondo

Answer (extract)

TypeScript can't handle "correlated union" types, in which a single block of code would need to be checked multiple times, once for each member of a union. This issue is described in detail in microsoft/TypeScript#30581.

From a pure type system perspective, it's unsafe, because the type of b[label] is the InterfaceB[Label] union, and the compiler is worried that when you try to assign it to a[label] of union type InterfaceA[Label], maybe the two keys are actually different? That is, it's ignoring the identity of label and just looking at the type. Put another way: imagine you had label1 and label2 both of type Label. Then a[label1] = b[label2] would clearly be unsafe. (See microsoft/TypeScript#30769 for the pull request implementing this safety check.) Of course you're not doing this, but the compiler is unable to see the difference because the types are the same for both situations.

~ jcalz
