Question / Suggestion: Behaviour of unknown in distributive conditional type. #27418
Consider something more concrete like this:

```ts
type IsDog<T> = T extends Dog ? number : string;
type D = IsDog<Animal>;
```

Under the proposed reading, `D` would be `number | string` because some `Animal`s are `Dog`s — the assignment is plausible. That logic might seem like it's still OK, but for an arbitrary type you would also have to accept:

```ts
// M: number | string because Dog & Mortgage is not a contradictory type
type M = IsDog<Mortgage>;
```

If you think of types as Venn diagrams over all values (which I think is the most useful analogy), then it becomes more clear: it's "correct" to map over circles which are apparently disjoint, but it's not "correct" to carve up a contiguous circle into subpieces and distribute over them.
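For context, the snippet above doesn't show the definitions of `Dog`, `Animal`, and `Mortgage`; something along these lines (my guess, purely illustrative) reproduces the setup:

```ts
interface Animal {
  move(): void;
}

interface Dog extends Animal {
  woof(): void;
}

// Structurally unrelated to Animal/Dog, but not provably disjoint from them either:
// a value with both move/woof and monthlyPayment would satisfy Dog & Mortgage.
interface Mortgage {
  monthlyPayment: number;
}
```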
In my head this wasn't the proposed behavior. I think my reasoning boils down to a top-down vs bottom-up interpretation of `unknown`, which is bottom-up in the sense that you construct it up from smaller disjoint sets. Under this interpretation a conditional type would distribute over each element, and because `unknown` includes every type you would always get both branches. I like this reading because it's very dual to `never`.

So my 'proposal' isn't really anything to do with 'plausibility' of assignment ("some Animals are Dogs"), but rather the fact that `unknown` is (under my interpretation) the actual union expression of all types, and that's how conditional types work. If I understand your view, I think you're more top-down: you start with `unknown` as the set of everything, then carve up circles within that and say they are types. Under that approach you wouldn't have that behavior. Similarly with `never`.

So back to the question: I'm not actually sure if I want my proposal; I was curious to see how others viewed the behavior and whether distribution of `unknown` might make some things possible. I appreciate the feedback! This can definitely be marked down as a question and not a suggestion.
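To make the two readings concrete, here is a sketch (mine, reusing the `IsDog` helper from the earlier comment; the "proposed" result is hypothetical, not actual compiler output):

```ts
type IsDog<T> = T extends Dog ? number : string;

// Top-down / current behaviour: unknown is just a type that is not assignable to Dog,
// so the false branch is taken.
type Today = IsDog<unknown>; // string

// Bottom-up / proposed reading: unknown is the union of all types, so distribution
// would hit both branches.
// type Proposed = IsDog<unknown>; // number | string (hypothetical)
```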
Your observation about […]. There was a long internal discussion over whether or not we even needed `unknown`.
Agree. I'd also add in […]
I can't really disagree with empirical experience that suggests the current behavior is just functionally better. It might be possible to argue for something useful if it was much easier to work with and understand, but that really isn't the case here. Will close up this question to keep the issue tracker tidy.
This is a fascinating discussion. I actually didn't know that non-null primitive types were assignable to `{}`.

I think one of the key points of confusion upon diving deep into this area of TypeScript's type system is that primitive types are disjoint from each other, can be finite sets, and can have finite intersections; whereas object types are necessarily infinite sets which must overlap infinitely (i.e. have infinite intersections) or else be completely disjoint.

If the types in question were all finite sets of primitives, like:

```ts
type Dog = 'labrador' | 'husky' | 'pug' // ...
type Cat = 'persian' | 'siamese' | 'sphynx' // ...
type Animal = Dog | Cat
type Mortgage = 'my house' | 'your house' | 'the white house'
```

then actually, @jack-williams' proposal would work great.
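For instance (my sketch, reusing the hypothetical `IsDog` helper from earlier in the thread, applied to the finite sets above):

```ts
type IsDog<T> = T extends Dog ? number : string;

// Animal is a finite union, so distribution maps over its members:
// the Dog members hit the true branch, the Cat members the false one.
type D = IsDog<Animal>;   // number | string

// Mortgage shares no members with Dog, so only the false branch is ever taken.
type M = IsDog<Mortgage>; // string
```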
They don't even have to themselves be primitives for this to work, due to TypeScript's brilliant way of allowing you to combine primitive and object types into discriminated unions. @jack-williams' proposal would work equally well with something like:

```ts
class Dog {
  readonly type = 'dog';
  // ...
}
class Cat {
  readonly type = 'cat';
  // ...
}
type Animal = Dog | Cat
class Mortgage {
  readonly type = 'mortgage';
  // ...
}
```

But then you can't use traditional inheritance (…).
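A brief illustration of why the discriminant helps (my addition, not part of the original comment): the literal `type` property makes the classes mutually incompatible, so a conditional type can tell them apart just as it can with the primitive unions above:

```ts
type IsDog<T> = T extends Dog ? number : string;

type D = IsDog<Animal>;   // number | string — distributes over the Dog | Cat union
type M = IsDog<Mortgage>; // string — Mortgage's `type: 'mortgage'` is incompatible with Dog's `type: 'dog'`
```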
@laughinghan seems like it exists now that you've made it 😃
@RyanCavanaugh haha great! Yeah I went ahead and wrote up a little guide to all the types in the diagram: https://gist.github.com/laughinghan/31e02b3f3b79a4b1d58138beff1a2a89
This isn't a direct suggestion, more a query that includes a proposed alternate approach (though I'm not sure I would even want the alternate).
Search Terms
distributive conditional type unknown
Suggestion / Question
Here is the current behavior of a distributive conditional type applied to the distinguished types `never` and `unknown`.
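A minimal sketch of that behaviour, using a hypothetical `Conditional` helper:

```ts
type Conditional<T> = T extends string ? true : false;

// never is the empty union, so there is nothing to distribute over.
type A = Conditional<never>;   // never

// unknown is treated as an ordinary type that does not extend string.
type B = Conditional<unknown>; // false
```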
A distributive conditional type maps over union elements, so the explanation for the first case has been described as:

A :: `never` is the empty union, so we map over nothing, returning `never`.

Following this intuition one might assume the following:

B :: `unknown` is an infinite union (the union of all types), so distributing always matches both sides.

Though this isn't how it actually works: `unknown` is treated like a regular type and returns the false branch.
never
,unknown
, and conditional types mean. To add to the confusion,any
also has its own wildcard behavior that matches both branches (except when the extends type isany
). Given thanunknown
has been describe as the type-safe counterpart ofany
, it seems like they should behave somewhat similarly in conditional types.My suggestion / prompt for discussion is: what should
unknown
do? Are there any practical advantages to having it act a certain way? The only real alternate design is to have it distribute to both branches, but I'm not sure if that is 'better'.Checklist
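A sketch of the `any` behaviour mentioned above (my illustration, using the same hypothetical `Conditional` helper):

```ts
type Conditional<T> = T extends string ? true : false;

// any matches both branches, so the result is the union of the two:
type C = Conditional<any>; // boolean (true | false)

// The exception: when the extends type is itself any, only the true branch is used.
type AlwaysTrue<T> = T extends any ? 1 : 0;
type E = AlwaysTrue<any>; // 1
```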