
Fix a precision issue in abstract_iteration #41839

Merged: 5 commits merged from mh/fix_abstract_iteration into master on Aug 30, 2021

Conversation

@martinholters (Member)

The first loop is left before `valtype` and `statetype` are updated from the current `stateordonet`, so the second loop should first process that `stateordonet` before obtaining a new one, instead of the other way around.

Fixes #41022.
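
(Background sketch, not part of the PR: splatting lowers to repeated `iterate` calls, which is the protocol `abstract_iteration` unrolls during inference, first with precise states and then, in the second loop, with widened types. The helper name `collect_splat` below is made up purely for illustration.)

```julia
# User-level sketch of what `(x...,)` does at run time and what the two loops
# in `abstract_iteration` have to model type-wise: repeated `iterate` calls
# until `nothing` is returned.
function collect_splat(x)
    out = Any[]
    y = iterate(x)                 # one-argument iterate: first value/state, or nothing
    while y !== nothing
        val, state = y
        push!(out, val)
        y = iterate(x, state)      # two-argument iterate, driven by the previous state
    end
    return (out...,)
end

collect_splat(1:3) == (1, 2, 3)    # behaves like (x...,) for ordinary iterators
```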

@martinholters added the compiler:inference (Type inference) label Aug 9, 2021
@martinholters (Member, Author)

Ah, no, this isn't it. We need to compute a new `stateordonet` based on the const-widened `statetype` first. We just need to be careful, in case the first loop never sets a `statetype`, not to use `Union{}` as the argument type then.

@martinholters (Member, Author)

Bump. @Keno, mind taking another look?

@martinholters (Member, Author)

Bump. Any objections against merging this?

@vtjnash added the backport 1.7 and kind:bugfix (This change fixes an existing bug) labels Aug 17, 2021
@vtjnash (Sponsor Member) commented Aug 17, 2021

Thanks for solving this. It looks like it is actually a bugfix, so we should backport it, but it seems that only the `iterate` function needs to be moved (the other changes just make it slower / less accurate).

@vtjnash added the backport 1.6 (Change should be backported to release-1.6) label Aug 17, 2021
@martinholters (Member, Author)

I don't think this is a bugfix in the sense that inference was unsound before. At least I could not come up with a case, although that may not be too strong an indicator... On the other hand, the fact that you proposed an alternative which did end up being unsound is a strong indicator that this is quite tricky to get right, so I'm not sure backporting is a good idea.

@martinholters (Member, Author)

> but that only the iterate function needs to be moved (the other changes just make it slower / less accurate)

If you mean that we should not do the removal of `statetype = widenconst(statetype)` and the use of `⊑` instead of `<:`, then no, these changes are needed for soundness (see the explanation in response to your inline comment above).
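
(For context, a small illustration of why `⊑` rather than `<:` is needed once `statetype` is no longer widened: it may be a lattice element such as `Core.Const`, not a plain type. This uses the internal `Core.Compiler.widenconst` purely for demonstration; it is not the PR's code.)

```julia
# `statetype` can be something like Core.Const(2) after const-propagation.
# `<:` is only defined between types, so comparing a Const with <: would throw;
# the inference lattice order `⊑` (or widening first) handles this case.
st = Core.Const(2)
@assert Core.Compiler.widenconst(st) === typeof(2)   # the type-level view of the Const
# st <: Int                                          # would throw: `<:` expects a Type
```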

@martinholters (Member, Author)

Prodding at this, I've found another case where we could consider improving precision:

```julia
julia> struct NoneOrFailOnSecond{T}
           itr::T
       end

julia> Base.iterate(itr::NoneOrFailOnSecond) = iterate(itr.itr)

julia> f_splat(x) = (x...,)
f_splat (generic function with 1 method)

julia> f_splat(NoneOrFailOnSecond([]))
()

julia> f_splat(NoneOrFailOnSecond([1]))
ERROR: MethodError: no method matching iterate(::NoneOrFailOnSecond{Vector{Int64}}, ::Int64)
[...]

julia> @code_typed f_splat(NoneOrFailOnSecond([1]))
CodeInfo(
1 ─ %1 = Core._apply_iterate(Base.iterate, Core.tuple, x)::Tuple
└──      return %1
) => Tuple
```

(The result is the same on master and with this PR in its current form.) Conceivably, this could be inferred as `Tuple{}`. It should also be possible to construct a case where we could infer that either a fixed (non-zero) number of elements is produced or the iteration fails. We would need to check for `nounion == Bottom` and then act accordingly. But I doubt it's worth it: such iterators look rather pathological, and defensively producing too wide a result seems better than getting something wrong and returning too narrow a result, even if only for bizarre edge cases.

So I'd say leave that as is (for now, at least). Opinions?
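
(A hypothetical variant of the above, made up for illustration: an iterator whose one-argument `iterate` always yields a value but which has no two-argument method, so splatting it can only fail after producing its first element.)

```julia
# Hypothetical type, not from the PR: the one-arg iterate always succeeds and
# the two-arg method is missing, so `(x...,)` always throws a MethodError after
# the first element. In principle inference could exploit this (e.g. conclude
# the call never returns normally), but returning the wide `Tuple` stays on the
# safe side.
struct OneThenFail{T}
    x::T
end
Base.iterate(itr::OneThenFail) = (itr.x, nothing)

f_splat(x) = (x...,)
# f_splat(OneThenFail(1))  # ERROR: MethodError: no method matching iterate(::OneThenFail{Int64}, ::Nothing)
```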

@vtjnash (Sponsor Member) commented Aug 19, 2021

I think we can easily handle that by choosing not to widen `valtype` and by avoiding adding a `Vararg{Bottom}` to the array.

Another case we can improve is to model the implicit typeassert call by adding a `tmeet(nounion, Tuple{Any, Any})` call before the checks for whether we have a Tuple of 2 fields.
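
(An illustration of that second point, using the exported `typeintersect` as a stand-in for the compiler-internal `tmeet`; the example types are made up.)

```julia
# Intersecting the iterate return type with Tuple{Any, Any} models the implicit
# assertion that, once the `nothing` (iteration done) case is ruled out, the
# result is a 2-element (value, state) tuple.
ret = Union{Nothing, Tuple{Int, Int}}        # hypothetical iterate return type
nounion = typeintersect(ret, Tuple{Any, Any})
@assert nounion == Tuple{Int, Int}           # the Nothing part is stripped away
```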

Review thread on base/compiler/abstractinterpretation.jl (outdated, resolved):

```julia
    valtype = Any
    break
end
if nounion.parameters[1] <: valtype && nounion.parameters[2] <: statetype
    # reached a fixpoint or iterator failed/gave invalid answer
    if typeintersect(stateordonet, Nothing) === Union{}
```
@martinholters (Member, Author)

Suggested change:

```diff
- if typeintersect(stateordonet, Nothing) === Union{}
+ if !(Nothing <: stateordonet)
```

As `Nothing` is a leaf type, this should be equivalent but more efficient?

@vtjnash (Sponsor Member) commented Aug 20, 2021
It should be almost identical (since typeintersect can check that property too), so not worth worrying about the difference too much
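
(A quick sanity check of that equivalence, made up for illustration; the two formulations agree because `Nothing` is a concrete leaf type.)

```julia
for stateordonet in (Union{Nothing, Tuple{Int, Int}}, Tuple{Int, Int}, Nothing, Any)
    via_intersect = typeintersect(stateordonet, Nothing) === Union{}
    via_subtype   = !(Nothing <: stateordonet)
    @assert via_intersect == via_subtype   # same answer either way
end
```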

```julia
if nounion === Union{}
    # handle iterator failing or giving an invalid answer below
    nounion = Tuple{Union{}, Union{}}
elseif !isa(nounion, DataType) || !(nounion <: Tuple) || isvatuple(nounion) || length(nounion.parameters) != 2
```
@martinholters (Member, Author)

Suggested change:

```diff
- elseif !isa(nounion, DataType) || !(nounion <: Tuple) || isvatuple(nounion) || length(nounion.parameters) != 2
+ elseif !isa(nounion, DataType)
```

Should be sufficient with the `typeintersect` above, but in corner cases it might produce oddly wide types, which would then make these extra checks necessary?
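
(One such corner case, sketched for illustration: `typeintersect` can return a `UnionAll` rather than a plain `DataType`, so at least the `isa(nounion, DataType)` guard still has work to do.)

```julia
# The intersection of a UnionAll tuple type with Tuple{Any, Any} need not be a DataType:
nounion = typeintersect(Tuple{T, T} where T, Tuple{Any, Any})
@show nounion                  # expected: Tuple{T, T} where T (a UnionAll)
@show nounion isa DataType     # expected: false, so the extra checks still matter
```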

Commit messages in this PR:

If the first loop exits in the first iteration, the `statetype` is still
`Bottom`. In that case, the new `stateordonet` needs to be determined
with the two-arg version of `iterate` again.
Fixes #41022.

Explicitly test that inference produces a sound (and reasonably precise)
result when splatting an iterator (in this case a long range) that
allows constant-propagation up to the `MAX_TUPLE_SPLAT` limit.

Avoid recomputation of a `widenconst` and initialize `valtype` and
`statetype` to `Bottom` before the second loop to simplify a `⊑` to a `<:`
there.

@martinholters (Member, Author)

@vtjnash anything left I should address or good to go?

@vtjnash merged commit 92337b5 into master Aug 30, 2021
@vtjnash deleted the mh/fix_abstract_iteration branch August 30, 2021 20:44
KristofferC pushed a commit that referenced this pull request Aug 31, 2021:
If the first loop exits in the first iteration, the `statetype` is still
`Bottom`. In that case, the new `stateordonet` needs to be determined
with the two-arg version of `iterate` again.

Explicitly test that inference produces a sound (and reasonably precise)
result when splatting an iterator (in this case a long range) that
allows constant-propagation up to the `MAX_TUPLE_SPLAT` limit.

Fixes #41022

Co-authored-by: Jameson Nash <vtjnash@gmail.com>
(cherry picked from commit 92337b5)
KristofferC pushed a commit that referenced this pull request Sep 3, 2021 (same commit message, cherry picked from commit 92337b5)
KristofferC pushed a commit that referenced this pull request Sep 3, 2021 (same commit message, cherry picked from commit 92337b5)
@KristofferC removed the backport 1.6 (Change should be backported to release-1.6) and backport 1.7 labels Sep 7, 2021
LilithHafner pushed a commit to LilithHafner/julia that referenced this pull request Feb 22, 2022 (same commit message)
LilithHafner pushed a commit to LilithHafner/julia that referenced this pull request Mar 8, 2022 (same commit message)
staticfloat pushed a commit that referenced this pull request Dec 23, 2022 (same commit message, cherry picked from commit 92337b5)
Labels: compiler:inference (Type inference), kind:bugfix (This change fixes an existing bug)
Linked issue: #41022 (Possible inference improvement in iterated assignment where the RHS uses splatting)
4 participants