World age #131

Closed · timholy opened this issue Apr 13, 2017 · 14 comments

@timholy (Member) commented Apr 13, 2017

There are various discussions all over the place (JuliaGizmos/Interact.jl#161, JuliaGizmos/Interact.jl#158 (comment), JuliaLang/julia#21141, JuliaLang/julia#21307), but it makes sense to have some kind of issue filed here. For the record, the following simple test fails on 0.6:

julia> using Reactive

julia> s = Signal(1)
input: Signal{Int64}(1, nactions=0)

julia> m = map(val->val, s, init=0)
map(input): Signal{Int64}(0, nactions=0)

julia> value(m)
0

julia> push!(s, 2)
Failed to push!
to node
    input: Signal{Int64}(2, nactions=1)
error at node: map(input): Signal{Int64}(0, nactions=0)
MethodError: no method matching (::##3#4)(::Int64)
The applicable method may be too new: running in world age 21667, while current world is 21668.
Closest candidates are:
  #3(::Any) at REPL[3]:1 (method too new to be called from this world context.)
(::Reactive.##15#16{##3#4,Tuple{Reactive.Signal{Int64}}})(::Reactive.Signal{Int64}, ::Int64) at /home/tim/.julia/v0.6/Reactive/src/operators.jl:40
do_action(::Reactive.Action, ::Int64) at /home/tim/.julia/v0.6/Reactive/src/core.jl:161
run(::Int64) at /home/tim/.julia/v0.6/Reactive/src/core.jl:239
macro expansion at /home/tim/.julia/v0.6/Reactive/src/core.jl:281 [inlined]
(::Reactive.##12#13)() at ./task.jl:335
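
(For context: this is the generic world-age barrier introduced in 0.6, not something Reactive-specific. A minimal illustration independent of Reactive, essentially the example from the Julia manual, where a function compiled in one world cannot call a method defined later at runtime:

julia> function tryeval()
           @eval newfun() = 1
           newfun()        # newfun is too new for tryeval's world
       end
tryeval (generic function with 1 method)

julia> tryeval()
ERROR: MethodError: no method matching newfun()
The applicable method may be too new: running in world age ..., while current world is ...

julia> newfun()            # called from the REPL's latest world, this is fine
1

Reactive hits the same wall because the queue-runner task is started in Reactive's __init__, before the user's map closures exist.)
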
@shashi (Member) commented Apr 13, 2017

@timholy Thanks for opening this issue and being on top of this problem!

@SimonDanisch (Member)

@timholy, what were the possible fixes to this? I think it was something like this:

I think I'm missing one fix ;)

@SimonDanisch (Member)

Ah, @vtjnash suggested restarting the event queue when an action is added:

__init__() = @async backend_eventqueue()

function backend_eventqueue()
    restart[] = false             # restart::Ref{Bool}, flipped when a new action is added
    while true
        dispatch_events()         # process whatever is currently queued
        # hand off to a fresh task, created in the current (newer) world, and exit
        restart[] && (@async backend_eventqueue(); return)
    end
end
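
Presumably add_action! (or wherever a new listener gets registered) would then flip the flag so the running loop hands itself off to a task created in the newer world; a sketch, not Reactive's actual code:

const restart = Ref(false)

function add_action!(f, recipient, root=nothing)
    # ... register the action as before ...
    restart[] = true    # the running backend_eventqueue() sees this and re-spawns itself
end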

@timholy (Member, Author) commented Apr 13, 2017

I like the third one, but I haven't played with it (and have other tasks I need to switch to for a bit).

@vtjnash commented Apr 13, 2017

Possibly one more approach is to start an event-loop per map. The main event loop would just notify Condition variables (or push directly to a Channel?), and the consumer tasks would all be independent.
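
A rough sketch of that idea (hypothetical names, not Reactive's actual API): each node owns a Channel plus an independent consumer task, and because the consumer is spawned after the node's function is defined, it runs in a world that already knows about that function:

# hypothetical, Reactive-free sketch of one node's consumer loop
function channel_node(f, input::Channel)
    output = Channel{Any}(32)
    @async for val in input        # consumer task owned by this node
        put!(output, f(val))
    end
    return output
end

src = Channel{Any}(32)
doubled = channel_node(x -> 2x, src)
put!(src, 21)
take!(doubled)                     # 42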

@timholy (Member, Author) commented Apr 15, 2017

I was gearing up to tackle this, and happened to notice #129. It appears to solve all the problems with world age.

@JobJob (Member) commented Apr 15, 2017

@timholy is that with Reactive.run_async(false)?

I don't think it should work in the default async mode: it's still just starting the event loop when Reactive.__init__ is called, so functions defined after that should have world age issues. E.g. this fails for me with that branch on today's 0.6 (at the REPL and in Jupyter) with a world age error:

using Reactive
s = Signal(1)
m = map((x)->x+1, s)
push!(s, 3)

Adding Reactive.run_async(false) after the using Reactive does make it work though.
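
i.e., presumably something like this (run_async is the switch from the #129 branch; with it off, pushes appear to be processed synchronously rather than by the async runner task):

using Reactive
Reactive.run_async(false)   # from #129: handle messages on push! instead of in an @async task

s = Signal(1)
m = map(x -> x + 1, s)
push!(s, 3)
value(m)                    # 4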

I think restarting the event queue after adding the action sounds fine as per @SimonDanisch's comment above. This minor diff to the branch in #129 works, at least for the above case:

diff --git a/src/core.jl b/src/core.jl
index c3496b5..e9a8b34 100644
--- a/src/core.jl
+++ b/src/core.jl
@@ -174,6 +174,8 @@ function add_action!(f, recipient, root=nothing)
     for aroot in roots
         push!(action_queues[aroot], a)
     end
+    Reactive.stop()
+    start_queue_runner()
     a
 end
 
@@ -351,9 +353,13 @@ end
 # Run everything queued up till the instant of calling this function
 run_till_now() = run(Base.n_avail(_messages))
 
-# A decent default runner task
-function __init__()
+function start_queue_runner()
     global runner_task = @async begin
         Reactive.run()
     end
 end
+
+# A decent default runner task
+function __init__()
+    start_queue_runner()
+end
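
With that patch applied, the earlier failing example should go through in the default async mode, since add_action! now re-spawns the runner in the current world (the sleep just gives the async runner a chance to process the push):

using Reactive
s = Signal(1)
m = map(x -> x + 1, s)
push!(s, 3)
sleep(0.1)       # let the freshly spawned queue-runner task process the message
value(m)         # expected: 4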

@JobJob (Member) commented Apr 15, 2017

If we went with that, we'd need to take more care that there isn't an issue with things being pushed to the wrong queue during the stopping of the old queue and the starting of the new one. I was worried that if the old queue contains a push!(a) and there's a map(x->push!(b,2x), a), that second push message would go on the old queue and not get run. Oh wait, there's only one message queue (_messages), just different (queue runner) tasks that take messages off it, so all good.

@JobJob (Member) commented Apr 15, 2017

@vtjnash is the invokelatest proposal going to be merged? Would there be a performance penalty in using it?

@timholy (Member, Author) commented Apr 16, 2017

Yeah, having now looked at the code in #129 I agree it can't solve the problem; I'm not sure why it seems less vulnerable in my tests (unless I had an unrelated change elsewhere).

Perhaps we should decide what to do about #129 before tackling this issue, though.

@JobJob (Member) commented Apr 16, 2017

Yeah. Also, if invokelatest is available and works for this purpose, I think it makes the most sense (it'd just be a one-line change here, I think), assuming there isn't some downside that I'm not aware of. I'm definitely rather unaware of the compiler details.
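
For concreteness, a hypothetical sketch of that one-line change (the real do_action in src/core.jl may look different, and the Action field names here are made up):

# do_action(a::Action, timestep) = a.f(a.recipient, timestep)   # current: can hit a stale world
do_action(a::Action, timestep) = Base.invokelatest(a.f, a.recipient, timestep)

Base.invokelatest (new in 0.6) calls the function in the world current at call time, bypassing the world-age check.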

@timholy (Member, Author) commented Apr 16, 2017

I suspect the issue is that invokelatest would exact a performance hit on each signal update, whereas stopping and starting the queue only exacts a performance hit when you change the signal graph.

@vtjnash commented Apr 16, 2017

Yes, it's pretty much the same as eval. That's why using Channels instead of the single event loop may work better, or restarting the main loop. Although since it isn't using the return value, the inference barrier is perhaps not too significant.

@timholy (Member, Author) commented Apr 18, 2017

Since #129 might take a little while to decide about, I decided to submit a variant of @JobJob's suggestion, see #132.
