Threaded Dispatcher #142
Codecov Report: all modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main     #142      +/-   ##
==========================================
+ Coverage   99.70%   99.71%   +0.01%
==========================================
  Files           9        9
  Lines         667      704      +37
==========================================
+ Hits          665      702      +37
  Misses          2        2
Initial overhead benchmarking shows the threaded dispatcher at ~1.2x the non-dispatcher case. Not a bad result at all, given this is prior to any optimization.

library(mirai)
library(future)
daemons(8, dispatcher = FALSE, .compute = "non-dispatcher")
#> [1] 8
daemons(8, dispatcher = NA, .compute = "threaded-dispatcher")
#> [1] 8
daemons(8, dispatcher = TRUE, .compute = "dispatcher")
#> [1] 8
plan("multisession", workers = 8)
bench::mark(
mirai(1, .compute = "non-dispatcher")[],
mirai(1, .compute = "threaded-dispatcher")[],
mirai(1, .compute = "dispatcher")[],
value(future(1)),
relative = TRUE
)
#> # A tibble: 4 × 6
#> expression min median `itr/sec` mem_alloc `gc/sec`
#> <bch:expr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 "mirai(1, .compute = \"non-dispatc… 1 e0 1 782. 5.37 Inf
#> 2 "mirai(1, .compute = \"threaded-di… 1.26e0 1.16 699. 1 Inf
#> 3 "mirai(1, .compute = \"dispatcher\… 1.57e0 1.57 470. 1 Inf
#> 4 "value(future(1))"                 1.08e3 893.        1      278.     NaN

Created on 2024-09-03 with reprex v2.1.1
FYI @wlandau this is now ready to merge from my side as experimental. I still have to add documentation and tests. It doesn't yet have the same interfaces to support autoscaling, but otherwise works similarly to the existing dispatcher. I'll open another thread so we can track {crew}-related discussions - it may be we find more efficient ways of doing things.
Very exciting work! Looking forward to digging into specifics.
Dispatcher branch is merged.
For the record, at the point of merge, threaded dispatcher overhead is at ~15%. This is really quite good, as it has kept up with improvements in base performance over this period.

library(mirai)
library(future)
daemons(8, dispatcher = FALSE, .compute = "non-dispatcher")
#> [1] 8
daemons(8, dispatcher = NA, .compute = "threaded-dispatcher")
#> [1] 8
daemons(8, dispatcher = TRUE, .compute = "dispatcher")
#> [1] 8
plan("multisession", workers = 8)
bench::mark(
mirai(1, .compute = "non-dispatcher")[],
mirai(1, .compute = "threaded-dispatcher")[],
mirai(1, .compute = "dispatcher")[],
value(future(1)),
relative = TRUE
)
#> # A tibble: 4 × 6
#> expression min median `itr/sec` mem_alloc `gc/sec`
#> <bch:expr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 "mirai(1, .compute = \"non-dispatc… 1 e0 1 791. 8.66 Inf
#> 2 "mirai(1, .compute = \"threaded-di… 1.21e0 1.15 700. 1 Inf
#> 3 "mirai(1, .compute = \"dispatcher\… 1.53e0 1.59 506. 1 NaN
#> 4 "value(future(1))"                 1.02e3 945.        1      278.     NaN

Created on 2024-09-11 with reprex v2.1.1
If we use collect_mirai():

library(mirai)
library(future)
daemons(8, dispatcher = "none", .compute = "none")
#> [1] 8
daemons(8, dispatcher = "thread", .compute = "thread")
#> [1] 8
daemons(8, dispatcher = "process", .compute = "process")
#> [1] 8
plan("multisession", workers = 8)
bench::mark(
collect_mirai(mirai(1, .compute = "none")),
collect_mirai(mirai(1, .compute = "thread")),
collect_mirai(mirai(1, .compute = "process")),
value(future(1)),
relative = TRUE
)
#> # A tibble: 4 × 6
#> expression min median `itr/sec` mem_alloc `gc/sec`
#> <bch:expr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 "collect_mirai(mirai(1, .compute =… 1 e0 1 894. 8.07 NaN
#> 2 "collect_mirai(mirai(1, .compute =… 1.14e0 1.13 788. 1 NaN
#> 3 "collect_mirai(mirai(1, .compute =… 1.53e0 1.54 539. 1 Inf
#> 4 "value(future(1))"                 1.08e3 999.        1      278.     NaN

Created on 2024-11-05 with reprex v2.1.1
Closes #74. Addresses the key remaining issue of #97.

To use: requires the dispatcher branch of dev nanonext.

Implementation currently benefits from:

This is a straight port of the existing dispatcher using inproc sockets, only somewhat optimised. To keep things as light as possible, there is currently limited information retrieval from the dispatcher thread. Resultant performance is good, and is closer to the non-dispatcher than the dispatcher case.

Does not yet support the interfaces required for crew autoscaling. A re-design is still possible prior to merge.
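A minimal sketch of the usage described above, assuming the dispatcher branch of dev nanonext is installed alongside mirai. As in the benchmarks earlier in this thread, dispatcher = NA selects the experimental threaded dispatcher:

```r
library(mirai)

# NA selects the experimental threaded dispatcher
# (FALSE = no dispatcher, TRUE = existing process-based dispatcher)
daemons(8, dispatcher = NA)

m <- mirai(Sys.getpid())  # evaluate asynchronously on a daemon
m[]                       # wait for and collect the result

daemons(0)                # shut down daemons and reset
```

The later benchmark in this thread suggests the interface was subsequently revised to string values ("none", "thread", "process") for the dispatcher argument.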