Add resource pool #566
Conversation
Thanks @darrenldl!
I've added a few comments, generally on the API rather than the actual implementation for now. I think some MDX tests would help us decide the API too :))
```ocaml
let start_monitor ~sw (t : 'a t) : unit =
  let rec aux () =
    match Stream.take t.waiting with
```
With the current API, a pool must be explicitly shut down in order for the forked monitoring fiber to finish and for the program to move past whatever switch was used to create the pool. I hit this problem right away when using the pool:
```ocaml
open Eio

let () =
  Eio_main.run @@ fun _ ->
  let test_pool () =
    Eio.Switch.run @@ fun sw ->
    let p = Pool.create ~sw ~alloc:(fun () -> (), Pool.noop_handlers) 2 in
    let use () =
      Eio.Fiber.fork ~sw @@ fun () ->
      Pool.use p (fun () -> Eio.traceln "Run!"; Eio_unix.sleep 2.)
    in
    let runs = [ (); (); (); (); (); () ] in
    List.iter use runs
    (* Pool.shutdown p *)
  in
  test_pool ();
  Eio.traceln "All done"
```
This program hangs without explicitly shutting down the pool. I think this might surprise some people; is there a way to avoid it?
This actually caught me off guard as well, so I agree it is not a very good design.
I think `shutdown` can be replaced by polling the Eio switch? I'll have to look at other Eio modules.
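One possibility along these lines (a hedged sketch, not code from this PR; `make_pool`, `shutdown` and `start_monitor` stand in for the PR's internals): instead of polling, `Eio.Switch.on_release` can register `shutdown` to run automatically when the switch finishes, so users never call it themselves.

```ocaml
(* Sketch only: register shutdown on the creating switch.
   [make_pool], [shutdown] and [start_monitor] are placeholders
   for this PR's internal functions. *)
let create ~sw ~alloc n =
  let t = make_pool ~alloc n in
  (* Runs when [sw] finishes, releasing the pool without an
     explicit [Pool.shutdown] at every use site. *)
  Eio.Switch.on_release sw (fun () -> shutdown t);
  start_monitor ~sw t;
  t
```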
I wonder if the pool itself needs to spawn the monitoring fiber, or if it could just return a resource when a user calls `use`, yielding when the pool is out of resources. That would avoid needing to pass in the switch.
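The alternative could look roughly like this (a minimal sketch, not the PR's implementation; the type definition, `create`'s signature and the use of `Eio.Stream` as the backing store are all assumptions):

```ocaml
(* Sketch: no monitor fiber and no switch. [use] takes a resource
   from the stream, suspending the fiber when the pool is empty,
   and always returns the resource when done. *)
type 'a t = { resources : 'a Eio.Stream.t }

let create ~alloc n =
  let resources = Eio.Stream.create n in
  for _ = 1 to n do Eio.Stream.add resources (alloc ()) done;
  { resources }

let use t f =
  let r = Eio.Stream.take t.resources in  (* yields if pool is empty *)
  Fun.protect
    ~finally:(fun () -> Eio.Stream.add t.resources r)
    (fun () -> f r)
```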
I think my original intention was to keep the analysis simple, especially in anticipation of faults in allocation, release, etc. I don't know how faithfully I've stuck to that intention, and it could be the case that we can leave those responsibilities to `use`.
I'm leaning toward having a monitor for simpler code (in my attempts thus far), but would not mind swapping approaches.
You might be able to keep most things the same and just use `Fiber.fork_daemon` instead, and this wouldn't (I think) be an issue anymore: https://ocaml.org/p/eio/latest/doc/Eio/Fiber/index.html#val-fork_daemon
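A sketch of that change, assuming the PR's existing `aux` monitor loop (body elided): daemon fibers are cancelled automatically when the switch finishes, so an explicit `shutdown` is no longer needed just to unblock the switch.

```ocaml
(* Sketch: the monitor loop from this PR, forked as a daemon fiber.
   Daemon fibers don't keep the switch alive, so [Switch.run] can
   return once all non-daemon fibers have finished. *)
let start_monitor ~sw (t : 'a t) : unit =
  Fiber.fork_daemon ~sw (fun () ->
      let rec aux () =
        match Stream.take t.waiting with
        (* ... handle the waiting request as before ... *)
        | _request -> aux ()
      in
      aux ())
```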
Ah right, it carries exactly the behaviour we want for the monitor. Thanks!
```ocaml
val use : 'a t -> ('a -> 'b) -> 'b
val async : sw:Switch.t -> 'a t -> ('a -> unit) -> unit
```
Do you think these are necessary (i.e. do `async` and `async_promise` need to be exposed to the user? I realise they're used internally)? In my mind, the `use` function is the main one, and I can decide to fork the fiber that calls it should I need to. Otherwise we've kind of added `fork` and `fork_promise` with two slightly different names (using `async`), which feels a little cluttered?
Ah, good point. Leaving the forking responsibilities to users gives a cleaner picture, yes. They did feel a bit sloppy and redundant when I was writing them.
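With `async`/`async_promise` dropped, callers would just fork around `use` themselves, e.g. (a sketch; `do_something` and `compute` are hypothetical placeholder functions):

```ocaml
(* Sketch: the standard Eio fork combinators replace [async] and
   [async_promise]; [p] is a pool as in the earlier example. *)
let run_in_background ~sw p =
  (* fire-and-forget, like [async] *)
  Eio.Fiber.fork ~sw (fun () ->
      Pool.use p (fun r -> do_something r))

let run_with_result ~sw p =
  (* returns a promise, like [async_promise] *)
  Eio.Fiber.fork_promise ~sw (fun () ->
      Pool.use p (fun r -> compute r))
```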
```ocaml
val async_promise : sw:Switch.t -> 'a t -> ('a -> 'b) -> 'b Promise.or_exn
val clear : 'a t -> unit
```
I don't quite understand when `clear` would be used. It calls the `dispose` function for each resource; how would people use this in practice? I think a few test cases would help here.
I think I just copied this from Lwt_pool. Indeed, it's not very obvious how to use it in practice; I think removing it might be best.
Thanks for the feedback! I'll revisit this PR in a few weeks.
@darrenldl: In #602 I've made a lock-free pool using the Cells module. I think the extra features of this PR (e.g. the monitor fiber) could probably be built on top of that - would that work for your use?
@talex5 The monitor is just there to make sure that if some acquired resources have become unusable after use, or an exception occurred during use, the pool is filled back up. I think your version (along with the simpler API) is good - this PR mainly tried to match Lwt_pool's features, some of which might not be that usable in practice to begin with. Thanks for adding
Split from #517 to focus on just resource pool