shared python object access across threads and lazy_static deadlocks #973
Honestly, I have no good solution for this right now. Possible solutions in user code are: …

I'll try to sketch out a draft PR with my idea tomorrow. I think there can be a pretty elegant solution using the …

See #975
Amazing, thank you. I will try to check the branch and see if it resolves my specific issue.

👍 Note that we could probably wrap it with a macro similar to …
Hi, I finally had the time to try it out. Using the OnceCell seems to do the trick; my tests no longer freeze when running in parallel :) Being the Rust noob that I am, I tried to instantiate the object in a similar way to the datetime object:

```rust
static SLAVE_MANAGER_ONCE_CELL: OnceCell<PyObject> = OnceCell::new();

struct SlaveManagerApi {}

static SLAVE_MANAGER: SlaveManagerApi = SlaveManagerApi {};

impl Deref for SlaveManagerApi {
    type Target = PyObject;

    fn deref(&self) -> &'static PyObject {
        let py = unsafe { Python::assume_gil_acquired() };
        SLAVE_MANAGER_ONCE_CELL.get_or_init(py, || {
            let cls = || -> PyResult<PyObject> {
                let ctx: pyo3::PyObject = py
                    .import("pyfmu.fmi2.slaveContext")
                    .expect("Unable to import module declaring slave manager. Ensure that PyFMU is installed inside your current environment.")
                    .get("Fmi2SlaveContext")?
                    .call0()?
                    .extract()?;
                println!("{:?}", ctx);
                Ok(ctx)
            };
            match cls() {
                Err(e) => {
                    e.print_and_set_sys_last_vars(py);
                    panic!("Unable to instantiate slave manager");
                }
                Ok(o) => o,
            }
        })
    }
}
```

One thing that is bothering me is that multiple threads seem to be accessing the Python code despite having acquired the GIL. Should this be possible?
Mmm, the use of … Also, it's not unexpected that even with the GIL acquired properly, multiple threads might start the initialization, because … I suggest modifying your code to be this: …

If you prefer …
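The suggested replacement code is not preserved in this transcript. As a rough, std-only sketch of the same shape of fix — a plain accessor function built on `get_or_init` rather than a `Deref` impl that relies on `assume_gil_acquired` — here is an analog using the standard library's `OnceLock` (Python is left out entirely; `SlaveManager` and `get_slave_manager` are hypothetical names for illustration; in the pyo3 version the accessor would take a `Python<'_>` token as an argument):

```rust
use std::sync::OnceLock;
use std::thread;

// Hypothetical stand-in for the Python slave-manager object.
#[derive(Debug)]
struct SlaveManager(&'static str);

static SLAVE_MANAGER: OnceLock<SlaveManager> = OnceLock::new();

// Accessor function: initialization happens at most once, and every
// caller gets a reference to the same instance.
fn get_slave_manager() -> &'static SlaveManager {
    SLAVE_MANAGER.get_or_init(|| SlaveManager("Fmi2SlaveContext"))
}

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| thread::spawn(|| get_slave_manager() as *const SlaveManager as usize))
        .collect();
    let addrs: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    // Every thread observed the same single instance.
    assert!(addrs.iter().all(|&a| a == addrs[0]));
    println!("one instance shared by all threads: {:?}", get_slave_manager());
}
```

The design point is that the caller supplies the context explicitly instead of the static conjuring it out of thin air, which keeps the unsafe assumption out of the shared accessor.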
Thanks for the feedback, I really appreciate it :)

I guess I overlooked the fact that the GIL may be released by the Python code being executed.

```rust
fn foo() {
    let gil = Python::acquire_gil();
    let py = gil.python();
    get_slave_manager(py).call_method0("function_which_should_not_be_interrupted");
}

fn bar() {
    let gil = Python::acquire_gil();
    let py = gil.python();
    get_slave_manager(py).call_method0("function_which_should_not_be_interrupted");
}
```

So the options would be rewriting the Python code being called to ensure that it does not release the GIL at an inopportune time, or using a mutex in Rust?
I'm not sure of the exact mechanics of which bytecodes the Python interpreter may choose to switch threads on.

It depends exactly why you don't want the interpreter to switch threads. If you need to guarantee that only one thread runs the whole critical section, you should use a Mutex, yes. If you just need to guarantee ordering, a channel might be enough?
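The Mutex option can be sketched with std types alone (no pyo3): holding one lock across the whole critical section means the two sections cannot interleave, even if the work inside yields to other threads (as a Python call that releases the GIL might). Everything here is an illustrative model, not the code from this thread:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Run two threads that each append an "enter"/"exit" pair to a log while
// holding a shared section lock; return the log for inspection.
fn run() -> Vec<(usize, &'static str)> {
    let log = Arc::new(Mutex::new(Vec::new()));
    let section = Arc::new(Mutex::new(()));
    let handles: Vec<_> = (0..2)
        .map(|id| {
            let log = Arc::clone(&log);
            let section = Arc::clone(&section);
            thread::spawn(move || {
                // Hold the lock for the entire critical section.
                let _guard = section.lock().unwrap();
                log.lock().unwrap().push((id, "enter"));
                // Another thread may be scheduled here, but it cannot
                // enter its own critical section until we drop _guard.
                thread::yield_now();
                log.lock().unwrap().push((id, "exit"));
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    Arc::try_unwrap(log).unwrap().into_inner().unwrap()
}

fn main() {
    let log = run();
    // Each thread's "enter" is immediately followed by its own "exit":
    // the critical sections never interleaved.
    for pair in log.chunks(2) {
        assert_eq!(pair[0].0, pair[1].0);
        assert_eq!((pair[0].1, pair[1].1), ("enter", "exit"));
    }
    println!("no interleaving: {:?}", log);
}
```

Note the caveat in the real pyo3 setting: a Rust Mutex held while calling into Python can itself deadlock if the Python side blocks on something the waiting thread holds, so lock ordering still needs care.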
Rewriting the Python code such that the GIL is not released in a critical section solved the problem. Again, thank you for all the help you have provided.
When wrapping Python libraries in Rust it may be useful to share a static instance of an object across threads. For example, Python's datetime library has a one-time initialization of an API object. Using lazy_static for accessing the object can cause a deadlock, as described below:
rust-lang-nursery/lazy-static.rs#116
@davidhewitt mentioned that he has put some thought into general solutions for internal use.
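For context, the deadlock mechanism is that a lazy static blocks every other thread touching it until the initializer returns; if that initializer needs a lock (such as the GIL) held by one of the blocked threads, nothing can proceed. A std-only sketch of the blocking half, using `OnceLock` as a stand-in (the actual deadlock configuration would hang, so it is only described in a comment):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::OnceLock;
use std::thread;
use std::time::Duration;

static INIT_CALLS: AtomicUsize = AtomicUsize::new(0);
static API: OnceLock<u64> = OnceLock::new();

fn get_api() -> u64 {
    *API.get_or_init(|| {
        INIT_CALLS.fetch_add(1, Ordering::SeqCst);
        // A slow initializer: every other thread calling get_api()
        // blocks here until it returns. If this closure instead waited
        // on a lock (e.g. the GIL) held by one of those blocked
        // threads, that would be the deadlock described in
        // lazy-static.rs#116.
        thread::sleep(Duration::from_millis(50));
        42
    })
}

fn main() {
    let handles: Vec<_> = (0..4).map(|_| thread::spawn(get_api)).collect();
    let vals: Vec<u64> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert!(vals.iter().all(|&v| v == 42));
    // The initializer ran exactly once; the other threads blocked on it.
    assert_eq!(INIT_CALLS.load(Ordering::SeqCst), 1);
    println!("init ran once; all threads got {}", vals[0]);
}
```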