
Better node cleanup #204
Merged: 4 commits into master, Nov 24, 2024

Conversation

Carter12s (Collaborator)

Description

The ROS1 Node now clears any and all advertises, subscribes, and services with the ROS master when dropped. The goal is to not leave junk in rosnode list / rosnode info after our node shuts down.

I don't love this design; it should be unnecessary if Publishers, Subscriptions, and Service Servers clean up after themselves correctly, but it provides some good redundancy in case they don't.
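
For illustration, a rough sketch of the pattern this describes (MasterClient and Node here are hypothetical stand-ins, not the actual roslibrust internals): Drop is synchronous, so the unregister calls get handed off to the runtime, which is exactly why shutdown needs a grace period (see the discussion below).

    use std::sync::Arc;

    // Illustrative stand-in for the XML-RPC client that talks to the ROS
    // master; the real roslibrust types differ.
    struct MasterClient;

    impl MasterClient {
        async fn unregister_subscriber(&self, topic: &str) {
            println!("unregistering subscriber on {topic}");
        }
    }

    struct Node {
        master: Arc<MasterClient>,
        subscriptions: Vec<String>,
    }

    impl Drop for Node {
        fn drop(&mut self) {
            // Drop is synchronous, so hand the async unregistration off to
            // the runtime; the process must stay alive long enough for the
            // spawned task to finish.
            let master = self.master.clone();
            let topics = std::mem::take(&mut self.subscriptions);
            tokio::spawn(async move {
                for topic in &topics {
                    master.unregister_subscriber(topic).await;
                }
            });
        }
    }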

Fixes

Closes: number

Checklist

  • Update CHANGELOG.md

@lucasw (Contributor) commented Oct 21, 2024

As part of this PR, can the examples show off least-worst ctrl-c handling that also results in cleanup/unregistering?

As it is, it always takes a little time after the drop to finish unregistering, because it's async, so an unhandled ctrl-c or something like this always exits too quickly:

    // exits immediately, so the async unregistration kicked off by the drop
    // never gets a chance to run
    tokio::spawn(async move {
        tokio::signal::ctrl_c().await.unwrap();
        std::process::exit(0);
    });

What I did is this:

...
    {
        let nh = NodeHandle::new("http://localhost:11311", "listener_rs").await?;
        let mut subscriber = nh.subscribe::<std_msgs::String>("/chatter", 1).await?;

        loop {
            tokio::select! {
                _ = tokio::signal::ctrl_c() => {
                    log::warn!("ctrl-c, exiting");
                    break;
                }
                msg = subscriber.next() => {
                    if let Some(Ok(msg)) = msg {
                        log::info!("[/listener_rs] Got message: {}", msg.data);
                    }
                }
            }
        }
    } // nh and subscriber drop here, which starts the async unregistration
    log::info!("done with subscribing, letting subscription unregister");
    // give the in-flight unregister calls a moment to reach the master
    tokio::time::sleep(std::time::Duration::from_millis(50)).await;

    Ok(())
}

The sleep at the end is ugly though.

(You said a while back that the cancel from select! isn't going to be handled properly in the subscriber recv(), but is that ok since it's shutting down anyway?)

Another method I've tried:

    {
        let mut nh = nh.clone();
        tokio::spawn(async move {
            tokio::signal::ctrl_c().await.unwrap();
            println!("ctrl-c, exiting");
            // force any subscriber waiting on a recv to error out, then this will exit
            nh.unregister_all_subscribers().await;
            // if anything else in this application needs to be signaled to exit do it here and wait for it to complete
            // something external will have to escalate if this fails to result in an exit
        });
    }

(where I have a public unregister_all_subscribers() method). This causes the subscriber recv()s in other tokio tasks to return None, which breaks their loops, so the program exits as quickly as it can, without that extra sleep at the end.
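
Roughly, the idea (hypothetical internals, assuming each subscription is backed by a tokio mpsc channel; not the actual roslibrust implementation) is that dropping the sender halves wakes every pending recv() with None:

    use std::collections::HashMap;
    use tokio::sync::mpsc;

    // Hypothetical internals: if each subscription is backed by an mpsc
    // channel, dropping the sender halves makes every pending recv()
    // return None, so the subscriber loops break on their own.
    struct NodeHandle {
        subscriptions: HashMap<String, mpsc::Sender<Vec<u8>>>,
    }

    impl NodeHandle {
        async fn unregister_all_subscribers(&mut self) {
            for topic in self.subscriptions.keys() {
                // the unregisterSubscriber XML-RPC call to the ROS master
                // would go here
                println!("unregistering {topic}");
            }
            // Dropping the senders closes the channels and wakes receivers.
            self.subscriptions.clear();
        }
    }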

@Carter12s (Collaborator, Author)

That is a really good call and something we should 100% handle better.

I'll try to add that to this MR and see if I can concoct an integration test that proves out the correct behavior.

In the worst case we can provide a little utility like register_roslibrust_ctrl_c_handler; I think bevy has something like this.
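
A sketch of what such a utility could look like, assuming the unregister_all_subscribers() method proposed above (both names are hypothetical, not a confirmed roslibrust API):

    // Hypothetical utility in the spirit of register_roslibrust_ctrl_c_handler.
    fn register_ctrl_c_handler(mut nh: NodeHandle) {
        tokio::spawn(async move {
            tokio::signal::ctrl_c()
                .await
                .expect("failed to listen for ctrl-c");
            log::warn!("ctrl-c received, unregistering subscribers");
            // Tearing down subscriptions makes the recv() loops exit, so
            // main can return and drop the rest of the node cleanly.
            nh.unregister_all_subscribers().await;
        });
    }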

Carter12s merged commit 074f0d5 into master on Nov 24, 2024
5 checks passed
@Carter12s (Collaborator, Author)

I decided to merge this MR today; it had sat long enough that I'm going to go ahead and land these changes and then try for more cleanups in subsequent MRs. Don't love that, but doing my best.
