
Share change ticks between systems #9857

Open
hymm opened this issue Sep 19, 2023 · 5 comments
Labels
A-ECS Entities, components, systems, and events C-Feature A new feature, making something new possible

Comments

@hymm
Contributor

hymm commented Sep 19, 2023

What problem does this solve or what need does it fill?

Sometimes you want to add the transform propagation systems to the schedule in multiple places. The transform systems use change detection to skip propagating transforms that haven't changed. But with multiple copies of the systems, each copy does extra work: propagation runs for any transform changed since *that copy* last ran, instead of since the last time *any* transform propagation ran.

What solution would you like?

Probably some type of AtomicTicks and AtomicSystem that are passed in System::run. This is a little tricky since you'd have to clone instances of the system when adding them. For Transform propagation we could potentially store the systems on the Plugin.
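A rough sketch of what "atomic ticks" could mean here (all names hypothetical, not Bevy API): two instances of the same logical system share a last-run tick through an `Arc<AtomicU32>`, so each run only sees changes newer than the *shared* tick rather than its own per-instance tick.

```rust
use std::sync::{
    atomic::{AtomicU32, Ordering},
    Arc,
};

// Hypothetical `AtomicTicks` sketch: the shared last-run tick for all
// instances of one logical system.
struct SharedTick(Arc<AtomicU32>);

impl SharedTick {
    fn new() -> Self {
        SharedTick(Arc::new(AtomicU32::new(0)))
    }

    // Returns the tick at which *any* instance last ran, and records the
    // current world tick as the new last-run tick, in one atomic step.
    fn exchange(&self, current_world_tick: u32) -> u32 {
        self.0.swap(current_world_tick, Ordering::AcqRel)
    }
}

fn main() {
    let tick = SharedTick::new();
    let clone = SharedTick(Arc::clone(&tick.0));

    // Instance A runs at world tick 5: it sees everything since tick 0.
    assert_eq!(tick.exchange(5), 0);
    // Instance B runs at world tick 7: it only sees changes since tick 5,
    // because A already advanced the shared tick.
    assert_eq!(clone.exchange(7), 5);
}
```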

What alternative(s) have you considered?

I was also thinking of an ArcSystem, but I don't think that can work without locking, since we take mutable access to a system to update its archetypes.

@hymm hymm added C-Feature A new feature, making something new possible S-Needs-Triage This issue needs to be labelled labels Sep 19, 2023
@alice-i-cecile alice-i-cecile added A-ECS Entities, components, systems, and events and removed S-Needs-Triage This issue needs to be labelled labels Sep 19, 2023
@alice-i-cecile
Member

Previous thinking about atomic systems: bevyengine/rfcs#46

@hymm
Contributor Author

hymm commented Sep 20, 2023

This is a different feature from that RFC. This is atomic in the same sense as the AtomicBool types that are safe to use from multithreaded contexts.

@JoJoJet
Member

JoJoJet commented Sep 20, 2023

Instead of using atomics, another option would be to store the shared ticks in a resource.
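The resource approach could look roughly like this (a standalone sketch, not Bevy's actual resource API: a plain `HashMap` stands in for resource storage, and the system name keys are hypothetical):

```rust
use std::collections::HashMap;

// Hypothetical resource holding the shared last-run tick for each logical
// system; every scheduled instance of a system reads and writes the same
// entry instead of keeping its own tick.
struct SharedTicks(HashMap<&'static str, u32>);

impl SharedTicks {
    fn new() -> Self {
        SharedTicks(HashMap::new())
    }

    // Fetch the last-run tick for `system` (0 if it never ran) and record
    // `current` as the new last-run tick.
    fn exchange(&mut self, system: &'static str, current: u32) -> u32 {
        self.0.insert(system, current).unwrap_or(0)
    }
}

fn main() {
    let mut ticks = SharedTicks::new();
    // The first instance of propagate_transforms runs at world tick 10.
    assert_eq!(ticks.exchange("propagate_transforms", 10), 0);
    // A later instance at tick 25 uses 10 as its effective last-run tick,
    // so it only processes transforms changed in between.
    assert_eq!(ticks.exchange("propagate_transforms", 25), 10);
}
```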

@maniwani
Contributor

maniwani commented Sep 22, 2023

Let's not mess with system internals for this. There are a few ways this could be resolved without making all systems more complicated forever:

  • The executor already has special handling for apply_deferred, we could do the same for propagate_transforms (and friends). It could let propagate_transforms run, save its change tick, and write that to the next instance of propagate_transforms.
  • This problem likely just goes away if systems become entities. There could just be one instance of propagate_transforms that all schedules use.
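The first idea above (the executor forwarding change ticks between instances) might be sketched like this; the types and the forwarding rule are my assumptions, not Bevy's executor internals:

```rust
// Hypothetical per-instance state the executor would track for a
// special-cased system like propagate_transforms.
struct SystemInstance {
    last_run_tick: u32,
}

// Assumed executor behavior: after an instance runs at some world tick,
// that tick is written into the next instance's last-run field, so the
// next instance only sees changes made after the previous one ran.
fn run_special(instances: &mut [SystemInstance], world_ticks: &[u32]) {
    for (i, &tick) in world_ticks.iter().enumerate() {
        // (the system body would run here, detecting changes made since
        // instances[i].last_run_tick)
        if i + 1 < instances.len() {
            instances[i + 1].last_run_tick = tick;
        }
    }
}

fn main() {
    let mut instances = vec![
        SystemInstance { last_run_tick: 0 },
        SystemInstance { last_run_tick: 0 },
    ];
    // The two instances run at world ticks 5 and 9 respectively.
    run_special(&mut instances, &[5, 9]);
    // The second instance now only sees changes made after tick 5.
    assert_eq!(instances[1].last_run_tick, 5);
}
```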

@alice-i-cecile
Member

alice-i-cecile commented Sep 23, 2023

I'd similarly like to avoid complicating system internals here if we can.

I think that the systems-as-entities (or other "unified storage") solution is by far my favorite here. I think you also want to be able to share and reuse other local / cached data. Make an app.reuse_system(schedule_label, system) API that does map-like lookup.

Special-casing this is both a questionable architectural choice and not feasible for this case due to circular dependencies.
