
Slow performance on M1 in large projects #609

Closed
grilme99 opened this issue Aug 5, 2022 · 12 comments · Fixed by #830
Labels
impact: medium (Moderate issue for Rojo users or a large issue with a reasonable workaround), type: bug (Something happens that shouldn't happen), type: tech debt (Internal work that needs to happen)

Comments

grilme99 commented Aug 5, 2022

Sometime in the past couple of months, we have noticed significant performance regressions on the BedWars and Rift Royale codebases during syncing. We have been using Rojo 7.0.0 for over seven months, but this issue persists through to the latest version. This makes me think it's related to some change in our codebase or assets.

This issue only occurs for people on our team with M1 Macs. It can take up to 2 minutes to start the sync server for me, and syncing sometimes just stops working altogether (requiring a Rojo restart). It happens both to people running Rojo through Foreman and to people running it directly. Rojo has always been slow for our engineers on M1, but there has been a significant regression sometime recently.

Running rojo build only takes a few seconds.

Is there any way to profile what's happening behind the scenes to help diagnose the issue? This is having a significant impact on our workflows.

grilme99 changed the title from "Slow performance in large projects" to "Slow performance on M1 in large projects" Aug 5, 2022
grilme99 (Author) commented Aug 5, 2022

For some extra info, BedWars has ~30MB of binary model files and ~8MB of Lua files.

LPGhatguy (Contributor) commented:

Which build of Rojo are you running? Starting with 7.1.1, we ship native AArch64 (Apple Silicon) builds. I don't know if Foreman recognizes these builds, but it might be worth poking your system to see if it's downloading the right ones.

7.2.0 added a bunch of performance improvements gained from profiling on an M1 Mac. In theory, there should've been a big performance boost, not a regression! 😢

You can run Rojo with -v, -vv, or -vvv to get more info about what Rojo is doing. That might provide some insight. Otherwise, you can download Tracy and build Rojo using

cargo build --release --features profile-with-tracy

to get a build that you can profile on your system. That should tell you fairly precisely what's taking up all that time. This is the same tooling we used to profile and optimize a few bad cases recently.
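For context, here is a rough sketch of what that kind of instrumentation can look like, assuming Rojo wires Tracy up through the profiling crate (which is where the profile-with-tracy feature name comes from); the function and scope names below are illustrative, not Rojo's actual code:

```rust
use std::path::Path;

// Hypothetical hot path: when built with `--features profile-with-tracy`,
// the attribute and the scope macro each show up as zones on the Tracy
// timeline; without the feature they compile to no-ops.
#[profiling::function]
fn snapshot_from_vfs(root: &Path) {
    for entry in std::fs::read_dir(root).into_iter().flatten().flatten() {
        profiling::scope!("snapshot one entry");
        let _ = entry.path();
    }
}

fn main() {
    snapshot_from_vfs(Path::new("."));
}
```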

lextoumbourou commented:

I'm trying to debug what sounds like a similar issue today. I am running Rojo 7.2.1 on an M1. I don't think I've updated Rojo, but all of a sudden Rojo takes about 5-10 minutes to start. Previously it was starting in 10-20 seconds.

When I run with -vvv it seems to be hanging on this line:

[TRACE librojo::serve_session] Generating snapshot of instances from VFS

Boegie19 (Contributor) commented Nov 7, 2022

@lextoumbourou I think I know what the issue is: Rojo starts watching the files before the first snapshot is created. I have a fix ready for it; it only needs to be tested, since I am not 100% sure it fixes the issue.
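To make the proposed ordering concrete, here is a minimal sketch of the idea as described, using placeholder names rather than Rojo's real types: build the initial snapshot first, and only register the file watcher afterwards.

```rust
use std::path::Path;

// Placeholder types standing in for Rojo's real snapshot and watcher.
struct Snapshot;
struct FileWatcher;

fn build_initial_snapshot(_root: &Path) -> Snapshot {
    // Walk the project tree once with no watchers attached.
    Snapshot
}

fn start_file_watcher(_root: &Path) -> FileWatcher {
    // Register watchers only after the snapshot exists, so the first scan
    // is not slowed down by per-file watch registration.
    FileWatcher
}

fn main() {
    let root = Path::new(".");
    let _snapshot = build_initial_snapshot(root);
    let _watcher = start_file_watcher(root);
    // ...serve the snapshot and apply watcher events from here on.
}
```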

lextoumbourou commented:

I'm happy to test. I can reproduce the error quite reliably.

Boegie19 (Contributor) commented Nov 8, 2022

@lextoumbourou I made a branch for you to test: https://github.com/Boegie19/rojo/tree/M1-Test. You would only need to build it yourself.

If you can confirm this fixes your issue, I will open a pull request with it.

Boegie19 (Contributor) commented:

@lextoumbourou When are you planning to test it?

lextoumbourou commented:

Hi. I'm sorry it took so long to get back to you. I think my issue turned out to be related to having both a Foreman and an Aftman version of Rojo in the same project. After removing Foreman completely from our repo, the issue appears to be fixed.

With that in mind, would you still like me to test, @Boegie19?

Boegie19 (Contributor) commented Nov 17, 2022

Okay, I would still like you to test whether you get a speed increase on the initial serve with the new version vs. the old one, since I want to know what the impact of watching the files is on initial serve time. @lextoumbourou

thunn commented Mar 29, 2023

@Boegie19 Just tested your branch on my project. On my M1 Mac it was taking multiple minutes to start up; using a build of your branch, it started within 5 seconds.

Dekkonot (Member) commented:

@grilme99 Hi, sorry for the long follow-up on this. The fix linked above won't solve the problem, because the cost of registering file watchers is what actually matters. The fix makes starting the server faster, but syncing itself will still be slow.

One of the only fixes we can try is to upgrade the file watcher library we use and hope that it works out. Given that the version we use is three years out of date and predates Apple Silicon altogether, it's time for an upgrade anyway, but I just wanted to touch base on this.
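For reference, this is roughly what a single recursive watch on the project root looks like with a current release of the notify crate (assuming that is the watcher library in question); on macOS, one recursive FSEvents watch avoids registering a watcher per file:

```rust
use std::path::Path;

use notify::{recommended_watcher, RecursiveMode, Watcher};

fn main() -> notify::Result<()> {
    // The callback is invoked for each filesystem event the backend reports.
    let mut watcher = recommended_watcher(|event: notify::Result<notify::Event>| {
        if let Ok(event) = event {
            println!("change: {:?}", event.paths);
        }
    })?;

    // One recursive watch on the root instead of one watcher per file.
    watcher.watch(Path::new("."), RecursiveMode::Recursive)?;

    // Keep the process alive so events keep arriving.
    std::thread::park();
    Ok(())
}
```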

Dekkonot added the type: bug, impact: medium, and type: tech debt labels Jul 12, 2023
grilme99 (Author) commented:

Thank you! I can't provide much input anymore since I no longer have access to the codebase this issue occurred on. Nothing I work on right now reaches quite the same scale, and I don't have the same performance issues.

Dekkonot pushed a commit that referenced this issue Dec 28, 2023
Right now, serve tests will fail when Rojo is built with the FSEvent
backend. The cause is essentially that `/var` (where
temporary directories for serve tests are located) on macOS is actually
a symlink to `/private/var`. Paths coming from FSEvent always have
symlinks expanded, but Rojo never expands symlinks. So, Rojo's paths
during these tests look like `/var/*` while the FSEvent paths look like
`/private/var/*`. When Rojo's change processor receives these events, it
considers them outside the project and does not apply any changes,
causing serve tests to time out.

To work around this, we can call `Path::canonicalize` before passing the
project path to `rojo serve` during serve tests. Rojo does need to
better support symlinks (which would also solve the problem), but I
think that can be left for another day because it's larger in scope and
I mostly just want working tests before addressing #609.
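As a rough illustration of that workaround, the helper below (a simplified stand-in for the real serve-test harness, not Rojo's code; the project path in main is just a placeholder) canonicalizes the temporary project path before handing it to rojo serve, so the paths Rojo tracks match the symlink-expanded paths FSEvent reports:

```rust
use std::path::Path;
use std::process::{Child, Command};

// On macOS, `/var/folders/...` canonicalizes to `/private/var/folders/...`,
// which is the form FSEvent reports, so change events are no longer treated
// as falling outside the project.
fn spawn_serve(project_dir: &Path) -> std::io::Result<Child> {
    let canonical = project_dir.canonicalize()?;
    Command::new("rojo").arg("serve").arg(&canonical).spawn()
}

fn main() -> std::io::Result<()> {
    // Hypothetical project directory used only for demonstration.
    let mut child = spawn_serve(Path::new("test-projects/example"))?;
    child.wait()?;
    Ok(())
}
```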