We introduced the patterns inserter everywhere in this PR: #28459
Context
Before inserting a pattern, the inserter checks whether the destination block supports all the root-level blocks of each pattern. This means we need to parse all the patterns before showing the inserter, which would make the inserter slower to open. To avoid this, the PR above parses and memoizes all the patterns on editor load. This increases the editor's load time, but it doesn't affect the inserter's open time.
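The memoization trade-off described above can be sketched as follows. This is an illustrative sketch, not the actual Gutenberg code: `parsePattern` stands in for the real block parser (`parse()` from `@wordpress/blocks`), and all names are hypothetical.

```javascript
// Sketch: memoize pattern parsing so the cost is paid only once.
const parsedPatternCache = new Map();

// Placeholder for the real block parser; assumes a pattern has a
// `name` and a raw `content` string.
function parsePattern( pattern ) {
	return { name: pattern.name, blocks: pattern.content.split( '\n' ) };
}

function getParsedPattern( pattern ) {
	if ( ! parsedPatternCache.has( pattern.name ) ) {
		parsedPatternCache.set( pattern.name, parsePattern( pattern ) );
	}
	return parsedPatternCache.get( pattern.name );
}

// Eagerly parsing everything on editor load trades a slower load
// for a fast first open of the inserter.
function primePatternCache( patterns ) {
	patterns.forEach( getParsedPattern );
}
```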
The problem
As I mentioned above, this PR increased the load time of the editor. Just for reference, here is the perf comparison from the CI checks:
As you can see, the numbers are 9253 vs 9008 (~2.7%) and 2142 vs 2004 (~6.9%), which doesn't seem like much. Parsing the patterns takes roughly 100ms in the CI checks. That isn't much when loading the editor, but it would be a lot if opening the inserter for the first time took 100ms longer. Keep in mind that these tests were run against tt1-blocks, which only contains 8 patterns. If we assume it takes 15ms to parse one pattern, then 10 patterns take 150ms and 100 patterns take 1.5 seconds. It will take even longer on a low- to mid-range computer or on mobile.

Ideas

@vindl came up with a few ideas #28459 (comment)
1. Limit this functionality to the Site Editor for now. If we encounter performance problems in some cases, the impact will be lower.
2. Save parsing results into IndexedDB so we don't need to redo the work on each load. Invalidation might be tricky here.
3. Something similar to (2), but on the backend: hook into pattern registration, parse there, persist the value in a transient, and pass it to the editor on init.
4. A lazy-loading equivalent: only check the patterns currently in view when the inserter is open. This would likely be a bad UX since we might end up with the inserter open and all of the patterns disabled, so I don't think it's a great approach.
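For idea (2), the tricky invalidation can be sidestepped by keying cache entries on a hash of the pattern's raw content, so a changed pattern naturally misses the cache and stale entries are never served. The sketch below is hypothetical: the storage layer is an in-memory `Map` standing in for an IndexedDB object store, and the parser is passed in as a callback.

```javascript
// Tiny non-cryptographic hash (djb2) — good enough as a cache key.
function hashContent( content ) {
	let hash = 5381;
	for ( let i = 0; i < content.length; i++ ) {
		hash = ( ( hash << 5 ) + hash + content.charCodeAt( i ) ) | 0;
	}
	return hash.toString( 16 );
}

// Stand-in for an IndexedDB object store.
const store = new Map();

// Content-addressed lookup: a changed pattern gets a new key,
// so invalidation happens automatically.
function getParsedPatternCached( pattern, parse ) {
	const key = `${ pattern.name }:${ hashContent( pattern.content ) }`;
	if ( ! store.has( key ) ) {
		store.set( key, parse( pattern.content ) );
	}
	return store.get( key );
}
```

Old entries under stale keys would still need periodic cleanup, but serving stale parses is no longer possible.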
> save parsing results into IndexedDB so we don't need to do it on each load. Invalidation might be tricky here.
> something similar to (2) but doing this on backend by hooking into pattern registration, parsing it, persisting the value in a transient, and passing it to editor on init.
This feels like overkill for a ~100ms computation. At the same time, systematically adding those 100ms to the initial load, regardless of a user's editing goals, isn't great.
A less-than-ideal but very simple improvement would be to cache it upon request, when the inserter is opened for the first time. A subsequent improvement might be to tap into requestIdleCallback to build that cache beforehand.
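The two-step suggestion above can be sketched like this. It is a hypothetical illustration, not Gutenberg code: build the cache lazily on first inserter open, and opportunistically warm it during idle time. `requestIdleCallback` may be unavailable (e.g. in Safari or Node), so it falls back to `setTimeout`.

```javascript
// Lazily build the parsed-pattern cache, warming it when idle.
function createLazyPatternCache( patterns, parse ) {
	let cache = null;

	const build = () => {
		if ( ! cache ) {
			cache = patterns.map( parse );
		}
		return cache;
	};

	// Warm the cache without blocking the initial load.
	const idle =
		typeof requestIdleCallback === 'function'
			? requestIdleCallback
			: ( cb ) => setTimeout( cb, 0 );
	idle( build );

	// Called when the inserter opens; instant if warmup already ran.
	return build;
}
```

Either way the parse happens at most once, and the 100ms is paid on first open only when the idle warmup hasn't had a chance to run.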
This is more of a long-term brainstorm. Right now, as you said, it's overkill. But as I mentioned, it's 100ms on my powerful MacBook Pro, and we are only talking about 8 patterns. As soon as we consider low- to mid-range laptops and a couple dozen patterns, the performance impact quickly adds up.