Saving .py files delayed by "Getting code actions from 'Python'" #4263
Comments
@armenzg Can you share the logs from Output > Python Language Server? Code Actions come from the language server.
Hi @karthiknadig, thanks for the prompt response. Here's the output:
Thanks for the issue. The root cause of the problem is that it takes 3 seconds to analyze your file. That's a bit out of the ordinary. If we could get access to your source, we might be able to fix that problem. Secondly, I'm not sure why VS Code is asking for code actions on file save. We might be able to shortcut this problem. I'll look into whether that's possible.
It seems there is a setting to control whether code actions run on save. You must have this setting set:
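(Judging from the diff posted later in this thread, the setting in question is presumably `source.fixAll` under `editor.codeActionsOnSave` for Python files; a rough sketch of what that looks like in `.vscode/settings.json`:)

```jsonc
{
  // A fix-all source action registered to run on every save of a Python file;
  // the values here mirror the config the reporter shares further down the thread.
  "[python]": {
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
      "source.organizeImports": true,
      "source.fixAll": true
    }
  }
}
```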
Okay. Let me add it to the user settings and report back.
if you click cancel, the save should still go through, by the way. by default, we don't do anything on save, and you can disable us from participating in code actions on save entirely.
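(A minimal sketch of what opting out could look like, assuming the keys that come up later in this thread, `editor.codeActionsOnSave` and `python.analysis.fixAll`:)

```jsonc
{
  // Don't run source actions on save for Python files...
  "[python]": {
    "editor.codeActionsOnSave": {
      "source.organizeImports": false,
      "source.fixAll": false
    }
  },
  // ...and give Pylance an empty fix-all list so there is nothing to compute.
  "python.analysis.fixAll": []
}
```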
I am reproing the issue using the source code provided. It's not quite as slow, but I can get the dialog to show up. This setting has to be set for me to repro the problem though:
I believe we might be able to fix this problem by not cloning the service during a save. That seems to be what takes so long. It has to reparse everything every time you hit save. At least that's what the log looks like when I repro (log output from just saving).
that is only when you have 1 fix all in the setting. when there are multiple ones, you can't avoid cloning. we could optimize the case where there is only one fix all in the setting if we think that's worth it. that said, the root cause of the issue is the time it takes to execute one or multiple fix alls, or any other extension's code actions on save.
Hi @heejaechang
I believe hitting the cancel button works. I will pay closer attention. This is what I would notice:
I believe it's worked as mentioned above but I will check again.
I see this and this in our repo's workspace settings. Would that do it?
Like this?
For anyone following along: I had this setting in the workspace's settings:
Yesterday I added the following as suggested and the issue still happened this morning:
I now have:
Is this useful output?
I have verified that I do have that setting added. Here's my config:

```diff
diff --git a/.vscode/settings.json b/.vscode/settings.json
index 4e6108da16..c7f254b2d3 100644
--- a/.vscode/settings.json
+++ b/.vscode/settings.json
@@ -59,9 +59,11 @@
   "[python]": {
     "editor.formatOnSave": true,
     "editor.codeActionsOnSave": {
-      "source.organizeImports": true
+      "source.organizeImports": true,
+      "source.fixAll": true
     }
   },
+  // "python.analysis.fixAll": [],
   "[html]": {
     "editor.formatOnSave": false
```
I'm also upgrading to the M1 build rather than the Universal just in case there are some perf improvements.
Isn't the source of the problem 4542 source files and index hitting the limit of 2000?
I feel like that's still our problem to solve. The real problem here is we reindex after every save because of the cloning of the service to handle the fix all command.
Yeah. I'm still hitting the issue. Any other setting I should try?
I thought removing `source.fixAll` would prevent it. Otherwise, this would be expected to happen. We do a bunch of work if `source.fixAll` is listed in `editor.codeActionsOnSave`. We might be able to do less work, but there is not a way other than that setting to prevent the work from happening.
I thought so too. Am I setting it correctly, like this? It still happens sporadically with this configuration:

```diff
diff --git a/.vscode/settings.json b/.vscode/settings.json
index 4e6108da16..d82e921adb 100644
--- a/.vscode/settings.json
+++ b/.vscode/settings.json
@@ -59,9 +59,11 @@
   "[python]": {
     "editor.formatOnSave": true,
     "editor.codeActionsOnSave": {
-      "source.organizeImports": true
+      "source.organizeImports": false,
+      "source.fixAll": false
     }
   },
+  "python.analysis.fixAll": [],
```
then, it might not even be us. it could be some other extension running. turn off everything that is supposed to run automatically when you save, such as format on save.
hmm... we have this code:

```typescript
const workspace = (await this._ls.getWorkspaceForFile(filePath)) as PylanceWorkspace;
if (!workspace || !workspace.fixAll || workspace.fixAll.length === 0) {
    // No fix all is configured.
    return;
}
```

cloning shouldn't affect re-indexing; that should only be affected by …
if we are hitting this from …
So, I just debugged our code and found that our cloneService doesn't create a bg (background analysis), which means the indexer can't run for the cloned service, since it only runs when there's a bg. Additionally, the cloned service is marked not to scan folders, so it can't be discovering files and taking a long time to do so. That being said, I did find a bug. When we reach the 2000-file limit for the workspace, it appears that the limit only applies to how many files we process in one chunk. This means that if there are 4000 files, we will hit the 2000-file limit twice and end up processing all 4000 files. It seems this occurs whenever a file is saved/edited, because that's when the next 2000-file chunk is processed. In this particular case, where there are 4xxx files, the process will run 3 times: twice for 2000 files each and once for the remaining files. After that, it will stop processing indices for workspace files. I will fix this bug. However, it's worth noting that it shouldn't affect the fix all issue, as it should be unrelated. It's just that the repo is large enough to reveal another bug we weren't previously aware of.
@lobsterkatie You can look at the logs in Output > Black; they should give some clue about how long black actually took to format. The Black formatter extension does only one thing differently from running black in a terminal: it has the modules pre-loaded for formatting and does not have to spawn a new process to actually do the formatting. The speed-up comes from not having Python start a separate process on each save. That being said, the Black formatter extension does not contribute Code Actions, so if you are seeing a notification for this, please file a bug at https://github.com/microsoft/vscode-black-formatter
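(For context, switching format-on-save to the Black formatter extension is just a settings change; a minimal sketch, mirroring the configuration shared further down in this thread:)

```jsonc
{
  // Let the Black formatter extension (with black's modules pre-loaded)
  // handle format-on-save instead of spawning a new black process each time.
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter",
    "editor.formatOnSave": true
  }
}
```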
Ah, right - I saw that the very similar issue Armen linked in the original issue description above was from the Python extension repo.
Thanks, that's helpful to know. And yeah, I do think that the popup I see mentions the 'Python' extension. Cheers!
@heejaechang and @erictraut I tried this with a deep copy of the program and the problem goes away in my repro. The save is almost instant then. I'm going to look into having one source of truth for document parse trees.
since the plan is only sharing the text and tree, but not binding info and the type cache, once you do the deep copy, you could remove all binding info using the cleantree call, create a new type eval, and run …
You mean don't create a 'DocumentManager' but instead keep it the same (with a deep copy) and then prune the deep copy?
I meant just for prototyping.
@rchiodo, I propose we do this in stages. We can start with having one source of truth for the document text. That will address this bug. Then we can separate the bind information from the parse tree, which will allow us to treat the parse tree as immutable. Then we can share the parse trees without the need to do any deep cloning. Does that sound like a reasonable plan?
Sounds good to me. I'm still verifying using Heejae's idea at the moment, but after that I'll create some sort of 'DocumentManager' that is the place to get SourceFiles, or maybe just parse results.
Cloning and rebinding is about the same time as the old algorithm (so parsing is not the slow part here). Old code:
Clone with clear all bindings
Clone service, but keep same parse tree with same bindings (don't rebind or copy anything)
Copying the different parse trees and rebinding takes just as long as creating them from scratch. So this will only really get us a gain if we bind less. (The difference between the old code and the clone is parsing, so parsing is only like 100ms.) Not sure it would be worth it to just share the parse information then. What about a copy-on-write scheme where we only rebind when the edit is applied (just like when typing), but instead of rebinding the current source file, we copy it to another source file and bind that one?
Oh I guess the clone could be taking a portion of the time too. I didn't factor that time out.
Are you doing a shallow or deep clone of the parse tree? A shallow clone should be almost free. A deep clone won't be free, but my guess is that it's negligible. I typically find that the time it takes to parse and bind files is similar. That is, if parsing all of the files in your project takes 1000ms, the total bind time will be roughly the same. The file load time varies greatly depending on whether it's in the OS's file system cache (and if not, how fast the storage device is). Here's a typical breakdown. If I understand your results correctly, you're saying that eliminating the file reads (which you get for free with the change I made yesterday) and eliminating parsing (which you get by cloning and cleaning the parse tree) drops the time by only 100ms (from 2200 to 2100ms), but eliminating the binding drops it by 1300ms (2100 to 800ms). That's not what I'd expect to see, so I'm somewhat suspicious of these results. What about import resolution? Are you cloning that information as well, or are you recalculating it? Import resolution can be quite costly for some projects. I'm guessing that you're cloning it because the "clean parse tree" call doesn't clear it.
Yeah, computing the deep clone only takes 10ms on average. It's negligible. (Just tracked it now.) Where is the import resolution data stored? I'm creating a new backgroundAnalysisProgram and a new program; sourceFileInfos are shallow copies, sourceFiles are deep copies with the parse tree recreated by deep copying it. If it's on the sourceFileInfo, then it should save the import resolution information.
Perhaps it's the checker? There's no background thread for this clone of the service, so the checker might be taking CPU cycles?
The import info is stored in the parse tree (on nodes that correspond to import statements). The checker shouldn't be running in this use case, but the type evaluator will be invoked if you're requesting semantic information (types) for symbols in the file. As you know, the type evaluator will recursively evaluate types as needed to provide the requested answer. If the information it needs is not cached, it will need to evaluate it from scratch.
I think I need to do more analysis to determine which part is taking so long. It's too broad at the moment.
we have a way to see how much accumulated time we spent on parsing and binding. you can use that to see where the time is spent.
sent a PR that can track perf - https://github.com/microsoft/pyrx/pull/3443
This issue has been fixed in prerelease version 2023.5.21, which we've just released. You can find the changelog here: CHANGELOG.md
Thank you so much @rchiodo! I will update to the prerelease!
Still unbearably slow in my code base: https://github.com/scverse/scanpy. Saving files takes multiple seconds. VS Code waits and eventually shows a popup "Getting code actions from 'Python'". Why will VS Code wait for Pylance to save things? I only have:

```jsonc
{
    "[python]": {
        "editor.formatOnSave": true,
        "editor.defaultFormatter": "ms-python.black-formatter",
        "editor.codeActionsOnSave": {
            "source.fixAll.ruff": true,
        },
    },
    "python.testing.pytestArgs": ["scanpy/tests"],
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true,
    "python.languageServer": "Pylance",
}
```
@flying-sheep, I'm opening a new issue for your problem. It might be the Black formatter that is also taking a long time, since you're formatting on every save.
As said: it’s not black.
I’ll follow up there, thank you!
It’s faster now, but there’s still a perceptible delay. Thinking about this, it’s actually not relevant how slow or fast Pyright (parsing & analysis) is. Pylance (the VS Code addon) should not delay saving a document for a single nanosecond if I don’t have VS Code configured to run “on save” actions from Pylance. Any kind of parsing and analysis should happen asynchronously without delaying the file being saved. Could you please reopen this until Pylance doesn’t delay saving anymore? The issue described in the title here is not fixed:
This request would be a VS Code ask. They still ask us for code actions if there's anything indicated for `editor.codeActionsOnSave`.
OK, thank you for clarifying! So you did stop running analysis synchronously before returning the save actions, that’s great!
Type: Bug
Behaviour
Expected vs. Actual
I expect to save a Python file and not have to wait 15+ seconds for it to save.
Instead I get "Getting code actions from 'Python'". Before, it used to be Jupyter getting in the way, but after I uninstalled it, the problem came directly from the "Python" extension. This is similar to microsoft/vscode-python#19808; however, I can't make any comments there.
I think I've noticed this issue in the last month or two.
It does not happen all the time but when it does, it happens often.
Here's a screenshot:
Steps to reproduce:
Diagnostic data
- python.languageServer setting: Default
- Output for Python in the Output panel (View → Output, change the drop-down in the upper-right of the Output panel to Python)
User Settings
Extension version: 2023.6.1
VS Code version: Code 1.77.3 (Universal) (704ed70d4fd1c6bd6342c436f1ede30d1cff4710, 2023-04-12T09:19:37.325Z)
OS version: Darwin arm64 22.3.0
Modes:
Sandboxed: No
System Info
canvas_oop_rasterization: disabled_off
direct_rendering_display_compositor: disabled_off_ok
gpu_compositing: enabled
metal: disabled_off
multiple_raster_threads: enabled_on
opengl: enabled_on
rasterization: enabled
raw_draw: disabled_off_ok
skia_renderer: enabled_on
video_decode: enabled
video_encode: enabled
vulkan: disabled_off
webgl: enabled
webgl2: enabled
webgpu: disabled_off
A/B Experiments