
Performance issue: Vscoq2 very slow on a file with numerous lemmas #657

Closed
charguer opened this issue Oct 11, 2023 · 3 comments · Fixed by #744
Labels
bug Something isn't working

Comments

charguer commented Oct 11, 2023
Consider the file LibList.v from the TLC library (part of the Coq CI).
This compiles in batch mode (coqc) in 3.7 seconds (with a single core).
VSCoq1 processes it in about 8 seconds, which is not ideal but acceptable.
VSCoq2 processes it in more than 180 seconds, which is unmanageable.

rtetley added the bug label Oct 12, 2023
rtetley (Collaborator) commented Oct 12, 2023

I'm not sure this is a duplicate of #454 as suggested by @Blaisorblade, because the phrasing suggests the problem lies in the execution. I will try to reproduce and see.
There have also been conversations on Zulip about mathcomp facing similar issues.

gares (Member) commented Oct 13, 2023

I'm testing 2.0.1 on mathcomp and I noticed some issues.
It seems that after executing each sentence (in manual mode), the language server sends diagnostics not only for the executed and currently-executing ranges, but also for the parsed ones. The file is large, and the range-compression algorithm seems to break at comments.
[Screenshot from 2023-10-13 10-19-41]
Each comment cuts a range in two.

LibList.v has about 200 occurrences of `(*`, so every command sends roughly 200 ranges for the parsed diagnostic.
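To illustrate the behavior described above, here is a minimal sketch (not VSCoq's actual code; the function name and comment test are invented for illustration) of a range compression that merges consecutive processed lines into spans but breaks the span at every comment, so a file with N comments produces on the order of N spans per diagnostic:

```python
# Hypothetical sketch: merge consecutive non-comment line indices into
# (start, end) spans, splitting the span whenever a comment line appears.

def compress_ranges(lines, is_comment):
    """Return merged (start, end) spans of non-comment lines."""
    spans = []
    start = None
    for i, line in enumerate(lines):
        if is_comment(line):
            if start is not None:
                spans.append((start, i - 1))
                start = None
        elif start is None:
            start = i
    if start is not None:
        spans.append((start, len(lines) - 1))
    return spans

# A file with N comments yields roughly N+1 spans, so the diagnostic
# payload grows with the number of comments in the file.
lines = ["Lemma a.", "(* note *)", "Proof.", "Qed.", "(* note *)", "Lemma b."]
print(compress_ranges(lines, lambda l: l.strip().startswith("(*")))
# → [(0, 0), (2, 3), (5, 5)]
```

With ~200 comment openers, each resend of the parsed diagnostic carries a correspondingly large list of ranges, which matches the slowdown reported above.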

rtetley (Collaborator) commented Oct 20, 2023

So in fact the language server was sending the full file as "parsed ranges" at each `send_highlights` call. On such a big file, that is indeed problematic. Working on it in #679.
Note that even after removing this behavior, we still have performance issues when files are large and heavily commented (although really not as bad). We need to handle comments in our execution state so that the ranges are not cut each time. Any ideas @gares @maximedenes?
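A minimal sketch of the mitigation suggested above (not VSCoq's actual implementation; the function and the cut-span input are hypothetical): if the execution state can tell which gaps between spans contain only comments, those spans can be coalesced instead of staying cut at every comment.

```python
# Hypothetical sketch: coalesce (start, end) line spans whenever the
# gap between two consecutive spans consists only of comment lines.

def merge_across_comments(lines, spans, is_comment):
    """Merge spans separated only by comments into single spans."""
    merged = []
    for start, end in spans:
        if merged:
            prev_start, prev_end = merged[-1]
            gap = lines[prev_end + 1:start]
            if all(is_comment(l) for l in gap):
                merged[-1] = (prev_start, end)
                continue
        merged.append((start, end))
    return merged

lines = ["Lemma a.", "(* note *)", "Proof.", "Qed.", "(* note *)", "Lemma b."]
cut_spans = [(0, 0), (2, 3), (5, 5)]   # ranges cut at every comment
print(merge_across_comments(lines, cut_spans,
                            lambda l: l.strip().startswith("(*")))
# → [(0, 5)]  one span covering the whole processed region
```

Under this scheme the number of ranges sent per diagnostic would no longer scale with the number of comments in the file.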
