Optimize performance of handling huge completion payloads #2190
Conversation
Hmm, I'm not sure I should be super optimistic about this change since I've done some higher level testing now and it might not do that much for the actual problem of ST lagging. Kinda hard to tell without numbers but I did some silly test where I've started "ActivityIndicator" from sublime_lib and observed how much it will stutter before completions are shown and the results are pretty similar.

[Screen.Recording.2023-02-07.at.21.29.35.mov]

I'm sure it's better but not sure by how much. This is on M2 Pro so it's not the best machine for performance testing ;)
This is measured in seconds, and not in microseconds, right? Because you've set …

Just for comparison, how many characters are in the JSON payload for the 50MB case, and how long does it take for the server to respond? I would have thought that the time of the …

Does your video show an actual real-world example of how long it really takes for the completions to show up with LSP-typescript? Being several seconds, that looks completely unusable to me.

Besides that, the PR itself looks good to me on a quick look.
Yes, seconds.
The payload I'm testing on is formatted so it's bigger due to extra whitespace. The original one is ~36MB (so about 36 million characters):
Yes, true, but everything adds up. The whole server response (including parsing) in this case is ~2.9s and JSON parsing takes 0.293s of that.
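Parse times like these can be measured in isolation with a small stopwatch around `json.loads`. A minimal, self-contained sketch (the payload here is synthetic and much smaller than the ~36MB response discussed above):

```python
import json
import time

# Build a synthetic completion-like payload as a stand-in for a real
# LSP response (far smaller than the ~36MB real-world case).
items = [{"label": "item%d" % i, "kind": 6, "detail": "x" * 50}
         for i in range(10000)]
raw = json.dumps(items)

# Time just the parse, separate from network/IPC overhead.
start = time.perf_counter()
parsed = json.loads(raw)
elapsed = time.perf_counter() - start

print("payload: %d chars, parsed in %.4f s" % (len(raw), elapsed))
```

Isolating the parse like this is what lets one say "JSON parsing takes 0.293s of the ~2.9s total" rather than guessing from end-to-end latency.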
It is very slow in some cases, like the one I've shown where I trigger completions within a JSDoc context. It depends on where completions are triggered. For example, in the same document but not in a JSDoc context, the response is small and quick:
The main offender in many of the projects is the …
In import statements the payload is even bigger:
In this case JSON parsing takes 0.684s to 0.866s.
Though those responses being slow is not the main issue in my opinion. It's the part where we pass the finished …
I will test this tonight.
`main` takes around 3.26s (best time), this branch 1.98s (best time).

FWIW, commenting out only these 3 lines makes it even faster…
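"Best time" numbers like these are commonly produced with Python's `timeit` module. A generic sketch (this is not the actual LSP-performance-test harness; `work` is a hypothetical stand-in for the function under test):

```python
import timeit

def work():
    # Hypothetical stand-in for the benchmarked code path
    # (e.g. format_completion over a large payload).
    return sum(i * i for i in range(100000))

# repeat() returns one total time per run; the minimum is the
# conventional "best of N", as it is least affected by system noise.
times = timeit.repeat(work, number=1, repeat=3)
print("Best of 3: %.6f s" % min(times))
```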
Here is one cheaper alternative, but we would need to tweak the `lsp_resolve_docs` command to accept a `view_id` if we want to keep the old behavior.
I experimented a bit, so I modified the …

and the time was …
Good find. I've looked inside it (the `format_command` definition):

```python
def format_command(cmd, args=None):
    if args is None:
        return cmd
    else:
        arg_str = json.dumps(
            args,
            ensure_ascii=False,
            check_circular=False,
            separators=(',', ':')
        )
        return '{} {}'.format(cmd, arg_str)
```

Probably good to avoid …
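To see what the compact `separators` buy, here is a self-contained copy of the helper above with a quick check (the command name is just an example):

```python
import json

def format_command(cmd, args=None):
    # Same shape as the helper quoted above: compact separators drop
    # the spaces after ',' and ':' in the serialized args.
    if args is None:
        return cmd
    arg_str = json.dumps(args, ensure_ascii=False, check_circular=False,
                         separators=(',', ':'))
    return '{} {}'.format(cmd, arg_str)

# Passing only a small index arg keeps the serialized command tiny,
# compared to embedding a whole completion item in args.
print(format_command("lsp_select_completion", {"index": 42}))
# -> lsp_select_completion {"index":42}
```

The `json.dumps` cost here scales with the size of `args`, which is why shrinking `args` to an index (as this PR does) matters more than the serializer options themselves.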
Added an optimized function for creating documentation links. The test time goes down from …
And down to …
I found this article about performance of JSON parsing: https://pythonspeed.com/articles/faster-python-json-parsing/ There are libraries that would likely halve the time of the built-in parser, but those require Python 3.7+ and contain native code. So maybe in the future.
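If such a library ever becomes usable in the plugin host, the usual pattern is an optional import with a stdlib fallback. A sketch using orjson as the example fast parser (assuming it were installable; this is not part of the PR):

```python
import json

# orjson is a native-code JSON library requiring Python 3.7+, so it
# cannot be bundled for older plugin hosts; fall back to the stdlib
# parser when it is unavailable.
try:
    import orjson

    def loads(raw):
        # orjson.loads accepts bytes or str and returns plain
        # Python dicts/lists, same as json.loads.
        return orjson.loads(raw)
except ImportError:
    def loads(raw):
        return json.loads(raw)

print(loads('{"label": "foo", "kind": 6}'))
```

Callers only ever see `loads`, so the speedup (or its absence) is transparent to the rest of the code.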
A low-hanging fruit optimization that allows us to create `CompletionList` with a drastically smaller payload.

Instead of serializing complete completion items and attaching those to `CompletionItem.args`, only attach the index of the item and look up the item within the global completions list attached to the `LspSelectCompletionCommand` class (previously it was attached to the `LspResolveDocsCommand` class but moved to match the current use). We did that already for the purpose of accessing them for the `completionItem/resolve` request, but there is no reason not to do that for the completion insertion also.

I've created a separate package at https://github.com/sublimelsp/LSP-performance-test which currently includes one performance test for the `format_completion` function. Comparing the `main` branch and this branch gives:

```
main: Best of 3: 2.038291124999887 s
this: Best of 3: 1.1438355416667036 s
```

so the new time is about 56% of the old, a roughly 44% improvement. (This is on a 50MB completions payload.)
This only tests the performance of this single function, and that function is called on the async thread anyway, so its performance is not that crucial. The main bottleneck that causes ST to lag is the act of returning that list to ST and transferring it over process boundaries. That itself I haven't measured, but it is also drastically improved since the payload of the completion list is a lot smaller.
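The payload reduction can be illustrated with a toy comparison of the serialized `args` before and after (a sketch with a hypothetical completion item, not real tsserver output):

```python
import json

# Hypothetical completion item; real tsserver items can be far larger.
item = {
    "label": "createElement",
    "kind": 3,
    "detail": "function createElement(...): Element",
    "documentation": {"kind": "markdown",
                      "value": "Creates an element. " * 20},
}
items = [item] * 1000

# Before: each ST CompletionItem carried the full LSP item in its args.
before = sum(len(json.dumps({"item": it})) for it in items)

# After: args carry only an index; the full items live in a list
# attached to the command class (LspSelectCompletionCommand here).
after = sum(len(json.dumps({"index": i})) for i in range(len(items)))

print("args payload before: %d chars, after: %d chars" % (before, after))
```

Since the `args` payload is what crosses the plugin-host/ST process boundary for every shown completion, shrinking it to an index is where the real win comes from.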
Notes:

- `LspSelectCompletionItemCommand` renamed to `LspSelectCompletionCommand`. The purpose was just to make it shorter to micro-optimize the payload size, but I'm not very attached to that and could revert it. Though since it should only be used internally, it shouldn't be an issue.
- `get_insert_replace_support_html` inlined within `format_completion` to avoid getting `textEdit` twice (again, a micro-optimization). But there is not really a good reason for it being separate anyway.