The command line is too long (Windows) #147
Good one! I think you could solve it by hacking around Windows, but I don't think I can offer a solution. You should probably skip the pre-commit hook for this one. If you find a better solution, please let me know.

I did skip it for that one. However, it might be worthwhile to fix. Would it be possible to split up the task and spawn multiple child processes if the number of files exceeds a certain limit? Something like the sketch below.
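A minimal sketch of that idea, assuming `execa` as the child-process runner; `CHUNK_SIZE` and `runInChunks` are invented names, not anything lint-staged ships:

```js
// Sketch: split the staged files into fixed-size chunks and spawn one
// child process per chunk, so each command line stays short.
const execa = require('execa')

const CHUNK_SIZE = 50 // invented limit; a real value would depend on the platform

async function runInChunks(bin, args, files) {
  const chunks = Array.from(
    { length: Math.ceil(files.length / CHUNK_SIZE) },
    (_, i) => files.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE)
  )
  for (const fileChunk of chunks) {
    await execa(bin, [...args, ...fileChunk]) // one process per chunk
  }
}
```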
Feel free to create a PR and we can discuss it. I'm open to any improvements.
This is what I am thinking. A few questions:
@okonet I am facing this issue as well. EDIT: I was able to successfully run lint-staged on 147 files with my local copy of lint-staged (using `npm link`).
Create a PR and let's discuss it there.
- package.json: add `lodash.chunk` and `p-map` dependencies
- calcChunkSize: add a helper function for bullet-proof chunk size calculation; add calcChunkSize.spec.js with tests
- findBin: modify the signature, removing paths from the input; return `{ bin, args }` so that file paths are appended before passing to execa; update the tests to match the new signature
- index: use readConfigOption to read the `concurrent` option; pass config to runScript options for reading `subTaskConcurrency` and `chunkSize`
- readConfigOption: add a generic helper function for reading a config option; add readConfigOption.spec.js with tests
- runScript: use p-map to run the runnables returned from findBin with the configured concurrency; use readConfigOption to read `subTaskConcurrency` and `chunkSize`; default `chunkSize` to `Number.MAX_SAFE_INTEGER` to minimise the number of chunks; default `subTaskConcurrency` to 2; use lodash.chunk to generate file-path chunks; cache execaOptions and use `Object.assign` for cloning; combine `findBin.args` and the file paths before passing to execa; update runScript.spec.js to always pass files as an array to `runScript`
- Set up babel-preset-env + transform-runtime for async in Node 4

Closes lint-staged#147
If the number of staged files is large, passing them all as arguments can exceed the maximum command-line length (on Windows it's `8191` characters). This PR addresses that by splitting the work into chunks based on this limit and running the linters in parallel. Closes #147
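A rough illustration of length-based chunking (the function and constant names are illustrative, not the PR's actual identifiers):

```js
// Pack file paths into chunks whose joined length stays under the limit,
// so each resulting command line fits within the Windows maximum.
const MAX_ARG_LENGTH = 8191

function chunkFilesByLength(files, maxLength = MAX_ARG_LENGTH) {
  const chunks = [[]]
  let currentLength = 0
  for (const file of files) {
    const last = chunks[chunks.length - 1]
    // +1 accounts for the space separating arguments
    if (last.length > 0 && currentLength + file.length + 1 > maxLength) {
      chunks.push([file]) // start a new chunk and reset the running length
      currentLength = file.length
    } else {
      last.push(file)
      currentLength += file.length + 1
    }
  }
  return chunks
}
```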
Uhh, this seems to have been removed again?

AFAIK we have a different check now.
The current functionality is to check the length of the string containing all staged filenames, and print a warning: https://github.com/okonet/lint-staged/blob/403f80b74468dd5682f27bfbeabdd779d492f0e8/src/runAll.js#L61 Now that I look at it, it might be inaccurate, as that file list is relative to the git root, but by default the paths are resolved as absolute. It would be better to move this check a bit further along in the code. A function linter can be implemented with custom logic to circumvent too-long commands: https://github.com/okonet/lint-staged#example-run-eslint-on-entire-repo-if-more-than-10-staged-files
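The pattern from that README section looks roughly like this (the glob and the 10-file threshold are adapted from the linked example):

```js
// .lintstagedrc.js: if more than 10 files are staged, lint the whole
// repo instead of passing every filename on the command line.
module.exports = {
  '**/*.js': (files) =>
    files.length > 10 ? 'eslint .' : `eslint ${files.join(' ')}`,
}
```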
Yeah, I saw that one after I posted, but I don't really think it's ideal; I feel it would be more user-friendly for lint-staged to handle this as it did before. I had a commit with 288 files and didn't see a warning. The only thing I saw after running it 5 times (didn't notice it before that) is the message in the last line (after 2 newlines): "The command line is too long."
Just got bit by this one as well: a large commit with 200+ changes on Windows. Never had this happen before; when was this change made?
The previous behavior was problematic for function linter configurations. The change was introduced in v9.0.0. I suppose it would be possible to chunk the file list and create multiple runs from the function linters as well, but it would take some refactoring. A PR is welcome; otherwise I can fix this when I have the time. Since I don't use Windows, a repo reproducing the failure would be convenient.
I think the chunking we had before introduced a lot of complexity to the code base, and it was quite hard to maintain because of that. I'd say that since committing a big number of files usually happens after a merge, there is usually no need to run lint-staged on it, and users can just skip doing so. We could improve the warning message and print better instructions, though.
There are plenty of situations where one might accidentally hit the command-line length limit on non-trivial projects, so having builds fail, identifying the failure as coming from the staged-files linter, and then having to circumvent it for larger commits is simply not a viable option for us on a growing team of developers. Our use case is running ESLint with Airbnb, Prettier, and a few custom rules. I reverted to 8.2.1 and the commit was handled fine. I would think this is one of the most common usage scenarios for lint-staged, but I could be wrong.
I am willing to make a PR that "reverts" this behaviour and goes back to chunking. Since we now generate a list of tasks from a large array of files, we could instead split the array into chunks and generate tasks for all of them. As far as I can see, this is the only way to support both function linters and chunking. For listr, instead of the single "running tasks..." step we could show multiple "running tasks (chunk 1)..." steps. What do you think, @okonet?
Having the same issue here. It does not take a lot of files to hit that Windows cmd length limit, and it has become a nuisance for our developers. We will revert back to 8.2.1 for now.
My team just hit this as well, after only having lint-staged enabled for 3 days. It took quite a while to track it through various levels and repos to finally find this issue. Since this issue is closed, should we open a new one?
I added this "back" to the
One specific thing I’d like to know though: is this also an issue with WSL, or only on native Windows shells? |
It was an issue with Cygwin Bash and PowerShell (😬). We resolved it for now by using
We are using the git-bash that came with https://conemu.github.io/
🎉 This issue has been resolved in version 10.0.0-beta.4 🎉 The release is available on: Your semantic-release bot 📦🚀 |
Maybe someone could find the time to test a commit with a large number of staged files on Windows using |
Yup, seems to have worked fine.
Also yes, ran into this problem today, tested the beta and everything worked great. |
I confirm that 10.0.0-beta.4 works fine on Windows with a large monorepo and a big list of staged files.
Thanks a lot for testing! I think v10 is soon ready to merge, but I'm a bit busy myself until next week.
@iiroj, I'm afraid the issue may still persist, unless I'm messing something up. Trying to commit 180 files at once, I still get this even after updating to

Is there anything more useful I can give you for looking into this?
@gaborluk Please post your config, and debug logs by running with |
@iiroj, here's the relevant part of
And this is
I sent you the debug log via email, hope that's alright. Let me know if you need anything else. |
Thanks for the logs, @gaborluk! I think you hit a case where the chunks' length was so close to the max arg length that, when combined with your actual command, the resulting string was too long. I improved the debug logging and halved the maxArgLength, so in your case it now creates four chunks instead of two. Let me know if it helps! EDIT: This is in v10.0.0-beta.5
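Roughly, the chunk count scales inversely with the allowed length; the total path length below is made up just to mirror the two-versus-four chunk counts:

```js
// Hypothetical illustration: halving the allowed argument length doubles
// the number of chunks, leaving headroom for the command string itself.
const totalPathLength = 14000 // invented sum of all staged file path lengths
const chunkCount = (maxArgLength) => Math.ceil(totalPathLength / maxArgLength)

console.log(chunkCount(8191))     // 2 chunks, but little headroom for the command
console.log(chunkCount(8191 / 2)) // 4 chunks, each with room to spare
```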
Works like a charm now, @iiroj, thanks a lot for the quick turnaround. Much appreciated.

Cool. I will leave this open until v10 releases.
🎉 This issue has been resolved in version 10.0.0 🎉 The release is available on: Your semantic-release bot 📦🚀 |
So, at the moment it was introduced (v10) there was a maxArgLength argument, but it didn't matter what you assigned to it, because it was changed internally right at the beginning of the algorithm. Then the argument disappeared (around 10.4.2), to avoid unnecessary confusion I suppose. Now I'm on 10.5.4 and I'm still getting this bug from time to time on big merges (not the end of the world; honestly I'd opt for --no-verify in the case of merges, I think that's fine 🤷‍♂️). Am I doing something wrong? Just wanted to clarify 🙂
For anyone who still has this issue: I solved it by splitting the files into chunks, and it works on my projects. Hope it helps. `.lintstagedrc.js`:
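A sketch of that kind of config; the chunk size of 50, the glob, and the `chunk` helper are illustrative choices, not the poster's exact snippet:

```js
// .lintstagedrc.js: return one command per chunk of staged files,
// so no single command line exceeds the OS limit.
const CHUNK_SIZE = 50 // arbitrary; tune for your platform

const chunk = (files, size) => {
  const chunks = []
  for (let i = 0; i < files.length; i += size) {
    chunks.push(files.slice(i, i + size))
  }
  return chunks
}

module.exports = {
  '*.{js,jsx}': (files) =>
    chunk(files, CHUNK_SIZE).map((part) => `eslint --fix ${part.join(' ')}`),
}
```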
I ran my project through `prettier` and ended up with 99+ modified files. I'm using `cmder` on Windows 8 and I got this while trying to commit:

Stack Trace