range end index out of range for slice of length... #6467
Comments
We wish to dig into this, but as you may be aware, it is extremely non-trivial to debug these kinds of intermittent issues without an isolated reproduction. We'll try to look into it from time to time, but we will probably be blocked by the lack of a repro.
Understandable. I'm wondering if there is a way to get more debug output out of swc. I could set up our CI/CD pipeline to run with more verbose output if it is available. (I'm not familiar enough with Rust to go poking around and adding debug statements, but if a verbose option doesn't currently exist I may attempt that.)
It is not easy, but if you could run this with a debug build it may be helpful. But it means
Since it is happening daily, it is worth it for me to go down that path. I'll take a look at the build directions for swc and get a custom debug build set up. I'll update this issue with any findings. Thanks,
Some additional output:
Can you try
This issue has been automatically closed because it received no activity for a month and had no reproduction to investigate. If you think this was closed by accident, please leave a comment. If you are running into a similar issue, please open a new issue with a reproduction. Thank you.
🤔 this doesn't seem correct
@kdy1 any ideas? The issue was opened a week ago, and the last comment was 6 days ago, so there's physically no way this issue can be a stale one
The wording in the comment is wrong
Finally was able to get some more time to look at this, and got an error with backtrace available:
Re-running the job, it passes just fine. And it is not always the same test or even the same package that errors.
Dropping in to link swc-project/plugins#42, which I believe has a similar root cause. I was not able to get the debug info previously (thanks for doing the hard work on that @siosphere!), but I'm seeing a little more detail on my end after updating to the latest swc:
Note the
And yes, this happens if
@kdy1 thanks for verifying the source. Is this something the SWC team will be able to look into, or should we upstream the issue to wasmer?
It's not something we can look into
Is there any solution we can apply for this issue? I'm facing the same problem, always with Jest, not sure why!
I had originally been under the impression this was only happening on some specific docker images (much older images), and that the errors weren't present on
I can relate to that. I tried using the GitHub Actions runner and everything went smoothly. However, when using our self-hosted runner in parallel mode, we encountered some issues. To mitigate the problem, I tried running it with the
Also getting this issue intermittently, sometimes resolved by deleting
Has anyone got any resolution for this issue? @ezpuzz @dgreif @HashemKhalifa @kdy1 Do we have any updates or workarounds on this? I have a very large monorepo whose CI pipelines fail intermittently, giving the following error:
These are the current versions we are using:
"@swc/core": "^1.3.46",
"@swc/jest": "^0.2.24",
@samar-1601 which Jest version are you using? It was a memory leak that happened while using an older version of Jest and
Here is what I updated, and I adjusted my tests to match the new updates.
@HashemKhalifa We are using the following versions:
"jest": "26.6.3",
"jest_workaround": "^0.72.6",
Is this related in any way to these Node + Jest version issues?
We have tried various combinations of maxWorkers=(some percentage) and -w=(some number of threads), but none of these memory-management steps solves the problem in a concrete manner. I have found possible solutions (if the problem is related to Node + Jest version issues), which include upgrading Node to 21.1 and using workerIdleMemoryLimit (mentioned in the official Jest documentation). The problem is that I am stuck either way, because both solutions will take a huge effort considering the size of our monorepo, and currently we have our pipelines failing intermittently on a daily basis.
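For anyone experimenting with the knobs mentioned above, here is a minimal sketch of a Jest config that caps worker count and recycles leaky workers. It assumes a recent Jest (29+, where workerIdleMemoryLimit is available), and the values are placeholders rather than recommendations:

```ts
// jest.config.ts — hedged sketch, not a configuration taken from this thread.
// Assumes Jest 29+ (workerIdleMemoryLimit); the values are placeholders.
import type { Config } from 'jest';

const config: Config = {
  // Cap parallelism so fewer swc-backed workers run at the same time.
  maxWorkers: '50%',
  // Restart any worker whose idle memory grows beyond this limit,
  // which bounds how far a leaking worker can drift before it is recycled.
  workerIdleMemoryLimit: '512MB',
};

export default config;
```

maxWorkers can also be passed on the command line as --maxWorkers; workerIdleMemoryLimit is described in the Jest configuration docs.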
I feel you, I've been in your shoes, and that's correct regarding the memory leak issues you mentioned. I had to upgrade both, and since then all good; we're currently planning to move away from Jest to Vitest as well.
@HashemKhalifa Did you use the
Will update the thread with our findings.
I have tried
@HashemKhalifa React version is
@samar-1601 no, that was one of the reasons that caused memory leaks. Also, in our test suite there were so many tests that I had to adjust to fix the leaks, some of which had a relation with
I hope that answers your question
Removing the milestone as there's no repro.
@HashemKhalifa Another observation, using flags
We too see this consistently. Are there any other diagnostic steps you can recommend? We're running these builds on C7s (xl, I believe), so it's hard to believe the issue is related to memory exhaustion. Any info you can give about the error that might help us understand the issue? Is this reading compiled code from a cache, or reading the compilation response? It's a long shot, but considering the worker model of Jest, is it possible that multiple workers are requesting the same file/module to be compiled at the same time?
Yeah, this may be related to an issue when Jest runs with multiple workers, because the error doesn't come when we use
@dgreif Can you let us know the Node and Jest versions you are using? We have upgraded to
@HashemKhalifa can you guide a bit as to how you detected leaks and then fixed them?
I can share my findings from working on this issue last year; maybe it could help:
Updating Node reduced the leaks but didn't solve the problem completely (jestjs/jest#11956).
Updating Jest still didn't solve the problem entirely, but it reduced how often it happened.
I'm not sure if it's related to SWC, but it definitely exacerbates the issue in Jest. I was hoping
As @dgreif mentioned, there's only one option, which is runInBand, because Jest instances with memory leaks are not able to shut down and keep running forever until SWC complains.
Our solution: run tests in-band (sequentially) within each Jest instance.
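For anyone adopting the same workaround, here is a minimal sketch of the "in-band per Jest instance" setup. It is an illustration of the approach described above, not the exact configuration used:

```ts
// jest.config.ts — sketch of the workaround described above: each Jest
// instance runs its tests sequentially, and overall parallelism comes from
// launching one Jest instance per package (e.g. via the monorepo task runner).
import type { Config } from 'jest';

const config: Config = {
  // A single worker per Jest instance, roughly what `jest --runInBand`
  // achieves from the CLI (runInBand additionally skips spawning a worker
  // process and runs tests in the main process).
  maxWorkers: 1,
};

export default config;
```

For the leak-hunting part of the question above, Jest's --logHeapUsage flag prints heap usage per test file, which helps narrow down which suites grow the most.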
Describe the bug
This is an intermittent bug with @swc/jest and/or @swc/core.
We are running our test suite with turbo repo, utilizing @swc/jest, and intermittently we get a failure like this:
Retrying the CI/CD job, it will succeed just fine.
This does not happen very often, roughly 1 in 50 runs. That error message appears to be from Rust, which points me more towards it being an @swc issue than a Jest-specific issue.
Input code
No response
Config
Playground link
No response
Expected behavior
Test suites should not intermittently fail with no changes
Actual behavior
Fails intermittently
Version
1.2.128
Additional context
"@swc/cli": "^0.1.57",
"@swc/core": "^1.2.128",
"@swc/jest": "^0.2.15",