hashFiles() couldn't finish within 120 seconds. #1840
I have the same problem when using the cache action to cache Maven artifacts. I call the action like this:

And I'm running on a self-hosted Windows Server 2016 runner.
We're also getting this error in our pipelines when using

@MikulasMascautanu I want to say that my issue and what you describe are not necessarily of the same nature. In my case, it makes a lot of sense that hashing all the files would indeed take longer than two minutes: it is a rather large Unity game project. My issue is that I would like to be able to let it take longer; see also the PR mentioned above. Your case seems a bit different: often it finishes in under two minutes, sometimes it does not. There might be a different underlying reason there, potentially in the mechanism that hashes the files itself.

@Carsten-MaD you're right. Though I still think this optional timeout setting could be beneficial to

@Carsten-MaD did you manage to sort this?

Besides waiting for the PR to be merged (or even checked out), can anyone think of a workaround for this?

@JJ What I personally did was to entirely remove the

@shtefanilie my issue with this also started with Podfile.lock issues, then I realized that GHA was using the cached version of my Podfile. I had to force the cache to be busted by changing the hash name, like "-pods-v2", so GHA would see the cache doesn't exist and then recreate the Podfile.lock file. Just leaving it here as it could help others too 😊
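To make the cache-busting trick concrete, here is a hypothetical shape of such a step; the `v2` token, paths, and step name are illustrative, not taken from this thread. Bumping the token makes every old cache key miss, forcing a fresh save:

```yaml
- name: Cache CocoaPods
  uses: actions/cache@v4
  with:
    path: Pods
    # Bump "v2" to "v3" (etc.) to invalidate all existing entries
    # and force the Podfile.lock-derived cache to be rebuilt.
    key: ${{ runner.os }}-pods-v2-${{ hashFiles('**/Podfile.lock') }}
    restore-keys: |
      ${{ runner.os }}-pods-v2-
```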
Yep, makes sense. That's what I have done too. And yes, that's sad. |
@pedpess yeah, that might be a way to do it. Might be worth seeing whether adding the caching back fixes the issue.

Also experiencing this issue as of 7/29. `key: ${{ runner.os }}-pods-${{ inputs.cache_version }}-${{ hashFiles('**/Podfile.lock') }}` is timing out. Current workaround is to remove the cache, which is not ideal.
Also seeing this while hashing a small conda environment file.

I see this almost every day in my environment, and I have (more or less) 3 pom.xml files in the repo...
I had that exact error message, too, only on Windows. But in my case it was happening not in the

Which was counter-intuitive to me: given that it computed the hashFiles in the beginning, I was not expecting it to compute it a second time. And what happens is that I

My solution was therefore just to make sure that my job does not write anything into <some_folder>, so that the hash stays easy to compute (and the same between the beginning and the end of the job). I'm fairly sure that other platforms don't recompute the hashFiles for the
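To illustrate the point above about job writes invalidating the hash: anything the job drops into the cached folder changes the set of files the end-of-job hash sees. A small sketch of that effect (folder and file names are invented; `sha1sum` stands in for the runner's own hashing):

```shell
#!/bin/sh
# Demo: a file written into the cached folder during the job changes
# the aggregate hash computed over that folder at the end of the job.
set -eu

dir=$(mktemp -d)
echo 'lockfile contents' > "$dir/Podfile.lock"

# Aggregate digest over all files, with a pinned traversal order.
digest() {
  find "$dir" -type f -print0 | sort -z | xargs -0 sha1sum | sha1sum | cut -d' ' -f1
}

before=$(digest)
echo 'build output' > "$dir/artifact.bin"   # the job writes into the cached path
after=$(digest)

[ "$before" != "$after" ] && echo "hash changed"
```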
@JJ: Assuming that the runtime is performing `hashFiles()` at step evaluation, you should be able to perform your own hashing on disk in a step...

Old:

```yaml
- name: Cache Maven dependencies
  uses: actions/cache@v2
  with:
    path: ~/.m2/repository
    key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
    restore-keys: |
      ${{ runner.os }}-maven-
```

Replacement:

```yaml
- name: Workaround hashFiles timeout
  run: |
    find ~/.m2/repository -name pom.xml -print0 | xargs -0 shasum > ~/.m2/repository/pom.xml.shasum
- name: Cache Maven dependencies
  uses: actions/cache@v2
  with:
    path: ~/.m2/repository
    key: ${{ runner.os }}-maven-${{ hashFiles('pom.xml.shasum') }}
    restore-keys: |
      ${{ runner.os }}-maven-
...
- name: Workaround hashFiles timeout update
  run: |
    find ~/.m2/repository -name pom.xml -print0 | xargs -0 shasum > ~/.m2/repository/pom.xml.shasum
```

Disclaimers:
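One caveat worth flagging about the `find | xargs shasum` workaround: `find` does not guarantee a stable traversal order, so the aggregate digest could differ between runs even when no file changed. Piping through `sort -z` pins the order. A small sketch (`sha1sum` here is interchangeable with `shasum`; paths are invented for the demo):

```shell
#!/bin/sh
# Demo: build a stable aggregate digest over all pom.xml files.
# Sorting the NUL-separated paths fixes the traversal order, so the
# final hash only changes when file contents (or the file set) change.
set -eu

dir=$(mktemp -d)
mkdir -p "$dir/a" "$dir/b"
echo '<project>A</project>' > "$dir/a/pom.xml"
echo '<project>B</project>' > "$dir/b/pom.xml"

digest() {
  find "$dir" -name pom.xml -print0 | sort -z | xargs -0 sha1sum | sha1sum | cut -d' ' -f1
}

first=$(digest)
second=$(digest)
echo "$first"
[ "$first" = "$second" ] && echo "stable"
```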
Any update?
Still waiting for an update on this. More often than not this produces a false positive in our runs; we would love an option to simply increase or remove the 120-second limit.
Bump. |
Both Poetry and ccache manage their own caches properly without needing external assistance, so there's actually no point in keying the cache on poetry.lock. Plus the riscv64 runner is apparently not able to `hashFile('poetry.lock')` within 120 seconds [1], causing jobs to fail needlessly. Getting rid of the hashFile call also works around the problem. [1]: actions/runner#1840
**Alternative (key based on Git commit hash)**

If you're ok with using the last Git commit hash modifying a certain directory, you can avoid the expensive operation of hashing all files - the

Configure your

```yaml
name: Skip cache restoration on changes in directory
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          # Fetch all commits in all branches and tags
          fetch-depth: 0
      - name: Get last Git commit hash modifying packages/abc
        run: |
          echo "ABC_HASH=$(git log -1 --pretty=format:%H -- packages/abc)" >> $GITHUB_ENV
      - name: Cache packages/abc
        uses: actions/cache@v4
        with:
          path: packages/abc
          key: abc-build-cache-${{ env.ABC_HASH }}
      - name: Build packages/abc
        run: |
          pnpm --filter=abc build
```

Also added this to
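The workaround above hinges on `git log -1 --pretty=format:%H -- <path>` being cheap: it only walks commit history, not file contents, and only commits touching the path change the key. A small sketch you can run locally (repo layout and file names are made up for the demo):

```shell
#!/bin/sh
# Demo: derive a cache key from the last commit touching packages/abc.
# Unrelated commits leave the key unchanged.
set -eu

repo=$(mktemp -d)
cd "$repo"
git init -q

# Commit something inside packages/abc.
mkdir -p packages/abc
echo 'hello' > packages/abc/file.txt
git add packages/abc
git -c user.email=demo@example.com -c user.name=demo commit -q -m 'touch abc'
abc_hash=$(git log -1 --pretty=format:%H -- packages/abc)

# An unrelated commit does not change the key for packages/abc.
echo 'other' > other.txt
git add other.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m 'unrelated'
abc_hash2=$(git log -1 --pretty=format:%H -- packages/abc)

echo "abc-build-cache-$abc_hash"
```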
(Filing here since I understand that `hashFiles()` is owned by https://github.com/actions/runner.)

Hey!

We have migrated a semi-large Unity project from SVN to Git / GitHub and I am working on setting up Actions for a build pipeline.

Unfortunately, a step to create a cache for Unity-specific files is giving me a headache. If I understand the error message correctly, I am running into a 120-second timeout for that step.

As I see it, the timeout for `hashFiles()` is set here and there is no way to increase it, is that correct?

runner/src/Runner.Worker/Expressions/HashFilesFunction.cs, line 16 (at commit 100c99f)
Expected behavior
I imagine it makes a lot of sense that the job has a timeout. It would be helpful if it could be set from the step configuration for scenarios where a lot of files have to be hashed.
Runner Version and Platform
Runner version: 2.290.1
OS: Windows 10 64bit self hosted runner
Action configuration:
Job Log Output