As I've been iterating on Garden, I tried to improve our .gardenignore to skip some types of builds. At some point, I accidentally ignored a file that we include in our Docker image build. I was able to build the image successfully, but it failed for the next person with a fresh checkout.
This is a bit hard to reproduce since it occurred over a few weeks and with uncommitted changes on my end, but I tried my best to construct an example of what might've happened. Here's the .gardenignore contents with the problem path commented out (i.e. not ignored):
[eronarn@Jamess-MacBook-Pro qexec2]$ cat .gardenignore
# Contains files that shouldn't even be watched for changes.
ansible
#.buildkite
.github
If I run a garden build, this builds correctly, and the .buildkite folder is present in the local .garden/build directory.
If I un-comment the .gardenignore line:
[eronarn@Jamess-MacBook-Pro qexec2]$ cat .gardenignore
# Contains files that shouldn't even be watched for changes.
ansible
.buildkite
.github
I get a message that goes from Preparing build (1909 files)... to Preparing build (1905 files)..., and the local .garden/build directory removes the folder:
[eronarn@Jamess-MacBook-Pro qexec2]$ ls -lath .garden/build/qexec/.buildkite
ls: cannot access '.garden/build/qexec/.buildkite': No such file or directory
However, the Docker image is not rebuilt on that garden build command:
[eronarn@Jamess-MacBook-Pro qexec2]$ garden build --log-level=3
Build 🔨
ℹ providers → Getting status...
Resource(s) PersistentVolumeClaim/garden-sync-garden-system-nfs-v2 missing from cluster
All resources missing from cluster
kubectl diff indicates all resources match the deployed resources.
Comparing expected and deployed resources...
Resource garden-docker-data is not a superset of deployed resource
Comparing expected and deployed resources...
Resource garden-docker-registry is not a superset of deployed resource
ℹ providers → Preparing environment...
ℹ kubernetes → Configuring...
✔ kubernetes → Configuring... → Ready
✔ providers → Preparing environment... → Done
ℹ qexec → Preparing build (1905 files)...
ℹ qexec → Getting build status for v-eb5be7d90e...
✔ qexec → Getting build status for v-eb5be7d90e... → Done (took 9.2 sec)
Done! ✔️
even though the now-missing file is a requirement for that image.
(n.b. we have the .gardenignore put in our .dockerignore, which is why changing it wasn't enough to force a rebuild)
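This interaction can be illustrated with a toy fingerprint (my own sketch, not Garden's actual versioning code): if the rebuild check effectively hashes only the files that survive .dockerignore filtering, then removing a path that was already dockerignored leaves the fingerprint, and therefore the build decision, unchanged.

```shell
#!/usr/bin/env bash
# Toy sketch only -- not Garden's real version hash. Assumes .buildkite/* is
# matched by .dockerignore, so it never contributes to the fingerprint.
fingerprint() {
  # Hash only the paths Docker would actually see in the build context.
  printf '%s\n' "$@" | grep -v '^\.buildkite/' | sort | sha256sum | cut -c1-12
}

before=$(fingerprint Dockerfile app.py .buildkite/pipeline.yml)
after=$(fingerprint Dockerfile app.py)   # .buildkite now gardenignored away

[ "$before" = "$after" ] && echo "fingerprint unchanged -> no rebuild"
```

Under that assumption, the ignore change is invisible to the rebuild check, which matches the behavior above.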
If I add a new file with echo "BAR" >> FOO that isn't covered by our .dockerignore, then this does trigger a Docker rebuild, which then fails due to the missing file.
But from my perspective, the incorrect change to the .gardenignore would've looked like a valid, safe-to-commit change even though it was only succeeding due to stale state (presumably in the rsync folder?).
Suggested solution(s)
Improve logging of syncing. It'd be great to have a view of what files changed since the last sync, or what triggered a sync, at a high log level (but not so high that it lists all sync targets).
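Until something like that exists, a rough workaround is to diff the build staging directory against the actual source tree to surface stale files. The sketch below simulates this with temporary directories (a real invocation would compare e.g. git ls-files against .garden/build/&lt;module&gt;; all paths here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: find files that linger in the staging dir but are gone from the
# source tree. Simulated with temp dirs; adapt the paths to a real project.
set -eu
src=$(mktemp -d); stage=$(mktemp -d)

# Source tree after .buildkite became gardenignored (so it isn't synced) ...
touch "$src/Dockerfile"
# ... but the staging dir still holds a stale copy from an earlier sync.
mkdir "$stage/.buildkite"
touch "$stage/.buildkite/pipeline.yml" "$stage/Dockerfile"

# Lines only in the second listing = stale files in the staging dir.
stale=$(comm -13 <(cd "$src" && find . -type f | sort) \
                 <(cd "$stage" && find . -type f | sort))
echo "stale: $stale"
rm -rf "$src" "$stage"
```

Running this against a real checkout would have flagged the stale .buildkite copy before it masked the broken ignore rule.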
Your environment
[eronarn@Jamess-MacBook-Pro qexec2]$ garden version && kubectl version && docker version
0.10.13
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Client:
Version: 18.06.0-ce
API version: 1.38
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:05:26 2018
OS/Arch: darwin/amd64
Experimental: false
Server:
Engine:
Version: 18.06.0-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:13:46 2018
OS/Arch: linux/amd64
Experimental: true