Fix multiple image builds when starting docker #760

Merged
merged 3 commits into main
Oct 1, 2020

Conversation

@indiebrain indiebrain (Collaborator) commented Oct 1, 2020

Why

The current Docker-based development setup essentially does the same work multiple times to produce images for the app and webpacker services. The goal of this body of work is to produce images for these services and allow them to reuse existing, cached artifacts so the build phase avoids unnecessary work.

Pre-Merge Checklist

  • All new features have been described in the pull request
  • Security & accessibility have been considered
  • All outstanding questions and concerns have been resolved
  • Any next steps that seem like good ideas have been created as issues for future discussion & implementation
  • High quality tests have been added, or an explanation has been given why the features cannot be tested
  • New features have been documented, and the code is understandable and well commented
  • Entry added to CHANGELOG.md if appropriate

What

  • Share ruby gems cache between app and webpacker services
  • Reuse cached Docker images layers between app and webpacker services
  • Pre-build all local development images as part of the development environment setup

How

Share ruby gems cache between app and webpacker services

The docker-compose setup includes a persistent volume intended to serve as a cache for bundler and rubygems. However, the volume mount contains a small error which causes gems to be installed on the ephemeral filesystem, i.e. the gem cache is lost when a container exits. This means that every request to start an app container needs to re-install rubygems, and therefore re-run every step that comes after in the Dockerfile. This leads to longer than necessary wait times for the image to come up, since the `bundle install` instruction of the Dockerfile will never produce a cacheable layer.

This fixes the path so that gems are stored on the persistent volume and significantly decreases the amount of time one needs to wait for app / webpacker images and containers to become available.
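As a rough sketch of the pattern, with hypothetical service, volume, and path names (the actual values live in this repo's docker-compose.yml), the corrected mount looks something like:

```yaml
# Hypothetical docker-compose.yml excerpt -- service, volume, and path
# names here are illustrative, not this project's actual values.
services:
  app:
    build: .
    environment:
      # Point bundler at the mounted volume so gems land on the
      # persistent filesystem rather than the container's own.
      BUNDLE_PATH: /bundle
    volumes:
      - gems:/bundle   # the mount point must match BUNDLE_PATH

volumes:
  gems: # a named volume survives container exits
```

The bug amounted to these two paths disagreeing: gems were installed outside the mounted volume, so every fresh container started from an empty cache.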

Reuse cached Docker images layers between app and webpacker services

In development, the app and webpacker services run in the same context. Instead of building an image for each service, we now build one image and share it - then let docker-compose figure out what to run, which ports to expose, etc.

This uses Docker multi-stage builds to create the "base" image, then the app and webpacker services both use the "base" image as a starting point for their containers.
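As an illustration of the shape of such a Dockerfile (the base image, stage name, and paths here are assumptions, not this project's exact file):

```dockerfile
# Minimal multi-stage sketch; ruby:2.7 and the stage name "base" are
# assumptions, not necessarily this project's actual Dockerfile.
FROM ruby:2.7 AS base
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install   # produces a layer both services can reuse
COPY . .
```

docker-compose can then point both services at the same stage (for example via the build section's target option), so building the second service is a cache hit rather than a second `bundle install`.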

Pre-build all local development images as part of the development environment setup

Instead of deferring image builds until the last possible moment, this changes the bin/dev/bootstrap script to pre-build all locally produced images for development. A separate body of work addressing this problem was completed and merged into main while this body of work was still evolving, so focus for this change set shifted to a small reorganization of the work done in #757.
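In docker-compose terms, the pre-build step boils down to something like the following sketch (not the literal contents of bin/dev/bootstrap):

```sh
# Sketch only -- not the literal contents of bin/dev/bootstrap.
# Build every locally produced image up front so that later
# `docker-compose up` invocations start from warm layer caches.
docker-compose build
```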

Testing

  1. Run bin/dev/bootstrap; one should see that bundle install is performed only once between the app and webpacker services as they build their images.
  2. Run bin/dev/serve and notice that the docker-compose up --build step reuses layer caches for the app, webpacker, and email services.

References

  • https://docs.docker.com/develop/develop-images/multistage-build/
@indiebrain indiebrain self-assigned this Oct 1, 2020
@solebared (Collaborator)

Looking great, @indiebrain! 💯 I see it's a draft PR; what's left to do?

@indiebrain indiebrain (Collaborator, Author) commented Oct 1, 2020

Thanks for the kind words, my friend.

I see it's a draft PR; what's left to do?

I got to a sleepy Code Complete last night and wanted to give it a once-over with rested eyes before subjecting other human beings to it - specifically the PR write-up; I tend to ramble, even more so when I'm sleepy. ;-)

@indiebrain indiebrain marked this pull request as ready for review October 1, 2020 11:13
@solebared solebared (Collaborator) left a comment

🐳

@solebared solebared merged commit 0eda1ab into main Oct 1, 2020
@solebared solebared deleted the fix-multiple-image-builds-when-starting-docker branch October 1, 2020 18:17
@indiebrain indiebrain (Collaborator, Author)

Thanks, mate!
