Tools
- Packer - Used for building the disk image of the orchestrator client and server
- Terraform (v1.5.x) - We ask for v1.5.x because starting from v1.6, Terraform switched its license from the Mozilla Public License to the Business Source License. The last version of Terraform under the Mozilla Public License is v1.5.7.
- Atlas - Used for database migrations. We don't use Atlas's hosted service, only their open-source CLI tool, which unfortunately requires you to log in via `atlas login`. We're in the process of removing this dependency.
- Google Cloud CLI - Used for managing the infrastructure on Google Cloud
Accounts
- Cloudflare account
- Domain on Cloudflare
- GCP account + project
- PostgreSQL database (only Supabase-hosted databases are supported for now)
Optional
Recommended for monitoring and logging
- Grafana Account & Stack (see Step 15 for detailed notes)
- Posthog Account
TODO: Check if you can use config for Terraform state management
- Go to console.cloud.google.com and create a new GCP project
- Run `make login-gcloud` to log in to gcloud
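If you prefer the CLI for these two steps, a rough equivalent is below. The project id is a placeholder, the two `gcloud auth` calls only approximate what `make login-gcloud` wraps, and commands are printed rather than executed unless you set `DRY_RUN=0`:

```shell
#!/usr/bin/env bash
set -euo pipefail

GCP_PROJECT_ID="my-e2b-project"   # placeholder -- pick your own project id
DRY_RUN="${DRY_RUN:-1}"           # set DRY_RUN=0 to actually execute

# Print the command in dry-run mode, execute it otherwise.
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run gcloud projects create "$GCP_PROJECT_ID"
run gcloud config set project "$GCP_PROJECT_ID"
# Roughly what `make login-gcloud` wraps:
run gcloud auth login
run gcloud auth application-default login
```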
- Get Cloudflare API Token: go to the Cloudflare dashboard -> Manage Account -> API Tokens -> Create Token -> Edit Zone DNS -> in "Zone Resources" select your domain and generate the token
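Before storing the token, you can sanity-check it against Cloudflare's token verification endpoint (`/user/tokens/verify`). The token value here is a placeholder, and the command is only printed unless `DRY_RUN=0`:

```shell
#!/usr/bin/env bash
set -euo pipefail

CLOUDFLARE_API_TOKEN="replace-me"   # placeholder -- paste the token you generated
DRY_RUN="${DRY_RUN:-1}"

run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# A valid token returns a JSON body with "success": true.
run curl -s -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  "https://api.cloudflare.com/client/v4/user/tokens/verify"
```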
- Get the Postgres connection string from Supabase: create a new project in Supabase, then go to Settings -> Database -> Connection Strings -> Postgres -> Direct
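A quick shape check on the copied string can catch a truncated password or missing port early. This is just a sketch with a made-up example value, not an official Supabase format check:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Made-up example -- replace with the string copied from Supabase.
POSTGRES_CONNECTION_STRING="postgresql://postgres:example-password@db.abcdefghijkl.supabase.co:5432/postgres"

# Expect scheme://user:password@host:port/database.
if [[ "$POSTGRES_CONNECTION_STRING" =~ ^postgres(ql)?://[^:]+:[^@]+@[^:/]+:[0-9]+/.+$ ]]; then
  echo "connection string looks well-formed"
else
  echo "connection string looks malformed" >&2
  exit 1
fi
```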
- Create a storage bucket in Google Cloud. This is the source of truth for the Terraform state: Go to console.cloud.google.com -> Storage -> Create Bucket -> Bucket name: `e2b-terraform-state` -> Location type: Multi-region -> Location: US -> Default storage class: Standard -> Create
- Create `.env.prod`, `.env.staging`, or `.env.dev` from `.env.template`. You can pick any of them. Make sure to fill in the values. All are required except the `# Tests` section.
- Run `make switch-env ENV={prod,staging,dev}`
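One way to double-check that no value was left blank before switching environments. This sketch assumes simple `KEY=VALUE` lines and uses a throwaway file in place of your real `.env.*` file:

```shell
#!/usr/bin/env bash
set -euo pipefail

# A throwaway file stands in for .env.prod so the sketch is self-contained.
ENV_FILE="$(mktemp)"
printf 'GCP_PROJECT_ID=my-project\nCLOUDFLARE_API_TOKEN=\n' > "$ENV_FILE"

# Empty assignments are the ones still left to fill in.
EMPTY_KEYS="$(grep -E '^[A-Za-z_][A-Za-z0-9_]*=$' "$ENV_FILE" || true)"
echo "still empty: $EMPTY_KEYS"
```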
- Log in to the Atlas CLI: `atlas login`
- Run `make migrate`. This step will fail--that's okay. After you get the error message, you will need to create `atlas_schema_revisions.atlas_schema_revisions` as a copy of `public.atlas_schema_revisions`. This can be done with the following statement in the Supabase visual SQL Editor:

  CREATE TABLE atlas_schema_revisions.atlas_schema_revisions (LIKE public.atlas_schema_revisions INCLUDING ALL);
- Run `make migrate` again
- Run `make init`. If this errors, run it a second time--it's due to a race condition while Terraform enables API access for the various GCP services; this can take several seconds. A full list of services that will be enabled for API access:
- Secret Manager API
- Certificate Manager API
- Compute Engine API
- Artifact Registry API
- OS Config API
- Stackdriver Monitoring API
- Stackdriver Logging API
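If `make init` keeps hitting that race, the same services can be enabled ahead of time. The identifiers below are the standard GCP service names for the APIs listed above (commands are printed, not executed, unless `DRY_RUN=0`):

```shell
#!/usr/bin/env bash
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Stackdriver Monitoring/Logging are monitoring.googleapis.com and
# logging.googleapis.com in today's naming.
run gcloud services enable \
  secretmanager.googleapis.com \
  certificatemanager.googleapis.com \
  compute.googleapis.com \
  artifactregistry.googleapis.com \
  osconfig.googleapis.com \
  monitoring.googleapis.com \
  logging.googleapis.com
```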
- Run `make build-and-upload`
- Run `make copy-public-builds`. This will copy the kernel and rootfs builds for Firecracker to your bucket. Alternatively, you can build your own kernel and Firecracker rootfs.
- Secrets are created and stored in GCP Secret Manager. Once created, that is the source of truth--you will need to update values there to make changes. Create a secret value for the following secrets:
- e2b-cloudflare-api-token
- e2b-postgres-connection-string
- Grafana secrets (optional)
- Posthog API keys for monitoring (optional)
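Updating those values from the CLI is also possible: `gcloud secrets versions add` appends a new version to an existing secret. The file paths here are placeholders, and commands are printed unless `DRY_RUN=0`:

```shell
#!/usr/bin/env bash
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Each call adds a new secret version; the latest version is what gets read.
run gcloud secrets versions add e2b-cloudflare-api-token \
  --data-file=/path/to/cloudflare-token.txt
run gcloud secrets versions add e2b-postgres-connection-string \
  --data-file=/path/to/postgres-connection-string.txt
```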
- Run `make plan-without-jobs` and then `make apply`
- Run `make plan` and then `make apply`. Note: provisioning of the TLS certificates can take some time; you can check the status in the Google Cloud Console
- To access the Nomad web UI, go to nomad.<your-domain.com>. Go to sign in, and when prompted for an API token, you can find it in GCP Secret Manager. From there, you can see Nomad jobs and tasks for both client and server, including logging.
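Certificate provisioning status can also be polled from the CLI instead of the console; this assumes the Certificate Manager surface of `gcloud` (printed, not executed, unless `DRY_RUN=0`):

```shell
#!/usr/bin/env bash
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Lists managed certificates and their provisioning state.
run gcloud certificate-manager certificates list
```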
- Look inside `packages/nomad` for config files for your logging and monitoring agents.
- If any problems arise, open a GitHub issue on the repo and we'll look into it.
E2B uses Firecracker for sandboxes.
You can build your own kernel and Firecracker version from source by running `make build-and-upload-fc-components`
- Note: This needs to be done on a Linux machine due to case-sensitivity requirements for the file system--otherwise you'll error out during the automated Git section with a complaint about unsaved changes. The kernel and Firecracker versions could alternatively be sourced elsewhere.
- You will still have to copy `envd-v0.0.1` from the public bucket by running the command below, or you can build it from this commit:

  gsutil cp -r gs://e2b-prod-public-builds/envd-v0.0.1 gs://$(GCP_PROJECT_ID)-fc-env-pipeline/envd-v0.0.1
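The copy can be followed by a quick listing to confirm the objects landed. `GCP_PROJECT_ID` is read from the environment here with a placeholder fallback, and commands are printed, not executed, unless `DRY_RUN=0`:

```shell
#!/usr/bin/env bash
set -euo pipefail

GCP_PROJECT_ID="${GCP_PROJECT_ID:-my-e2b-project}"   # placeholder fallback
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run gsutil cp -r "gs://e2b-prod-public-builds/envd-v0.0.1" \
  "gs://${GCP_PROJECT_ID}-fc-env-pipeline/envd-v0.0.1"
# Confirm the copy landed.
run gsutil ls "gs://${GCP_PROJECT_ID}-fc-env-pipeline/"
```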
- `make init` - sets up the Terraform environment
- `make plan` - plans the Terraform changes
- `make apply` - applies the Terraform changes; you have to run `make plan` before this one
- `make plan-without-jobs` - plans the Terraform changes without provisioning Nomad jobs
- `make destroy` - destroys the cluster
- `make version` - increments the repo version
- `make build-and-upload` - builds and uploads the Docker images, binaries, and cluster disk image
- `make copy-public-builds` - copies the old envd binary, kernels, and Firecracker versions from the public bucket to your bucket
- `make migrate` - runs the migrations for your database
- `make update-api` - updates the API Docker image
- `make switch-env ENV={prod,staging,dev}` - switches the environment
- `make import TARGET={resource} ID={resource_id}` - imports already created resources into the Terraform state
- `make setup-ssh` - sets up the SSH key for the environment (useful for remote debugging)
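Taken together, the steps section boils down to roughly this order of commands. This is a sketch only--`ENV=prod` is an example choice, and commands are printed rather than executed unless `DRY_RUN=0`:

```shell
#!/usr/bin/env bash
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run make switch-env ENV=prod
run atlas login
run make migrate          # expected to fail once; create the revisions table, then rerun
run make migrate
run make init             # rerun once if the API-enablement race bites
run make build-and-upload
run make copy-public-builds
run make plan-without-jobs
run make apply
run make plan
run make apply
```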