Update images to match official Supabase Docker compose #48
Conversation
* version update:
  * db: 15.1.0.117
  * studio: 20230921-d657f29
  * auth: v2.99.0
  * rest: v11.2.0
  * realtime: v2.10.1
  * meta: v0.68.0
  * storage: v0.40.4
  * kong: 2.8.1
* major changes:
  * add imgproxy service
  * add dashboard basic auth
  * removal of all ingress except for db and kong
  * db port & database are included in secret and used by other services
  * db storage renamed as volume following other services
  * db init-scripts are now independent from image init-scripts
  * add db migration config
  * set internal URLs programmatically in deployments respectively
* minor changes:
  * update all services environment variable default values
  * add useful value comments about the new major changes
* bugfixes:
  * fix kong container command to bash
  * fix kong.yaml location due to invalid path on image
  * fix obsolete storage values and replaced with persistence instead
  * fix storage mount path by setting the default image persistence path
  * fix auth default mailer URL paths
  * fix kong paths using a possible different DNS search name
  * fix db SSL mode allowed by default
  * fix README instruction applying asset to specific namespace
* add analytic secret
* add analytic kong path
* fix imgproxy service port
* bump chart version
* add functions path to kong
* fix kong services dynamic ports instead of static ports
* fix analytic & imgproxy volumes & volumeMounts generation
* secrets are now built-in inside values.yaml
* previous secrets can still be used by removing the secret values
* bump minor project version for major structure changes
* update README to reflect all of the major changes
* add simple health check for supported services
* fix auth env DB_PORT value
* add cluster role and cluster role binding for vector log collection
* add node name as vector env
* disable vector own log from being collected
* change vector secret to config map
* remove useless copy vector if statement
* update ingress TLS secret name and hosts
* fix migration copy script bash syntax
* add detailed `kubectl get pod` instruction
* add missing `db` persistence default values
* remove deprecated `storage.storage` field
* update `db`, `realtime` environment variables
* update README uninstall instructions
* further simplify example values
* fix password not URL-encoded for db-uri
* fix `db_user` default value for related services
* fix `kong` config depending on enabled services
* fix `db` liveness command
* remove `PG_USER` from `db`
* add chart icon based on supabase GitHub avatar
* fix kong deployment malformed if dashboard is disabled
* fix chart YAML lint errors
@milanvanschaik Added linting & testing GitHub Actions during PR based on helm/chart-testing-action. But to enable automatic deployment to GitHub Pages, an environment and a token must be set up for the workflow:

```yaml
name: Release Charts

on:
  push:
    branches:
      - main

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"

      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1.5.0
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```
Any plans to merge this PR? We really need these fixes and improvements for Supabase on Kubernetes.
I tip my hat to you, sir. Awesome stuff right here.
@milanvanschaik @bigbitbus @kiwicopple @devudopw @pieveee @icrotz Sorry to bother you, but would you have some time to review and merge this? Thanks.
@kiwicopple I could volunteer, although I still need to figure out how the GitHub Actions Helm deployment works, since I'm coming from a GitLab CI/CD workflow.
Hello there @kiwicopple I'd like to offer my help as a volunteer maintainer for this repository. I believe I can contribute effectively based on my experience and skills. Here's a brief overview of my qualifications:
I'm particularly interested in understanding the community's needs and objectives. While maintaining Helm Charts via GitHub Actions is one potential area, I'm open to exploring various ways to assist the project and contribute to its growth. Let's initiate a discussion within the community to align our goals and determine how I can best support our shared objectives. Feel free to reach out if you have any questions or would like to discuss this further. I'm here to collaborate and work together with the community. Thank you for considering my involvement, and I look forward to being part of this collaborative effort.
thanks @arpagon - I've also seen you in the community for a long time now. I've added you to the SRE team. Also feel free to reach out to me on Twitter to discuss the maintainer role more.
* add JWT expiry
* update `db` JWT schema
* update `db` arguments based on docker-compose
* update README with local testing suite
* fix testing conditions
* remove `db` ingress
@drpsyko101 I get an error when using:

```yaml
db:
  enabled: true
  persistence:
    enabled: true
    size: 100Gi
```
You may want to check:

```diff
 spec:
   {{- if .Values.db.persistence.storageClassName }}
   storageClassName: {{ .Values.db.persistence.storageClassName }}
   {{- end }}
   accessModes:
     {{- range .Values.db.persistence.accessModes }}
     - {{ . | quote }}
     {{- end }}
   resources:
     requests:
-      db: {{ .Values.db.persistence.size | quote }}
+      storage: {{ .Values.db.persistence.size | quote }}
```
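For context, Kubernetes only accepts `storage` as the request key under a PVC's `resources.requests`, which is why the diff above renames the `db` key. With that fix applied, a rendered claim might look roughly like this (the resource name and size are illustrative, not the chart's exact output):

```yaml
# Hypothetical rendered PersistentVolumeClaim after the fix above —
# the request key must be `storage`, not the service name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: supabase-db   # illustrative name
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "100Gi"   # from db.persistence.size
```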
* update analytics command and environment variables
* fix persistent volume claim request schema
@mul14 Good catch! This commit drpsyko101@c4b8116 should fix the issue affecting
Hi @drpsyko101 there is a small typo in the values.example.yaml file. The text 'repo' should be changed to 'tag'. There is no latest tag so the example will not work without this.
Thanks @n84ck! It's a typo from drpsyko101@70a3545. Not sure how it got through the chart testing script though. It seems like they've removed the
Hello, I have trouble with analytics.
@nabi-chan It is a known issue at this moment.
I'm still investigating the source of the problem. It might be related to Kubernetes resources rather than the container itself. Edit: Adding missing
@drpsyko101 where are we on this now? Is it pretty much good, or still in dev? @arpagon are you still able to dedicate some time to this, or should someone else take over review duties?
* update image tags
@AntonOfTheWoods I'm actively running it in prod, but it only covers microk8s and minikube environments. More in-depth tests are needed for vector and analytics since I do not touch that part much in my code.

@nabi-chan Fixed the analytics issue when using URL parameters in the JDBC. I can only assume that

Edit: Changing the
Nice @drpsyko101. I see you have an extra commit in your repo for using existing secrets - is that intended for here also (yes please!)?
It looks like a bit of factorisation could reduce the number of lines of repeated code.
```yaml
      securityContext:
        {{- toYaml .Values.analytics.podSecurityContext | nindent 8 }}
      initContainers:
        - name: init-db
```
This is repeated in quite a few places, and probably could be in a couple of other deployments that don't currently have an init container db check. Could this be extracted to the top-level `_helpers.tpl` and `include`d?
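A minimal sketch of that suggestion, assuming a hypothetical named template `supabase.initDbContainer` in `_helpers.tpl` (the template name, wait command, and env wiring are all illustrative, not taken from the chart):

```yaml
# _helpers.tpl — hypothetical shared init container definition
{{- define "supabase.initDbContainer" -}}
- name: init-db
  image: postgres:15-alpine
  imagePullPolicy: IfNotPresent
  # Block until the database answers before starting the main container
  command: ["sh", "-c", "until pg_isready -h $(DB_HOST) -p $(DB_PORT); do sleep 2; done"]
  env:
    - name: DB_HOST
      value: {{ .Values.db.fullnameOverride | default "supabase-db" | quote }}  # assumed value key
    - name: DB_PORT
      value: "5432"
{{- end -}}
```

Each deployment could then render it with `initContainers: {{ include "supabase.initDbContainer" . | nindent 8 }}`, so the wait logic lives in one place.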
You are right. However, there might be slight changes to the analytics `init-db`, since changing `POSTGRES_BACKEND_URL` after initial deployment may cause it to return 404. I'm still trying to figure out where the initial value resides and maybe add an alteration script to the `init-db`.
If you are interested, I can try and adapt the `bitnami` way of doing things to what we have here. Personally, I think it would be worth exploring using the bitnami `postgresql` and `minio` charts as sub-dependencies. While they can be horribly slow updating some stuff, they are pretty good on the highly used charts, like postgres & co. What do you think?
```yaml
          image: postgres:15-alpine
          imagePullPolicy: IfNotPresent
          env:
            - name: DB_HOST
```
I'm a fan of how bitnami does these things, because it makes it easy to refer to external dbs, e.g. https://github.com/bitnami/charts/blob/main/bitnami/supabase/templates/_helpers.tpl#L413C1-L426C12. I am a n00b, but is there a benefit to having `DB_HOST` as an explicit external env var for each submodule -> could there not be an `externalDatabase` like bitnami's (https://github.com/bitnami/charts/blob/main/bitnami/supabase/values.yaml#L3142)?
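The bitnami-style alternative suggested here could look roughly like this in `values.yaml` (all key names are hypothetical, modeled on the linked bitnami chart, and are not part of this chart today):

```yaml
# Hypothetical externalDatabase block — a sketch, not an existing chart value
externalDatabase:
  enabled: false
  host: ""            # e.g. my-postgres.example.com
  port: 5432
  user: supabase_admin
  database: postgres
  existingSecret: ""  # name of a secret holding the password
```

A helper in `_helpers.tpl` would then resolve `DB_HOST` to either the in-chart `db` service or `externalDatabase.host`, so each submodule no longer needs its own explicit value.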
I could add external db values somewhere under `db.external.host`. But as you can see, the main objective of this PR is to make sure all deployments are up-to-date with the official Supabase repo. Any QoL improvements may hopefully come after this PR is merged.
Sure, have you had any signs of life from @arpagon though? It seems he might be too busy, and it might be better to just move forward and try to get things merged later. This already officially qualifies as a super-mega-merge PR, so a few extra large-scale changes probably wouldn't hurt :-). And `bitnami` is a common source of inspiration for charts around the interwebs. So we could continue this on your repo and then try to get that merged later, maybe?
@AntonOfTheWoods Using existing refs still needs a little bit more work on handling custom keys and escaping characters. It will probably be in the next PR, I think.
@drpsyko101
@nabi-chan Apparently changing
Hello @drpsyko101, thanks for your help! Anyway, I restarted all projects, but when I tried I got these logs in analytics. How can I fix it?
* fix storage fails to deploy if minio stub exists
* update postgres-meta image tag
@drpsyko101, I no longer seem to be getting any errors, but I'm not actually getting any logs. With the Docker setup I can see various different logs for the different services, but I don't get any on kube. Can you confirm you are actually seeing logs?

EDIT: it looks like maybe only the aggregator is being installed, and agents are not installed. Or is there some mode where they can work as both?
@AntonOfTheWoods Same here. The request sent to analytics returns an empty array. I'll crosscheck the init script on db with the docker compose later.

@nabi-chan The configuration might still be stuck somewhere, either in the ephemeral storage or in the db. Try clearing the analytics schema from the db and then simply delete the analytics pod.
@drpsyko101 I'm not sure how relevant the compose file is going to be, given straight docker compose, or at least the compose file provided by supabase, isn't meant to be multi-server, and Kubernetes definitely is. My reading of the Vector kube setup is that it requires an agent to be deployed on every host (at least every host that will be hosting the supabase pods). So that would mean ideally a daemonset or similar for agents. Or just studying, or even directly using, the vector.dev provided helm chart as a sub-chart. I will have a play and see if I can get anything working.
I successfully adjusted and installed the helm chart, thanks for it. I have four remaining issues:
Btw.
I fixed that by bumping up the versions of all components to match https://github.com/supabase/cli/blob/c9a0c2972194d51004fe6849619339a17c85f1c8/internal/utils/misc.go (sorry, I don't know which bump actually resolved the issue).
@AntonOfTheWoods I've managed to get the

Edit:
* add helm install notes
* add environment variable support to vector
* update image tags
* update README on vector volume
* fix vector volume error
@drpsyko101 I can see the logs! Hurrah! From what I can tell from https://kubernetes.io/docs/concepts/storage/volumes/#hostpath, it looks like this is pretty much the only usecase where it is considered "valid". There seem to be a few very old tickets in the vector helm repo regarding vector trying to load several

This is good news anyway! I unfortunately no longer have any multi-node clusters, but I suspect that the fact it is using a
Hello @drpsyko101 and @AntonOfTheWoods, I want to extend a sincere thank you to both of you for your incredible contributions. @drpsyko101, your initiative to align the Helm chart with the official Docker compose is incredibly valuable, and @AntonOfTheWoods, your detailed feedback and insights have been instrumental in shaping this update. I apologize for the delay in reviewing this PR. I recognize that the complexity of the changes and my own schedule have created an unexpected bottleneck in the process. Here's how we're going to move forward:
Thank you again for your dedication and patience.
Thanks for your update @arpagon! Just to preface the discussion, I think it is very clear that by FAR the most important issue for current

There is one thing that I would also like to point out about trying to "make this chart like the upstream docker compose". The goals of Kubernetes and docker compose (without Swarm, but who would use that nowadays?) are fundamentally different, and the compose that is provided is definitely not meant to be multi-machine. As a SAAS provider, I can understand Supabase wanting to provide a great compose experience - it works great on a dev machine and for hobbyists, but if you need any sort of reliability, scalability or performance then you'll need to go with their services. I hope we can craft a chart that enables proper scaling across a reasonably-sized kube cluster with heterogeneous services, though I definitely understand that Supabase the company might not be hugely interested in that!

There are many parts of the current chart that look like they are just trying to copy/paste from the compose. While I don't claim to be an expert, the Bitnami charts are typically pretty well respected for what they do. Bitnami is now fully corporate and obviously doesn't have a great demand from paying customers for

Regarding the

One thing that admittedly makes for more verbose charts, but I think is much, much better for explicitly integrating the constraints of the underlying software, is not having obligatory config defined in

I have a project based on

Anyway, if we can get something similar to this merged, then I will definitely be submitting plenty of PRs in the next month or two!
I am totally. I had a bottleneck at the beginning of the year. But I think I am still able to maintain this repo (on a regular basis of <2 weeks-ish).

You are right, this chart should work in a multi-node cluster, and this should be the medium-term objective for this chart. For now, we can integrate this PR and work on a roadmap together with everyone who has engaged in this discussion, wdyt?

I agree; that was my main concern when talking to @kiwicopple. And for me, it was very satisfying to see that in Supabase's vision they really love self-hosting, but it's just not a priority. That is why the work falls on us community contributors.

To be honest, I'm not a big fan of bitnami charts. I mean, I love the work they do and the standardization they have achieved. But I much prefer that each application, in this case Supabase, follows its own path with its own challenges.

Let's discuss everything else in a roadmap thread, okay?
Update images to match official Supabase Docker compose
What kind of change does this PR introduce?

A major update to sync the image tags to the official Docker compose, as well as adding more services such as `imgproxy`, `analytics`, `vector`, and `functions`. Full changelog:

* `db` - `supabase/postgres:15.1.0.147`
* `studio` - `supabase/studio:20240326-5e5586d`
* `auth` - `supabase/gotrue:v2.143.0`
* `rest` - `postgrest/postgrest:v12.0.1`
* `realtime` - `supabase/realtime:v2.27.5`
* `meta` - `supabase/postgres-meta:v0.77.2`
* `storage` - `supabase/storage-api:v0.46.4`
* `imgproxy` (new) - `darthsim/imgproxy:v3.8.0`
* `kong` - `kong:2.8.1`
* `analytics` (new) - `supabase/logflare:1.4.0`
* `vector` (new) - `timberio/vector:0.34.0-alpine`
* `functions` (new) - `supabase/edge-runtime:v1.36.1`
* `minio` (new, disabled) - `minio/minio:latest`
* Secrets are now passed via `spec.template.spec.containers[0].env` instead of using `ConfigMap`
* `ConfigMap` is now reserved for non-sensitive file-based volume mounts
* `db` initialization scripts so as not to override built-in image scripts
* `kong`
* `livenessProbe` & `readinessProbe` added to all deployments
* `PersistentVolumeClaim` is now handled by `<service>.persistence`
* `<service>.storage` has been removed to standardize PVC handling across deployments
* `storage` will fail if `/var/lib/storage` doesn't exist
* `kong` shell doesn't support `pipefail`
* `secrets/jwt`
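For illustration, the `livenessProbe` & `readinessProbe` mentioned in the changelog could, for an HTTP service, look something like this (the path, port, and timings are illustrative, not the chart's actual values):

```yaml
# Illustrative probe configuration for an HTTP service container
livenessProbe:
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
```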
Version compatibility

`0.0.x` to `0.1.x`

* `supabase/postgres` bumped from `14.1` to `15.1`, which warrants backing up all your data before proceeding to update the major version
* `supabase/postgres` has been reworked and matched closely to the Docker Compose version. Further tweaks to the scripts are needed to ensure backward-compatibility.
* `db.config`, which will be mounted at `/docker-entrypoint-initdb.d/migrations/`. Copy your migration files from your local project's `supabase/migration` and populate the `db.config`.
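Based on the migration mechanism described above, a `values.yaml` fragment could look roughly like this (the file name and SQL are hypothetical examples):

```yaml
# Hypothetical db.config fragment — each key becomes a file under
# /docker-entrypoint-initdb.d/migrations/
db:
  config:
    20240101000000_create_todos.sql: |
      create table if not exists public.todos (
        id bigint generated by default as identity primary key,
        task text not null
      );
```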
* `kong` service. This is designed to limit entry to the stack through the secure `kong` service.
* `kong.yaml` has been modified to follow the Docker kong.yaml template.
* `supabase/storage` does not come with a pre-populated `/var/lib/storage`, therefore an `emptyDir` will be created if persistence is disabled. This might be incompatible with the previous version if the persistent storage location is set to a location other than specified above.

Known Issues
* A `/` character in the dashboard username/password may result in the `kong` service failing to initialize.
* Changing the `POSTGRES_BACKEND_URL` value after initial deployment may cause analytics to return 404. A workaround is to simply restart the `db` and/or `analytics` deployment.
* `/api/organizations/default-org-slug/billing/subscription`, `/api/projects/default/billing/addons`, `/api/integrations/default-org-slug?expand=true` return 404
* `vector` requires read access to the `/var/log/pods` directory. When run in a Kubernetes cluster this can be provided with a hostPath volume.
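The `hostPath` access mentioned in the last item might be wired up like this in the `vector` pod spec (volume and container names are illustrative):

```yaml
# Illustrative hostPath mount giving vector read access to pod logs
spec:
  containers:
    - name: vector
      image: timberio/vector:0.34.0-alpine
      volumeMounts:
        - name: pod-logs
          mountPath: /var/log/pods
          readOnly: true
  volumes:
    - name: pod-logs
      hostPath:
        path: /var/log/pods
        type: Directory
```

Note that `hostPath` only exposes logs from the node the pod runs on, which is why a DaemonSet-style agent was discussed earlier in the thread.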