
[Bug]: On each redeploy new volume is created #2376

Open
VCasecnikovs opened this issue Jun 7, 2024 · 24 comments
Labels
🐞 Confirmed Bug (verified issues that have been reproduced by the team)

Comments

@VCasecnikovs

Description

On each redeploy of an app, a new volume is created.
I use a Docker Compose app with this volume definition:

volumes:
  store:
    driver: local

It creates a new volume for each redeploy:
Screenshot 2024-06-07 at 14 53 28

The volume does not appear under Storages:
Screenshot 2024-06-07 at 14 54 37

Minimal Reproduction (if possible, example repository)

Create a Docker Compose service with a volume, then redeploy it.
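
A minimal sketch of a compose file that reproduces this, based on the snippets in this issue (the image is a placeholder; any service definition works):

services:
  app:
    image: nginx:alpine   # placeholder image
    volumes:
      - 'store:/store'

volumes:
  store:
    driver: local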

Exception or Error

No response

Version

v4.0.0-beta.294 Latest

@VCasecnikovs
Author

This is how the store volume is mounted:
volumes:
- 'store:/store'

@andrasbacsai
Member

I found the issue (and it will be fixed in the upcoming version).

If you define the top-level volumes like you did:

volumes:
  store:
    driver: local

Coolify does not modify this value (since you defined this part yourself, it assumes you know what you are doing). This is a "bug" because Docker then derives the volume name from the current directory, which is different for each deployment. That is why you end up with different volumes.
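
For context, this is standard Compose behavior: without an explicit name, a top-level volume is prefixed with the Compose project name, which defaults to the directory name. A sketch of the effect (the directory names here are hypothetical):

# deployed from /artifacts/abc123  -> Docker creates volume "abc123_store"
# redeployed from /artifacts/def456 -> Docker creates volume "def456_store"
volumes:
  store:
    driver: local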

In the next version, Coolify will check whether you defined the name property and, if not, set it, so the name won't be randomly generated.

The storage view is expected to be empty, as everything is hardcoded.

I plan to improve this part to be similar to the services view.

@VCasecnikovs
Author

Thank you. So to fix it right now, I should set the name value on the store volume?

Member

Yes.
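
For reference, a sketch of that workaround applied to the volume from this issue (note the comments below about what a fixed name means when the same file is deployed more than once):

volumes:
  store:
    name: store   # explicit name, so Docker no longer prefixes it with the project directory
    driver: local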

@itsUndefined

What if we deploy two Docker Compose projects with the same volume name? That shouldn't be an issue, but it is.

@devjume

devjume commented Jun 30, 2024

I noticed a similar bug that relates to this, so I won't open a new issue yet.

Issue / bug:
Coolify is not adding a dynamic name to the volumes.
This creates problems when deploying multiple services with identical Docker Compose files on a single server, as they all attempt to use the same volume.

The documentation says each volume should get a dynamic name, but that is not the case (at least when deploying with Docker Compose):
"To prevent storage overlapping between resources, Coolify automatically adds the resource’s UUID to the volume name."
Persistent Storage - Coolify Docs

Example of a docker-compose.yaml where this happens.
When deploying it twice, both pocketbase containers try to use the same volume pb_data:

services:
  sveltekit:
    build:
      context: ./sveltekit
      dockerfile: Dockerfile
    environment:
      NODE_ENV: production
    depends_on:
      - pocketbase

  pocketbase:
    build:
      context: ./pocketbase
      dockerfile: Dockerfile
    environment:
      GO_ENV: production
    volumes:
      - pb_data:/home/nonroot/app/pb_data

When creating a new resource from the docker-compose file above, Coolify just appends the following to it, with no dynamic tag:

volumes:
  pb_data:
    name: pb_data
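
Per the documented behavior quoted above, one would instead expect Coolify to generate something like this (the UUID prefix is hypothetical, mirroring the format reported later in this thread):

volumes:
  pb_data:
    name: acks0gs-pb_data   # hypothetical <resource-uuid>-pb_data, per the docs quoted above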

@OmkoBass

OmkoBass commented Jul 7, 2024

I still can't fix this. The volumes are being lost on each redeploy.

My docker-compose.yaml volumes are:

volumes:
  project_db_data:
    name: project_db_data

@gardenbaum

gardenbaum commented Jul 9, 2024

I have the same problem. Some templates, like Dragonfly, let me set a volume name, and the data is persistent across restarts. Other templates or pure Docker Compose deployments put a random string in front of the volume name. I tried setting volume names in the docker compose, but that did not work.

volumes:
  gitea_data:
    driver: local
  gitea_config:
    driver: local
  postgresql_data:
    driver: local

or

volumes:
  gitea-data:
    name: gitea_data
    driver: local
  gitea-timezone:
    name: gitea_timezone
    driver: local
  gitea-localtime:
    name: gitea_localtime
    driver: local
  postgresql-data:
    name: postgresql_data
    driver: local

When I pull a new image and restart the containers, I get a new random string in front of the volume names and the old volume is lost.

@OmkoBass

OmkoBass commented Jul 10, 2024

This is a huge problem; Docker Compose builds cannot be used at all. I hope we find a solution soon. If I find anything, I'll update here.

@andrasbacsai added the 🐛 Bug (reported issues that need to be reproduced by the team) and 🚧 Next (issues and PRs planned for the next release) labels Jul 10, 2024, with Linear
Member

Can you please check again with the latest version? I have added a few fixes.

You need to create your application again to test it.

@andrasbacsai removed the 🚧 Next (issues and PRs planned for the next release) label Jul 15, 2024
@andrasbacsai added the 💤 Waiting for feedback (issues awaiting a response from the author) and 🚧 Next (issues and PRs planned for the next release) labels Jul 15, 2024, with Linear
@OmkoBass

Hey Andras,

  • I deleted the entire application and recreated it.
  • Added some data to the database.
  • Went back to Coolify and pressed redeploy.
  • The data is missing now.

This is my database service inside the docker-compose file:

  db:
    image: bitnami/postgresql:latest
    platform: linux/amd64
    restart: always
    volumes:
      - db_data:/bitnami/postgresql
    ports:
      - ${POSTGRESQL_PORT}:5432
    environment:
      - POSTGRESQL_DATABASE=${POSTGRESQL_DATABASE}
      - POSTGRESQL_USERNAME=${POSTGRESQL_USERNAME}
      - POSTGRESQL_PASSWORD=${POSTGRESQL_PASSWORD}
    logging: *default-logging

volumes:
  db_data:
    name: 'db_data'
    driver: local

The problem still persists for me. Can someone else check, just so we're sure it's not on my side?

Member

I cannot replicate the issue on the latest v315 version.

@OmkoBass

How did you test it?

Can you give me the docker-compose.yaml file that you used for testing?

@mukhtharcm

@OmkoBass did you get any resolution?

@OmkoBass

No, but I believe Andras fixed it; he probably knows his own software better than me 😅
I'm a bit busy now with other projects. I'll try again soon and get back to you if I find a solution.

@mukhtharcm

mukhtharcm commented Jul 25, 2024 via email

@shashank-sharma

Would it be possible to persist the volume name if I deploy a repository as Docker Compose?
Use case: I have an SSD attached to my Raspberry Pi, and I want the volume to be /home/<user>/ssd/services/data, but every time Coolify picks up the compose file, it adds some UUID as a prefix.

Example: the Docker Compose I am testing now, which also has the volume name defined.

Actual (Coolify): 'acks0gs-uptime-kuma:/app/data'
Expected: 'uptime-kuma:/app/data'
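
One way to pin data to a fixed host path regardless of volume renaming is a bind mount, which a later comment in this thread shows Coolify passing through unchanged. A sketch (the service definition is assumed, not taken from the commenter's setup):

services:
  uptime-kuma:
    image: louislam/uptime-kuma
    volumes:
      - '/home/<user>/ssd/services/data:/app/data'   # host path, so there is no volume name for Coolify to prefix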

@VitalyKolesnikov

VitalyKolesnikov commented Aug 23, 2024

I have the same problem. I haven't figured out exactly how it reproduces yet, but every day my data is lost after some redeploy. I use Postgres and docker-compose with a volume.

@andrasbacsai removed the 🚧 Next (issues and PRs planned for the next release) label Sep 12, 2024
@peaklabs-dev removed the 💤 Waiting for feedback (issues awaiting a response from the author) label Sep 26, 2024
@renatoaraujoc

renatoaraujoc commented Oct 19, 2024

@andrasbacsai I can confirm that predefined volumes aren't working; I'm on version v4.0.0-beta.360.

I'm trying to deploy the predefined Grafana with Prometheus app, but I want to use my own predefined volumes and not the auto-generated ones.

Screenshots are as follows:
Screenshot 2024-10-19 at 11 49 42

Resolved docker-compose for grafana service:
Screenshot 2024-10-19 at 11 52 22

Resolved volumes for this docker-compose:
Screenshot 2024-10-19 at 11 50 58

external: true does not work either:
Screenshot 2024-10-19 at 12 00 36

UI:
Screenshot 2024-10-19 at 11 52 34

Am I doing something wrong?

EDIT

I found a hackish way to make this work! The problem is at this line:

https://github.com/coollabsio/coolify/blob/5d62a46a16f252e6ece4d25fe56838e99883d91b/bootstrap/helpers/shared.php#L1767C51-L1767C62

From that code, when Coolify sees "driver_opts.type" as "cifs", it just returns the volume as is and excludes any dynamically created volume, so the following will work (you must also mark the volume as external!):

volumes:
  internal-grafana:
    name: internal-grafana
    external: true
    driver_opts:
      type: cifs

Will resolve to:

services:
  grafana:
    image: grafana/grafana-oss
    volumes:
      - 'internal-grafana:/var/lib/grafana' # the abcde1234-internal-grafana:/var/lib/grafana will disappear!
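
Note that with external: true, Docker Compose does not create the volume itself; it must already exist on the server (for example, created beforehand with docker volume create internal-grafana), otherwise the deployment fails because the external volume cannot be found.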

@andnig

andnig commented Oct 22, 2024

@renatoaraujoc Wow, thanks for this find. This solves the issue for me.

EDIT: Forget what I said, I was just being careless. I changed the Postgres image of one of my services to make use of backups, and in the process did not update the volume mapping path. Setting pg-data:/var/lib/postgresql/data solved the issue.
Layer 8 (user) problem 😆

@renatoaraujoc

Hey @andnig,

Wouldn't setting pg-data:/var/lib/postgresql/data in a docker-compose file in Coolify forcefully create a dynamic volume? I thought you wanted to use a custom-defined volume; that's kind of impossible today unless you set coolify.managed=false, which is not desirable.

@rouuuge

rouuuge commented Oct 24, 2024

Just found a way to reproduce the issue for a Postgres service:

If you add a volume mount like:
Image

the docker-compose.yml has only this entry (postgres-db is missing):

volumes:
  - '/data/coolify/postgresql/certs:/etc/postgresql/certs'

If I remove the mapping, the volume is created correctly:

volumes:
  - 'postgres-db:/var/lib/postgresql/data'
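
Presumably the generated file should contain both entries instead, along the lines of:

volumes:
  - '/data/coolify/postgresql/certs:/etc/postgresql/certs'
  - 'postgres-db:/var/lib/postgresql/data'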

@peyloride

peyloride commented Nov 27, 2024

I'm facing a very similar issue. I have a Rails app (Rails 8), which comes with SQLite as the default for production. In order to have automated SQLite backups, I'm trying to configure Duplicati, but I need to tell the Duplicati container that my DB files are in another volume.

I can't do that, because when I edit the compose file, Coolify adds the project's prefix to the volume, and I can never reach the desired volume. The cifs trick doesn't work, and since it's only a Rails container, I don't have a backup tab either.

I think we need some kind of option to tell Coolify not to add prefixes. I can't find a solution to this for now.

@peaklabs-dev added the 🐞 Confirmed Bug (verified issues that have been reproduced by the team) label and removed the 🐛 Bug (reported issues that need to be reproduced by the team) label Dec 2, 2024
@peaklabs-dev added this to the v4.0.0 Stable Release milestone Dec 2, 2024
@peaklabs-dev
Member

Developer note (for confirmed bugs).
Reproduction steps:

  1. Follow these steps: [Bug]: On each redeploy new volume is created #2376 (comment)
