
"no such table: tasks" when adding a new item #666

Closed
Azhelor opened this issue Nov 16, 2024 · 17 comments
Labels
bug Something isn't working

Comments


Azhelor commented Nov 16, 2024

Describe the Bug

Hi,

I just installed the latest version of Hoarder (v0.19.0) following the steps here: https://docs.hoarder.app/Installation/docker/

Installation, sign up and log in worked just fine, but when I try to add a new item from the home page, I get a red alert saying "no such table: tasks". When I refresh the page, though, the item appears.

By the way, I got a similar error when trying to delete an item, with a red alert box saying "Something went wrong. There was a problem with your request." I mention that because I assume both errors could be linked.

I found another bug report with the same error, and my queue.db is also empty. This might come from a database migration error, but I have no idea how to fix that since I'm pretty new to all of this.

Thanks for your help.

Steps to Reproduce

  1. Install a brand new Hoarder with docker compose
  2. Create user and log in
  3. Try to add a new item

Expected Behaviour

I guess the item should be immediately added and visible, and no error should be shown.

Screenshots or Additional Context

No response

Device Details

No response

Exact Hoarder Version

v0.19.0

@MohamedBassem (Collaborator)

Hey, can you try deleting queue.db and restarting the container? That should recreate it.
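For anyone following along, the suggested reset can be sketched with the compose CLI (a sketch, not an official procedure: the service name `web` comes from the compose file later in this thread, and the host path to queue.db depends on where your `DATA_DIR` volume lives):

```shell
# Stop the web service, remove the stale queue database, and restart.
# Adjust the path to wherever your data volume is mounted on the host.
docker compose stop web
rm /path/to/data/queue.db
docker compose up -d web
```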

@MohamedBassem MohamedBassem added the question Further information is requested label Nov 16, 2024

Azhelor commented Nov 17, 2024

Hi, thanks for your answer. I tried that fix, but unfortunately it did not work. queue.db was indeed recreated but is still empty. I don't know if it's important, but the database was not recreated during the container restart, only after I tried to add a new item.

@stayupthetree

I am getting this error as well on a fresh install. I tried removing queue.db and the result is the same.

@MohamedBassem (Collaborator)

Hmmm, that's unexpected. The queue.db file should be created on container startup. Can you share the logs immediately after starting the container?
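The startup logs being asked for can be captured with the compose CLI (a sketch; the `web` service name assumes the compose file from the installation doc):

```shell
# Restart the web service and save its logs from startup.
docker compose restart web
docker compose logs web > web-startup.log
```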

@MohamedBassem (Collaborator)

Can you also share your compose file and redacted env file?


Azhelor commented Nov 18, 2024

Sure. I deleted queue.db once again and here are the logs of "hoarder-web-1" and "hoarder-chrome-1" (I don't know which one is relevant here).

hoarder-web-1:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service init-db-migration: starting
Running db migration script
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-db-migration successfully started
s6-rc: info: service svc-workers: starting
s6-rc: info: service svc-web: starting
s6-rc: info: service svc-workers successfully started
s6-rc: info: service svc-web successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
  ▲ Next.js 14.2.13
  - Local:        http://localhost:3000
  - Network:      http://0.0.0.0:3000

 ✓ Starting...
 ✓ Ready in 403ms

hoarder-chrome-1:

[1118/212522.742137:ERROR:bus.cc(407)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[1118/212522.743160:ERROR:bus.cc(407)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[1118/212522.743311:ERROR:bus.cc(407)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[1118/212522.743528:WARNING:dns_config_service_linux.cc(427)] Failed to read DnsConfig.
[1118/212522.755910:INFO:policy_logger.cc(145)] :components/policy/core/common/config_dir_policy_loader.cc(118) Skipping mandatory platform policies because no policy file was found at: /etc/chromium/policies/managed
[1118/212522.755935:INFO:policy_logger.cc(145)] :components/policy/core/common/config_dir_policy_loader.cc(118) Skipping recommended platform policies because no policy file was found at: /etc/chromium/policies/recommended

DevTools listening on ws://0.0.0.0:9222/devtools/browser/bdfa091e-772d-4305-ac47-65a31c16276c
[1118/212522.760146:WARNING:bluez_dbus_manager.cc(248)] Floss manager not present, cannot set Floss enable/disable.
[1118/212522.781625:WARNING:sandbox_linux.cc(418)] InitializeSandbox() called with multiple threads in process gpu-process.
[1118/212522.845023:WARNING:dns_config_service_linux.cc(427)] Failed to read DnsConfig.

Once again, I don't know if it's important, but a few seconds after the restart I get this error several times in the hoarder-web-1 container:

/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21609
    throw new Error(
          ^

Error: Error when performing the request to https://registry.npmjs.org/pnpm/latest; for troubleshooting help, see https://github.com/nodejs/corepack#troubleshooting
    at fetch (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21609:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async fetchAsJson (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21623:20)
    ... 4 lines matching cause stack trace ...
    at async Object.runMain (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:23096:5) {
  [cause]: TypeError: fetch failed
      at node:internal/deps/undici/undici:13392:13
      at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
      at async fetch (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21603:16)
      at async fetchAsJson (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21623:20)
      at async fetchLatestStableVersion (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21550:20)
      at async fetchLatestStableVersion2 (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21672:14)
      at async Engine.getDefaultVersion (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:22292:23)
      at async Engine.executePackageManagerRequest (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:22390:47)
      at async Object.runMain (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:23096:5) {
    [cause]: Error: getaddrinfo EAI_AGAIN registry.npmjs.org
        at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26) {
      errno: -3001,
      code: 'EAI_AGAIN',
      syscall: 'getaddrinfo',
      hostname: 'registry.npmjs.org'
    }
  }
}

Node.js v22.11.0

Here is docker-compose.yml (I didn't change anything from the file linked in the installation doc):

version: "3.8"
services:
  web:
    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
    restart: unless-stopped
    volumes:
      - data:/data
    ports:
      - 3000:3000
    env_file:
      - .env
    environment:
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      # OPENAI_API_KEY: ...
      DATA_DIR: /data
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --disable-dev-shm-usage
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
      - --hide-scrollbars
  meilisearch:
    image: getmeili/meilisearch:v1.11.1
    restart: unless-stopped
    env_file:
      - .env
    environment:
      MEILI_NO_ANALYTICS: "true"
    volumes:
      - meilisearch:/meili_data

volumes:
  meilisearch:
  data:

And here is .env (I just removed the NEXTAUTH_URL value and both secret keys):

HOARDER_VERSION=release
NEXTAUTH_SECRET=#secret1
MEILI_MASTER_KEY=#secret2
NEXTAUTH_URL=#nexturl

At this point, the container has been up for a few minutes, I have not tried to connect to Hoarder, and queue.db is still missing from the data directory.

@MohamedBassem (Collaborator)

Yeah, clearly there's something wrong here. There are no logs coming from the worker job, which explains why the queue db was not getting initialized. The corepack errors do seem relevant. I'll need to debug this further, but I think I have some idea of what might be going wrong here.

@MohamedBassem MohamedBassem added bug Something isn't working and removed question Further information is requested labels Nov 18, 2024
@MohamedBassem (Collaborator)

I have a repro, will send a fix.

@MohamedBassem (Collaborator)

Ok, I managed to fix it in ae78ef5. If you can't wait for the next release, you can use the nightly image by changing HOARDER_VERSION from release to latest.
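For anyone landing here later, switching to the nightly image is a one-line change in the .env file shown earlier in the thread, followed by a re-pull (a sketch of the suggested workaround):

```shell
# In .env, change the image tag:
#   HOARDER_VERSION=latest   # was: release
# Then pull the nightly image and recreate the containers:
docker compose pull
docker compose up -d
```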


Azhelor commented Nov 19, 2024

Just tried with the latest version and it works perfectly fine. Thank you very much!


stayupthetree commented Nov 19, 2024

Tested with latest and the error persists. My docker compose is below. Worth noting that I also tried the latest docker-compose.yml from here, wiping out everything and starting fresh.

services:
  web:
    image: ghcr.io/hoarder-app/hoarder-web:${HOARDER_VERSION:-release}
    container_name: web
    network_mode: speedforce
    restart: unless-stopped
    volumes:
      - /mnt/user/appdata/hoarder/data:/data
    ports:
      - 3111:3000
    env_file:
      - .env
    environment:
      REDIS_HOST: redis
      MEILI_ADDR: http://meilisearch:7700
      DATA_DIR: /data
    labels:
      traefik.enable: "true"
      traefik.http.routers.web.entrypoints: http
      traefik.http.routers.web.rule: Host(`[REDACTED]`)
      traefik.http.middlewares.web-https-redirect.redirectscheme.scheme: https
      traefik.http.routers.web.middlewares: web-https-redirect
      traefik.http.routers.web-secure.entrypoints: https
      traefik.http.routers.web-secure.rule: Host(`[REDACTED]`)
      traefik.http.routers.web-secure.tls: "true"
      traefik.http.routers.web-secure.service: web
      traefik.http.services.web.loadbalancer.server.port: "3000"
      traefik.docker.network: speedforce
      kuma.mygroup.group.name: Hoarder
      kuma.web.http.name: Web
      kuma.web.docker.docker_container: web
      kuma.web.docker.name: web
      kuma.web.docker.interval: 60
      kuma.web.docker_host: unix:///var/run/docker.sock
      kuma.web.http.url: http://web:3000
      kuma.web.http.interval: 60
      kuma.web.http.max_redirects: 5
      kuma.web.http.status_code: 200
  redis:
    image: redis:7.2-alpine
    container_name: redis
    network_mode: speedforce
    restart: unless-stopped
    volumes:
      - redis:/data
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    container_name: chrome
    network_mode: speedforce
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --disable-dev-shm-usage
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
      - --hide-scrollbars
  meilisearch:
    image: getmeili/meilisearch:v1.11.1
    container_name: meilisearch
    network_mode: speedforce
    restart: unless-stopped
    env_file:
      - .env
    environment:
      MEILI_NO_ANALYTICS: "true"
    volumes:
      - /mnt/user/appdata/hoarder/meilisearch:/meili_data
  workers:
    image: ghcr.io/hoarder-app/hoarder-workers:${HOARDER_VERSION:-release}
    container_name: workers
    network_mode: speedforce
    restart: unless-stopped
    volumes:
      - /mnt/user/appdata/hoarder/:/data
    env_file:
      - .env
    environment:
      REDIS_HOST: redis
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      DATA_DIR: /data
      OPENAI_API_KEY: [REDACTED]
    depends_on:
      web:
        condition: service_started
volumes:
  redis: null
  meilisearch: null
  data: null
networks:
  speedforce:
    external: true

@MohamedBassem (Collaborator)

@stayupthetree did you try the nightly build? Also can you share the logs of the web container on startup?

@stayupthetree

web-2024-11-19T23-09-11.log

The error doesn't appear until I try to add a bookmark or go to the admin settings.
My .env:

HOARDER_VERSION=latest
NEXTAUTH_SECRET=redacted
MEILI_MASTER_KEY=redacted
NEXTAUTH_URL=http://localhost:3000
CRAWLER_FULL_PAGE_SCREENSHOT=true
CRAWLER_FULL_PAGE_ARCHIVE=true
CRAWLER_VIDEO_DOWNLOAD=true
CRAWLER_VIDEO_DOWNLOAD_MAX_SIZE=-1

@MohamedBassem (Collaborator)

Ah, found the problem. You're using the 'hoarder-app/hoarder-web' image when you should be using 'hoarder-app/hoarder' instead. The hoarder-web image doesn't contain the workers, so it doesn't run the db migration.

@MohamedBassem (Collaborator)

Actually, you seem to be using the old docker compose, and some things have changed since then; check the upgrade guide for version 0.16. We don't need redis anymore, and we don't need a separate container for the workers.

@MohamedBassem (Collaborator)

And the main problem in your old docker compose is that the web and workers containers have different mount paths for /data, while they should both have the same one.

You have /mnt/user/appdata/hoarder/data:/data for web and /mnt/user/appdata/hoarder/:/data for workers, which is incorrect. Fixing that should be enough to fix your problem. However, I still recommend upgrading to the new docker compose.
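In the old compose layout, the mount-path fix described here would look like this (a fragment showing only the changed lines, with host paths copied from the compose file above):

```yaml
services:
  web:
    volumes:
      - /mnt/user/appdata/hoarder/data:/data
  workers:
    volumes:
      # was: /mnt/user/appdata/hoarder/:/data — must match the web mount
      - /mnt/user/appdata/hoarder/data:/data
```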

@stayupthetree

Nailed it! Apologies for missing the upgrade guide! Fully upgraded to the new docker compose and it seems to be working splendidly. Thank you sir!
