
Error 502 after version upgrade #8041

Closed
plaidshirtakos opened this issue Aug 30, 2019 · 23 comments
Labels: issue/needs-feedback, issue/stale

Comments

@plaidshirtakos

  • Gitea version (or commit ref): Gitea version 1.9.2
  • Git version: git version 2.17.1
  • Operating system: Ubuntu 18.04.3
  • Database (use [x]):
    • PostgreSQL
    • [x] MySQL
    • MSSQL
    • SQLite
  • Can you reproduce the bug at https://try.gitea.io:
    • Yes (provide example URL)
    • No
    • [x] Not relevant
  • Log gist:

Description

I tried to upgrade from version 1.8.3 to 1.9.2. I downloaded the release (https://dl.gitea.io/gitea/1.9.2/gitea-1.9.2-linux-amd64) and replaced the binary in its global location:
sudo cp gitea /usr/local/bin/gitea
Gitea shows as running, but I get a 502 Bad Gateway (nginx/1.14.0, Ubuntu) when trying to reach it.
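For reference, a typical manual upgrade sequence along these lines would be the following sketch (it assumes Gitea runs as a systemd service named gitea, which is an assumption about this setup):

# stop the service before swapping the binary (service name assumed)
sudo systemctl stop gitea
sudo cp gitea /usr/local/bin/gitea
sudo systemctl start gitea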

@plaidshirtakos
Author

It works again when I downgrade from the new Gitea 1.9.2 (built with GNU Make 4.1, go1.12.9 : bindata, sqlite, sqlite_unlock_notify) back to Gitea 1.8.3 (built with go1.12.5 : bindata, sqlite, sqlite_unlock_notify).

@lunny
Member

lunny commented Aug 30, 2019

Any error logs?

@lunny lunny added the issue/needs-feedback label Aug 30, 2019
@plaidshirtakos
Author

I can't see any errors in the log folder. Where should I look?

@lunny
Member

lunny commented Aug 30, 2019

It should be <gitea_home_dir>/logs.

@benzkji

benzkji commented Aug 31, 2019

I have the same problem. The process is still visible, but I'm getting a 502. Fun fact: after a restart it may work for a few minutes, or even an hour.

Last lines from gitea.log:

2019/08/31 18:03:13 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "name" FROM "user" WHERE "id"=$1 LIMIT 1 []interface {}{2}
2019/08/31 18:03:13 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "name" FROM "user" WHERE "id"=$1 LIMIT 1 []interface {}{2}
2019/08/31 18:03:13 .../xorm/session_get.go:99:nocacheGet() [I] [SQL] SELECT "name" FROM "user" WHERE "id"=$1 LIMIT 1 []interface {}{4}
2019/08/31 18:03:13 routers/init.go:106:GlobalInit() [I] SQLite3 Supported
2019/08/31 18:03:13 routers/init.go:37:checkRunMode() [I] Run Mode: Production
2019/08/31 18:03:13 ...xorm/session_find.go:199:noCacheFind() [I] [SQL] SELECT "id", "type", "status", "conflicted_files", "issue_id", "index", "head_repo_id", "base_repo_id", "head_user_name", "head_branch", "base_branch", "merge_base", "has_merged", "merged_commit_id", "merger_id", "merged_unix" FROM "pull_request" WHERE (status = $1) []interface {}{1}
2019/08/31 18:03:13 routers/init.go:115:GlobalInit() [I] SSH server started on :61005. Cipher list ([aes128-ctr aes192-ctr aes256-ctr aes128-gcm@openssh.com arcfour256 arcfour128]), key exchange algorithms ([diffie-hellman-group1-sha1 diffie-hellman-group14-sha1 ecdh-sha2-nistp256 ecdh-sha2-nistp384 ecdh-sha2-nistp521 curve25519-sha256@libssh.org]), MACs ([hmac-sha2-256-etm@openssh.com hmac-sha2-256 hmac-sha1 hmac-sha1-96])
2019/08/31 18:03:13 ...xorm/session_find.go:199:noCacheFind() [I] [SQL] SELECT "id", "repo_id", "hook_id", "uuid", "type", "url", "signature", "payload_content", "http_method", "content_type", "event_type", "is_ssl", "is_delivered", "delivered", "is_succeed", "request_content", "response_content" FROM "hook_task" WHERE (is_delivered=$1) []interface {}{false}
2019/08/31 18:03:13 cmd/web.go:151:runWeb() [I] Listen: http://127.0.0.1:62070
2019/08/31 18:03:13 cmd/web.go:154:runWeb() [I] LFS server enabled
2019/08/31 18:03:13 ...ce/gracehttp/http.go:142:Serve() [I] Serving 127.0.0.1:62070 with pid 11277

@benzkji

benzkji commented Sep 1, 2019

I cannot tell whether it was like this before, but it seems to be consuming far more resources:

bnzk     31104  3.5  0.2 2076512 186668 ?      Ssl  20:14   0:03  .../sites/gitea-live/gitea web -c .../sites/gitea-live/custom/conf/app.ini

There are only two of us using it, so traffic is very low, yet CPU usage sits at a constant 3.5% on a large server. One possible factor is the server upgrade to Debian 10, but I really have no idea how that would cause this.

@lunny, how can we activate a debug log? I have RUN_MODE = dev, but I see no additional logs.

@plaidshirtakos
Author

@lunny: I can't find any relevant log entries from that period.

@benzkji

benzkji commented Sep 3, 2019

I will dig into the LOG section here: https://docs.gitea.io/en-us/config-cheat-sheet/

:-)=
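For reference, a minimal sketch of that [log] section (key names taken from the cheat sheet, but verify them for your version; the path is a placeholder):

[log]
; MODE can be console, file, etc.; LEVEL set to Debug for more verbose logs
MODE      = file
LEVEL     = Debug
ROOT_PATH = /var/lib/gitea/log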

@shimunn

shimunn commented Sep 4, 2019

Same here, using the gitea:1.9 image with MySQL. HTTP returns a 502, and SSH tells me

fatal: Could not read from remote repository.

No log or error messages are produced.

@benzkji

benzkji commented Sep 4, 2019

It's all very vague. I understand that no diagnosis is possible this way. All I can tell is that resource usage went up considerably. Are there any unit tests that check for such scenarios? In my case, the problem seems to be gone after I deactivated another systemd service that had gone wild... still very vague.

@sapk
Member

sapk commented Sep 4, 2019

@benzkji For high resource usage, it may be the bleve index being rebuilt for code search, if that is enabled. You can try disabling it and restarting, or just wait for the rebuild to finish.

@shimunn

shimunn commented Sep 4, 2019

Very strange: when I start Gitea via Docker and systemd, as I have for the last year, it gets no further than starting the SSH server. But if I then enter the container via docker exec and run /app/gitea/gitea, the web interface starts working too.

It looks like gitea web just keeps crashing; note the changing PID between two runs of the same command:

bash-5.0# ps uax | grep git | grep -v grep
   20 root      0:00 s6-supervise gitea
 1180 git       0:00 /app/gitea/gitea web
bash-5.0# ps uax | grep git | grep -v grep
   20 root      0:00 s6-supervise gitea
 1199 git       0:00 /app/gitea/gitea web

@sapk
Member

sapk commented Sep 4, 2019

@shimunn It doesn't seem to be the same issue. To help you with your specific case, please open another issue and give us more detail about your problem, such as:

  • previous version
  • new version
  • some logs
  • your docker run config (compose file? see the minimal sketch below)
  • your gitea config (app.ini)

Please remove any confidential information from those details before sharing them.

The docker exec context is not the same as the startup one, so you may not be loading the same configuration.
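For reference, the kind of compose file meant above might look like this minimal sketch (the image tag, ports, and volume paths are placeholders, not this user's actual config):

version: "2"
services:
  gitea:
    # same major version as reported in the issue
    image: gitea/gitea:1.9
    restart: always
    ports:
      - "3000:3000"   # web UI
      - "222:22"      # SSH
    volumes:
      - ./gitea:/data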

@benzkji

benzkji commented Sep 5, 2019

@sapk Is the indexing/code search something that was introduced in 1.9? Thanks for the explanations.

@sapk
Member

sapk commented Sep 5, 2019

I am not the biggest expert on the code indexer, but from what I know it was present at least in 1.8.0, though limited, and was reworked later.
The problem comes from the data schema changing between 1.8.0 and 1.9.0: the indexer needs a full rebuild of the index on each schema change.
This can take time and resources and can be problematic on limited platforms.
After the complete rebuild of the index, it should run fine even on a limited platform.

For details, there is a PR that makes it possible to skip rebuilding the index at migration and do it manually later: #7753

@benzkji

benzkji commented Sep 5, 2019

Ahh, thanks a lot! Now it's a constant 0.3% CPU, which is OK for me, though I wonder where that 0.3% goes; my Django projects are all at 0.0% when idle. Just in case you have an idea. I'm more than happy now, everything works, and I learned something about what goes on behind the scenes... :-)

@sapk
Member

sapk commented Sep 5, 2019

I can't say exactly, but Gitea can produce pprof CPU and memory profiles. You could use those options to find out what is taking CPU time. The options are not well documented (#6240), but you can find all the information in #4560. After the profile files are generated, you can use the standard go tool pprof to read them.
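Once a profile file has been generated, reading it with the standard Go tooling looks roughly like this (the file name is a placeholder; see the linked issues for how to make Gitea write the profiles):

# list the functions consuming the most CPU time
go tool pprof -top ./cpuprofile.pprof

# or explore the profile interactively (top, list, web, ...)
go tool pprof ./cpuprofile.pprof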

@benzkji

benzkji commented Sep 5, 2019

thanks even more!

@guillep2k
Member

> I am not the biggest expert on the code indexer, but from what I know it was present at least in 1.8.0, though limited, and was reworked later.
> The problem comes from the data schema changing between 1.8.0 and 1.9.0: the indexer needs a full rebuild of the index on each schema change.
> This can take time and resources and can be problematic on limited platforms.
> After the complete rebuild of the index, it should run fine even on a limited platform.
>
> For details, there is a PR that makes it possible to skip rebuilding the index at migration and do it manually later: #7753

@sapk Yes, that's correct. And it will happen again when upgrading to 1.10.0. Perhaps we should include some kind of warning in the upgrade guide?

@plaidshirtakos
Author

Thanks for the explanation, but what is the current solution for this?

@benzkji

benzkji commented Sep 9, 2019

I don't know exactly. From what I can tell, rebuilding the index takes a lot of resources; whether that causes the 502 is not at all verified. Sorry, it looks like I was taking over your issue...

@guillep2k
Member

> Thanks for the explanation, but what is the current solution for this?

Just upgrade Gitea during a quiet part of the day, perhaps blocking access, and leave it rebuilding the index (the log should calm down once it finishes). It shouldn't take too long: on our medium-sized VM (the server is at least 8 years old), it took around 5~10 seconds for every 100 MB of repository data.

Alternatively, you could disable repository indexing while upgrading:

[indexer]
REPO_INDEXER_ENABLED = false

And remove the directory pointed to by REPO_INDEXER_PATH along with its contents. When you restart your instance, there should be no performance penalty associated with repository indexing (of course, there will be no repository indexes either).
Then you can choose the right moment to enable it again.
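For example, the cleanup step might look like this (the path is only a placeholder; use whatever REPO_INDEXER_PATH points to in your app.ini):

# remove the repo index so it is not rebuilt on restart (path is a placeholder)
rm -rf /var/lib/gitea/indexers/repos.bleve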

@stale

stale bot commented Nov 8, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs during the next 2 weeks. Thank you for your contributions.

@stale stale bot added the issue/stale label Nov 8, 2019
@go-gitea go-gitea locked and limited conversation to collaborators Nov 24, 2020