
Upstream prematurely closed connection while reading response header from upstream #557

Closed
TC74 opened this issue Feb 6, 2024 · 23 comments
Labels
bug Something isn't working

Comments

@TC74

TC74 commented Feb 6, 2024

What happened?

Need help. I'm facing this Nginx error log: "...upstream prematurely closed connection while reading response header from upstream, client: 112.215.65.47, server: xxxxxxxxxx, request: "POST /livewire/update HTTP/2.0", upstream: "http://127.0.0.1:8000/livewire/update"

  • Deployed using PLOI
  • Ubuntu 22.04
  • Laravel Octane (FrankenPHP)
  • Cloudflare
  • HTTPS only

Has anyone experienced this, or can you point me to where to look for a resolution?

Build Type

Custom (tell us more in the description)

Worker Mode

Yes

Operating System

GNU/Linux

CPU Architecture

x86_64

Relevant log output

...upstream prematurely closed connection while reading response header from upstream, client: 112.215.65.47, server: xxxxxxxxxx, request: "POST /livewire/update HTTP/2.0", upstream: "http://127.0.0.1:8000/livewire/update
@TC74 added the bug label Feb 6, 2024
@withinboredom
Collaborator

Do you have logs from frankenphp?

@meeftah

meeftah commented Feb 7, 2024

Do you have logs from frankenphp?

Do you know where the FrankenPHP log is located? I've googled and didn't find any clue...

@withinboredom
Collaborator

It's usually in regular output, so it largely depends on how you are running it. If you are running it via systemd, then it will be with all your other logs (journalctl). If you are running it in a docker container, then it will be where logs are for docker containers.
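For example, a quick way to tail that output (a sketch; the systemd unit name frankenphp and the container name app are placeholders, adjust to your setup):

# systemd-managed FrankenPHP / Octane: follow the unit's journal
journalctl -u frankenphp -f

# Docker: follow the container's stdout/stderr
docker logs -f app

# if you don't know the unit name, search the whole journal
journalctl -f | grep -i -E 'frankenphp|octane'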

@meeftah

meeftah commented Feb 8, 2024

[screenshot: photo_2024-02-08_10-08-03]

When I ran Laravel Octane (FrankenPHP) with --log-level=debug, I stumbled on this output. Is there any important information in there?

@indigoram89

I have the same error with Laravel Forge and Octane (FrankenPHP).

@dunglas
Owner

dunglas commented Feb 11, 2024

Is this endpoint a long-running connection (SSE or something like that)? If yes, this is likely a timeout issue. Is the PHP timeout disabled? The Caddy timeout should be disabled by default.
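As a quick sanity check of the PHP side (a sketch; the worker may load a different php.ini than the CLI, so treat this as a first approximation):

# print the timeout-related ini values PHP sees; 0 means "no limit"
php -r 'var_dump(ini_get("max_execution_time"), ini_get("default_socket_timeout"));'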

@TC74
Author

TC74 commented Feb 11, 2024 via email

@dunglas
Owner

dunglas commented Feb 11, 2024

According to the logs, the error is on POST /livewire/update, it doesn't look like a login page.

@meeftah

meeftah commented Feb 14, 2024

According to the logs, the error is on POST /livewire/update, it doesn't look like a login page.

Yes, the error is on 'POST /livewire/update'; it's a Laravel app using Livewire, but I'm not really sure what that entails... I've googled the error but haven't found anyone with the same problem. Locally I've installed the app on Laradock (which runs inside Docker) and FrankenPHP runs well there, but not on the server (PLOI). I hope this information is useful for you...

The only error I could find is the one described in the relevant log output above.

@indigoram89

I wrote to the Forge support team. They tested my application on the server and this is their answer:

It appears like the FrankenPHP server crashes whenever it receives a request. It does so without writing anything to the log files. You'll need to follow the issue reported on the FrankenPHP repo.
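One way to catch a silent crash like that is to run the server in the foreground with debug logging and keep everything it prints, similar to what meeftah did above (a sketch; the exact Octane command and flags may differ between Octane versions):

# run the FrankenPHP-based Octane server in the foreground, capture all output
php artisan octane:start --server=frankenphp --log-level=debug 2>&1 | tee octane.log

# if the process still dies without output, the kernel log often records it
dmesg | grep -iE 'segfault|killed process'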

@indigoram89

I also use Livewire

@dunglas
Owner

dunglas commented Feb 27, 2024

Would you be able to give me access to the crashing code, even privately?

@garinichsan

I also encountered this problem when deploying my app using the FrankenPHP image as a base image in Kubernetes. It keeps restarting after receiving a request, but it works fine when I run the image using docker-compose on my server.

Here is the log:

2024-03-06 16:43:34.136 ICT
{level: info, msg: FrankenPHP started 🐘, php_version: 8.2.16, ts: 1709718214.1368062}
2024-03-06 16:43:34.137 ICT
{address: [::]:80, http3: false, level: debug, logger: http, msg: starting server loop, tls: false, ts: 1709718214.1371646}
2024-03-06 16:43:34.140 ICT
{level: info, logger: http.log, msg: server running, name: srv0, protocols: […], ts: 1709718214.140442}
2024-03-06 16:43:34.142 ICT
{file: /config/caddy/autosave.json, level: info, msg: autosaved config (load with --resume flag), ts: 1709718214.1424348}
2024-03-06 16:43:34.142 ICT
{level: info, msg: serving initial configuration, ts: 1709718214.142536}
2024-03-06 16:43:34.142 ICT
{level: info, logger: tls, msg: cleaning storage unit, storage: FileStorage:/data/caddy, ts: 1709718214.1426692}
2024-03-06 16:43:34.143 ICT
{level: info, logger: tls, msg: finished cleaning storage units, ts: 1709718214.1429553}
2024-03-06 16:50:15.402 ICT
{config_adapter: caddyfile, config_file: /etc/caddy/Caddyfile, level: info, msg: using provided configuration, ts: 1709718615.4026613}
2024-03-06 16:50:15.405 ICT
{adapter: caddyfile, file: /etc/caddy/Caddyfile, level: warn, line: 16, msg: Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies, ts: 1709718615.4050312}
2024-03-06 16:50:15.406 ICT
{address: localhost:2019, enforce_origin: false, level: info, logger: admin, msg: admin endpoint started, origins: […], ts: 1709718615.4067924}
2024-03-06 16:50:15.407 ICT
{http_port: 80, level: warn, logger: http.auto_https, msg: server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server, server_name: srv0, ts: 1709718615.407489}
2024-03-06 16:50:15.408 ICT
{http: {…}, level: debug, logger: http.auto_https, msg: adjusted config, tls: {…}, ts: 1709718615.4078734}
2024-03-06 16:50:15.408 ICT
{level: info, msg: FrankenPHP started 🐘, php_version: 8.2.16, ts: 1709718615.4087079}
2024-03-06 16:50:15.409 ICT
{address: [::]:80, http3: false, level: debug, logger: http, msg: starting server loop, tls: false, ts: 1709718615.409097}
2024-03-06 16:50:15.410 ICT
{level: info, logger: http.log, msg: server running, name: srv0, protocols: […], ts: 1709718615.4101481}
2024-03-06 16:50:15.410 ICT
{file: /config/caddy/autosave.json, level: info, msg: autosaved config (load with --resume flag), ts: 1709718615.4107764}
2024-03-06 16:50:15.411 ICT
{level: info, msg: serving initial configuration, ts: 1709718615.4109845}
2024-03-06 16:50:15.411 ICT
{cache: 0xc00045f100, level: info, logger: tls.cache.maintenance, msg: started background certificate maintenance, ts: 1709718615.4077792}
2024-03-06 16:50:15.413 ICT
{level: info, logger: tls, msg: cleaning storage unit, storage: FileStorage:/data/caddy, ts: 1709718615.413479}
2024-03-06 16:50:15.413 ICT
{level: info, logger: tls, msg: finished cleaning storage units, ts: 1709718615.4138448}

@withinboredom
Collaborator

this problem when deploying my app using the frankenphp image as a base image in kubernetes. It keeps restarting after receiving a request. But, it works fine when I run the image using docker-compose on my server.

This could be happening for any number of reasons, but your pod security policy is the most likely one. What does your deployment yaml file look like?

@meeftah

meeftah commented Mar 10, 2024

I solved it, well, not really... I built a Docker image for my project with FrankenPHP inside, and then put it behind Nginx Proxy Manager.

@garinichsan

@withinboredom

Here's my deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  name: api
  namespace: staging-api
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: api
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-03-07T07:30:36Z"
      creationTimestamp: null
      labels:
        app: api
    spec:
      automountServiceAccountToken: false
      containers:
      - envFrom:
        - configMapRef:
            name: api-config
            optional: false
        - secretRef:
            name: api-secret
            optional: false
        image: <PRIVATE_REPO>/api-franken:staging
        imagePullPolicy: Always
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - cp -Lr /api-storage-secret/* /app/storage/
        name: api
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /api-storage-secret
          mountPropagation: None
          name: api-storage-secret
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: false
      nodeSelector:
        environment: development
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      shareProcessNamespace: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: api-storage-secret
        secret:
          defaultMode: 420
          optional: false
          secretName: api-storage-secret
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2024-03-06T09:43:35Z"
    lastUpdateTime: "2024-03-06T09:43:35Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-02-15T09:19:08Z"
    lastUpdateTime: "2024-03-07T07:30:40Z"
    message: ReplicaSet "api-597bbd9578" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 65
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

@nklmilojevic

nklmilojevic commented Apr 6, 2024

I also encounter this problem when deploying my app using the frankenphp image as a base image in kubernetes. It keeps restarting after receiving a request. But, it works fine when I run the image using docker-compose on my server.

Having the same issue. It works with the same image in docker-compose, but on Kubernetes it fails.

@withinboredom could you maybe elaborate on how a PodSecurityPolicy could affect this?

@withinboredom
Collaborator

If you are running in k8s as a non-root user, you have to give it CAP_NET_BIND_SERVICE, even if you aren't opening a low port. I have no idea why.
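You can verify what the container actually ends up with by reading the capability bitmasks of its main process (a sketch; deploy/api and the container name api come from the manifest above, substitute your own):

# dump the capability sets of PID 1 inside the container (hex bitmasks)
kubectl exec deploy/api -c api -- grep Cap /proc/1/status

# decode a mask on any box with libcap tools; NET_BIND_SERVICE should be listed
capsh --decode=<CapEff-value-from-above>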

@nklmilojevic

Oh, got it. I already tried that and got the same issue. I'm going to work on a reproducible proof of concept in another repo so anyone can try it.

@withinboredom
Collaborator

Here's a policy that works on a production cluster:

securityContext:
    runAsNonRoot: true
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
      add:
        - NET_BIND_SERVICE

@nklmilojevic

Yup, that is a thing I already have in my cluster, and after 1 or 2 requests the pod errors out and restarts. Nothing usable in the logs with debug on, though.
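When a pod restarts without useful application logs, the previous container's output and the kubelet's view of the termination are worth checking (a sketch; <pod> and <namespace> are placeholders):

# logs from the container instance that just crashed, not the replacement
kubectl logs <pod> -n <namespace> --previous

# last termination state: exit code, OOMKilled, liveness-probe kills, etc.
kubectl describe pod <pod> -n <namespace> | grep -A 8 'Last State'

# cluster events around the restart (probe failures, evictions, ...)
kubectl get events -n <namespace> --sort-by=.lastTimestamp | tail -20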

@nklmilojevic

OK, apologies, my bug is not directly related to this: as soon as I disabled the latest Datadog PHP tracer (0.99.1), everything works.

I think it is related to this: #458 (comment)

@dunglas
Owner

dunglas commented Apr 15, 2024

Closing as can't reproduce (and maybe already reported or even fixed). Feel free to reopen if you have more information.

@dunglas closed this as not planned Apr 15, 2024