
[BUG] for edge node, kubectl logs xxx error: Error from server (ServiceUnavailable): the server is currently unable to handle the request ( pods/log xxx) #984

Closed
windydayc opened this issue Sep 4, 2022 · 7 comments
Labels: kind/bug

Comments

@windydayc (Member) commented Sep 4, 2022

What happened:
I installed the OpenYurt cluster following https://openyurt.io/zh/docs/installation/manually-setup and used yurtadm join to add an edge node:

[root@iZf8z8lt4aao5n3cymq3rzZ ~]# kubectl get node -owide
NAME                      STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                CONTAINER-RUNTIME
izf8z8lt4aao5n3cymq3rzz   Ready    control-plane,master   131m   v1.22.8   10.0.0.187    <none>        CentOS Linux 8   4.18.0-348.7.1.el8_5.x86_64   docker://19.3.14
izf8z8lt4aao5n3cymq3s0z   Ready    <none>                 85m    v1.22.8   10.0.0.188    <none>        CentOS Linux 8   4.18.0-348.7.1.el8_5.x86_64   docker://20.10.17
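
For reference, the join command used on the edge node was along these lines (a sketch based on the manual-setup doc; the token and flag values are placeholders, not the exact ones used):

yurtadm join 10.0.0.187:6443 --token=<bootstrap-token> --node-type=edge --discovery-token-unsafe-skip-ca-verification --v=5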

[root@iZf8z8lt4aao5n3cymq3rzZ ~]# kubectl get pod -n kube-system -owide
NAME                                              READY   STATUS    RESTARTS       AGE    IP            NODE                      NOMINATED NODE   READINESS GATES
coredns-dl7bz                                     1/1     Running   1 (11m ago)    119m   10.244.0.8    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
coredns-jscq9                                     1/1     Running   1 (68m ago)    74m    10.244.2.64   izf8z8lt4aao5n3cymq3s0z   <none>           <none>
etcd-izf8z8lt4aao5n3cymq3rzz                      1/1     Running   11 (11m ago)   120m   10.0.0.187    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
kube-apiserver-izf8z8lt4aao5n3cymq3rzz            1/1     Running   2 (11m ago)    119m   10.0.0.187    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
kube-controller-manager-izf8z8lt4aao5n3cymq3rzz   1/1     Running   1 (11m ago)    119m   10.0.0.187    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
kube-flannel-ds-6tlq2                             1/1     Running   1 (68m ago)    74m    10.0.0.188    izf8z8lt4aao5n3cymq3s0z   <none>           <none>
kube-flannel-ds-9bvkc                             1/1     Running   2 (10m ago)    119m   10.0.0.187    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
kube-proxy-4fprk                                  1/1     Running   1 (11m ago)    119m   10.0.0.187    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
kube-proxy-tl5sz                                  1/1     Running   0              68m    10.0.0.188    izf8z8lt4aao5n3cymq3s0z   <none>           <none>
kube-scheduler-izf8z8lt4aao5n3cymq3rzz            1/1     Running   11 (11m ago)   120m   10.0.0.187    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
yurt-app-manager-66dffb5dc9-xwclb                 1/1     Running   2 (10m ago)    119m   10.244.0.9    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
yurt-controller-manager-77b97fd47b-tdrvg          1/1     Running   1 (11m ago)    119m   10.0.0.187    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
yurt-hub-izf8z8lt4aao5n3cymq3s0z                  1/1     Running   12 (68m ago)   73m    10.0.0.188    izf8z8lt4aao5n3cymq3s0z   <none>           <none>
yurt-tunnel-agent-x2rjz                           1/1     Running   0              68m    10.0.0.188    izf8z8lt4aao5n3cymq3s0z   <none>           <none>
yurt-tunnel-dns-9cbd69765-w4kvp                   1/1     Running   1 (11m ago)    119m   10.244.0.7    izf8z8lt4aao5n3cymq3rzz   <none>           <none>
yurt-tunnel-server-6fdb679789-gwfhq               1/1     Running   2 (11m ago)    119m   10.0.0.187    izf8z8lt4aao5n3cymq3rzz   <none>           <none>

However, for pods on the edge node, kubectl logs xxx fails with Error from server (ServiceUnavailable): the server is currently unable to handle the request ( pods/log xxx). For example:

[root@iZf8z8lt4aao5n3cymq3rzZ ~]# kubectl logs -n kube-system kube-flannel-ds-6tlq2
Error from server (ServiceUnavailable): the server is currently unable to handle the request ( pods/log kube-flannel-ds-6tlq2)
[root@iZf8z8lt4aao5n3cymq3rzZ ~]#
[root@iZf8z8lt4aao5n3cymq3rzZ ~]# kubectl logs -n kube-system yurt-tunnel-agent-x2rjz
Error from server (ServiceUnavailable): the server is currently unable to handle the request ( pods/log yurt-tunnel-agent-x2rjz)

What you expected to happen:
kubectl logs xxx should succeed without errors for pods on the edge node.

Anything else we need to know?:

yurt-tunnel-server logs:

[root@iZf8z8lt4aao5n3cymq3rzZ ~]# kubectl logs -n kube-system yurt-tunnel-server-6fdb679789-gwfhq
I0904 14:58:54.498831       1 start.go:63] yurttunnel-server version: projectinfo.Info{GitVersion:"-e710112", GitCommit:"e710112", BuildDate:"2022-09-01T02:27:15Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64"}
W0904 14:58:54.498963       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0904 14:58:54.499705       1 options.go:168] yurttunnel server config: &config.Config{EgressSelectorEnabled:false, EnableIptables:true, EnableDNSController:true, IptablesSyncPeriod:60, IPFamily:0x1, DNSSyncPeriod:1800, CertDNSNames:[]string{}, CertIPs:[]net.IP{}, CertDir:"", ListenAddrForAgent:"10.0.0.187:10262", ListenAddrForMaster:"10.0.0.187:10263", ListenInsecureAddrForMaster:"10.0.0.187:10264", ListenMetaAddr:"10.0.0.187:10265", RootCert:(*x509.CertPool)(0xc00012cc30), Client:(*kubernetes.Clientset)(0xc00012e420), SharedInformerFactory:(*informers.sharedInformerFactory)(0xc0002ff400), ServerCount:1, ProxyStrategy:"destHost", InterceptorServerUDSFile:""}
I0904 14:58:54.500145       1 leaderelection.go:248] attempting to acquire leader lease kube-system/tunnel-dns-controller...
E0904 14:58:54.502174       1 iptables.go:216] failed to delete rule that nat chain OUTPUT jumps to TUNNEL-PORT: error checking rule: exit status 2: iptables v1.8.7 (legacy): Couldn't load target `TUNNEL-PORT':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
I0904 14:58:59.518004       1 certificate_store.go:130] Loading cert/key pair from "/var/lib/yurttunnel-server/pki/yurttunnel-server-current.pem".
I0904 14:58:59.518230       1 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate rotation is enabled
I0904 14:58:59.518262       1 certificate_store.go:130] Loading cert/key pair from "/var/lib/yurttunnel-server/pki/yurttunnel-server-proxy-client-current.pem".
I0904 14:58:59.518358       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0904 14:58:59.518356       1 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate expiration is 2122-08-11 13:05:38 +0000 UTC, rotation deadline is 2105-11-29 20:00:54.765966272 +0000 UTC
I0904 14:58:59.518380       1 certificate_manager.go:270] kubernetes.io/kubelet-serving: Waiting 729629h1m55.247587976s for next certificate rotation
I0904 14:58:59.518428       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate expiration is 2122-08-08 09:16:58 +0000 UTC, rotation deadline is 2101-05-15 16:48:48.626345312 +0000 UTC
I0904 14:58:59.518449       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Waiting 689809h49m49.107897674s for next certificate rotation
I0904 14:58:59.520745       1 handler.go:141] enqueue service add event for kube-system/x-tunnel-server-internal-svc
I0904 14:58:59.521863       1 handler.go:43] enqueue node add event for izf8z8lt4aao5n3cymq3rzz
I0904 14:58:59.521878       1 handler.go:43] enqueue node add event for izf8z8lt4aao5n3cymq3s0z
I0904 14:58:59.522035       1 handler.go:175] handle configmap add event for kube-system/yurt-tunnel-server-cfg to update localhost ports
I0904 14:58:59.522115       1 handler.go:93] enqueue configmap add event for kube-system/yurt-tunnel-server-cfg
I0904 14:58:59.625035       1 iptables.go:472] clear conntrack entries for ports ["10250" "10255"] and nodes ["10.0.0.187"]
E0904 14:58:59.627753       1 iptables.go:491] clear conntrack for 10.0.0.187:10250 failed: "conntrack v1.4.6 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
E0904 14:58:59.630325       1 iptables.go:491] clear conntrack for 10.0.0.187:10255 failed: "conntrack v1.4.6 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
I0904 14:58:59.630345       1 iptables.go:543] directly access nodes changed, [10.0.0.187] for ports [10250 10255]
I0904 14:59:04.519384       1 anpserver.go:107] start handling request from interceptor
I0904 14:59:04.519426       1 wraphandler.go:67] add localHostProxyMiddleware into wrap handler
I0904 14:59:04.519437       1 tracereq.go:80] 2 informer synced in traceReqMiddleware
I0904 14:59:04.519441       1 wraphandler.go:67] add TraceReqMiddleware into wrap handler
I0904 14:59:04.519466       1 anpserver.go:143] start handling https request from master at 10.0.0.187:10263
I0904 14:59:04.519512       1 anpserver.go:157] start handling http request from master at 10.0.0.187:10264
I0904 14:59:04.519562       1 anpserver.go:195] start handling connection from agents
I0904 14:59:04.519744       1 util.go:75] "start handling meta requests(metrics/pprof)" server endpoint="10.0.0.187:10265"
I0904 14:59:12.145959       1 leaderelection.go:258] successfully acquired lease kube-system/tunnel-dns-controller
I0904 14:59:12.148845       1 dns.go:177] starting tunnel dns controller
I0904 14:59:12.148863       1 shared_informer.go:240] Waiting for caches to sync for tunnel-dns-controller
I0904 14:59:12.148877       1 shared_informer.go:247] Caches are synced for tunnel-dns-controller
I0904 14:59:12.150757       1 dns.go:301] sync tunnel server service as whole
I0904 14:59:12.150839       1 dns.go:310] sync dns record as whole
I0904 14:59:12.150881       1 dns.go:310] sync dns record as whole
I0904 14:59:12.156450       1 handler.go:166] adding node dns record for izf8z8lt4aao5n3cymq3rzz
I0904 14:59:12.157590       1 handler.go:166] adding node dns record for izf8z8lt4aao5n3cymq3s0z
I0904 14:59:12.159165       1 dns.go:301] sync tunnel server service as whole
I0904 15:01:01.635356       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-6tlq2/kube-flannel, from 10.103.110.173:51164 to 10.0.0.188:10250
E0904 15:01:01.635639       1 tunnel.go:74] "currently no tunnels available" err="No backend available"
E0904 15:01:01.635750       1 interceptor.go:136] fail to setup the tunnel: fail to setup TLS handshake through the Tunnel: write unix @->/tmp/interceptor-proxier.sock: write: broken pipe
I0904 15:01:01.635765       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-6tlq2/kube-flannel, request handling lasts 388.499µs
I0904 15:01:05.353882       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-6tlq2/kube-flannel, from 10.103.110.173:51164 to 10.0.0.188:10250
E0904 15:01:05.354115       1 tunnel.go:74] "currently no tunnels available" err="No backend available"
E0904 15:01:05.354221       1 interceptor.go:136] fail to setup the tunnel: fail to setup TLS handshake through the Tunnel: write unix @->/tmp/interceptor-proxier.sock: write: broken pipe
I0904 15:01:05.354244       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-6tlq2/kube-flannel, request handling lasts 338.313µs
I0904 15:07:49.005417       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-6tlq2/kube-flannel, from 10.103.110.173:54486 to 10.0.0.188:10250
E0904 15:07:49.005605       1 tunnel.go:74] "currently no tunnels available" err="No backend available"
E0904 15:07:49.005777       1 interceptor.go:136] fail to setup the tunnel: fail to setup TLS handshake through the Tunnel: write unix @->/tmp/interceptor-proxier.sock: write: broken pipe
I0904 15:07:49.005827       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-6tlq2/kube-flannel, request handling lasts 392.457µs
I0904 15:11:17.062869       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-6tlq2/kube-flannel, from 10.103.110.173:56198 to 10.0.0.188:10250
E0904 15:11:17.063057       1 tunnel.go:74] "currently no tunnels available" err="No backend available"
E0904 15:11:17.063201       1 interceptor.go:136] fail to setup the tunnel: fail to setup TLS handshake through the Tunnel: write unix @->/tmp/interceptor-proxier.sock: write: broken pipe
I0904 15:11:17.063219       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-6tlq2/kube-flannel, request handling lasts 333.209µs
I0904 15:11:24.737302       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/yurt-tunnel-agent-x2rjz/yurt-tunnel-agent, from 10.103.110.173:56198 to 10.0.0.188:10250
E0904 15:11:24.737544       1 tunnel.go:74] "currently no tunnels available" err="No backend available"
E0904 15:11:24.737646       1 interceptor.go:136] fail to setup the tunnel: fail to setup TLS handshake through the Tunnel: write unix @->/tmp/interceptor-proxier.sock: write: broken pipe
I0904 15:11:24.737665       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/yurt-tunnel-agent-x2rjz/yurt-tunnel-agent, request handling lasts 344.865µs
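
Every request destined for the edge node fails with "currently no tunnels available" / "No backend available", i.e. no yurt-tunnel-agent has registered a backend with the server. As a quick check (a sketch; the address comes from the meta-endpoint log line above), the server's metrics endpoint can be queried from the master:

curl http://10.0.0.187:10265/metrics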

yurt-tunnel-agent logs:

[root@iZf8z8lt4aao5n3cymq3s0Z ~]# docker logs 09bc5ed94e6c
I0904 14:01:27.145540       1 start.go:50] yurttunnel-agent version: projectinfo.Info{GitVersion:"latest", GitCommit:"ef26d5c", BuildDate:"2021-12-03T07:50:34Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
I0904 14:01:27.145600       1 options.go:136] ipv4=10.0.0.188&host=izf8z8lt4aao5n3cymq3s0z is set for agent identifies
I0904 14:01:27.145606       1 options.go:141] neither --kube-config nor --apiserver-addr is set, will use /etc/kubernetes/kubelet.conf as the kubeconfig
I0904 14:01:27.145610       1 options.go:145] create the clientset based on the kubeconfig(/etc/kubernetes/kubelet.conf).
I0904 14:01:27.158357       1 start.go:86] yurttunnel-server address: 10.0.0.187:31838
W0904 14:01:27.158410       1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/var/lib/yurttunnel-agent/pki/yurttunnel-agent-current.pem", ("", "") or ("/var/lib/yurttunnel-agent/pki", "/var/lib/yurttunnel-agent/pki"), will regenerate it
E0904 14:01:27.168653       1 certificate_manager.go:434] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server could not find the requested resource
E0904 14:01:29.294594       1 certificate_manager.go:434] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server could not find the requested resource
I0904 14:01:32.158526       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
E0904 14:01:33.676383       1 certificate_manager.go:434] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server could not find the requested resource
I0904 14:01:37.158539       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:01:42.158508       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
E0904 14:01:42.214762       1 certificate_manager.go:434] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server could not find the requested resource
I0904 14:01:47.158460       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:01:52.158546       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:01:57.158526       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
E0904 14:01:58.922836       1 certificate_manager.go:434] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server could not find the requested resource
E0904 14:01:58.922853       1 certificate_manager.go:318] Reached backoff limit, still unable to rotate certs: timed out waiting for the condition
I0904 14:02:02.158549       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:07.158505       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:12.158546       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:17.158560       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:22.158555       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:27.158547       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
E0904 14:02:30.936598       1 certificate_manager.go:434] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server could not find the requested resource
I0904 14:02:32.158548       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:37.158544       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:42.158548       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:47.158551       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:52.158505       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:02:57.158553       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:02.158554       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
E0904 14:03:02.926781       1 certificate_manager.go:434] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server could not find the requested resource
I0904 14:03:07.158547       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:12.158553       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:17.158542       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:22.158566       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:27.158872       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:32.158543       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
E0904 14:03:34.929000       1 certificate_manager.go:434] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server could not find the requested resource
I0904 14:03:37.158548       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:42.158561       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:47.158542       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:52.158547       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:03:57.158544       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
I0904 14:04:02.158566       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
E0904 14:04:06.930163       1 certificate_manager.go:434] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server could not find the requested resource
I0904 14:04:07.158548       1 start.go:101] certificate yurttunnel-agent not signed, waiting...
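
"cannot create certificate signing request: the server could not find the requested resource" usually means the client is calling a CSR API version the cluster no longer serves (certificates.k8s.io/v1beta1 was removed in Kubernetes 1.22, which this cluster runs), so an agent built against the old API cannot obtain a certificate. A quick way to confirm which CSR API versions exist (a sketch):

kubectl api-versions | grep certificates.k8s.io
kubectl get csr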

Environment:

  • OpenYurt version: latest
  • Kubernetes version (use kubectl version): 1.22.8
  • OS (e.g: cat /etc/os-release): CentOS Linux 8

/kind bug

@windydayc added the kind/bug label Sep 4, 2022
@rambohe-ch (Member) commented:

@windydayc It looks like you are using an old yurt-tunnel-agent version. Would you be able to upload the image info of yurt-tunnel-agent?

@windydayc (Member, Author) commented:

[root@iZf8z8lt4aao5n3cymq3rzZ ~]# docker images
REPOSITORY                                                TAG                 IMAGE ID            CREATED             SIZE
openyurt/yurt-controller-manager                          latest              8b4dc25d9b5e        27 hours ago        50.5MB
openyurt/yurt-tunnel-server                               latest              2f6b80a7898d        4 days ago          48.3MB
openyurt/yurt-controller-manager                          <none>              6a80f9c1ccc5        4 days ago          50.5MB
openyurt/yurt-app-manager                                 latest              f9d16234defc        4 days ago          46.8MB
rancher/mirrored-flannelcni-flannel                       v0.19.1             252b2c3ee6c8        4 weeks ago         62.3MB
oamdev/kube-webhook-certgen                               v2.4.1              fda55395b2bf        3 months ago        54.7MB
coredns/coredns                                           1.9.3               5185b96f0bec        3 months ago        48.8MB
rancher/mirrored-flannelcni-flannel-cni-plugin            v1.1.0              fcecffc7ad4a        3 months ago        8.09MB
sea.hub:5000/kube-apiserver                               v1.22.8             c0d565df2c90        5 months ago        128MB
sea.hub:5000/kube-proxy                                   v1.22.8             c1cfbd59f774        5 months ago        104MB
sea.hub:5000/kube-controller-manager                      v1.22.8             41ff05350898        5 months ago        122MB
sea.hub:5000/kube-scheduler                               v1.22.8             398b2c18375d        5 months ago        52.7MB
registry.cn-hangzhou.aliyuncs.com/openyurt/flannel-edge   v0.14.0-1           85c9944d9ff5        7 months ago        68MB
<none>                                                    <none>              a37effd3346a        13 months ago       25.7MB
sea.hub:5000/etcd                                         3.5.0-0             004811815584        14 months ago       295MB
registry                                                  2.7.1               0d0107588605        15 months ago       25.7MB
sea.hub:5000/coredns/coredns                              v1.8.4              8d147537fb7d        15 months ago       47.6MB
quay.io/coreos/flannel                                    v0.14.0             8522d622299c        15 months ago       67.9MB
sea.hub:5000/pause                                        3.5                 ed210e3e4a5b        17 months ago       683kB
sea.hub:5000/kube-proxy                                   v1.19.8             ea03182b84a2        18 months ago       118MB
sea.hub:5000/kube-apiserver                               v1.19.8             9ba91a90b7d1        18 months ago       119MB
sea.hub:5000/kube-controller-manager                      v1.19.8             213ae7795128        18 months ago       111MB
sea.hub:5000/kube-scheduler                               v1.19.8             919a3f36437d        18 months ago       46.5MB
sea.hub:5000/etcd                                         3.4.13-0            0369cf4303ff        2 years ago         253MB
registry.aliyuncs.com/google_containers/coredns           1.7.0               bfe3a36ebd25        2 years ago         45.2MB
sea.hub:5000/coredns                                      1.7.0               bfe3a36ebd25        2 years ago         45.2MB
sea.hub:5000/pause                                        3.2                 80d28bedfe5d        2 years ago         683kB
[root@iZf8z8lt4aao5n3cymq3s0Z ~]# docker images
REPOSITORY                                                  TAG         IMAGE ID       CREATED        SIZE
registry.cn-hangzhou.aliyuncs.com/openyurt/yurthub          latest      5fec9617227e   27 hours ago   54.6MB
sea.hub:5000/kube-proxy                                     v1.22.8     c1cfbd59f774   5 months ago   104MB
registry.cn-hangzhou.aliyuncs.com/openyurt/flannel-edge     v0.14.0-1   85c9944d9ff5   7 months ago   68MB
openyurt/yurt-tunnel-agent                                  latest      0175b008dfc2   9 months ago   75.9MB
registry.aliyuncs.com/google_containers/coredns             1.7.0       bfe3a36ebd25   2 years ago    45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause   3.2         80d28bedfe5d   2 years ago    683kB
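
Note that the openyurt/yurt-tunnel-agent:latest image on the edge node was created 9 months ago, so the local latest tag is stale. The image build date can also be read directly (a sketch):

docker inspect -f '{{.Created}}' openyurt/yurt-tunnel-agent:latest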

@windydayc (Member, Author) commented:

> @windydayc It looks like you are using an old yurt-tunnel-agent version. Would you be able to upload the image info of yurt-tunnel-agent?

You are right. This was indeed a problem with the yurt-tunnel-agent image. I re-pulled the image and that error went away.
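
Roughly (a sketch, assuming the DaemonSet references the latest tag): re-pull the image on the edge node, then delete the running pod so the DaemonSet recreates it with the fresh image:

docker pull openyurt/yurt-tunnel-agent:latest
kubectl delete pod -n kube-system yurt-tunnel-agent-x2rjz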

@windydayc (Member, Author) commented Sep 5, 2022

But fetching logs for a pod on the edge node still fails, now with a different error:

[root@iZf8z8lt4aao5n3cymq3rzZ ~]# kubectl logs -n kube-system yurt-tunnel-agent-rm9wt
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log yurt-tunnel-agent-rm9wt))
[root@iZf8z8lt4aao5n3cymq3rzZ ~]#
[root@iZf8z8lt4aao5n3cymq3rzZ ~]# kubectl logs -n kube-system kube-flannel-ds-9rnw6
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log kube-flannel-ds-9rnw6))

yurt-tunnel-server logs:

[root@iZf8z8lt4aao5n3cymq3rzZ ~]# kubectl logs -n kube-system yurt-tunnel-server-6fdb679789-gwfhq
[... startup and earlier request-handling entries identical to the yurt-tunnel-server log excerpt in the issue description above ...]
I0904 15:17:53.357098       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/coredns-9vjzn/coredns, from 10.103.110.173:59420 to 10.0.0.188:10250
E0904 15:17:53.357262       1 tunnel.go:74] "currently no tunnels available" err="No backend available"
E0904 15:17:53.357401       1 interceptor.go:136] fail to setup the tunnel: fail to setup TLS handshake through the Tunnel: write unix @->/tmp/interceptor-proxier.sock: write: broken pipe
I0904 15:17:53.357422       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/coredns-9vjzn/coredns, request handling lasts 307.944µs
I0904 15:29:12.154524       1 dns.go:301] sync tunnel server service as whole
I0904 15:29:12.553429       1 dns.go:310] sync dns record as whole
I0904 15:59:12.164339       1 dns.go:301] sync tunnel server service as whole
I0904 15:59:12.563856       1 dns.go:310] sync dns record as whole
I0904 16:29:12.174106       1 dns.go:301] sync tunnel server service as whole
I0904 16:29:12.571578       1 dns.go:310] sync dns record as whole
I0904 16:59:12.185433       1 dns.go:301] sync tunnel server service as whole
I0904 16:59:12.578934       1 dns.go:310] sync dns record as whole
I0904 17:29:12.193977       1 dns.go:301] sync tunnel server service as whole
I0904 17:29:12.588215       1 dns.go:310] sync dns record as whole
I0904 17:59:12.199038       1 dns.go:301] sync tunnel server service as whole
I0904 17:59:12.596292       1 dns.go:310] sync dns record as whole
I0904 18:29:12.208003       1 dns.go:301] sync tunnel server service as whole
I0904 18:29:12.603532       1 dns.go:310] sync dns record as whole
I0904 18:59:12.215138       1 dns.go:301] sync tunnel server service as whole
I0904 18:59:12.611657       1 dns.go:310] sync dns record as whole
I0904 19:29:12.225870       1 dns.go:301] sync tunnel server service as whole
I0904 19:29:12.620976       1 dns.go:310] sync dns record as whole
I0904 19:59:12.235036       1 dns.go:301] sync tunnel server service as whole
I0904 19:59:12.629608       1 dns.go:310] sync dns record as whole
I0904 20:29:12.247518       1 dns.go:301] sync tunnel server service as whole
I0904 20:29:12.636743       1 dns.go:310] sync dns record as whole
I0904 20:59:12.257256       1 dns.go:301] sync tunnel server service as whole
I0904 20:59:12.644518       1 dns.go:310] sync dns record as whole
I0904 21:29:12.270408       1 dns.go:301] sync tunnel server service as whole
I0904 21:29:12.651823       1 dns.go:310] sync dns record as whole
I0904 21:59:12.278208       1 dns.go:301] sync tunnel server service as whole
I0904 21:59:12.660708       1 dns.go:310] sync dns record as whole
I0904 22:29:12.293203       1 dns.go:301] sync tunnel server service as whole
I0904 22:29:12.669299       1 dns.go:310] sync dns record as whole
I0904 22:59:12.304101       1 dns.go:301] sync tunnel server service as whole
I0904 22:59:12.678396       1 dns.go:310] sync dns record as whole
I0904 23:29:12.313684       1 dns.go:301] sync tunnel server service as whole
I0904 23:29:12.688399       1 dns.go:310] sync dns record as whole
I0904 23:59:12.327828       1 dns.go:301] sync tunnel server service as whole
I0904 23:59:12.698098       1 dns.go:310] sync dns record as whole
I0905 00:29:12.335627       1 dns.go:301] sync tunnel server service as whole
I0905 00:29:12.706837       1 dns.go:310] sync dns record as whole
I0905 00:59:12.341903       1 dns.go:301] sync tunnel server service as whole
I0905 00:59:12.717557       1 dns.go:310] sync dns record as whole
I0905 01:29:12.348379       1 dns.go:301] sync tunnel server service as whole
I0905 01:29:12.727070       1 dns.go:310] sync dns record as whole
I0905 01:59:12.355174       1 dns.go:301] sync tunnel server service as whole
I0905 01:59:12.737533       1 dns.go:310] sync dns record as whole
I0905 02:29:12.365518       1 dns.go:301] sync tunnel server service as whole
I0905 02:29:12.745305       1 dns.go:310] sync dns record as whole
I0905 02:59:12.371325       1 dns.go:301] sync tunnel server service as whole
I0905 02:59:12.754511       1 dns.go:310] sync dns record as whole
I0905 03:29:12.383252       1 dns.go:301] sync tunnel server service as whole
I0905 03:29:12.762823       1 dns.go:310] sync dns record as whole
I0905 03:59:12.390255       1 dns.go:301] sync tunnel server service as whole
I0905 03:59:12.772396       1 dns.go:310] sync dns record as whole
I0905 04:29:12.396912       1 dns.go:301] sync tunnel server service as whole
I0905 04:29:12.779909       1 dns.go:310] sync dns record as whole
I0905 04:54:05.233651       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/yurt-tunnel-agent-x2rjz/yurt-tunnel-agent, from 10.103.110.173:36494 to 10.0.0.188:10250
E0905 04:54:05.233852       1 tunnel.go:74] "currently no tunnels available" err="No backend available"
E0905 04:54:05.233995       1 interceptor.go:136] fail to setup the tunnel: fail to setup TLS handshake through the Tunnel: write unix @->/tmp/interceptor-proxier.sock: write: broken pipe
I0905 04:54:05.234023       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/yurt-tunnel-agent-x2rjz/yurt-tunnel-agent, request handling lasts 350.196µs
I0905 04:59:12.403067       1 dns.go:301] sync tunnel server service as whole
I0905 04:59:12.788548       1 dns.go:310] sync dns record as whole
I0905 05:09:34.353933       1 server.go:616] "Connect request from agent" agentID="izf8z8lt4aao5n3cymq3s0z"
I0905 05:09:34.353979       1 backend_manager.go:184] "Register backend for agent" connection=&{ServerStream:0xc00044dec0} agentID="10.0.0.188"
I0905 05:09:34.353987       1 backend_manager.go:184] "Register backend for agent" connection=&{ServerStream:0xc00044dec0} agentID="izf8z8lt4aao5n3cymq3s0z"
I0905 05:10:16.665339       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/yurt-tunnel-agent-rm9wt/yurt-tunnel-agent, from 10.103.110.173:44446 to 10.0.0.188:10250
I0905 05:10:16.666884       1 tunnel.go:128] "Starting proxy to host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=1
I0905 05:10:16.679106       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/yurt-tunnel-agent-rm9wt/yurt-tunnel-agent, request handling lasts 13.74549ms
I0905 05:10:16.679142       1 tunnel.go:141] "EOF from host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connID=1
I0905 05:10:16.679767       1 server.go:290] "Remove frontend for agent" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=1
I0905 05:12:07.510593       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/coredns-9vjzn/coredns, from 10.103.110.173:45368 to 10.0.0.188:10250
I0905 05:12:07.511469       1 tunnel.go:128] "Starting proxy to host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=2
I0905 05:12:07.515863       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/coredns-9vjzn/coredns, request handling lasts 5.250736ms
I0905 05:12:07.515898       1 tunnel.go:141] "EOF from host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connID=2
I0905 05:12:07.516334       1 server.go:290] "Remove frontend for agent" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=2
I0905 05:16:40.301504       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/coredns-9vjzn/coredns, from 10.103.110.173:47612 to 10.0.0.188:10250
I0905 05:16:40.302475       1 tunnel.go:128] "Starting proxy to host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=3
I0905 05:16:40.306823       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/coredns-9vjzn/coredns, request handling lasts 5.300476ms
I0905 05:16:40.306852       1 tunnel.go:141] "EOF from host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connID=3
I0905 05:16:40.307372       1 server.go:290] "Remove frontend for agent" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=3
I0905 05:16:58.873949       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/coredns-9vjzn/coredns, from 10.103.110.173:47768 to 10.0.0.188:10250
I0905 05:16:58.874940       1 tunnel.go:128] "Starting proxy to host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=4
I0905 05:16:58.879063       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/coredns-9vjzn/coredns, request handling lasts 5.093721ms
I0905 05:16:58.879098       1 tunnel.go:141] "EOF from host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connID=4
I0905 05:16:58.879456       1 server.go:290] "Remove frontend for agent" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=4
I0905 05:19:45.869236       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/yurt-tunnel-agent-rm9wt/yurt-tunnel-agent, from 10.103.110.173:49140 to 10.0.0.188:10250
I0905 05:19:45.870162       1 tunnel.go:128] "Starting proxy to host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=5
I0905 05:19:45.875331       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/yurt-tunnel-agent-rm9wt/yurt-tunnel-agent, request handling lasts 6.07646ms
I0905 05:19:45.875444       1 tunnel.go:141] "EOF from host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connID=5
I0905 05:19:45.875932       1 server.go:290] "Remove frontend for agent" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=5
I0905 05:19:59.528025       1 tracereq.go:135] start handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-9rnw6/kube-flannel, from 10.103.110.173:49250 to 10.0.0.188:10250
I0905 05:19:59.528924       1 tunnel.go:128] "Starting proxy to host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=6
I0905 05:19:59.533000       1 tracereq.go:139] stop handling request GET https://10.0.0.188:10250/containerLogs/kube-system/kube-flannel-ds-9rnw6/kube-flannel, request handling lasts 4.957065ms
I0905 05:19:59.533034       1 tunnel.go:141] "EOF from host" host="10.0.0.188:10250" agentID="izf8z8lt4aao5n3cymq3s0z" connID=6
I0905 05:19:59.533385       1 server.go:290] "Remove frontend for agent" agentID="izf8z8lt4aao5n3cymq3s0z" connectionID=6

yurt-tunnel-agent logs:

[root@iZf8z8lt4aao5n3cymq3s0Z ~]# docker logs c37f682a5360
I0905 05:09:29.336289       1 start.go:53] yurttunnel-agent version: projectinfo.Info{GitVersion:"-e710112", GitCommit:"e710112", BuildDate:"2022-09-05T02:28:52Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64"}
I0905 05:09:29.337303       1 options.go:148] ipv4=10.0.0.188&host=izf8z8lt4aao5n3cymq3s0z is set for agent identifies
I0905 05:09:29.337319       1 options.go:153] neither --kube-config nor --apiserver-addr is set, will use /etc/kubernetes/kubelet.conf as the kubeconfig
I0905 05:09:29.337328       1 options.go:157] create the clientset based on the kubeconfig(/etc/kubernetes/kubelet.conf).
I0905 05:09:29.348850       1 start.go:90] yurttunnel-server address: 10.0.0.187:31838
W0905 05:09:29.348894       1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/var/lib/yurttunnel-agent/pki/yurttunnel-agent-current.pem", ("", "") or ("/var/lib/yurttunnel-agent/pki", "/var/lib/yurttunnel-agent/pki"), will regenerate it
I0905 05:09:29.348919       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0905 05:09:29.348967       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Rotating certificates
I0905 05:09:29.364162       1 csr.go:262] certificate signing request csr-xbdbs is approved, waiting to be issued
I0905 05:09:29.383416       1 csr.go:258] certificate signing request csr-xbdbs is issued
I0905 05:09:30.384592       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate expiration is 2122-08-11 13:09:11 +0000 UTC, rotation deadline is 2105-11-29 22:44:04.915340224 +0000 UTC
I0905 05:09:30.384639       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Waiting 729617h34m34.530704387s for next certificate rotation
I0905 05:09:31.385367       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate expiration is 2122-08-11 13:09:11 +0000 UTC, rotation deadline is 2101-05-19 00:03:40.762215936 +0000 UTC
I0905 05:09:31.385395       1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Waiting 689874h54m9.376822601s for next certificate rotation
I0905 05:09:34.348988       1 start.go:122] certificate yurttunnel-agent ok
I0905 05:09:34.349153       1 anpagent.go:57] start serving grpc request redirected from yurttunnel-server: 10.0.0.187:31838
I0905 05:09:34.349293       1 util.go:75] "start handling meta requests(metrics/pprof)" server endpoint="127.0.0.1:10266"
I0905 05:09:34.354421       1 client.go:224] "Connect to" server="d399c1a2-a0a1-410e-bddc-e9ac18d308d3"
I0905 05:09:34.354438       1 clientset.go:190] "sync added client connecting to proxy server" serverID="d399c1a2-a0a1-410e-bddc-e9ac18d308d3"
I0905 05:09:34.354455       1 client.go:326] "Start serving" serverID="d399c1a2-a0a1-410e-bddc-e9ac18d308d3"
I0905 05:10:16.666741       1 client.go:412] received dial request to tcp:10.0.0.188:10250 with random=7095128839799372992 and connID=1
I0905 05:10:16.679546       1 client.go:382] "close connection" connectionID=1
I0905 05:12:07.511426       1 client.go:412] received dial request to tcp:10.0.0.188:10250 with random=8769843336475300403 and connID=2
I0905 05:12:07.516226       1 client.go:382] "close connection" connectionID=2
I0905 05:16:40.302389       1 client.go:412] received dial request to tcp:10.0.0.188:10250 with random=5030303602249184574 and connID=3
I0905 05:16:40.307207       1 client.go:382] "close connection" connectionID=3
I0905 05:16:58.874808       1 client.go:412] received dial request to tcp:10.0.0.188:10250 with random=1866003358406141496 and connID=4
I0905 05:16:58.879380       1 client.go:382] "close connection" connectionID=4
I0905 05:19:45.870044       1 client.go:412] received dial request to tcp:10.0.0.188:10250 with random=7297596633404176675 and connID=5
I0905 05:19:45.875785       1 client.go:382] "close connection" connectionID=5
I0905 05:19:59.528853       1 client.go:412] received dial request to tcp:10.0.0.188:10250 with random=8897508852385138594 and connID=6
I0905 05:19:59.533302       1 client.go:382] "close connection" connectionID=6

kubelet logs on the edge node:

[root@iZf8z8lt4aao5n3cymq3s0Z ~]# journalctl -f -u kubelet
-- Logs begin at Thu 2022-04-28 18:25:58 CST. --
Sep 05 13:09:28 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: I0905 13:09:28.760042   56126 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-pki\" (UniqueName: \"kubernetes.io/host-path/f83fc685-1d98-472c-a850-b3e3199f38d6-kubelet-pki\") pod \"yurt-tunnel-agent-rm9wt\" (UID: \"f83fc685-1d98-472c-a850-b3e3199f38d6\") "
Sep 05 13:09:28 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: I0905 13:09:28.760158   56126 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zsvg\" (UniqueName: \"kubernetes.io/projected/f83fc685-1d98-472c-a850-b3e3199f38d6-kube-api-access-2zsvg\") pod \"yurt-tunnel-agent-rm9wt\" (UID: \"f83fc685-1d98-472c-a850-b3e3199f38d6\") "
Sep 05 13:09:28 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: I0905 13:09:28.760211   56126 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tunnel-agent-dir\" (UniqueName: \"kubernetes.io/host-path/f83fc685-1d98-472c-a850-b3e3199f38d6-tunnel-agent-dir\") pod \"yurt-tunnel-agent-rm9wt\" (UID: \"f83fc685-1d98-472c-a850-b3e3199f38d6\") "
Sep 05 13:09:28 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: I0905 13:09:28.760264   56126 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-dir\" (UniqueName: \"kubernetes.io/host-path/f83fc685-1d98-472c-a850-b3e3199f38d6-k8s-dir\") pod \"yurt-tunnel-agent-rm9wt\" (UID: \"f83fc685-1d98-472c-a850-b3e3199f38d6\") "
Sep 05 13:10:16 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: E0905 13:10:16.678634   56126 server.go:273] "Unable to authenticate the request due to an error" err="verifying certificate SN=240349838464050108402164717332002519232, SKID=, AKID=51:C4:F9:13:50:C4:87:D8:F0:BB:3B:25:3D:29:0E:D4:7C:D4:67:93 failed: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"
Sep 05 13:12:07 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: E0905 13:12:07.515561   56126 server.go:273] "Unable to authenticate the request due to an error" err="verifying certificate SN=240349838464050108402164717332002519232, SKID=, AKID=51:C4:F9:13:50:C4:87:D8:F0:BB:3B:25:3D:29:0E:D4:7C:D4:67:93 failed: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"
Sep 05 13:16:40 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: E0905 13:16:40.306513   56126 server.go:273] "Unable to authenticate the request due to an error" err="verifying certificate SN=240349838464050108402164717332002519232, SKID=, AKID=51:C4:F9:13:50:C4:87:D8:F0:BB:3B:25:3D:29:0E:D4:7C:D4:67:93 failed: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"
Sep 05 13:16:58 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: E0905 13:16:58.878818   56126 server.go:273] "Unable to authenticate the request due to an error" err="verifying certificate SN=240349838464050108402164717332002519232, SKID=, AKID=51:C4:F9:13:50:C4:87:D8:F0:BB:3B:25:3D:29:0E:D4:7C:D4:67:93 failed: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"
Sep 05 13:19:45 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: E0905 13:19:45.874127   56126 server.go:273] "Unable to authenticate the request due to an error" err="verifying certificate SN=240349838464050108402164717332002519232, SKID=, AKID=51:C4:F9:13:50:C4:87:D8:F0:BB:3B:25:3D:29:0E:D4:7C:D4:67:93 failed: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"
Sep 05 13:19:59 iZf8z8lt4aao5n3cymq3s0Z kubelet[56126]: E0905 13:19:59.532739   56126 server.go:273] "Unable to authenticate the request due to an error" err="verifying certificate SN=240349838464050108402164717332002519232, SKID=, AKID=51:C4:F9:13:50:C4:87:D8:F0:BB:3B:25:3D:29:0E:D4:7C:D4:67:93 failed: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"
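
The kubelet rejects the tunnel server's client certificate as signed by an unknown authority, which suggests the certificates under /var/lib/yurttunnel-server/pki were issued by a stale CA left over from an earlier deployment. One way to compare the certificate issuer against the cluster CA on the master (a sketch; /etc/kubernetes/pki/ca.crt is the default kubeadm CA path and may differ):

openssl x509 -in /var/lib/yurttunnel-server/pki/yurttunnel-server-proxy-client-current.pem -noout -issuer -enddate
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject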

@windydayc (Member, Author) commented Sep 5, 2022

@rambohe-ch This is very similar to what was mentioned in issue #508.

@windydayc (Member, Author) commented:

After re-pulling, the images are:

[root@iZf8z8lt4aao5n3cymq3rzZ ~]# docker images
REPOSITORY                                                TAG                 IMAGE ID            CREATED             SIZE
openyurt/yurt-tunnel-server                               latest              0986a4ef72ab        3 hours ago         48.3MB
openyurt/yurt-controller-manager                          latest              25ead6b7338f        4 hours ago         50.5MB
openyurt/yurt-app-manager                                 latest              f9d16234defc        4 days ago          46.8MB
rancher/mirrored-flannelcni-flannel                       v0.19.1             252b2c3ee6c8        4 weeks ago         62.3MB
oamdev/kube-webhook-certgen                               v2.4.1              fda55395b2bf        3 months ago        54.7MB
coredns/coredns                                           1.9.3               5185b96f0bec        3 months ago        48.8MB
rancher/mirrored-flannelcni-flannel-cni-plugin            v1.1.0              fcecffc7ad4a        3 months ago        8.09MB
sea.hub:5000/kube-apiserver                               v1.22.8             c0d565df2c90        5 months ago        128MB
sea.hub:5000/kube-scheduler                               v1.22.8             398b2c18375d        5 months ago        52.7MB
sea.hub:5000/kube-controller-manager                      v1.22.8             41ff05350898        5 months ago        122MB
sea.hub:5000/kube-proxy                                   v1.22.8             c1cfbd59f774        5 months ago        104MB
registry.cn-hangzhou.aliyuncs.com/openyurt/flannel-edge   v0.14.0-1           85c9944d9ff5        7 months ago        68MB
<none>                                                    <none>              a37effd3346a        13 months ago       25.7MB
sea.hub:5000/etcd                                         3.5.0-0             004811815584        14 months ago       295MB
registry                                                  2.7.1               0d0107588605        15 months ago       25.7MB
sea.hub:5000/coredns/coredns                              v1.8.4              8d147537fb7d        15 months ago       47.6MB
quay.io/coreos/flannel                                    v0.14.0             8522d622299c        15 months ago       67.9MB
sea.hub:5000/pause                                        3.5                 ed210e3e4a5b        17 months ago       683kB
sea.hub:5000/kube-proxy                                   v1.19.8             ea03182b84a2        18 months ago       118MB
sea.hub:5000/kube-apiserver                               v1.19.8             9ba91a90b7d1        18 months ago       119MB
sea.hub:5000/kube-controller-manager                      v1.19.8             213ae7795128        18 months ago       111MB
sea.hub:5000/kube-scheduler                               v1.19.8             919a3f36437d        18 months ago       46.5MB
sea.hub:5000/etcd                                         3.4.13-0            0369cf4303ff        2 years ago         253MB
registry.aliyuncs.com/google_containers/coredns           1.7.0               bfe3a36ebd25        2 years ago         45.2MB
sea.hub:5000/coredns                                      1.7.0               bfe3a36ebd25        2 years ago         45.2MB
sea.hub:5000/pause                                        3.2                 80d28bedfe5d        2 years ago         683kB
[root@iZf8z8lt4aao5n3cymq3s0Z ~]# docker images
REPOSITORY                                                  TAG         IMAGE ID       CREATED        SIZE
openyurt/yurt-tunnel-agent                                  latest      b222279c605a   3 hours ago    43.2MB
registry.cn-hangzhou.aliyuncs.com/openyurt/yurthub          latest      5fec9617227e   28 hours ago   54.6MB
sea.hub:5000/kube-proxy                                     v1.22.8     c1cfbd59f774   5 months ago   104MB
registry.cn-hangzhou.aliyuncs.com/openyurt/flannel-edge     v0.14.0-1   85c9944d9ff5   7 months ago   68MB
registry.aliyuncs.com/google_containers/coredns             1.7.0       bfe3a36ebd25   2 years ago    45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause   3.2         80d28bedfe5d   2 years ago    683kB

@windydayc (Member, Author) commented:

I cleaned the /var/lib/yurttunnel-server/pki/ directory and redeployed yurt-tunnel-server and yurt-tunnel-agent; that solved the problem.

The following are the specific steps:

rm -rf /var/lib/yurttunnel-server/pki
kubectl delete -f config/setup/yurt-tunnel-server.yaml
kubectl delete -f config/setup/yurt-tunnel-agent.yaml
kubectl apply -f config/setup/yurt-tunnel-server.yaml
kubectl apply -f config/setup/yurt-tunnel-agent.yaml
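
After the redeploy, kubectl logs for pods on the edge node should succeed, e.g. (using the flannel pod from earlier in the thread):

kubectl logs -n kube-system kube-flannel-ds-9rnw6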
