This doesn't work well when the GitVersion returned by kube-apiserver is not a valid image tag for the Kubernetes components, for example v1.14.6+76aeb0c or v1.10.2+coreos.0.
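The root cause can be checked locally: Docker/OCI image tags must match the distribution tag grammar, which does not allow `+`, so a GitVersion with build metadata cannot be used as a tag. A minimal sketch (`tag_ok` is an illustrative helper, not part of any tool):

```shell
# An image tag is valid iff it matches the distribution tag grammar:
# one word character, then up to 127 characters from [A-Za-z0-9._-].
tag_ok() { printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_][A-Za-z0-9._-]{0,127}$'; }

tag_ok "v1.14.6"         && echo "v1.14.6: valid tag"
tag_ok "v1.14.6+76aeb0c" || echo "v1.14.6+76aeb0c: invalid tag ('+' not allowed)"
```

This is exactly the check the kubelet's image parser fails with `InvalidImageName` above.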
You should set the scheduler.kubeSchedulerImageTag explicitly to v1.14.6 and try again:
scheduler:
  # With rbac.create=false, the user is responsible for creating this account
  # With rbac.create=true, this service account will be created
  # Also see rbac.create and clusterScoped
  serviceAccount: tidb-scheduler
  logLevel: 2
  replicas: 1
  schedulerName: tidb-scheduler
  # features:
  # - StableScheduling=true
  resources:
    limits:
      cpu: 250m
      memory: 150Mi
    requests:
      cpu: 80m
      memory: 50Mi
  kubeSchedulerImageName: k8s.gcr.io/kube-scheduler
  # This will default to matching your kubernetes version
  kubeSchedulerImageTag: v1.14.6
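More generally, a usable tag can be derived from the cluster's reported GitVersion by stripping the build-metadata suffix (everything from the first `+`). A sketch, using the GitVersion from this issue:

```shell
# GitVersion as reported by kube-apiserver (value from this issue).
git_version="v1.14.6+76aeb0c"

# Remove the longest suffix matching '+*', leaving a valid image tag.
kube_scheduler_tag="${git_version%%+*}"
echo "$kube_scheduler_tag"   # prints: v1.14.6
```

The result is what should go into `scheduler.kubeSchedulerImageTag`.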
Bug Report
Hi everyone,
What version of Kubernetes are you using?
I am using:
Openshift : v4.2.7
Kubernetes Version: v1.14.6+76aeb0c
What version of TiDB Operator are you using?
I am using TiDB Operator v1.0.4.
What storage classes exist in the Kubernetes cluster and what are used for PD/TiKV pods?
Storage class: gp2
What's the status of the TiDB cluster pods?
I haven't deployed the TiDB cluster pods yet because I can't deploy the TiDB scheduler properly:
oc get po -n tidb-admin -o wide
tidb-controller-manager-cc579cd85-rjlsh 1/1 Running 0 3m16s
tidb-scheduler-56f887c866-m58qq 1/2 InvalidImageName 0 3m16s
What did you do?
I followed the guide to install TiDB Operator :
oc apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml &&
oc get crd tidbclusters.pingcap.com
mkdir -p tidb-operator &&
helm inspect values pingcap/tidb-operator --version=1.0.4 > tidb-operator/values-tidb-operator.yaml
=> Then change the operator image version in values-tidb-operator.yaml to v1.0.4.
helm install pingcap/tidb-operator --name=tidb-operator --namespace=tidb-admin --version=1.0.4 -f tidb-operator/values-tidb-operator.yaml &&
oc get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator
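For reference, the manual values edit in the step above can be scripted. This is a sketch that assumes the chart pins the operator image under an `operatorImage:` key (as the tidb-operator chart's values file does); the heredoc below is a stand-in for the file that `helm inspect values` would produce:

```shell
# Stand-in for the inspected values file (illustrative only).
cat > values-tidb-operator.yaml <<'EOF'
operatorImage: pingcap/tidb-operator:v1.0.0
EOF

# Pin the operator image tag to v1.0.4 in place.
sed -i 's|^operatorImage:.*|operatorImage: pingcap/tidb-operator:v1.0.4|' \
  values-tidb-operator.yaml

grep '^operatorImage:' values-tidb-operator.yaml
```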
What did you expect to see?
tidb-controller-manager-cc579cd85-rjlsh 1/1 Running
tidb-scheduler-56f887c866-m58qq 2/2 Running
What did you see instead?
tidb-controller-manager-cc579cd85-rjlsh 1/1 Running 0 3m16s
tidb-scheduler-56f887c866-m58qq 1/2 InvalidImageName 0 3m16s
oc describe po -n tidb-admin tidb-scheduler-56f887c866-m58qq
Name: tidb-scheduler-56f887c866-m58qq
Namespace: tidb-admin
Priority: 0
PriorityClassName:
Node: ip-x-x-x-x.eu-central-1.compute.internal/x.x.x.x
Start Time: Wed, 27 Nov 2019 09:28:16 +0100
Labels: app.kubernetes.io/component=scheduler
app.kubernetes.io/instance=tidb-operator
app.kubernetes.io/name=tidb-operator
pod-template-hash=56f887c866
Annotations: k8s.v1.cni.cncf.io/networks-status:
[{
"name": "openshift-sdn",
"interface": "eth0",
"ips": [
"x.x.x.x"
],
"default": true,
"dns": {}
}]
openshift.io/scc: restricted
Status: Pending
IP: x.x.x.x
Controlled By: ReplicaSet/tidb-scheduler-56f887c866
Containers:
tidb-scheduler:
Container ID: cri-o://5f633b329c028de4ffd0490141ebbd1b3ca4fba0ec85385d3414a0ae39b05a60
Image: pingcap/tidb-operator:v1.0.4
Image ID: docker.io/pingcap/tidb-operator@sha256:cb0c759a747a5447eb8c7edb67c3963328332f452bb481a2fee945da993a1c17
Port:
Host Port:
Command:
/usr/local/bin/tidb-scheduler
-v=2
-port=10262
State: Running
Started: Wed, 27 Nov 2019 09:28:24 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 150m
memory: 150Mi
Requests:
cpu: 80m
memory: 50Mi
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from tidb-scheduler-token-6smtx (ro)
kube-scheduler:
Container ID:
Image: k8s.gcr.io/kube-scheduler:v1.14.6+76aeb0c
Image ID:
Port:
Host Port:
Command:
kube-scheduler
--port=10261
--leader-elect=true
--lock-object-name=tidb-scheduler
--lock-object-namespace=tidb-admin
--scheduler-name=tidb-scheduler
--v=2
--policy-configmap=tidb-scheduler-policy
--policy-configmap-namespace=tidb-admin
State: Waiting
Reason: InvalidImageName
Ready: False
Restart Count: 0
Limits:
cpu: 150m
memory: 150Mi
Requests:
cpu: 80m
memory: 50Mi
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from tidb-scheduler-token-6smtx (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
tidb-scheduler-token-6smtx:
Type: Secret (a volume populated by a Secret)
SecretName: tidb-scheduler-token-6smtx
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Normal Scheduled 10m default-scheduler Successfully assigned tidb-admin/tidb-scheduler-56f887c866-m58qq to ip-10-0-157-94.eu-central-1.compute.internal
Normal Pulled 10m kubelet, ip-10-0-157-94.eu-central-1.compute.internal Container image "pingcap/tidb-operator:v1.0.4" already present on machine
Normal Created 10m kubelet, ip-10-0-157-94.eu-central-1.compute.internal Created container tidb-scheduler
Normal Started 10m kubelet, ip-10-0-157-94.eu-central-1.compute.internal Started container tidb-scheduler
Warning Failed 8m27s (x11 over 10m) kubelet, ip-10-0-157-94.eu-central-1.compute.internal Error: InvalidImageName
Warning InspectFailed 14s (x49 over 10m) kubelet, ip-10-0-157-94.eu-central-1.compute.internal Failed to apply default image tag "k8s.gcr.io/kube-scheduler:v1.14.6+76aeb0c": couldn't parse image reference "k8s.gcr.io/kube-scheduler:v1.14.6+76aeb0c": invalid reference format