change the way to config tidb/tikv/pd in charts #638

Merged (14 commits) on Jul 17, 2019
134 changes: 73 additions & 61 deletions charts/tidb-cluster/values.yaml
@@ -48,9 +48,20 @@ discovery:
enableConfigMapRollout: false

pd:
# Please refer to https://github.com/pingcap/pd/blob/master/conf/config.toml for the default
# PD configuration (switch to the tag matching your PD version). If you want to customize
# any setting, follow the format in that file and configure it in the 'config' section below.
# Please refer to https://pingcap.com/docs-cn/v3.0/reference/configuration/pd-server/configuration-file/
# (choose the version matching your PD) for a detailed explanation of each parameter.
config: |
[log]
level = "info"
[replication]
location-labels = ["region", "zone", "rack", "host"]
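# For example, to keep the behavior of the old top-level 'maxStoreDownTime' and
# 'maxReplicas' values (an illustrative sketch, using the previous chart defaults),
# the 'config' section above could be extended with:
#   [schedule]
#   max-store-down-time = "30m"
#   [replication]
#   max-replicas = 3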

replicas: 3
image: pingcap/pd:v3.0.0-rc.1
logLevel: info
# storageClassName is the name of a StorageClass, which provides a way for administrators to describe the "classes" of storage they offer.
# Different classes might map to quality-of-service levels, to backup policies,
# or to arbitrary policies determined by the cluster administrators.
@@ -60,11 +71,6 @@ pd:
# Image pull policy.
imagePullPolicy: IfNotPresent

# maxStoreDownTime is how long a store will be considered `down` when disconnected
# if a store is considered `down`, the regions will be migrated to other stores
maxStoreDownTime: 30m
# maxReplicas is the number of replicas for each region
maxReplicas: 3
resources:
limits: {}
# cpu: 8000m
@@ -147,9 +153,57 @@ pd:
annotations: {}

tikv:
# Please refer to https://github.com/tikv/tikv/blob/master/etc/config-template.toml for the default
# TiKV configuration (switch to the tag matching your TiKV version). If you want to customize
# any setting, follow the format in that file and configure it in the 'config' section below.
# Please refer to https://pingcap.com/docs-cn/v3.0/reference/configuration/tikv-server/configuration-file/
# (choose the version matching your TiKV) for a detailed explanation of each parameter.
config: |
log-level = "info"

# Here are some parameters you may want to customize (please configure them in the 'config' section above):
# [readpool.storage]
# ## Size of the thread pool for high-priority operations.
# # high-concurrency = 4
# ## Size of the thread pool for normal-priority operations.
# # normal-concurrency = 4
# ## Size of the thread pool for low-priority operations.
# # low-concurrency = 4
# [readpool.coprocessor]
# ## Most read requests from TiDB are sent to the coprocessor of TiKV. high/normal/low-concurrency is
# ## used to set the number of threads of the coprocessor.
# ## If there are many read requests, you can increase these config values (but keep them within the
# ## number of system CPU cores). For example, for a 32-core machine deployed with TiKV, you can even
# ## set these values to 30 in heavy read scenarios.
# ## If CPU_NUM > 8, the default thread pool size for coprocessors is set to CPU_NUM * 0.8.
# # high-concurrency = 8
# # normal-concurrency = 8
# # low-concurrency = 8
# [server]
# ## Size of the thread pool for the gRPC server.
# # grpc-concurrency = 4
# [storage]
# ## Scheduler's worker pool size, i.e. the number of write threads.
# ## It should be less than total CPU cores. When there are frequent write operations, set it to a
# ## higher value. More specifically, you can run `top -H -p tikv-pid` to check whether the threads
# ## named `sched-worker-pool` are busy.
# # scheduler-worker-pool-size = 4
#### The parameters below are available in TiKV 2.x only
# [rocksdb.defaultcf]
# ## The block cache is used to cache uncompressed blocks. A large block cache can speed up reads.
# ## In normal cases it should be tuned to 30%-50% of tikv.resources.limits.memory.
# # block-cache-size = "1GB"
# [rocksdb.writecf]
# ## In normal cases it should be tuned to 10%-30% of tikv.resources.limits.memory.
# # block-cache-size = "256MB"
#### The parameters below are available in TiKV 3.x and later only
# [storage.block-cache]
# ## Size of the shared block cache. Normally it should be tuned to 30%-50% of the container's total memory.
# # capacity = "1GB"
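# For example, to make the behavior of the old top-level 'syncLog' value explicit
# (an illustrative sketch, true is the previous chart default), the 'config' section
# above could be extended with:
#   [raftstore]
#   sync-log = true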

replicas: 3
image: pingcap/tikv:v3.0.0-rc.1
logLevel: info
# storageClassName is the name of a StorageClass, which provides a way for administrators to describe the "classes" of storage they offer.
# Different classes might map to quality-of-service levels, to backup policies,
# or to arbitrary policies determined by the cluster administrators.
@@ -159,11 +213,6 @@ tikv:
# Image pull policy.
imagePullPolicy: IfNotPresent

# syncLog is a bool value to enable or disable sync-log for raftstore, default is true
# enabling this can prevent data loss on power failure
syncLog: true
# size of thread pool for grpc server.
# grpcConcurrency: 4
resources:
limits: {}
# cpu: 16000m
@@ -200,27 +249,20 @@ tikv:
## For example, ["zone", "rack"] means that we should place replicas to
## different zones first, then to different racks if we don't have enough zones.
## default value is ["region", "zone", "rack", "host"]
## If you change the default value below, please do sync the change to pd.config.[replication].location-labels
## storeLabels: ["region", "zone", "rack", "host"]
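## For example (illustrative values), if you set:
##   storeLabels: ["zone", "host"]
## then pd.config should carry the matching setting:
##   [replication]
##   location-labels = ["zone", "host"]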

# The block cache is used to cache uncompressed blocks. A large block cache can speed up reads.
# In normal cases it should be tuned to 30%-50% of tikv.resources.limits.memory.
# defaultcfBlockCacheSize: "1GB"

# In normal cases it should be tuned to 10%-30% of tikv.resources.limits.memory.
# writecfBlockCacheSize: "256MB"

# size of thread pool for high-priority/normal-priority/low-priority operations
# readpoolStorageConcurrency: 4

# Notice: if tikv.resources.limits.cpu > 8, default thread pool size for coprocessors
# will be set to tikv.resources.limits.cpu * 0.8.
# readpoolCoprocessorConcurrency: 8

# scheduler's worker pool size; increase it in heavy write cases,
# but keep it less than the total CPU cores.
# storageSchedulerWorkerPoolSize: 4

tidb:
# Please refer to https://github.com/pingcap/tidb/blob/master/config/config.toml.example for the default
# TiDB configuration (switch to the tag matching your TiDB version). If you want to customize
# any setting, follow the format in that file and configure it in the 'config' section below.
# Please refer to https://pingcap.com/docs-cn/v3.0/reference/configuration/tidb-server/configuration-file/
# (choose the version matching your TiDB) for a detailed explanation of each parameter.
config: |
[log]
level = "info"
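# For example, to keep the behavior of several old top-level values such as
# 'tokenLimit', 'memQuotaQuery', 'preparedPlanCacheEnabled' and 'txnLocalLatchesEnabled'
# (an illustrative sketch, using the previous chart defaults), the 'config' section
# above could be extended with:
#   token-limit = 1000
#   mem-quota-query = 34359738368
#   treat-old-version-utf8-as-utf8mb4 = true
#   [prepared-plan-cache]
#   enabled = false
#   capacity = 100
#   [txn-local-latches]
#   enabled = false
#   capacity = 10240000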

replicas: 2
# The secret name of the root password. You can create the secret with the following command:
# kubectl create secret generic tidb-secret --from-literal=root=<root-password> --namespace=<namespace>
@@ -232,37 +274,7 @@ tidb:
image: pingcap/tidb:v3.0.0-rc.1
# Image pull policy.
imagePullPolicy: IfNotPresent
logLevel: info
preparedPlanCacheEnabled: false
preparedPlanCacheCapacity: 100
# Enable local latches for transactions. Enable it when
# there are lots of conflicts between transactions.
txnLocalLatchesEnabled: false
txnLocalLatchesCapacity: "10240000"
# The limit on concurrently executed sessions.
tokenLimit: "1000"
# Set the memory quota for a query in bytes. Default: 32GB
memQuotaQuery: "34359738368"
# The limit on the number of entries in one transaction.
# If TiKV is used as the storage, an entry represents a key/value pair.
# WARNING: Do not set the value too large, otherwise it will have a very large impact on the TiKV cluster.
# Please adjust this configuration carefully.
txnEntryCountLimit: "300000"
# The limit on the total size in bytes of the entries in one transaction.
# If TiKV is used as the storage, an entry represents a key/value pair.
# WARNING: Do not set the value too large, otherwise it will have a very large impact on the TiKV cluster.
# Please adjust this configuration carefully.
txnTotalSizeLimit: "104857600"
# enableBatchDml enables batch commit for the DMLs
enableBatchDml: false
# checkMb4ValueInUtf8 controls whether to check for mb4 characters when the charset is utf8.
checkMb4ValueInUtf8: true
# treatOldVersionUtf8AsUtf8mb4 is used for upgrade compatibility. Setting it to true treats the UTF8 charset of old-version tables/columns as UTF8MB4.
treatOldVersionUtf8AsUtf8mb4: true
# lease is the schema lease duration. It is very dangerous to change, so change it only if you know what you are doing.
lease: 45s
# Max CPUs to use; 0 means use the number of CPUs in the machine.
maxProcs: 0

resources:
limits: {}
# cpu: 16000m
2 changes: 2 additions & 0 deletions deploy/aws/README.md
@@ -165,6 +165,8 @@ The values file ([`./tidb-cluster/values/default.yaml`](./tidb-cluster/values/de

For example, the default cluster specifies `./default-cluster.yaml` as the overriding values file and enables the ConfigMap rollout feature in that file.

To customize the TiDB cluster, see the [cluster configuration](https://pingcap.com/docs-cn/v3.0/reference/configuration/tidb-in-kubernetes/cluster-configuration/) documentation for the details of each parameter, and customize your values file accordingly.
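
For instance, a minimal values file override that tunes the PD and TiDB log configuration might look like the following (a sketch only, the parameter values are illustrative):

```yaml
pd:
  config: |
    [log]
    level = "info"
    [replication]
    location-labels = ["region", "zone", "rack", "host"]
tidb:
  config: |
    [log]
    level = "warn"
```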

In EKS, some values are not customizable in the usual way, including the cluster version, replicas, node selectors, and taints. These variables are controlled by Terraform instead, in favor of consistency. To customize them, edit [`clusters.tf`](./clusters.tf) and change the variables of each `./tidb-cluster` module directly.

### Customized TiDB Operator
26 changes: 23 additions & 3 deletions deploy/aws/tidb-cluster/values/default.yaml
@@ -2,19 +2,39 @@
timezone: UTC

pd:
logLevel: info
config: |
[log]
level = "info"
[replication]
location-labels = ["region", "zone", "rack", "host"]

storageClassName: ebs-gp2
tikv:
logLevel: info
config: |
log-level = "info"

storageClassName: local-storage
syncLog: true
tidb:
logLevel: info
config: |
[log]
level = "info"

service:
type: LoadBalancer
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: '0.0.0.0/0'
service.beta.kubernetes.io/aws-load-balancer-type: nlb
separateSlowLog: true
slowLogTailer:
image: busybox:1.26.2
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 20m
memory: 5Mi

monitor:
storage: 100Gi