TiProxy is a database proxy that is based on TiDB. It keeps client connections alive while the TiDB server upgrades, restarts, scales in, and scales out.
TiProxy is forked from Weir.
When a TiDB instance restarts or shuts down, TiProxy migrates the backend connections on that instance to other instances. In this way, clients are not disconnected.
For more details, please refer to the blogs Achieving Zero-Downtime Upgrades with TiDB and Maintaining Database Connectivity in Serverless Infrastructure with TiProxy.
TiProxy routes new connections to backends based on their scores to keep the load balanced. A backend's score is calculated mainly from the number of connections on it.
Besides, when clients create or close connections, TiProxy also migrates backend connections to keep the backends balanced.
When a new TiDB instance starts, TiProxy detects it and migrates backend connections to the new instance.
TiProxy also performs health checks on TiDB instances and, if any instance goes down, migrates its backend connections to other TiDB instances.
For more details, see Design Doc.
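As a rough illustration of the score-based routing described above, here is a minimal, self-contained sketch; it is not TiProxy's actual implementation, and the backend addresses and connection counts are made up:

```go
package main

import "fmt"

// backend is a simplified view of a TiDB instance as seen by the proxy.
type backend struct {
    addr  string
    conns int // current connection count, used here as the entire score (lower is better)
}

// route picks the backend with the lowest score for a new client connection.
func route(backends []*backend) *backend {
    var best *backend
    for _, b := range backends {
        if best == nil || b.conns < best.conns {
            best = b
        }
    }
    return best
}

func main() {
    backends := []*backend{
        {addr: "tidb-0:4000", conns: 120},
        {addr: "tidb-1:4000", conns: 80},
    }
    b := route(backends)
    b.conns++ // the new connection now counts toward this backend's score
    fmt.Println("new connection routed to", b.addr)
}
```

In TiProxy itself, established connections can also be migrated between backends to rebalance, as described above; see the Design Doc for the full picture.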
TiProxy's role as a versatile database proxy is continuously evolving to meet the diverse needs of self-hosting users. Here are some of the key expectations that TiProxy is poised to fulfill:
In a multi-tenant database environment that supports database consolidation, TiProxy offers the ability to route connections based on usernames or client addresses. This ensures the effective isolation of TiDB resources, safeguarding data and performance for different tenants.
Sudden traffic spikes can catch any system off guard. TiProxy steps in with features like rate limiting and query refusal in extreme cases, enabling you to better manage and control incoming traffic to TiDB.
Ensuring the smooth operation of TiDB after an upgrade is crucial. TiProxy can play a vital role in this process by replicating traffic and replaying it on a new TiDB cluster. This comprehensive testing helps verify that the upgraded system works as expected.
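To make the multi-tenant routing idea above more concrete, the following is a purely hypothetical sketch of matching a connection to a backend by username prefix or client address; the rule structure, field names, and addresses are invented for illustration and are not TiProxy configuration or APIs:

```go
package main

import (
    "fmt"
    "net"
    "strings"
)

// rule maps a tenant, identified by a username prefix or a client CIDR,
// to a dedicated backend (all names here are hypothetical).
type rule struct {
    userPrefix string
    clientCIDR *net.IPNet
    backend    string
}

// pickBackend returns the backend of the first matching rule, or a shared fallback.
func pickBackend(rules []rule, user string, clientIP net.IP, fallback string) string {
    for _, r := range rules {
        if r.userPrefix != "" && strings.HasPrefix(user, r.userPrefix) {
            return r.backend
        }
        if r.clientCIDR != nil && r.clientCIDR.Contains(clientIP) {
            return r.backend
        }
    }
    return fallback
}

func main() {
    _, tenantBNet, _ := net.ParseCIDR("10.0.1.0/24")
    rules := []rule{
        {userPrefix: "tenant_a_", backend: "tidb-a:4000"},
        {clientCIDR: tenantBNet, backend: "tidb-b:4000"},
    }
    fmt.Println(pickBackend(rules, "tenant_a_app", net.ParseIP("192.168.0.5"), "tidb-shared:4000"))
    fmt.Println(pickBackend(rules, "analytics", net.ParseIP("10.0.1.8"), "tidb-shared:4000"))
}
```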
Build the binary locally:

    $ make

Build a docker image:

    $ make docker
Refer to https://docs.pingcap.com/tidb/dev/tiproxy-overview#installation-and-usage.
Refer to https://docs.pingcap.com/tidb-in-kubernetes/stable/deploy-tiproxy.
- Generate a self-signed certificate, which is used for the token-based authentication between TiDB and TiProxy.
For example, if you use openssl:
      openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
        -keyout key.pem -out cert.pem -subj "/CN=example.com"
Put the certificate and key on all the TiDB servers. Make sure all the TiDB instances use the same certificate.
- Update the `config.toml` of TiDB instances:

      security.auto-tls=true
      security.session-token-signing-cert={path/to/cert.pem}
      security.session-token-signing-key={path/to/key.pem}
      graceful-wait-before-shutdown=10
Here, `session-token-signing-cert` and `session-token-signing-key` are the paths to the certificate and key generated in the first step.
Then start the TiDB cluster with this `config.toml`.
- Update the `proxy.toml` of TiProxy:

      [proxy]
      pd-addrs = "127.0.0.1:2379"
Here, `pd-addrs` contains the addresses of all PD instances.
Then start TiProxy:

      bin/tiproxy --config=conf/proxy.toml
- Connect to TiProxy with your client. The default port is 6000:

      mysql -h127.0.0.1 -uroot -P6000
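Because TiProxy speaks the MySQL protocol, applications connect to it the same way they connect to TiDB. For instance, a minimal Go program using the go-sql-driver/mysql driver only needs its DSN pointed at TiProxy; the address and user below follow the example above, so adjust them for your deployment:

```go
package main

import (
    "database/sql"
    "fmt"

    _ "github.com/go-sql-driver/mysql" // MySQL-protocol driver, works with TiDB and TiProxy
)

func main() {
    // Connect through TiProxy on its default port 6000 instead of TiDB's default 4000.
    db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:6000)/")
    if err != nil {
        panic(err)
    }
    defer db.Close()

    var version string
    if err := db.QueryRow("SELECT VERSION()").Scan(&version); err != nil {
        panic(err)
    }
    fmt.Println("connected through TiProxy, server version:", version)
}
```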
This project is for everyone. We ask that our users and contributors take a few minutes to review our Code of Conduct.
TiProxy is under the Apache 2.0 license. See the LICENSE file for details.