Question: why must the Balancer be disabled for sharded clusters? #65
Labels: good first issue

Comments
You have a point. Have you looked into how other products handle this? For example, mongo-connector also supports syncing from sharded clusters.
I'm not familiar with that part.
I looked through the shard migration oplog and found that migration-generated entries carry a flag. If the tool filtered out that class of entries, would a sharded cluster no longer need to disable the Balancer?
The main problem with shards is not the migration flag but global ordering: with two shards running concurrently, global order consistency cannot be guaranteed. That is, when operations on the same shard key land on 2 different shards, the relative order of those operations cannot be preserved.
Thanks for the explanation. So when there are fewer than 3 shards, filtering the oplog is enough to guarantee consistency, right?
@yunyang1991 The balancer can still trigger with fewer than 3 shards, so consistency cannot be guaranteed. The best approach is to coordinate with the business side: shard by hash, disable the balancer, and then run the sync.
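The advice above (hash-sharding plus a disabled balancer) could look roughly like this in mongosh; the database and collection names and the shard key field are placeholders, not taken from this thread:

```
// Sketch (run in mongosh against a sharded cluster).
// "mydb.coll" and the "userId" shard key are hypothetical examples.
sh.enableSharding("mydb")
sh.shardCollection("mydb.coll", { userId: "hashed" })  // hashed shard key spreads writes evenly
sh.stopBalancer()        // no chunk migrations while the sync tool is running
sh.getBalancerState()    // verify: should now report false
```

With a hashed shard key the data is distributed evenly from the start, so there is less pressure to rebalance while the balancer stays off for the duration of the sync.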
1. Another issue mentioned that when a migration happens, the old shard deletes documents while the new shard inserts them, and reading both streams at the same time cannot resolve the ordering problem.
My thought: could we just ignore the oplog entries on the source that are produced by migration (fromMigrate)? Migrations on the source would be ignored by the target, and the target's own balancer would balance the target cluster by itself. That way both clusters could keep the balancer enabled without corrupting the data.
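The filtering idea above can be sketched as a small function. This is a minimal illustration, assuming oplog entries are plain dicts in MongoDB's oplog shape, where migration-generated entries carry `fromMigrate: true`; it is not MongoShake's actual implementation.

```python
def filter_migration_ops(oplog_entries):
    """Return only oplog entries NOT produced by a chunk migration.

    Migration-generated entries carry the "fromMigrate": true flag;
    everything else is an ordinary client write and is kept.
    """
    return [op for op in oplog_entries if not op.get("fromMigrate", False)]

# Example: one normal insert and one insert caused by a chunk migration.
oplog = [
    {"op": "i", "ns": "test.users", "o": {"_id": 1}},
    {"op": "i", "ns": "test.users", "o": {"_id": 2}, "fromMigrate": True},
]
kept = filter_migration_ops(oplog)
print(kept)  # only the entry for _id 1 survives
```

As the maintainer notes above, dropping these entries removes the duplicate writes caused by migration, but it does not fix cross-shard ordering for the same shard key.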
2. If the balancer really must be disabled, then in day-to-day operations, when the source data becomes unbalanced or the cluster needs to scale out, how should a manual rebalance be done?
The plan I have so far: stop shake first, then run the migration on the source; after the migration, do one full sync to the target while recording the timestamp t, then restart shake and resume incremental sync from t. Is there a better approach?
shake cannot handle the moveChunk command, which seems a bit limiting.