Amazon Dynamo for token management #28
Hi Akbar, I couldn't find material on DynomiteDB or on other forums for the following questions:
Any pointers would really help.
Typically, you'll deploy a Dynomite cluster with a somewhat fixed size. However, as mentioned above, there is active work to automate the scale up/down process.
Appreciate your quick response here, @akbarahmed! To give you some context, we typically have an on-premise deployment where the customer starts with N nodes, each with a capacity of C. Depending on the load, a new node N1 will be added with the same capacity C. For approach three (the manual process), we'll probably follow the steps recommended by @Minh in #173. If it isn't too much to ask, do you have a tentative date for when the PR would be submitted?
There's no ETA for when the PR will be ready.
@avinashZato I am not following why Dynomite needs to follow the diurnal pattern of the client. In fact, if you plan to scale up, then you will eventually need to scale down, and generally speaking stateful systems must not follow that pattern because data can get out of sync. If you are using Dynomite as a cache, then it means that you set a short TTL on the data. Hence the Dyno dual-write functionality would probably solve your problem. In other words, you would bring up a new Dynomite cluster that has some extra capacity, dual write to it, and then take the older cluster down once the data have expired. The same approach can work for scaling down.
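As a rough sketch of what client-side dual writes during such a migration could look like, assuming a Jedis-compatible client (Dynomite speaks the Redis protocol): the class, hosts, and method set here are hypothetical illustrations, not the built-in Dyno dual-write configuration.

```java
import redis.clients.jedis.Jedis;

/**
 * Hypothetical illustration of dual writes during a cluster migration:
 * every write goes to both the old and the new Dynomite cluster, while
 * reads stay on the old cluster until cutover. The short TTL means the
 * old cluster can be decommissioned once its data has expired.
 */
public class DualWriteClient {
    private final Jedis oldCluster;
    private final Jedis newCluster;

    public DualWriteClient(String oldHost, String newHost, int port) {
        this.oldCluster = new Jedis(oldHost, port);
        this.newCluster = new Jedis(newHost, port);
    }

    public void setex(String key, int ttlSeconds, String value) {
        // Write to both clusters with the same TTL.
        oldCluster.setex(key, ttlSeconds, value);
        newCluster.setex(key, ttlSeconds, value);
    }

    public String get(String key) {
        // Reads keep hitting the old cluster until the new one is warm.
        return oldCluster.get(key);
    }
}
```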
@akbarahmed I very much like the idea of having the DynamoDB DAO, because it supports cross-region replication and you do not need another system outside of AWS. In case of a region outage, we can move everything to another region and still have the tokens available when we come back to the failed region. This was the idea behind the Cassandra DAO. @diegopacheco was also considering changing from Astyanax to the Datastax Java Driver, which would make the DAO a little more standardized, but I am not sure how far he got with it.
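For illustration, a minimal sketch of what a DynamoDB-backed token DAO could look like, using the AWS SDK for Java v1; the class, table, and attribute names are assumptions for this example, not the actual dynomite-manager interfaces.

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical DAO that stores one token record per Dynomite node. */
public class DynamoDbTokenDao {
    private static final String TABLE = "dynomite_tokens"; // assumed table name
    private final AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

    public void saveToken(String instanceId, String rack, String token) {
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("instance_id", new AttributeValue(instanceId)); // partition key
        item.put("rack", new AttributeValue(rack));
        item.put("token", new AttributeValue(token));
        client.putItem(new PutItemRequest().withTableName(TABLE).withItem(item));
    }

    public String getToken(String instanceId) {
        Map<String, AttributeValue> key = new HashMap<>();
        key.put("instance_id", new AttributeValue(instanceId));
        Map<String, AttributeValue> item =
                client.getItem(new GetItemRequest().withTableName(TABLE).withKey(key)).getItem();
        return item == null ? null : item.get("token").getS();
    }
}
```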
@ipapapa Not very far yet.
@ipapapa This is a work in progress. You can follow the progress here: https://github.com/diegopacheco/dynomite-manager-1/tree/dev-datastax-javadriver
I will file a PR once I'm done. :-)
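For context, a small sketch of what reading token data with the Datastax Java Driver (3.x) looks like, as opposed to the Astyanax API; the contact point, keyspace, and column names are assumptions for illustration and would come from configuration in dynomite-manager.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class DatastaxTokenReader {
    public static void main(String[] args) {
        // Assumed contact point and keyspace for this sketch.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try (Session session = cluster.connect("dyno_bootstrap")) {
            ResultSet rs = session.execute("SELECT key, column1, value FROM tokens");
            for (Row row : rs) {
                System.out.printf("%s %s %s%n",
                        row.getString("key"), row.getString("column1"), row.getString("value"));
            }
        } finally {
            cluster.close();
        }
    }
}
```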
DynamoDB does not actually support bi-directional cross-region replication (only one way). Hence, at this point, I do not see any reason to pursue the DynamoDB DAO. We can revisit this once bi-directional replication is added.
This claims it is fully multi-region; can we reopen this? https://aws.amazon.com/dynamodb/global-tables/
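A sketch of how such a global table could be created from the AWS SDK for Java v1, assuming the per-region tables already exist with DynamoDB Streams enabled; the table name and regions below are placeholders.

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.CreateGlobalTableRequest;
import com.amazonaws.services.dynamodbv2.model.Replica;

public class CreateTokensGlobalTable {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
        // Prerequisite: a table named "dynomite_tokens" must already exist
        // in each listed region, with streams enabled.
        client.createGlobalTable(new CreateGlobalTableRequest()
                .withGlobalTableName("dynomite_tokens")
                .withReplicationGroup(
                        new Replica().withRegionName("us-east-1"),
                        new Replica().withRegionName("us-west-2")));
    }
}
```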
@posix4e Done. It would be really nice to see the DAO being added.
DAO?
DAO = Data Access Object. It is a Java design pattern. More information: https://www.oracle.com/technetwork/java/dataaccessobject-138824.html
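In this context, a DAO is just an interface that hides the storage backend behind a few methods so that Cassandra, DynamoDB, or anything else can be swapped in behind it; a minimal illustrative example (the names are hypothetical):

```java
/**
 * Hypothetical DAO contract for token storage; each backend
 * (Cassandra, DynamoDB, ...) provides its own implementation.
 */
public interface TokenDao {
    void saveToken(String instanceId, String rack, String token);
    String getToken(String instanceId);
    void deleteToken(String instanceId);
}
```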
@ipapapa, @akbarahmed Thanks in advance.
Hi, I submitted a pull request for DynamoDB support; the only piece missing is integrating DynamoDB with Florida. Can you review and share comments, if any? Thanks.
This is a feature proposal to add an Amazon Dynamo DAO for token management.
We are actively working on this feature and will issue a PR when ready.