Directory
The Directory is the service in charge of knowing which Peers are connected to the Bus and what messages they are listening to. The Directory endpoint and the Environment are the only pieces of information a Peer needs to connect to the Bus.
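For example, starting a peer requires only those two pieces of configuration. A minimal sketch, assuming the BusFactory fluent API; the endpoint, environment and peer id below are placeholders, and exact method names may vary between Zebus versions:

```csharp
// A minimal sketch, assuming the BusFactory fluent API; exact method
// names may vary between Zebus versions.
var bus = new BusFactory()
    .WithConfiguration("tcp://directory.example.com:129", "Demo") // Directory endpoint + Environment
    .WithPeerId("MyService.*")                                    // "*" is expanded to a unique suffix
    .CreateAndStartBus();

// The peer is now registered with the Directory and can exchange messages.
```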
When starting up, a Peer connects to the Directory, provides its PeerId and its Environment, and receives the current state of the Bus, which it keeps as a local cache. Any later change to the Bus (a Peer starting, a Peer stopping, etc.) is sent to all Peers so they can update their local copy of the Directory.
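Conceptually, the peer side can be pictured as a cache of peer descriptors, filled from the initial snapshot and then kept up to date by the change notifications. A hypothetical sketch (all types and names below are invented for illustration, not the actual Zebus types):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical peer descriptor; the real Zebus type carries more data.
public record PeerDescriptor(string PeerId, string Endpoint, string[] Subscriptions);

// Hypothetical local view of the Directory, held by every peer.
public class LocalDirectoryCache
{
    private readonly ConcurrentDictionary<string, PeerDescriptor> _peers = new();

    // Applied once at startup, with the full state sent back by the Directory.
    public void LoadSnapshot(IEnumerable<PeerDescriptor> snapshot)
    {
        foreach (var peer in snapshot)
            _peers[peer.PeerId] = peer;
    }

    // Applied for every later change broadcast by the Directory.
    public void OnPeerStarted(PeerDescriptor peer) => _peers[peer.PeerId] = peer;
    public void OnPeerStopped(string peerId) => _peers.TryRemove(peerId, out _);
}
```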
Since Zebus is a peer-to-peer bus, any Peer can connect to any other Peer to send a message. Because we were concerned about isolation between our environments (dev, prod, etc.), we added an environment field to the messages.
This means that the environment configured at service startup is sent in every message: if a Peer connects by mistake to a Peer in another environment, its messages are ignored. The same check applies to the connection procedure to the Directory, so connecting to the wrong environment is not possible.
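The check itself can be pictured as a single guard on reception; a hypothetical sketch (field and type names are invented, the real logic lives inside Zebus's transport layer):

```csharp
// Hypothetical sketch of the environment guard applied to every
// incoming message; field names are invented for illustration.
public record IncomingMessage(string Environment, byte[] Payload);

public static class EnvironmentGuard
{
    // A peer configured for "Dev" that connects to a "Prod" peer by
    // mistake will have all of its messages silently dropped here.
    public static bool ShouldProcess(IncomingMessage message, string localEnvironment)
        => message.Environment == localEnvironment;
}
```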
Zebus is built to be robust, and having a single point of failure (SPOF) as the entry point to the Bus would not be acceptable. This is why the Directory is built as a service that can be made redundant: Peers can be given a list of Directories and round-robin over them. The synchronization of the Directory state between the different Directory instances is handled in the data storage layer.
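A hypothetical sketch of that client-side round-robin (endpoints are placeholders, and the real retry logic in Zebus is more involved):

```csharp
using System.Threading;

// Hypothetical round-robin over redundant Directory endpoints: a failed
// registration attempt simply retries with the next Directory in the list.
public class DirectoryEndpointSelector
{
    private readonly string[] _endpoints;
    private int _next = -1;

    public DirectoryEndpointSelector(string[] endpoints) => _endpoints = endpoints;

    public string NextEndpoint()
    {
        // Cast to uint so the index stays valid even after int overflow.
        var index = (uint)Interlocked.Increment(ref _next) % (uint)_endpoints.Length;
        return _endpoints[index];
    }
}

// Usage:
// var selector = new DirectoryEndpointSelector(
//     new[] { "tcp://directory-1:129", "tcp://directory-2:129" });
```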
Two storage layers are provided out of the box:
A naive "in memory" implementation has been provided since the first release. It does not allow Directories to be replicated or restarted, and should only be used for testing purposes.
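As an illustration of what this layer stores, a hypothetical in-memory repository (reusing the PeerDescriptor record sketched above; the actual Zebus storage interface differs):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Hypothetical in-memory Directory storage. All state lives in this
// process, which is why it cannot be replicated or survive a restart.
public class InMemoryPeerRepository
{
    private readonly ConcurrentDictionary<string, PeerDescriptor> _peers = new();

    public void AddOrUpdatePeer(PeerDescriptor peer) => _peers[peer.PeerId] = peer;
    public void RemovePeer(string peerId) => _peers.TryRemove(peerId, out _);
    public IReadOnlyList<PeerDescriptor> GetAllPeers() => _peers.Values.ToList();
}
```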
The implementation we use in production relies on Cassandra as a backend to handle the synchronization and distribution of the Directory state. It allows the Directory to be distributed, preventing it from being a single point of failure. This implementation was released in version 3.2.1.