Drop the consumer_manager module completely #84
Conversation
During …
Other than that, great work :) I think that practice has shown that having a single consumer per client is enough, with the added benefit of a simpler overall design.
Indeed, the Server's shutdown flow has been modified while it shouldn't have been. I'll restore the original behavior shortly. Nice catch, thank you!
There's a non-handled case in …
Currently, a `Client` instance is allowed to have more than one `Consumer` instance associated with it. As the respective ddoc (`design-rafka-rethinking.rst`) details, there is no actual reason to maintain this one-to-many relationship between Clients and Consumers, which, among other things, entails a lot of extra logic in the source code. This commit drops support for multiple `Consumer` instances per `Client`; from now on, a `Client` instance can be mapped to a single `Consumer` only. In order to drop this functionality, the following modifications have been applied:

- drop the `Client.consumers` map completely
- drop the `Client.consByTopic` map completely
- (temporarily) introduce the `consID` field on the `Client` struct
- replace the `ConsumerByTopic` function with the `fetchConsumer` equivalent

As mentioned, the `consID` field is temporary; it is mainly used to facilitate the transition to a `ConsumerManager`-free Rafka server.
The `ConsumerID` type semantically belongs to the consumer module. This commit moves the definition from `consumer_manager.go` to the `consumer.go` module, and also changes the `Consumer.id` field to the `ConsumerID` type, as it should be.
This commit introduces some major changes in Rafka internals, as the `design-rafka-rethinking.rst` ddoc details. In short, the `consumer_manager.go` module, and all the respective logic, is completely dropped from the source code. Subsequently, the handling of `Consumer` instances will be shared between the `Client` and `Server` instances. To do so, we've had to make the `Client` struct aware of the server's `Context` as well as the Rafka configuration file. Since `Client` becomes the authoritative handler of `Consumer` objects, those two extra fields are essential to properly register and de-register a `Consumer` instance via the `Client` object. Moreover, the temporary `consID` field of `Client` is removed and replaced by the `consumer` one, which is a reference to the `Client`'s `Consumer` object. Also, a new function named `registerConsumer` is introduced in `Client`, which will be used to create a new `Consumer` instance upon request. The `Consumer` struct is extended with a new field containing the parent's (aka `Client`'s) cancel context, which will allow us to signal a `Consumer` instance about the Client's termination; a task which was performed until now by the `ConsumerManager` handler.
When the server is requested to shut down, we need to wait for all active consumers to close prior to stopping the server, a task which was among ConsumerManager's responsibilities. Now that the respective module is dropped, we make the server the authoritative controller of this task via a `sync.WaitGroup` struct.
A server shutdown operation may last longer than expected in case the Rafka server is loaded. During this interval we don't want to register any new Consumers to Rafka. This commit introduces a new flag in the server struct named `teardown`, which denotes whether a shutdown request is in progress. If so, no new consumers will be registered and an appropriate message will be returned to the client. Since the `teardown` flag is shared among the Server and its Clients, we had to guard its references using a `sync.RWMutex`. Note: In contrast to Consumers' handling, we allow new Producers to be registered while the server shuts down. Producers are way more lightweight compared to Consumers, so we can start and stop them freely. This way, we also decrease the downtime interval of registered Producers in Rafka.
This new folder will contain all Rafka-specific documents. The already existing `ddocs` folder, along with its contents, is moved under the `docs/designs` subdirectory.
Add a document detailing the server's shutdown flow and describing what should be expected during this process.
Squashed all commits and force-pushed for a last look.
🚀 👏 🎉
This PR implements the Redefine Rafka scope and Drop redundant functionality proposals of the Rafka Rethinking design doc. For more details on the topic, please refer to those documents.
A lot of major changes are introduced by this PR, which are detailed in the respective commit messages. The most important ones are that the `Client` struct becomes aware of the server's `Context` as well as the Rafka configuration file. Since `Client` becomes the authoritative handler of `Consumer` objects, those two extra fields are essential to properly register and de-register a `Consumer` instance via the `Client` object.

Moreover, the `consumer` field is also added to the `Client`, which is a reference to the registered `Consumer` object. Also, a new function named `registerConsumer` is introduced in the `client` module, which will be used to create a new `Consumer` instance upon request. The `Consumer` struct is extended with a new field containing the parent's (aka `Client`'s) cancel context, which will allow us to signal a `Consumer` instance about the Client's termination; a task which was performed until now by the `ConsumerManager` handler.