
Shutdown Kafka consumers and producer when the external stream is stopped #645

Merged
merged 5 commits into develop from kafka/graceful-shutdown-2
Apr 9, 2024

Conversation

zliang-min
Collaborator

PR checklist:

  • Did you run ClangFormat?
  • Did you separate headers into a different section in the existing community code base?
  • Did you surround new code in the existing community code base with proton: starts/ends?

Please write a user-readable short description of the changes:

A follow-up to #643. This change makes sure that when an external stream is stopped, all consumers and the producer of that stream are shut down, and all ad-hoc queries on it fail.
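
For readers skimming the PR, a rough sketch of the intended behaviour. The class and member names below are hypothetical placeholders, not the actual proton classes; the point is only that stopping the stream propagates a shutdown to every consumer and the producer:

#include <memory>
#include <vector>

/// Hypothetical stand-ins for the Kafka consumer/producer wrappers touched by
/// this PR; only the shutdown-related surface is sketched here.
struct Consumer { void shutdown() { /* mark the consumer as stopped */ } };
struct Producer { void shutdown() { /* mark the producer as stopped */ } };

class KafkaExternalStream
{
public:
    /// Called when the external stream is stopped: shut down every consumer
    /// and the producer, so that in-flight and future ad-hoc queries on this
    /// stream fail instead of keeping dead Kafka handles alive.
    void stop()
    {
        for (auto & consumer : consumers)
            consumer->shutdown();
        producer->shutdown();
    }

private:
    std::vector<std::shared_ptr<Consumer>> consumers;
    std::shared_ptr<Producer> producer;
};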

    std::string name() const { return rd_kafka_name(rk.get()); }

private:
    void backgroundPoll(UInt64 poll_timeout_ms) const;

    klog::KafkaPtr rk {nullptr, rd_kafka_destroy};
    ThreadPool poller;
-   std::atomic_flag stopped;
+   bool stopped {false};
Collaborator

Isn't this thread-unsafe? One thread calls shutdown while another thread calls consumeBatch.

Collaborator

It appears there is still a race.

Collaborator Author

No, an atomic is not needed here; there won't be a race.

The stopped flag is for telling the Consumer (and the Producer in the case below) that it's already stopped. When we set stopped to true, we don't do anything else (e.g. we don't destroy anything), so there is nothing to guard with it, i.e. there is no such logic in our code:

if (!stopped)
{
    stopped = true;
    /// do something
}
else
{
    /// do something else
}

So there is no race.

The purpose is to let operations on the Consumer fail, but they don't have to fail immediately. Take your case as an example: if one thread calls shutdown and another calls consumeBatch, it's fine; it doesn't matter which one wins. Even if consumeBatch wins, nothing bad happens, and the next time consumeBatch is called it will fail as expected, and then KafkaSource will stop reading more data. Also, we don't even need to guard shutdown; it can be called multiple times with no side effects.

Please let me know if this does not make sense to you.
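
To make this concrete, here is a minimal self-contained sketch of the pattern being described (hypothetical names, using std::atomic_flag as the code ends up doing after the later revert): shutdown() only sets the flag and destroys nothing, so it is idempotent, and consumeBatch() fails on its next call once it observes the flag:

#include <atomic>
#include <stdexcept>

class Consumer
{
public:
    /// Idempotent: only sets the flag and destroys nothing, so calling it
    /// multiple times, or from several threads, has no extra side effects.
    void shutdown() { stopped.test_and_set(); }

    bool isStopped() const { return stopped.test(); }  /// test() is C++20

    /// If shutdown() wins the race this fails immediately; if consumeBatch()
    /// wins, one more batch may be read and the *next* call fails, which is
    /// the "does not have to fail immediately" behaviour described above.
    void consumeBatch()
    {
        if (isStopped())
            throw std::runtime_error("consumer is stopped");
        /// ... poll librdkafka for a batch of messages ...
    }

private:
    std::atomic_flag stopped;  /// default-initialized to clear since C++20
};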

Collaborator Author

Although it doesn't have any impact on the current implementation, I reverted it back and now use std::atomic_flag.

Collaborator

The race I meant is not on the stopped data member but at a higher level:

  1. The client calls isStopped(), which returns false, then a context switch to 2)
  2. The consumer is then stopped; switch back to 1)
  3. The client has already checked that the consumer is not stopped, but it actually is, and continues operating on a stopped consumer? (See the sketch below.)
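
For illustration, a compact sketch of that interleaving, reusing the hypothetical Consumer from the earlier sketch: the isStopped() check and the subsequent call are two separate steps, so the consumer can be stopped in between; the client only avoids operating on a dead consumer if the operation itself re-checks the flag and fails.

#include <atomic>
#include <stdexcept>
#include <thread>

/// Same hypothetical Consumer as in the earlier sketch, trimmed down.
class Consumer
{
public:
    void shutdown() { stopped.test_and_set(); }
    bool isStopped() const { return stopped.test(); }
    void consumeBatch()
    {
        if (isStopped())
            throw std::runtime_error("consumer is stopped");
        /// ... poll for a batch ...
    }

private:
    std::atomic_flag stopped;
};

int main()
{
    Consumer consumer;

    /// Client thread: the check-then-act from steps 1)-3) above.
    std::thread client([&]
    {
        if (!consumer.isStopped())              /// 1) observes "not stopped"
        {
            /// 2) the other thread may call shutdown() right here
            try
            {
                consumer.consumeBatch();        /// 3) may run against a consumer
            }                                   ///    that is already stopping,
            catch (const std::runtime_error &)  ///    or fail on the re-check
            {
            }
        }
    });

    consumer.shutdown();                        /// stops the consumer concurrently
    client.join();
}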

Collaborator

Same for the producer; we have run into this problem before.

src/Storages/ExternalStream/Kafka/Consumer.cpp (outdated, resolved)
@zliang-min force-pushed the kafka/graceful-shutdown-2 branch from 5846533 to c0511fe on April 7, 2024 18:58
@zliang-min
Collaborator Author

@chenziliang @yl-lisen I reverted the change and use std::atomic_flag in Consumer and Producer again, please take another look. Thanks.

@jovezhong changed the title from "Shutdow Kafka consumers and producer when the external stream is stopped" to "Shutdown Kafka consumers and producer when the external stream is stopped" on Apr 8, 2024
@chenziliang merged commit 996c5f3 into develop on Apr 9, 2024
21 checks passed
@jovezhong
Contributor

(Jove Github Bot) added it to the current sprint.

@jovezhong
Contributor

(Jove Github Bot) moved this ticket out of the GitHub project (up to 1200 tickets for one project).
