feat: adds pull based metrics exporter #271
Conversation
Codecov Report
@@             Coverage Diff              @@
##               main     #271      +/-  ##
============================================
- Coverage     80.46%   79.22%    -1.24%
- Complexity     1209     1230       +21
============================================
  Files           106      110        +4
  Lines          4663     4823      +160
  Branches        435      439        +4
============================================
+ Hits           3752     3821       +69
- Misses          710      798       +88
- Partials        201      204        +3
@surajpuvvada @laxmanchekka @rish691 I am still working on adding tests to this PR, but it is open for review for early input.
Map<String, Object> baseProperties = new HashMap<>();
baseProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "hypertrace-metrics-exporter");
baseProperties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
baseProperties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
As mentioned in the description, manual offset commits will be handled in the second iteration.
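For the follow-up discussion, a minimal sketch of what the manual-commit variant could look like (the topic name and processing hook are placeholders, and it assumes baseProperties also carries bootstrap servers and byte-array deserializers):

import java.time.Duration;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

void runWithManualCommits(Map<String, Object> baseProperties) {
  // Disable auto-commit so offsets advance only after a batch is buffered.
  baseProperties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
  try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(baseProperties)) {
    consumer.subscribe(List.of("otlp-metrics")); // placeholder topic name
    while (!Thread.currentThread().isInterrupted()) {
      ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
      for (ConsumerRecord<byte[], byte[]> record : records) {
        // hand each record off to the in-memory producer here
      }
      consumer.commitSync(); // commit only once the whole batch is handled
    }
  }
}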
private final KafkaConsumer<byte[], byte[]> consumer;
private final InMemoryMetricsProducer inMemoryMetricsProducer;

public MetricsKafkaConsumer(Config config, InMemoryMetricsProducer inMemoryMetricsProducer) {
I am thinking of adding a Testcontainers-based test as a follow-up; for now I have added a test for the most important path.
@rish691 @surajpuvvada @laxmanchekka can you have a look?
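For context on that follow-up, a rough sketch of what a Testcontainers-based test could look like (class name, image tag, and assertions are placeholders, not part of this PR):

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

class MetricsKafkaConsumerIntegrationTest {

  @Test
  void consumesMetricsIntoInMemoryProducer() {
    try (KafkaContainer kafka =
        new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))) {
      kafka.start();
      String bootstrapServers = kafka.getBootstrapServers();
      // Build the exporter Config against bootstrapServers, instantiate
      // MetricsKafkaConsumer, publish a serialized ResourceMetrics proto to the
      // input topic, then assert it becomes visible via InMemoryMetricsProducer.
    }
  }
}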
}

public void run() {
  while (true && !Thread.currentThread().isInterrupted()) {
Is `true` needed here?
Will address this in a follow-up PR.
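For reference, since `true && x` always evaluates to `x`, the follow-up change is just:

// Equivalent loop without the redundant constant:
while (!Thread.currentThread().isInterrupted()) {
  // poll Kafka and buffer the records as before
}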
List<MetricData> metricData = OtlpProtoToMetricDataConverter.toMetricData(rm);
boolean result = false;
while (!result) {
  result = inMemoryMetricsProducer.addMetricData(metricData);
If the in-memory queue is full, do we wait and loop until the queue is drained by a Prometheus pull?
Yes, we wait until there is enough space to add all the metric data associated with the current ResourceMetrics. This can delay the Kafka poll by at least one Prometheus pull interval, so for now we need to set the consumer session timeout higher than the Prometheus pull interval (polls should happen within roughly 1/3 of the session timeout).
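As a possible refinement for a later iteration (a sketch, not what this PR does): backing off briefly between attempts would avoid a hot spin while Prometheus drains the queue.

// Sketch: retry with a short sleep instead of spinning on addMetricData.
boolean added = false;
while (!added && !Thread.currentThread().isInterrupted()) {
  added = inMemoryMetricsProducer.addMetricData(metricData);
  if (!added) {
    try {
      Thread.sleep(100); // backoff interval is an arbitrary placeholder
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // preserve interrupt status and exit
    }
  }
}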
lgtm
Description
As part of ticket hypertrace/hypertrace#304, this PR adds:
Testing
Checklist: