Dogstatsd metrics exporter #326
Conversation
A few minor nits.
Although I've noted a bunch of TODOs both in the code and the PR description, I think it would be appropriate to make incremental progress by merging this (as it supports basic functionality); it will also help us decide on the future SDK refactorings discussed in #334. @paivagustavo Please take another look.
I'd like to go on record as saying that I believe this PR, which introduces both a common statsd support package and a "dogstatsd" exporter, does not constitute support for a specific commercial vendor. The statsd ecosystem is old and wide, and the dogstatsd syntax is supported by open-source systems like Veneur. Therefore I would exclude it from attempts to separate out a specific vendor's exporter from the default SDK and this repo. There is a TODO in this code to see whether DataDog opens up about the "d" format for reporting approximate distributions over statsd. Assuming they do open it up, we can add support here, but we won't support formats in this repository that are not widely seen as open source.
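For context, the dogstatsd syntax under discussion is a small extension of the plain statsd line format. A tiny illustration follows (the metric name and tags are invented for the example):

```go
package main

import "fmt"

func main() {
	// Plain statsd counter line: name:value|type
	fmt.Println("requests:1|c")

	// Dogstatsd appends a "|#"-delimited tag section of comma-separated
	// key:value pairs; an optional sample rate goes in a "|@" section.
	fmt.Println("requests:1|c|@0.5|#env:prod,service:api")
}
```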
A single question, but LGTM.
_, _ = buf.WriteString(rec.Descriptor().Name())
}

func (*testAdapter) AppendTags(rec export.Record, buf *bytes.Buffer) {
Why do we need a different Adapter here? Is there any chance that the test Record has labels encoded with another LabelEncoder? (The only difference I could see between the real Adapter and the test Adapter is that one panics and the other doesn't.)
There's now a test for a "plain" statsd exporter adapter that formats tag values as name suffixes. There's also a new ForceEncode function on the statsd label encoder, which tests whether the encoder is the same as itself.
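A minimal, self-contained sketch of that check; every type and method name below is a stand-in for illustration, not the actual API in this PR:

```go
package sketch

// Encoder turns an ordered label set into a wire-format string.
type Encoder interface {
	Encode(labels []KeyValue) string
}

// KeyValue is a stand-in for the SDK's label key/value pair.
type KeyValue struct{ Key, Value string }

// Labels pairs an ordered label set with the encoder that produced its
// cached encoding at checkpoint time.
type Labels struct {
	Ordered []KeyValue
	Encoded string
	Enc     Encoder
}

// ForceEncode reuses the cached encoding only when the record was encoded
// by this exact encoder, and re-encodes otherwise.
func ForceEncode(self Encoder, l Labels) string {
	if l.Enc == self { // same encoder instance: the cached string is safe to reuse
		return l.Encoded
	}
	return self.Encode(l.Ordered)
}
```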
Got it, looks better :)
Your last comment made me take a broad look at how label encoders are dealt with and add more tests; the dogstatsd exporter will now do the right thing even if export records were encoded with a different encoder. For consistency, I renamed DefaultLabelEncoder() to NewDefaultLabelEncoder().
@paivagustavo please review this diff: 4cf2f3c
The LabelEncoder changes look really great. In the future we could think of a way to abstract this out to the LabelEncoder itself; that would help export implementations that require a specific encoder.
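One possible shape for that abstraction, purely hypothetical; these names are made up for illustration and are not proposed API:

```go
package sketch

// Encoder turns an ordered label set into a wire-format string.
type Encoder interface {
	Encode(labels []KeyValue) string
}

// KeyValue is a stand-in for the SDK's label key/value pair.
type KeyValue struct{ Key, Value string }

// EncoderRequirer could let an exporter advertise the specific encoder its
// wire format needs, so the SDK encodes each label set once, up front, with
// the right encoder instead of re-encoding at export time.
type EncoderRequirer interface {
	LabelEncoder() Encoder
}
```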
I agree, @paivagustavo. I spent a bit of time thinking about the Right pattern for dealing with specific encoders and felt like it was going to derail this PR. I'm definitely open to such thinking.
Ping cla/linuxfoundation.
This PR adds a statsd-common library, a dogstatsd exporter, an example using the ungrouped Batcher to send metrics tagged with complete LabelSets (including "additional labels"), and tests.
Things this exporter needs to be really complete:
Working on this PR has revealed a minor concern about the CheckpointSet.ForEach method. It's difficult to handle errors: ForEach has no way to stop when an error is encountered, which leaves the exporter potentially dealing with multiple errors. I think we should add an error return to the callback passed to ForEach so that exporting can short-circuit. Still, as was discussed for the stdout exporter, I don't think it's "best" to abort an entire batch of metrics because one Aggregator has an error, so I'm a bit undecided about how to proceed. In this PR as it stands, the exporter will:
I generally don't like to accumulate multiple errors and try to wrap them in a single error, but that's another possible approach. It would be nice to have a standard convention here.
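For concreteness, the signature change proposed above might look like this (a sketch only; Record stands in for the SDK's aggregated export record type):

```go
package export

// Record stands in for the SDK's aggregated export record.
type Record struct{ /* descriptor, labels, aggregator */ }

// CheckpointSet with the error-returning callback proposed above: ForEach
// stops at the first error the callback reports and returns it, letting an
// exporter short-circuit instead of collecting errors record by record.
type CheckpointSet interface {
	ForEach(func(Record) error) error
}
```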