
UDP Client: Split large points #7305

Merged
merged 6 commits on Sep 23, 2016
Conversation

@joelegasse (Contributor) commented Sep 14, 2016

The v2 UDP client will attempt to split points that exceed the configured payload size. It will only do this for points that have a timestamp specified.

This follows on from the earlier work in #6913 and #7288.
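For example, a caller would exercise this with something like the following sketch against the client/v2 API; the address, measurement, fields, and payload size here are illustrative only, not taken from this PR:

```go
package main

import (
	"log"
	"time"

	client "github.com/influxdata/influxdb/client/v2"
)

func main() {
	// PayloadSize caps the size of a single UDP datagram; points that
	// serialize larger than this are candidates for splitting.
	c, err := client.NewUDPClient(client.UDPConfig{
		Addr:        "localhost:8089",
		PayloadSize: 512,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	bp, err := client.NewBatchPoints(client.BatchPointsConfig{Precision: "ns"})
	if err != nil {
		log.Fatal(err)
	}

	// A wide point with many fields can exceed PayloadSize on its own.
	fields := map[string]interface{}{
		"usage_user": 1.0, "usage_system": 2.0, "usage_idle": 97.0,
	}

	// The explicit timestamp matters: only points with a timestamp are
	// split, so the resulting pieces still land on the same point in time.
	pt, err := client.NewPoint("cpu", map[string]string{"host": "a"}, fields, time.Now())
	if err != nil {
		log.Fatal(err)
	}
	bp.AddPoint(pt)

	if err := c.Write(bp); err != nil {
		log.Fatal(err)
	}
}
```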

Required for all non-trivial PRs
  • Rebased/mergeable
  • Tests pass
  • CHANGELOG.md updated
Required only if applicable

You can erase any checkboxes below this note if they are not applicable to your Pull Request.

  • InfluxQL Spec updated
  • Provide example syntax
  • Update man page when modifying a command

Inline review thread on the UDP client config fields:

Addr string

// PayloadSize is the maximum size of a UDP client message, optional
// Tune this based on your network. Defaults to UDPBufferSize.
Contributor:

What is UDPBufferSize?

@joelegasse (Contributor, Author):

A typo that should be updated to UDPPayloadSize :-)
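In other words, the corrected declaration would read roughly as follows; the 512-byte value shown for UDPPayloadSize is an assumption based on the client's documented default:

```go
package client

// UDPPayloadSize is a reasonable default payload size for UDP packets that
// could be travelling over the internet.
const UDPPayloadSize = 512

// UDPConfig is the config data needed to create a UDP Client.
type UDPConfig struct {
	// Addr should be of the form "host:port".
	Addr string

	// PayloadSize is the maximum size of a UDP client message, optional.
	// Tune this based on your network. Defaults to UDPPayloadSize.
	PayloadSize int
}
```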

@joelegasse (Contributor, Author):

My latest commit adds some improvements to the models package to reduce string allocations when writing points.

For now, these are only taken advantage of in the UDP client, but could be used by the HTTP client as well.

I also removed the bytes.Buffer and replaced it with a plain old byte slice.
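As a sketch of that pattern (using a stand-in point type, not the actual models API), serializing by appending into a reused []byte looks like this:

```go
package main

import (
	"fmt"
	"strconv"
)

// point is a stand-in for illustration only; the real change applies to the
// serialization in the models package.
type point struct {
	series string
	value  float64
	ts     int64
}

// appendLine writes p in line-protocol-like form by appending to buf, so one
// backing array can be reused across points instead of growing a bytes.Buffer
// and calling String() for every point.
func (p point) appendLine(buf []byte) []byte {
	buf = append(buf, p.series...)
	buf = append(buf, " value="...)
	buf = strconv.AppendFloat(buf, p.value, 'f', -1, 64)
	buf = append(buf, ' ')
	buf = strconv.AppendInt(buf, p.ts, 10)
	return buf
}

func main() {
	pts := []point{
		{"cpu,host=a", 0.64, 1474641600000000000},
		{"cpu,host=b", 0.31, 1474641600000000000},
	}
	buf := make([]byte, 0, 512) // reused payload buffer
	for _, p := range pts {
		buf = buf[:0] // reset length, keep capacity: no new allocation
		buf = p.appendLine(buf)
		fmt.Printf("%s\n", buf)
	}
}
```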

@joelegasse (Contributor, Author):

@phemmer @vlasad @maksadbek @sebito91

You've all been involved with this issue for a while now. I think this current change is good enough to be merged. I would really appreciate any testing you could do with your data that would stress the point splitting behavior while ensuring correctness. I've added some changes that should increase performance over previous iterations: mostly memory re-use and in-place scanning, rather than the decode-reencode process that was happening previously.
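To make the behavior under test concrete, here is a deliberately simplified sketch of the splitting idea only: a point with an explicit timestamp is divided into smaller points that keep its series key and timestamp. It ignores line-protocol escaping and the in-place, allocation-avoiding techniques in the actual change.

```go
package main

import (
	"fmt"
	"strings"
)

// splitFields divides one serialized point of the form "key fields timestamp"
// into several smaller points that fit maxSize, all sharing the same key and
// timestamp. Escaped spaces/commas are not handled; illustration only.
func splitFields(line string, maxSize int) []string {
	parts := strings.SplitN(line, " ", 3)
	if len(parts) != 3 {
		return []string{line} // no timestamp: never split
	}
	key, fields, ts := parts[0], parts[1], parts[2]

	base := len(key) + len(ts) + 2 // key + two separating spaces + timestamp
	var out []string
	var cur []string
	curLen := base
	for _, f := range strings.Split(fields, ",") {
		add := len(f)
		if len(cur) > 0 {
			add++ // separating comma within the field set
		}
		// Never flush an empty payload: a single field that cannot fit is
		// still emitted as one oversize point.
		if len(cur) > 0 && curLen+add > maxSize {
			out = append(out, key+" "+strings.Join(cur, ",")+" "+ts)
			cur, curLen = cur[:0], base
			add = len(f)
		}
		cur = append(cur, f)
		curLen += add
	}
	if len(cur) > 0 {
		// Emit the remaining fields as the final point.
		out = append(out, key+" "+strings.Join(cur, ",")+" "+ts)
	}
	return out
}

func main() {
	line := "cpu,host=a usage_user=1,usage_system=2,usage_idle=97 1474641600000000000"
	for _, p := range splitFields(line, 60) {
		fmt.Println(p) // two datagrams, each within the 60-byte budget
	}
}
```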

I still need to update the docs for the client to describe the splitting behavior, along with the reasoning behind the various decisions we've come to over the past 2.5 months :-)

@sebito91 (Contributor):

@joelegasse thank you sir, we'll test this out now and get back to you ASAP.

@jwilder (Contributor) commented Sep 23, 2016

@sebito91 Did you get a chance to test this out?

vlasad and others added 5 commits September 23, 2016 13:22
  • The v2 UDP client will attempt to split points that exceed the configured payload size. It will only do this for points that have a timestamp specified.
  • This change also updates the UDP client to take advantage of these improvements, as well as some code review changes.
@joelegasse force-pushed the jl-udp-split branch 2 times, most recently from fb0004b to 879378e on September 23, 2016 at 17:24
Inline review thread on the updated client documentation:

The UDP client now supports splitting single points that exceed the configured
payload size. The logic for doing so is listed here, starting with an empty payload.

1. If a point would cause the current payload to exceed the configured size,
Contributor:

s/would/would not/?

@sebito91 (Contributor):

Apologies for the delay, but this works well for us!

@timhallinflux added this to the 1.1.0 milestone on Dec 19, 2016