feat(inputs.modbus): Optimise grouped requests #11106
Conversation
Thanks for tackling this issue @TimurDela! Well spotted, and you are completely right that this needs fixing.
I have some comments in the code, but for the background: we introduced the omit flag to give the user more control over what is actually sent to the device. There might be strange behaviors; we have, for example, seen devices that require reading coils starting from address zero, and there might be others with similar constraints.
Therefore, I would not just skip leading omits, as they might be required, but keep the starting address. We should omit the field assignment though...
Furthermore, I would filter non-empty requests (the ones filling at least one field) at the end of groupFieldsToRequests, as this simplifies the logic. What do you think?
plugins/inputs/modbus/modbus_test.go
Outdated
require.Equal(t, uint16(2), modbus.requests[1].holding[0].address)
require.Equal(t, uint16(4), modbus.requests[1].holding[0].length)
I think we should not mess with the addresses here (see my overall comment) but rather check that we only scrape one field out of this request.
Suggested change:
- require.Equal(t, uint16(2), modbus.requests[1].holding[0].address)
- require.Equal(t, uint16(4), modbus.requests[1].holding[0].length)
+ require.Len(t, modbus.requests[1].holding[0].fields, 1)
plugins/inputs/modbus/modbus_test.go
Outdated
require.NoError(t, modbus.Init())
require.NotEmpty(t, modbus.requests)
require.NotNil(t, modbus.requests[1])
require.Len(t, modbus.requests[1].holding, 0)
It's a matter of taste, but I like require.Empty as it is more expressive.
Suggested change:
- require.Len(t, modbus.requests[1].holding, 0)
+ require.Empty(t, modbus.requests[1].holding)
plugins/inputs/modbus/request.go
Outdated
- current.fields = append(current.fields, f)
+ if isNonEmptyRequest(current) {
+ 	current.fields = append(current.fields, f)
+ } else {
+ 	current = newRequest(f, tags)
+ }
How about taking another approach here? For the "starting with omit" case we could simply modify newRequest to omit the field, like this:
func newRequest(f field, tags map[string]string) request {
r := request{
address: f.address,
length: f.length,
fields: []field{},
tags: map[string]string{},
}
if !f.omit {
r.fields = append(r.fields, f)
}
// Copy the tags
for k, v := range tags {
r.tags[k] = v
}
return r
}
and later check the requests. See below.
plugins/inputs/modbus/request.go
Outdated
if isNonEmptyRequest(current) {
	requests = append(requests, current)
}
Instead of all of this I would just check if the request will fill a field and delete it otherwise
Suggested change:
- if isNonEmptyRequest(current) {
- 	requests = append(requests, current)
- }
+ requests = append(requests, current)
+
+ // Filter out completely empty requests
+ nonEmptyRequests := make([]request, 0, len(requests))
+ for _, request := range requests {
+ 	if len(request.fields) != 0 {
+ 		nonEmptyRequests = append(nonEmptyRequests, request)
+ 	}
+ }
and then return nonEmptyRequests below.
Hi @srebhan, if I use the code I pushed, my configuration leads to 7 requests. If I use your suggested code and throw out only the empty requests, I end up with 21 requests, which becomes incompatible with how often I need to log the data. Could we think of a way to have both behaviours and let users decide which one is more appropriate for their needs?
@TimurDela how do you get 21 requests? In my suggestion I'm dropping completely empty requests, so I cannot think of any configuration where 50 registers in 1300 entries cause 21 requests. Can you please provide an example? The only change I request to your code is to not drop leading omits in requests. I can see that this makes sense, but it would need an additional config option for the user to explicitly enable it. I'm open to discussing this. My suggestion is to split this PR: please move the fix for the bug of not omitting the first field to its own PR and keep the "empty-filtering" in this PR. Does that make sense to you?
I will split this pull request in two as you asked. Please have a look at #11202.
My suggestion for how to proceed would be to first get #11202 ready. In the meantime, I want to ask you to fix the "omit first field" bug, either in this PR or split out into another one. If you furthermore need the above optimization, you should do it in this PR. Please guard the "new" behavior with a flag (e.g. ...). How does that sound to you?
Force-pushed 638f0d7 to cf8a047
Hello @srebhan, is there something else you want me to modify in this pull request?
@TimurDela after thinking a bit longer about your request, I realized that there are solutions closer to the optimum and came up with #11273. Can you give it a try so we can discuss how to proceed? Your test is integrated and the result matches your expectations. However, you can optimize further by setting ... Looking forward to your comments.
Thank you @srebhan.
I have left a few ...
Force-pushed 575eb44 to 46feed7
Can you please use #11273 as a basis for your implementation by adding another strategy there? I intentionally kept my PR open for additional strategies to make it extensible.
Just to be clear, do you want me to rebase this pull request on top of srebhan:modbus or do you want me to add my changes into pull request #11273?
I want to add your optimization scheme on top of #11273. Give it another name (e.g. ...).
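A minimal sketch of what adding the scheme as a selectable strategy could look like, assuming the plugin-internal field and request types from the snippets above; the strategy value "shrink", the groupFields helper, and the signatures of both grouping functions are illustrative assumptions, not the actual code of #11273:
// Hypothetical dispatch between grouping strategies; names and
// signatures are placeholders for illustration only.
func groupFields(strategy string, fields []field, tags map[string]string) []request {
	switch strategy {
	case "shrink":
		// New scheme: drop omitted registers at the request edges so
		// requests become shorter and can be packed more tightly.
		return groupFieldsToRequestsOptimised(fields, tags)
	default:
		// Existing behaviour: keep omitted registers inside requests
		// and only filter out completely empty requests.
		return groupFieldsToRequests(fields, tags)
	}
}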
remove extra empty line Co-authored-by: Joshua Powers <powersj@fastmail.com>
Co-authored-by: Sven Rebhan <36194019+srebhan@users.noreply.github.com>
Force-pushed e881f7e to 89a2cdf
Thank you @srebhan. This PR is now rebased and the outputs have been moved to ... Tests fail for the Windows test suite, but the error messages mention plugins I have not modified (google_cloud_storage and statsd), so I am not sure what to do about it.
The code looks quite good now @TimurDela! I have some smaller issues like naming and documentation style. However, the size of the test diff worries me. Can you please try to get this into a more readable state, e.g. by reverting modbus_test.go to the current master and re-adding your tests in a suitable place?
Thank you @srebhan, I have now implemented your suggestions and cleaned up the test file.
Download PR build artifacts for linux_amd64.tar.gz, darwin_amd64.tar.gz, and windows_amd64.zip.
Hello @srebhan, I just resolved new conflicts with master, but I would really appreciate it if we could merge this PR before the next change to modbus so I don't have to do that again.
This looks really beautiful @TimurDela! Thanks for all your effort and the valuable contribution!
Thank you @srebhan for the very useful feedback. I am really happy this is going to be on the main branch.
Thank you for the PR and your patience!
Simple modification to introduce an optimise_requests flag in the configuration setup that, when set to true, switches to the optimised request grouping. As a result, the total number of requests is generally lower, also improving performance.
Required for all PRs:
This adds optimise_requests to requestDefinition. When it is set to true, groupFieldsToRequests switches to groupFieldsToRequestsOptimised. This results in fewer and shorter requests when the user specifies many (several hundred) fields, many of which are omitted, hence increasing performance.
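To make the intended effect concrete, here is a rough sketch of trimming omitted registers from the edges of a request, assuming the field and request types shown earlier in this thread; the helper name trimOmittedEdges and its exact logic are hypothetical and not the code merged in this PR:
// trimOmittedEdges shrinks a request so that its register window starts at
// the first non-omitted field and ends at the last one. Omitted fields in
// between are kept, since the block is still read in a single request.
// It assumes r.fields is ordered by ascending register address.
func trimOmittedEdges(r request) (request, bool) {
	first, last := -1, -1
	for i, f := range r.fields {
		if !f.omit {
			if first < 0 {
				first = i
			}
			last = i
		}
	}
	if first < 0 {
		// The request would not fill a single field; drop it entirely.
		return request{}, false
	}
	r.fields = r.fields[first : last+1]
	lastField := r.fields[len(r.fields)-1]
	r.address = r.fields[0].address
	r.length = lastField.address + lastField.length - r.address
	return r, true
}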