Derivative sensor doesn't update correctly for non-changing values #31579
+1
Hey there @afaucogney, mind taking a look at this issue as it's been labeled with an integration (
having the same on my server
Any news regarding the issue? The component is in production but doesn't work.
same problem here
Does your sensor update its value even if it doesn't change? I mean, do you have a real values table with several items of the same value, or just a single constant state? Could you please reproduce the issue and send the values table? I can add this to a test and see what happens. I need to understand whether this is an issue in the derivative component's algorithm or in its integration. BTW, have you tried adding a "time_window"? If someone has any idea, feel free to comment! @basnijholt @dgomes
The statistics sensor runs periodically regardless of whether there are any changes in the source sensor. This also means that it doesn't track changes during the period. The derivative sensor (and the integration sensor on which it is based) tracks changes in the source sensor. That means that if the source sensor doesn't change, the derivative sensor will keep its value for long periods of time. One possible solution is to combine both methods: track changes and periodically read the source sensor to detect "no changes".
It DOES NOT work this way, as you can see in my initial screenshots. When the water meter keeps its value (on the screenshot: 7 February, 1:18 AM and later on), which means the first-order derivative is zero, HA continues to show 0.12 l/min.
@Spirituss Are you sure that HA continues to show 0.12, or could it be the chart drawing a line between two points? This is why I asked about the value table.
There are plenty of ways to implement a derivative digitally, unfortunately.
How can I get it from HA? I physically switched off the source sensor for my water-flow derivative, but HA still shows 0.16 l/min in the states list, no matter what the chart shows.
That does not explain the issue. Digital calculation of a derivative can be inaccurate, but when there is no change it must show zero.
This is the point: if you switch off the sensor, the value is not updated, so it keeps the same value, but its timestamp is not updated either. So everything is normal from my side. Did you try the time_window? I'm sure this is what you are looking for!
When you say 'there is no change': what is the difference between "no change" and "waiting for the next value"? How can the component know, before getting the new value:
In your context it is maybe obvious, but I designed it to provide derivative values indexed on sensor values, nothing else, because my sensor does not have any update frequency (or I do not want to care about one). If your case doesn't work with time_window, feel free to open a PR and we can look into it.
Possibly, that is what I need. I read the manual, but it's not clear how it works. What value should I use for the time_window? Is it the time delta the derivative uses to calculate the increment? In that case I think it's better to use the interval at which my sensor is updated (15 sec).
I don't agree with you, since we are talking about the physical concept of a 'derivative', which means the speed of value change over time, no matter what the reason for such changes is, be it "no change" or "waiting for the next value". This is the nature of any derivative. But once you start caring about the reason for the changes, you are talking about statistics, not a derivative. Home Assistant already has a statistics sensor which works exactly the way you describe.
The irony is that after one of the issues filed against the statistics sensor, its behaviour was updated and it can now work just like a derivative, while the derivative component started to work like statistics.
I added time_window to my sensors and nothing has changed. The derivatives show the same value as before.
There has been no news for about a month. Do you still support the component?
Hi @Spirituss, I still support the component, and of course PRs are also welcome.
Maybe you misconfigured it; please post your configuration and the output. An extract of the data table would also be helpful.
This happens because, to calculate the derivative, it uses the last known values, and when new data comes in, older values (outside the time window) are discarded. In your case, your sensor didn't emit any data for over a day, so the derivative is based on data from a day ago. I am not sure whether we want to change this logic. If we did, the following happens: suppose you have a time window of 15 seconds, and data comes in every (let's say) 20 seconds. Then the derivative could never be calculated, because you would only ever have one point.
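The one-point problem described above can be demonstrated with a small sketch. `prune_window` is a hypothetical helper (not the integration's actual code) that mimics discarding values older than the time window:

```python
from datetime import datetime, timedelta

def prune_window(states, now, time_window):
    # Hypothetical helper mirroring the pruning described above:
    # keep only (timestamp, value) pairs that fall inside the time window.
    return [(t, v) for (t, v) in states if (now - t).total_seconds() <= time_window]

# A 15 s window with a sensor that reports every 20 s: pruning always leaves
# a single point, so no slope can be computed from the window contents alone.
t0 = datetime(2022, 1, 1, 12, 0, 0)
states = [(t0, 10.0), (t0 + timedelta(seconds=20), 11.0)]
window = prune_window(states, t0 + timedelta(seconds=20), 15)
# len(window) == 1
```

With a window shorter than the reporting interval, every pruning pass leaves only the newest sample, which is the situation the comment warns about.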
What is the point of the time_window parameter in this case? That way it looks ridiculous.
This is the point! Of course, when no data has been received during the last 20 seconds with a 15-second update interval, in terms of approximation it definitely means that the flow is zero!
@basnijholt @afaucogney I believe the problem lies in the plotting. Since people usually use such sensors for plotting, it's highly desirable to see when change stops. That means that if the time window has been exceeded and no new values are present, the sensor should report 0, null, undef, whatever, but not the last value. The statistics integration does this by setting a timer for the time-window duration and resetting the sensor's value when it expires.
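The timer-and-reset mechanism described above can be sketched in plain Python. This is a toy illustration (the class name and structure are invented here, not the statistics integration's real code), using `threading.Timer` in place of Home Assistant's scheduling helpers:

```python
import threading

class ZeroingSensor:
    # Toy sketch of the idea above: fall back to 0 when no source update
    # arrives within `time_window` seconds.
    def __init__(self, time_window: float):
        self.time_window = time_window
        self.state = 0.0
        self._timer = None

    def on_update(self, value: float):
        # Record the new value and (re)arm the expiry timer.
        self.state = value
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.time_window, self._reset)
        self._timer.start()

    def _reset(self):
        # Window expired with no new update: report zero.
        self.state = 0.0
```

Each update re-arms the timer, so the value only drops to 0 once the source has been silent for a full window.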
I fully agree with @divanikus: the problem is that the function shows a value at the current moment, but that value is calculated from past sensor values. In order to detect that the value is out of date, the integration can use the time-window parameter.
Seriously? Without a solution?
@afaucogney already posted:
|
@dgomes I don't know how to explain even better, but |
I have this issue as well; sadly #45822 is closed, though I think @popperx's issue has a better description of the problem and the solution. The discussion centers, I think, around this comment:
Because, even though this is true, the approach in the code seems to use the incorrect approximation (disclaimer: based on a quick read, I could be misunderstanding the code). To calculate the derivative, the code appears to assume a linear increase between data points, which I think is reasonable. However, when there is no data point, this assumption is dropped, so the code suddenly uses different logic. Put differently, the time_window appears to be a maximum, not a constant.
I think you can do better. The old state to use within the time window is rarely, if ever, the actual old state; it should be the interpolated state at ten minutes in the past. In essence, I think the list should always contain at least one value outside of the time window and use that to interpolate the starting value. I assume this will have the effect of 'dampening' all values, not just these spikes, but it will make the sensor much more predictable and the window more meaningful.
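The interpolation suggested above can be sketched as follows. `interpolated_start` is a hypothetical helper (not existing integration code) that derives the value at the window boundary from the last sample before the window and the first sample inside it:

```python
def interpolated_start(t_prev, v_prev, t_next, v_next, window_start):
    # Linearly interpolate the value at `window_start`, using the last
    # sample before the window (t_prev, v_prev) and the first sample
    # inside it (t_next, v_next). Times are plain floats (seconds).
    frac = (window_start - t_prev) / (t_next - t_prev)
    return v_prev + frac * (v_next - v_prev)

# Last sample before the window: value 10 at t=0; first sample inside: 20 at t=100.
# The window starts at t=40, so the interpolated starting value is 14.
start_value = interpolated_start(0.0, 10.0, 100.0, 20.0, 40.0)
```

Using this interpolated value as the "old state" means the slope is always taken over exactly one window length, which is the constant-window behaviour the comment argues for.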
After my last comment a year ago, I copied the integration as a custom component. In short, I made sure that the actual window respects the minimum. Here's what I've done, based on this version from a year ago. The main differences are:
```python
[...]
now = new_state.last_updated
last = old_state.last_updated
# If it's the first valid data (empty list) or if the last data received
# exceeds the `time_window`, the `_state_list` gets (re)initialized
if (
    len(self._state_list) == 0
    or (now - last).total_seconds() > self._time_window
):
    self._state_list = [
        (old_state.last_updated, old_state.state),
        (new_state.last_updated, new_state.state),
    ]
# Checking if the new value makes the data set too short to respect the
# `time_window`; if so, it's added to the `_state_list`
elif (now - self._state_list[0][0]).total_seconds() < self._time_window:
    self._state_list.append((new_state.last_updated, new_state.state))
# If the new data makes the data set larger than the `time_window`, then the
# same check is made with the second data point. This confirms whether we still
# need another data point to respect the `time_window` or if the window can move
elif (now - self._state_list[1][0]).total_seconds() < self._time_window:
    self._state_list.append((new_state.last_updated, new_state.state))
# Moving the window and adding the new value
else:
    self._state_list.pop(0)
    self._state_list.append((new_state.last_updated, new_state.state))
[...]
```

I went back in time with Grafana, and I think this is about the time when I made the changes. A lot fewer spikes... I wanted to submit these changes to GitHub at the time, but I don't really know how to do it.
I'm a bit surprised you have any spikes at all; have you looked at those data points to see what is going on? In the meantime I've been thinking about this problem a bit more, and I think I have an easier/better solution. We have three issues:
A solution to all, imho, is to average all measured derivatives, weighted by time. It is also easy to implement, because in practice we only need to keep the previously calculated derivative in order to calculate the total. I plan to code this soon, since I don't expect it to be too hard. An additional advantage of this approach is that we could use different weighting, such as an exponential moving average. That way new measurements would have more weight, but you would still have smoothing. It would require keeping all states or averages, though. Here's some quick pseudo-Python code which I think should work:

```python
# derivative is initialized at 0
[...]
# if it is the first data point, return
if old_state is None:
    return
# calculate the linear derivative between new and old state (so this **always** happens)
delta_t = new_state.last_updated - old_state.last_updated
delta_y = new_state.state - old_state.state
new_derivative = delta_y / delta_t
# if delta_t is larger than the time window, just use the new derivative,
# otherwise calculate a weighted average with the old value
if delta_t > self._time_window:
    derivative = new_derivative
else:
    time_left = self._time_window - delta_t
    derivative = (new_derivative * delta_t + self._state * time_left) / self._time_window
self._state = derivative
```

The biggest advantage of this method is that, after the first time window has passed since init, the time window will always be applied as a constant smoothing factor. [edit]
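As a quick sanity check, the weighted-average rule from the pseudo-code above can be exercised as a standalone function with plain numbers (the function name and numeric values here are illustrative, not from the integration):

```python
def weighted_derivative(prev_derivative, new_derivative, delta_t, time_window):
    # Weighted average as in the pseudo-code above: the new linear slope is
    # weighted by its duration, the previous smoothed value by the rest of
    # the window.
    if delta_t > time_window:
        return new_derivative
    time_left = time_window - delta_t
    return (new_derivative * delta_t + prev_derivative * time_left) / time_window

# With a 60 s window: previous smoothed value 0.0, new slope 1.0 over 30 s
# -> (1.0 * 30 + 0.0 * 30) / 60 = 0.5
d = weighted_derivative(0.0, 1.0, 30.0, 60.0)

# If the gap exceeds the window, the new slope is used directly.
d_gap = weighted_derivative(0.5, 2.0, 90.0, 60.0)
```

Note how a long gap (larger than the window) discards all smoothing, which matches the "just use the new derivative" branch of the pseudo-code.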
Since I don't have the tools installed to do a proper pull request, I copied the code and made a custom component with my changes. This is what I ended up with, and it works pretty well:
I hope this helps. |
Regarding the original issue @Spirituss described: I had the same problem that the derivative integration didn't update values when my source sensor values were constant. I use this integration for calculating the power (in kW) based on the energy (kWh) which my energy meters provide.

```yaml
rest:
  - resource: http://192.168.60.11/cm?cmnd=status%2010
    scan_interval: 30
    sensor:
      - name: "Verbrauch Normalstrom"
        state_class: total_increasing
        device_class: energy
        unit_of_measurement: kWh
        value_template: >
          {% set v = value_json.StatusSNS.normal.bezug_kwh %}
          {% if float(v) > 0 -%}
          {{ v }}
          {%- endif %}

sensor:
  - platform: derivative
    source: sensor.verbrauch_normalstrom
    name: "Verbrauch Normalstrom Leistung"
    time_window: "00:03:00"
    unit_time: h
    unit: kW
```

I guess the value updates didn't take place because hass didn't write values of the source sensor to the database. I haven't verified this; I just took a look at the Prometheus metric. I could fix it by adding [...] Just wanted to quickly post this solution in case someone else finds this issue and uses the restful integration. Maybe other integrations provide similar functionality.
Pretty amazing that this issue goes back 2 years. It seems pretty obvious that because the source sensor doesn't update when the value no longer changes, the derivative sensor, which needs more than one data point, doesn't update either until another value gets sent from the source. It seems most people who "fix" this issue do so like the above, with some means to force an update. I'm no different. In my case, I created a template sensor of the source and added an attribute that updates every minute. Then I based my derivative sensor off of this.
Hope that helps someone. |
I assumed this issue to be the same as the one I was having, but apparently it is not fixed? (My issue has definitely been fixed by my pull request.) So, to be clear, the issue is that with no change in the source sensor, the derivative does not change, although of course it should trend to 0 in actuality? I just had a look at a few derivatives I use and noticed that although they are almost zero, they aren't exactly zero and indeed never updated to that. For my applications it doesn't matter, because the last value is always very close to zero, but I can see this might be problematic. I'm a bit surprised that this happens, since the derivative sensor essentially keeps its own history list and doesn't depend on the database. So it must indeed be that a non-change state update is not communicated. I'd have to check, but I expect this is because we use the 'changed' signal, where we could/should use the 'updated' signal (I thought that was already the case, tbh, but I will check).
Yeah, it gets to that last value and doesn't calculate that the derivative is zero until one more value is updated. So it gets close to 0 and is trending that way, but that last final step can take hours (however long it takes for the source sensor to update one more time). It's the same behavior as with the trend integration, which is where I stole the workaround above. https://community.home-assistant.io/t/add-force-update-support-to-template-sensor/106901/2 I presume that to get a derivative of 0, you need two consecutive values of the same number, but the way HA works, unless specified otherwise, is that it won't send a second value from a device until the value changes. So the more I think about it, the more it seems the fault of HA as a whole and of how derivatives work, and not of the code or the integration.
@zSprawl, thanks a lot. That fixed the problem with my DIY gas counter sensor.
Hi, I also faced an issue with the derivative never changing to 0 when the value does not change, but surprisingly it works fine with "raw" sensors, just not with templates. I believe the time window will work around the issue of never going to 0, but something seems to be wrong.
|
I'm astounded that so many are focusing on "how would I implement this" and not starting with the obvious... I understand the concern for "non-reporting sensors", but since HA drops repeated values, it seems best to work with what we have. No data within the window? Report the derivative as 0.0. Only one reading within the window? Report the derivative as 0.0. It almost feels like the derivative helper was not written with HA in mind, since HA by default discards repeated readings, yet a derivative sensor by its nature needs multiple readings, INCLUDING repeated ones.
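The rule proposed above ("fewer than two readings in the window means zero") can be sketched as a small function. This is an illustration of the proposal, not the shipped integration's behaviour; times are plain floats in seconds:

```python
def window_derivative(samples, now, time_window):
    # Sketch of the rule above: keep only readings inside the window, and
    # report 0.0 unless at least two remain.
    inside = [(t, v) for (t, v) in samples if now - t <= time_window]
    if len(inside) < 2:
        return 0.0
    (t_old, v_old), (t_new, v_new) = inside[0], inside[-1]
    return (v_new - v_old) / (t_new - t_old)

# Two readings inside a 60 s window -> the normal slope, (2.5-1.0)/30
d = window_derivative([(0.0, 1.0), (30.0, 2.5)], now=40.0, time_window=60.0)

# Much later, no readings remain inside the window -> report 0.0
d_stale = window_derivative([(0.0, 1.0), (30.0, 2.5)], now=200.0, time_window=60.0)
```

The key property is that the reported value decays to exactly 0.0 on its own once the source goes quiet, instead of freezing at the last computed slope.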
Related to "how do I work around the current state of things": do I just need to use some technique to get HA to log values that haven't changed, either the 'now()' hack, the force_update config, or some other approach?
I absolutely agree with this. If there is no signal within the window, you have to assume a rate of zero and the derivative sensor has to return a state of 0 until it gets a change in signal. Once it gets the new signal, the slope is equal to the change in signal divided by the derivative window. So say you have the following:
The derivative at 0:03:30 for a window of one minute is equal to (1.5-1.0)/(0:03:30 - 0:02:30), because regardless of whether you got a repeated signal of 1.0 from 1:30 to 3:00 and HA discarded it, OR you just didn't get a signal at all, it doesn't matter: you can't know when that last signal changed. The answer is NOT (1.5 - 1.0)/(0:03:30 - 0:01:00), or whatever the slope was the last time there were multiple signals within a window. Also, it seems to me that calculating slopes between every pair of signals is fairly CPU-intensive and unnecessary. Wouldn't it be better to take the slope between the oldest signal and the newest signal and discard signals as they age out of the window? You could assume a pseudo last signal which was read at "null" time until a new signal is received. Then you'd assign a time of "now() minus window" to that signal value and take the slope between it and your current signal value. That's far more efficient than weighted averages of slopes, especially if you have hundreds of signals within your window.
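The "pseudo last signal" rule in the comment above can be written as a short function. This is a sketch of that suggestion (names and values invented for illustration), with times as floats in seconds:

```python
def slope_after_gap(last_value, new_value, new_time, time_window):
    # When the previous signal's true age is unknown (HA discarded the
    # repeats), pin it at `new_time - time_window` and take the slope from
    # there to the new reading, as suggested above.
    pseudo_time = new_time - time_window
    return (new_value - last_value) / (new_time - pseudo_time)

# The comment's example: last known value 1.0 (age unknown), new value 1.5
# arriving at 0:03:30 with a one-minute window -> slope (1.5 - 1.0) / 60 s.
s = slope_after_gap(1.0, 1.5, 210.0, 60.0)
```

Since the denominator is always exactly one window length, this bounds the slope estimate and avoids dividing by a stale interval like (0:03:30 - 0:01:00).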
Any update on this? I see a few workarounds, but they appear to be YAML-based. I've started transitioning away from YAML, since that appears to be the suggested direction going forward.
TLDR: I propose to update this integration as described in the final chapter of this loooong post.

I would like to give my view on the topic, which is based on mathematical insight. Let me first start with a bit of explanation of how I approached this topic.

How I mathematically approach sensor values in HA

Sensor values are non-uniform (not equidistant, i.e. not always with equal time between them) samples of the value of a real-world "function". This means that even if the method by which new values are obtained is a polling method with a regular interval, the samples may still be non-uniform, because HA throws out equal values. Thus, any method we think of to process data should always assume the data is non-uniform: even though we might know that data is checked regularly, we never know when the next different value comes in. Now, based on the sample list [...] So, how do we obtain a best representation
The problem with any method that would fall into category 2 is that it uses future values of [...] Let me pose an assumption, without trying to clarify too much why I think it is true:
The derivative integration

So, back to this integration. In my opinion, there is one property that must be satisfied for this integration to be useful, and that is:
So, based on all of the above, I can now properly explain why I personally have a couple of "issues" with the current implementation of the derivative integration:
Let me give an example of what the derivative samples should look like in my opinion. Assume the samples are
So our derivative samples become: [...] Now let's try to
Wait, what?! That is not even close to the original list of samples! Indeed, but that is due to the
And if we cross-check the times at which the Riemann sum integral integration calculated a new value, it matches 100% with the above list.

My proposed update

So, basically, I would say that this integration needs an update as follows:
@afaucogney Would you be so kind as to read the above rationale and comment on whether you think this is a good improvement to this integration? If so, let me know; I can start working on implementing it on relatively short notice.
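The "use only past samples" view argued in the proposal above amounts to a backward difference: each new sample yields a derivative based solely on itself and its predecessor, never on future data. A minimal sketch (function name and sample values invented for illustration):

```python
def backward_difference(samples):
    # For each consecutive pair of (time, value) samples, emit the slope at
    # the time of the newer sample. Only past information is ever used.
    derivs = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        derivs.append((t1, (v1 - v0) / (t1 - t0)))
    return derivs

# Non-uniform samples; note the flat stretch from t=10 to t=30 correctly
# produces a derivative of exactly 0.0 at t=30.
samples = [(0.0, 0.0), (10.0, 5.0), (30.0, 5.0), (35.0, 15.0)]
d = backward_difference(samples)
# -> [(10.0, 0.5), (30.0, 0.0), (35.0, 2.0)]
```

This behaves like a causal, sample-and-hold derivative: each emitted value is valid from its timestamp until the next sample arrives.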
The problem
I use the derivative sensor to measure the water flow through water counters. When the flow is changing, the derivative shows realistic values. But when the flow becomes zero, the derivative keeps showing the last measured value for a very long period (several hours).
Meanwhile, the 'change' attribute of the statistics sensor becomes zero with zero flow since the last HA update, 0.105 (the value-keeping logic was changed).
Below are the measurements of the derivative and statistics sensors, and the historical values of the water meter itself as proof.
Environment
Home Assistant 0.105.1 (ex. Hass.io)
arch: x86_64
Problem-relevant
configuration.yaml
Traceback/Error logs
No error, but incorrect behaviour.
Additional information
Here are the water meter values, which became zero at 01:18
The derivative sensor (sensor.raw_water_flow) was still showing a non-zero value (0.12 l/min) after 01:18
The statistics sensor (sensor.raw_water_flow_stat) showed zero at 01:18