Scenario
FusionCache currently has 700+ tests (including parameter combinations).
Usually they all pass locally, meaning 100% of them, both on Linux and on Windows.
Problem
Not everything is perfect though, and 2 different issues have been observed:
LOCAL: very rarely, a couple of tests do not pass, seemingly at random
GITHUB ACTIONS: looking at tests running on GitHub Actions, it happens that 4 or 5 of them don't pass
So I added some extra logging and investigated this more, and what came up seems to be all related to a microscopic difference in timing, where in this case "micro" is literally indicative of the problem: the difference in measured time is less than a millisecond (so we are talking about microseconds).
For example, in one test I set up FusionCache with a soft timeout of 1 sec and a factory that runs for 5 sec: by virtue of the soft timeout, the method takes 1 sec, but when measuring it with a Stopwatch, on very rare occasions the reported time is between 998.5 ms and 999.9 ms, so the assertion that checks that the time passed is >= 1 sec fails.
But why is that?
After some research I discovered this.
Basically there may be some microscopic differences in the way time is measured, so that 99%+ of the time all is good, but every now and then it may happen that less time seems to have passed because of measurement errors, therefore we have to take this into account.
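To make the failure mode concrete, here's a minimal, self-contained sketch of the pattern (no FusionCache involved, the class name is illustrative): wait for what should be 1 sec, measure it with a Stopwatch, and check the elapsed time.

```csharp
// Minimal sketch of the failure mode, assuming nothing beyond the BCL:
// wait ~1 sec, then check that at least 1 sec was measured.
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class TimingReproSketch
{
	public static async Task Main()
	{
		var sw = Stopwatch.StartNew();

		// Stand-in for "slow factory cut short by a 1 sec soft timeout".
		await Task.Delay(TimeSpan.FromSeconds(1));

		sw.Stop();

		// 99%+ of the time this holds, but every now and then Stopwatch
		// reports 998.5-999.9 ms for the 1 sec wait, and the check fails.
		Console.WriteLine(sw.Elapsed >= TimeSpan.FromSeconds(1)
			? "PASS"
			: $"FAIL: measured {sw.Elapsed.TotalMilliseconds:F1} ms");
	}
}
```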
Solution
I created an extension method, limited in scope to the tests project, called GetElapsedWithSafePad(), that takes this problem into account.
Since I'd like the tests to be resilient in spite of these microscopic differences, and since in the tests I never work with times as small as 1 ms to 10 ms, I just settled for an extra 5 ms.
```csharp
private static readonly TimeSpan StopwatchExtraPadding = TimeSpan.FromMilliseconds(5);

public static TimeSpan GetElapsedWithSafePad(this Stopwatch sw)
{
	// NOTE: for the extra 5ms, see here https://github.com/dotnet/runtime/issues/100455
	return sw.Elapsed + StopwatchExtraPadding;
}
```
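In practice the change to each impacted assertion is tiny; here's a before/after sketch (the surrounding test code is hypothetical, assuming an xUnit-style Assert):

```csharp
// BEFORE: can fail when Stopwatch under-reports by a fraction of a ms
Assert.True(sw.Elapsed >= TimeSpan.FromSeconds(1));

// AFTER: the extra 5 ms pad absorbs the measurement error
Assert.True(sw.GetElapsedWithSafePad() >= TimeSpan.FromSeconds(1));
```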
I then updated the tests that were impacted by this and gave it a go:
Boom 🥳