
[FEATURE] ✅ Better Tests #288

Closed
jodydonetti opened this issue Aug 18, 2024 · 1 comment


jodydonetti commented Aug 18, 2024

Scenario

FusionCache currently has 700+ tests (including params combinations).
Usually they all pass locally, meaning 100% of them, both on Linux and on Windows.

[screenshot: a full local test run with all tests passing]

Problem

Not everything is perfect though, and 2 different issues have been observed:

  • LOCAL: very rarely, a couple of tests do not pass, seemingly at random
  • GITHUB ACTIONS: looking at tests running on GitHub Actions, it happens that 4 or 5 of them don't pass

So I added some extra logging and investigated further: everything that came up seems to be related to a microscopic difference in timing, where "micro" is literally indicative of the problem, since the difference in measured time is less than 1 ms (so we are talking about microseconds).

For example, in one test I set up FusionCache with a soft timeout of 1 sec and a factory that runs for 5 sec: by virtue of the soft timeout, the method returns after 1 sec, but when measuring it with a Stopwatch the reported time is, very rarely, between 998.5 ms and 999.9 ms, so the assertion that checks that at least 1 sec has passed fails.

But why is that?

After some research I discovered this.

Basically, there may be microscopic differences in the way time is measured: 99%+ of the time all is good, but every now and then it may seem like less time has passed than actually did, because of measurement errors, so we have to take this into account.
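To see the effect in isolation (no FusionCache involved), here's a minimal sketch that measures a nominal 1-second wait with a Stopwatch; the timer ending the wait and the Stopwatch measuring it don't necessarily share the exact same clock source, so on rare runs the reading can land just under 1000 ms:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class TimerResolutionDemo
{
	static void Main()
	{
		// Measure a nominal 1-second wait.
		var sw = Stopwatch.StartNew();
		Thread.Sleep(TimeSpan.FromSeconds(1));
		sw.Stop();

		// On rare runs this can print something like 999.8 instead of >= 1000,
		// because of the measurement errors described above.
		Console.WriteLine($"Elapsed: {sw.Elapsed.TotalMilliseconds} ms");

		// A strict assertion like the following is the kind that flakes:
		// Assert.True(sw.Elapsed >= TimeSpan.FromSeconds(1));
	}
}
```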

Solution

I created an extension method, limited in scope to the tests project, called GetElapsedWithSafePad() that takes this problem into account.

Since I'd like the tests to be resilient despite these microscopic differences, and since in the tests I never work with durations as small as 1 ms to 10 ms, I simply settled for an extra 5 ms of padding.

```csharp
private static readonly TimeSpan StopwatchExtraPadding = TimeSpan.FromMilliseconds(5);

public static TimeSpan GetElapsedWithSafePad(this Stopwatch sw)
{
	// NOTE: for the extra 5ms, see here https://github.com/dotnet/runtime/issues/100455
	return sw.Elapsed + StopwatchExtraPadding;
}
```
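In a test, the padded value then simply replaces the raw Elapsed in the assertion. A self-contained sketch of the idea (the surrounding class and test body here are hypothetical, not FusionCache's actual test code):

```csharp
using System;
using System.Diagnostics;

static class StopwatchExtMethods
{
	private static readonly TimeSpan StopwatchExtraPadding = TimeSpan.FromMilliseconds(5);

	public static TimeSpan GetElapsedWithSafePad(this Stopwatch sw)
	{
		// See https://github.com/dotnet/runtime/issues/100455
		return sw.Elapsed + StopwatchExtraPadding;
	}
}

class Demo
{
	static void Main()
	{
		var sw = Stopwatch.StartNew();
		// ... the measured operation would go here, e.g. a cache call
		//     cut short by a 1 sec soft timeout ...
		sw.Stop();

		// A raw reading of 999.x ms would fail a ">= 1 sec" check; the
		// padded reading absorbs the sub-millisecond measurement error:
		var elapsed = sw.GetElapsedWithSafePad();

		Console.WriteLine(elapsed >= sw.Elapsed);                                // prints True
		Console.WriteLine(elapsed - sw.Elapsed == TimeSpan.FromMilliseconds(5)); // prints True
	}
}
```

Note that after Stop() the Elapsed value is frozen, so the padded value is exactly the raw value plus 5 ms.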

I then updated the tests that were impacted by this and gave it a go:

[screenshot: the test run after the change, all tests passing]

Boom 🥳

@jodydonetti jodydonetti added the enhancement New feature or request label Aug 18, 2024
@jodydonetti jodydonetti added this to the v1.4.0 milestone Aug 18, 2024
@jodydonetti jodydonetti self-assigned this Aug 18, 2024
@jodydonetti jodydonetti pinned this issue Aug 18, 2024
@jodydonetti jodydonetti unpinned this issue Sep 15, 2024
@jodydonetti
Collaborator Author

Hi, v1.4.0 has been released 🥳
