Coverage test is extremely slow on multiple threads #36142
Comments
FWIW, this has hit / confused us over at STREAMBenchmark.jl, leading to unrealistically low memory bandwidths.
@IanButterworth: Thanks for the original fix! We find, though, that for a large codebase like ours, adding threads still makes a significant difference; e.g. some of our tests go from ~1 min to ~10 min just from using multiple threads. I am reopening this issue, since it seems to need a fundamental fix for large codebases, in addition to the work you've already done to limit the coverage computation to just the code under test.
For example, we brainstormed a few possible solutions; some of them are discussed further down in this thread.
I think @KristofferC's explanation on Discourse is a plausible one: https://discourse.julialang.org/t/coverage-test-is-extremely-slow-on-multiple-threads/40682/2
Here is a simple example benchmark that demonstrates the thread-scaling issue:

```julia
# Lots of whitespace to cause lots of "blocks" in code coverage
module MyPkg

function test(x, y)
    z = x + y
    if z > 0
        return z
    else
        return -z
    end
end

foo(x) = x + 1
bar(x) = x + 1
baz(x) = x + 1

end # module MyPkg

using .MyPkg

@show Threads.nthreads()

SCALE = parse(Int, ENV["SCALE"])

# Perform significant cpu-bound work in each task:
out = Threads.Atomic{Int}(0)
@sync for i in 1:50
    Threads.@spawn begin
        i = 0
        for _ in 1:($SCALE * 1_000_000)
            i += MyPkg.test($i, 2)
            rand() < 0.01 && (i += MyPkg.foo(i))
            rand() < 0.01 && (i += MyPkg.bar(i))
            rand() < 0.01 && (i += MyPkg.baz(i))
        end
        Threads.atomic_add!(out, i)
    end
end
@show out
```

Thread-scaling without code coverage is as you'd expect, but thread-scaling with code coverage sadly exhibits inverse scaling. (Note: the benchmark is run with a lower data scale in the coverage case so it doesn't take forever.)
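The script reads its workload size from the `SCALE` environment variable; here is a sketch of how such a run can be invoked (the filename `bench.jl` and the scale values are illustrative, not from the original report):

```julia
# Without coverage:
#   SCALE=100 JULIA_NUM_THREADS=8 julia bench.jl
# With coverage instrumentation (smaller scale so it finishes):
#   SCALE=1 JULIA_NUM_THREADS=8 julia --code-coverage=user bench.jl
```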
Suggestion from @adnan-alhomssi: if we only care about whether a line is covered or not, can we implement a simpler solution, where we first load the counter value and increment it only if it is equal to zero?

This seems like a good idea to me too. Do people actually use the "how many times was this line touched" mechanism? We are simply trying to track whether our code is covered or not (a binary) for test-coverage tracking. It seems that trace profiling and code coverage are really two separate features and could be supported separately, so that code coverage on its own could be more performant. We could add a separate mode to indicate that this is the desired behavior, and then the full profiling mode that we currently have could be made more performant via separate per-thread counters.
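A minimal sketch of that idea (purely illustrative: the real counters live in Julia's runtime, and `counters`/`mark_covered!` are hypothetical names). After the first hit, every subsequent execution performs only a read, so the counter's cache line can stay in the shared state across cores instead of bouncing between them:

```julia
const counters = zeros(UInt64, 10_000)  # one slot per source line (illustrative)

@inline function mark_covered!(line::Int)
    if counters[line] == 0  # plain read; cheap once the cache line is shared
        counters[line] = 1  # rare write; a benign race is acceptable when all
    end                     # we need is a covered/not-covered bit
    return nothing
end
```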
I don't know the internals, but might atomic adds with Atomix.jl be an easy quick fix to make this at least reasonably fast, without having to implement the whole thread-local coverage-counts machinery? Maybe as a temporary solution?
Atomics would make this a lot slower than it already is (we had briefly tested to see how bad it would be, since we already knew atomics are about 100x slower, but were not initially certain whether that would dominate overall). I assume the penalty for this may be much lower on non-x86 platforms such as AArch64, but x86 forces atomic-ish TSO on every value even when you don't want it.
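As a rough single-threaded illustration of that cost, here is a sketch (names and iteration counts are illustrative; exact ratios vary by CPU). Note that the compiler is free to coalesce the plain increments into a single add, which is itself part of why non-atomic counters are so much cheaper:

```julia
using Base.Threads

# Plain counter: LLVM may coalesce or strength-reduce these increments.
function plain_incr(n)
    r = Ref(0)
    for _ in 1:n
        r[] += 1
    end
    return r[]
end

# Atomic counter: every increment is a real ordered read-modify-write that
# the compiler cannot combine or reorder away.
function atomic_incr(n)
    a = Atomic{Int}(0)
    for _ in 1:n
        atomic_add!(a, 1)
    end
    return a[]
end

plain_incr(10); atomic_incr(10)   # warm up compilation
@time plain_incr(100_000_000)
@time atomic_incr(100_000_000)
```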
I wonder how the SanitizerCoverage pass that LLVM has performs.
My understanding is that LLVM uses a per-thread allocation.
We actually are measuring on new M2 Macs, and we're seeing pretty dramatic slowdowns there. :( Sorry I didn't make that clear, but the above numbers were from my MacBook Pro. We currently hypothesize that this is caused by thrashing cache lines, since every thread invalidates the cached counter value on all other CPUs on every hit on a covered line. In that case, Adnan's suggestion above seems like the best one.
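For intuition, here is a minimal sketch (all names are illustrative, and 64-byte cache lines are assumed) contrasting a single shared counter slot, which behaves like one hot coverage counter, with per-thread slots padded onto separate cache lines:

```julia
using Base.Threads

# Shared slot: every increment invalidates the line in all other cores'
# caches. The increments race, just as coverage counters do.
function bump_shared!(c::Vector{Int}, n)
    @threads :static for i in 1:n
        c[1] += 1
    end
end

# Per-thread slots, padded so each thread owns a whole cache line;
# no invalidation traffic between cores.
function bump_per_thread!(c::Matrix{Int}, n)
    @threads :static for i in 1:n
        c[1, threadid()] += 1
    end
end

shared = zeros(Int, 1)
padded = zeros(Int, 8, nthreads())   # 8 Ints == 64 bytes per column
bump_shared!(shared, 10); bump_per_thread!(padded, 10)   # warm up
@time bump_shared!(shared, 10_000_000)
@time bump_per_thread!(padded, 10_000_000)
```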
Running `Pkg.test("MyModule", coverage=true)` on more than one thread is extremely slow. I measured the time on my machine using 1, 2, 8, and 16 threads:
- 1 thread: 339 seconds
- 2 threads: 3106 seconds
- 8 threads: 6401 seconds
- 16 threads: 5072 seconds
Running `Pkg.test("MyModule")` without coverage takes approximately the same time for every number of threads (around 2 minutes) for this module. See also the Discourse discussion.
I am using Julia 1.4.1 on Windows 10 and my CPU has 8 cores/16 threads.
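For reference, a sketch of how such a measurement can be reproduced (`MyModule` is a placeholder for any sufficiently large package):

```julia
# Start Julia with the desired thread count first, e.g.:
#   JULIA_NUM_THREADS=16 julia
using Pkg
@time Pkg.test("MyModule")                   # baseline, no coverage
@time Pkg.test("MyModule"; coverage=true)    # with coverage instrumentation
```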