
OPENBLAS_NUM_THREADS=1 by default #59

Merged: 1 commit into main on Sep 18, 2023

Conversation

fonsp (Member) commented Sep 18, 2023

Just like Distributed: JuliaLang/julia#47803

Seems to avoid OOM errors on Windows: fonsp/Pluto.jl#2240 (comment)
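For context, the change amounts to forcing the variable into the worker's environment before the worker process starts. A minimal sketch of the idea, assuming a launcher built from Base.julia_cmd and addenv (this is only an illustration, not the actual Malt.jl code):

# Sketch only: start a child Julia process with single-threaded OpenBLAS.
# The -e payload just reports the BLAS thread count the child ends up with.
worker_cmd = addenv(
    `$(Base.julia_cmd()) -e 'using LinearAlgebra; println(BLAS.get_num_threads())'`,
    "OPENBLAS_NUM_THREADS" => "1",
)
run(worker_cmd)  # the child should report 1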

fonsp merged commit d062d1a into main on Sep 18, 2023
14 of 16 checks passed
fonsp deleted the OPENBLAS_NUM_THREADS=1-default branch on September 18, 2023 at 09:53
jebej commented Sep 27, 2023

This will kill the performance of linear algebra operations on multicore systems. It makes sense for Distributed to do this, but not really for Pluto, since the purpose of Pluto workers is not to distribute computation.

fonsp (Member, Author) commented Sep 28, 2023

@jebej Can you try to write constructive feedback? E.g., what do you suggest?

jebej commented Sep 28, 2023

This disables multithreading for matrix multiply and related operations. I would suggest reverting the change. Try e.g.

julia> using BenchmarkTools
julia> A = rand(500,500);
julia> B = rand(500,500);
julia> @btime $A*$B # by default on a 6 core CPU
  1.190 ms (2 allocations: 1.91 MiB)
julia> @btime $A*$B # with OPENBLAS_NUM_THREADS=1
  4.291 ms (2 allocations: 1.91 MiB)

Pangoraw (Member) commented:

One can still set the number of threads with BLAS.set_num_threads().
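For instance, a user who wants multithreaded BLAS back inside their notebook can run something like the following in a cell (using Sys.CPU_THREADS as the target count is just one reasonable choice):

julia> using LinearAlgebra
julia> BLAS.get_num_threads()                 # 1 while OPENBLAS_NUM_THREADS=1 is in effect
julia> BLAS.set_num_threads(Sys.CPU_THREADS)  # restore multithreaded BLAS in this worker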

pankgeorg (Member) commented:

> This disables multithreading for matrix multiply and related operations. I would suggest reverting the change. Try e.g.
>
> julia> using BenchmarkTools
> julia> A = rand(500,500);
> julia> B = rand(500,500);
> julia> @btime $A*$B # by default on a 6 core CPU
>   1.190 ms (2 allocations: 1.91 MiB)
> julia> @btime $A*$B # with OPENBLAS_NUM_THREADS=1
>   4.291 ms (2 allocations: 1.91 MiB)

Pluto used Distributed until now. Are you seeing a performance degradation between Pluto with Distributed and Pluto with Malt.jl?

jebej commented Sep 29, 2023

> One can still set the number of threads with BLAS.set_num_threads().

Yes, absolutely, but there's a reason Julia doesn't set the number of threads to 1 by default.

jebej commented Sep 29, 2023

> Pluto used Distributed until now. Are you seeing a performance degradation between Pluto with Distributed and Pluto with Malt.jl?

I did not check for a difference; I was just curious about Pluto/Malt and noticed this PR.

pankgeorg (Member) commented:

> Yes, absolutely, but there's a reason Julia doesn't set the number of threads to 1 by default.

Yes, and there are reasons why we need to set it to 1 by default.

> I did not check for a difference; I was just curious about Pluto/Malt and noticed this PR.

OK, thanks for clarifying that. There is significant context around this change that can't be summarized in this discussion, but feel free to study it and propose a solution that circumvents the limitations we currently face. We'll gladly review a PR that addresses this problem and makes Malt.jl even better!

Best,

PallHaraldsson commented Jan 10, 2024

I'm not sure why the OOM happens (it seems linked to threading), but would a different compromise help, like 2 or 4 threads? (I'm looking into, and have suggested, new threading defaults for Julia, so I'd like to know what people consider the best compromise.) I doubt the memory use scales linearly. And was it mostly or only a problem on Julia 1.6? If so, could support for it be dropped, at least once there's a new LTS? There's been talk that 1.10 will be it, at least eventually. It makes little sense for anyone to use 1.6 LTS now, or at least to start on it, with it on its way out and updates to it seemingly stalled or stopped.

[@jebej Do you use Linux or Windows? Or are you concerned about non-default threading on both?]

Also, I don't know if the OOM problem was only on Windows (which does not overcommit memory, unlike Linux). Maybe the lowered thread setting could be applied only on Windows, whether to 1, 2, or whatever compromise works. I don't know whether macOS (or FreeBSD) overcommits, so maybe change the default there too, or only on non-Linux systems?
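A rough sketch of that per-OS compromise, purely illustrative (the Windows-only condition and the launcher shape are assumptions, not current Malt.jl behavior):

# Illustrative only: force a low OpenBLAS thread count on Windows,
# and leave the default alone elsewhere.
worker_cmd = `$(Base.julia_cmd())`
if Sys.iswindows()
    worker_cmd = addenv(worker_cmd, "OPENBLAS_NUM_THREADS" => "1")  # or "2", etc.
end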

jebej commented Sep 16, 2024

FWIW, I ran across this thread on the Julia Discourse today; I'd be surprised if this is the first time this has happened: https://discourse.julialang.org/t/poor-openblas-performance-for-large-matrix-multiply/119354
