Interfaces and wrappers: BLAS #450
One additional thing: as the source code does not use modules, we are
"polluting" the namespace with the names of the BLAS routines. So an
additional step might be to put the individual routines into modules to
avoid that.
Another aspect is the use of routines such as DMACH1 (not in LAPACK and
BLAS, but similar routines are). Some of these venerable packages require
you to adjust the code for specific machines. With Fortran in its current
incarnation we can rely on intrinsic functions for most of these.
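For illustration, the machine-constant helpers of that era (the `DMACH`-style routines mentioned above) map directly onto standard intrinsics; a minimal sketch of the replacements:

```fortran
! Sketch: standard intrinsics that replace hand-tuned machine-constant
! routines such as the DMACH family from older numerical packages.
program machine_constants
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  print *, 'machine epsilon :', epsilon(1.0_dp)  ! relative spacing at 1.0
  print *, 'smallest normal :', tiny(1.0_dp)
  print *, 'largest finite  :', huge(1.0_dp)
  print *, 'model digits    :', digits(1.0_dp)
end program machine_constants
```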
On Fri, 2 Jul 2021 at 13:52, Sebastian Ehlert <***@***.***> wrote:
… *Description*
To support linear algebra operations we want to have interfaces/wrappers
for BLAS and LAPACK, maybe even to BLACS, PBLAS and ScaLAPACK at some
point. Before we start working on a high-level API, there are additional
stages to consider. For this issue I want to focus on BLAS only, because
the size of the problem is much more limited than the other linear algebra
libraries and we can flesh out the workflow
1. How to include BLAS in stdlib?
- building (reference) BLAS from source with fpm and/or CMake
sounds like a bad idea (performance), but might be a fallback
- linking against system BLAS and optimized BLAS libraries like MKL
or OpenBLAS
2. How do we provide the BLAS functionality?
- interface modules like BLAS95 are available
- do we reproduce the actual reference BLAS interface (doesn't
include `intent` in the dummy arguments)
- do we overload the names to drop the s/d/c/z prefixes
- are the interfaces part of stdlib or bundled from a separate
project
3. How do we abstract the operations available in BLAS?
- how many layers of abstractions do we want?
- just simple wrapper to infer dimensions
- overloaded operators
@arjenmarkus I think the namespace pollution can't be helped, at least if we want to rely on optimized BLAS libraries. Putting the routines into modules would produce ABI issues unless we export them with `bind(C)`, which kind of defeats the purpose of cleaning up the global namespace with modules in the first place.
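One middle ground (in the spirit of BLAS95) is an interface-only module: it declares explicit interfaces, and optionally a prefix-free generic name, without recompiling or re-exporting the routines, so the external symbols still resolve to whatever BLAS library is linked. A hypothetical sketch (the module and generic names are illustrative, not an agreed API):

```fortran
! Sketch: interface-only module for the BLAS gemv routines.
! The sgemv/dgemv symbols come from the linked BLAS library; the module
! only adds explicit interfaces and a prefix-free generic name.
module blas_gemv
  implicit none
  interface gemv
    subroutine sgemv(trans, m, n, alpha, a, lda, x, incx, beta, y, incy)
      character, intent(in) :: trans
      integer, intent(in) :: m, n, lda, incx, incy
      real, intent(in) :: alpha, beta, a(lda, *), x(*)
      real, intent(inout) :: y(*)
    end subroutine sgemv
    subroutine dgemv(trans, m, n, alpha, a, lda, x, incx, beta, y, incy)
      character, intent(in) :: trans
      integer, intent(in) :: m, n, lda, incx, incy
      double precision, intent(in) :: alpha, beta, a(lda, *), x(*)
      double precision, intent(inout) :: y(*)
    end subroutine dgemv
  end interface gemv
end module blas_gemv
```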
It will be good to study how NumPy embeds BLAS: Accelerated BLAS/LAPACK libraries. Kind of ironically, for the fallback BLAS NumPy depends on an f2c'd version of BLAS which is patched to replace any new constructs with legacy constructs that work with f2c.
I was surprised to see a new release of LAPACK 3.10.0 on netlib just a few days ago, now also featuring free format source code. Therefore, I doubt that NumPy's f2c'ing strategy will be viable in the long run; maybe the transpilers planned for LFortran will be a viable alternative for NumPy at some point.
The fact that the code is on GitHub as well makes life a lot easier for fpm.
Is there interest in moving this project forward? I have an implementation of a pretty complete BLAS interface library as an fpm package, which could be included in stdlib: https://github.com/awvwgk/blas-interface There are two variants in this package, a
*Integrating BLAS and LAPACK support in stdlib*
Apologies if I hijack this discussion to update it; there are already several BLAS/LAPACK issues open. I agree with @awvwgk on the need to link against platform-optimized libraries. To do this, I think we ought to start simple, with a build of the reference BLAS/LAPACK, because:
I'm using NumPy as an example: it has several options managed by the build process (fpm can't do this now), but in any case it uses f2c copies of BLAS and LAPACK embedded in the repo in case none are available. So I've been putting together a nearly complete modern-Fortran version of BLAS and LAPACK which is ~90% automated from the Reference-LAPACK repo (the license allows embedding in code provided that the copyright is retained). Please note this is not an interface, but a modern Fortran reimplementation: I would love to see that merge into
Thanks and regards.
Thank you @perazz for this work!
Impressive work!
I totally support this; it would be a very good addition for
I think it would be good to integrate it in
Once the issues regarding
Not sure I understand your question.
Thank you @jvdp1, let's make it happen then! I see that I can make progress pretty fast on the separate repo, so take your time, and when ready, we can merge it into the linear algebra branch with a PR. I think it may even be more convenient to leave it as a separate repo until folks want to try out and give feedback on the modern APIs, because they can do that easily with fpm. Both options will work.
I think so. IMHO all that's needed is the ability to define/undefine a macro flag to tell the library whether the internal implementation or the external library should be used. This is no problem for CMake, and maybe not even for the fpm manifest:

```toml
[dependencies]
stdlib = "*"

[preprocess]
[preprocess.cpp]
macros = ["STDLIB_WITH_EXTERNAL_BLAS"]
```
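On the source side, such a macro could then select the backend at compile time. A hypothetical sketch, assuming cpp preprocessing is enabled as in the manifest above (the module and routine names are made up, not actual stdlib API):

```fortran
! Sketch: compile-time backend selection via the STDLIB_WITH_EXTERNAL_BLAS
! macro. With the macro defined, calls dispatch to the linked optimized
! library; without it, a bundled reference fallback is used.
module stdlib_blas_dispatch
  implicit none
contains
  subroutine stdlib_dscal(n, alpha, x, incx)
    integer, intent(in) :: n, incx
    double precision, intent(in) :: alpha
    double precision, intent(inout) :: x(*)
#ifdef STDLIB_WITH_EXTERNAL_BLAS
    external :: dscal
    call dscal(n, alpha, x, incx)      ! optimized external symbol
#else
    integer :: i
    do i = 1, 1 + (n - 1)*incx, incx   ! bundled reference fallback
      x(i) = alpha * x(i)
    end do
#endif
  end subroutine stdlib_dscal
end module stdlib_blas_dispatch
```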
I meant that if one wants to use the local implementation, they can just call
Thanks for the initiative @perazz, it's good to see this (collection of) issue(s) get traction!
I very much agree with this and the two points below it. Let's get something working and in the hands of users, and we can iterate on the solution as feedback arrives.
Yes, at least from me, but I would like us to converge towards some design principles (see below).
It depends. If we are to use the same architectural design as is currently used in stdlib, it should be 1). Another solution that I would consider is doing 1) and 2), with the repo being a Git submodule in the main

I believe that the 3rd point is not mutually exclusive with 1) and 2). If updating the reference BLAS/LAPACK in some manner that does not involve a lot of manual work is possible, I would be in favour of that; if, on the other hand, it is not (for any number of reasons), we can use a reimplementation. Either way, I think we should provide a fallback version of BLAS/LAPACK packaged with
I agree with both of these. The only thing that I would caution is to make it very clear to users that whatever

I think doing this will finally bring