cisstNumerical tutorial
These examples provide a quick introduction to the features of cisstNumerical. The code can be found in the git repository under cisstNumerical/examples/tutorial. To compile your own code, remember to include cisstNumerical.h.
cisstNumerical contains native functions as well as wrappers for existing numerical routines. Since many well-established algorithms are available as FORTRAN code, a significant part of cisstNumerical interfaces with FORTRAN routines. This has a number of consequences, listed in section 5.
The Singular Value Decomposition (SVD) is a common algorithm and the cisst implementation illustrates many features of the cisstNumerical FORTRAN wrappers. The goal of the SVD is to find the decomposition of a matrix A such that A = U * S * Vt, where U and V are orthonormal and S is a diagonal matrix containing the singular values.
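In standard notation, for an m x n input A, the decomposition computed here is:

```latex
A = U \, \Sigma \, V^{T}, \qquad U^{T} U = I_{m}, \qquad V^{T} V = I_{n}
```

where Sigma is an m x n matrix whose only non-zero entries are the singular values, sorted in decreasing order along its diagonal; nmrSVD returns these values as the vector S.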
Most of the FORTRAN routines we are using will neither allocate any memory nor check that the parameters are valid (see also section 5). Our wrappers not only check the sizes of the parameters to verify that enough memory has been allocated, but also provide flexible mechanisms to allocate the required memory.
For the SVD, the underlying FORTRAN routine requires an input matrix A, two matrices to store U and Vt, a vector for the singular values S and a workspace for temporary variables (a.k.a. scratch space). From now on, we will call the two matrices U and Vt and the vector S the output.
Finally, since cisstVector supports both fixed size and dynamic vectors and matrices, cisstNumerical provides different wrappers and classes for each type of memory allocation. The examples we are providing illustrate different possible configurations, i.e. who allocates memory for what and how.
Example | Type | Output allocation | Workspace allocation |
---|---|---|---|
2.1 | Dynamic | User, manually | User, manually |
2.2 | Dynamic | User, manually | nmrSVD function |
2.3 | Dynamic | User, manually | User with method WorkspaceSize |
2.4 | Dynamic | User with nmrSVDDynamicData | nmrSVDDynamicData |
2.6 | Fixed size | User, manually | User, manually |
2.7 | Fixed size | User with nmrSVDFixedSizeData | nmrSVDFixedSizeData |
2.1 Using nmrSVD with user allocated containers
This example shows how to use the function nmrSVD with a dynamic matrix.
```cpp
void ExampleSVDUserOutputWorkspace(void) {
    const unsigned int size = 6;
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(size, size);
    vctRandom(A, -10.0, 10.0);
    // create matrices (U, Vt) and vector (S) for the result
    vctDynamicMatrix<double> U(size, size);
    vctDynamicMatrix<double> Vt(size, size);
    vctDynamicVector<double> S(size);
    // now, create a workspace of the right size
    // we will explain later why this is needed
    vctDynamicVector<double> workspace(size * 10);
    // and we can finally call the nmrSVD function
    // using a copy of A because nmrSVD modifies the input
    vctDynamicMatrix<double> Acopy = A;
    try {
        nmrSVD(Acopy, U, S, Vt, workspace);
    } catch(...) {
        std::cout << "An exception occurred, check cisstLog.txt." << std::endl;
    }
    // display the result
    std::cout << "U:\n" << U << "\nS:\n" << S << "\nV:\n"
              << Vt.TransposeRef() << std::endl;
}

void ExampleSVDEconomyUserOutputWorkspace(void) {
    const unsigned int sizerows = 20;
    const unsigned int sizecols = 3;
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(sizerows, sizecols, VCT_COL_MAJOR);
    vctRandom(A, -10.0, 10.0);
    // create matrices (U, Vt) and vector (S) for the result
    vctDynamicMatrix<double> U(sizerows, sizecols, VCT_COL_MAJOR);
    vctDynamicMatrix<double> Vt(sizecols, sizecols, VCT_COL_MAJOR);
    vctDynamicVector<double> S(sizecols);
    // now, create a workspace of the right size
    // we will explain later why this is needed
    vctDynamicVector<double> workspace(sizerows * 10);
    // and we can finally call the nmrSVD function
    // using a copy of A because nmrSVD modifies the input
    vctDynamicMatrix<double> Acopy = A;
    try {
        nmrSVDEconomy(Acopy, U, S, Vt, workspace);
    } catch(...) {
        std::cout << "An exception occurred, check cisstLog.txt." << std::endl;
    }
    // display the result
    std::cout << "A:\n" << A
              << "\nUeconomy:\n" << U << "\nS:\n" << S << "\nV:\n"
              << Vt.TransposeRef() << std::endl;
}
```

In this example, we have used a workspace 10 times bigger than the initial matrix, which is large enough. Since the size of the workspace can be determined automatically, cisstNumerical also provides an overloaded version of nmrSVD which doesn't require a workspace.
2.2 Using nmrSVD without specifying a workspace
```cpp
void ExampleSVDImplicitWorkspace(void) {
    const unsigned int size = 6;
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(size, size);
    vctRandom(A, -10.0, 10.0);
    // create matrices (U, Vt) and vector (S) for the result
    vctDynamicMatrix<double> U(size, size);
    vctDynamicMatrix<double> Vt(size, size);
    vctDynamicVector<double> S(size);
    // and we can finally call the nmrSVD function
    // using a copy of A because nmrSVD modifies the input
    vctDynamicMatrix<double> Acopy = A;
    try {
        nmrSVD(Acopy, U, S, Vt);
    } catch(...) {
        std::cout << "An exception occurred, check cisstLog.txt." << std::endl;
    }
}

void ExampleSVDEconomyImplicitWorkspace(void) {
    const unsigned int sizerows = 20;
    const unsigned int sizecols = 3;
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(sizerows, sizecols, VCT_COL_MAJOR);
    vctRandom(A, -10.0, 10.0);
    // create matrices (U, Vt) and vector (S) for the result
    vctDynamicMatrix<double> U(sizerows, sizecols, VCT_COL_MAJOR);
    vctDynamicMatrix<double> Vt(sizecols, sizecols, VCT_COL_MAJOR);
    vctDynamicVector<double> S(sizecols);
    // and we can finally call the nmrSVD function
    // using a copy of A because nmrSVD modifies the input
    vctDynamicMatrix<double> Acopy = A;
    try {
        nmrSVDEconomy(Acopy, U, S, Vt);
    } catch(...) {
        std::cout << "An exception occurred, check cisstLog.txt." << std::endl;
    }
}
```

This is easier to use, but one has to remember that a workspace is created dynamically by nmrSVD, i.e. every time the function is called some memory is allocated and released.
This behavior might not suit everyone, therefore cisstNumerical provides a couple of methods to ease the allocation of the workspace and the output matrices and vectors. All these methods are declared within the scope of a class referred to as a "data" object. For the SVD, two such classes are available, nmrSVDDynamicData and nmrSVDFixedSizeData.
2.3 Using nmrSVDDynamicData::WorkspaceSize
```cpp
void ExampleSVDWorkspaceSize(void) {
    const unsigned int size = 6;
    // create the input matrix with the correct size
    vctDynamicMatrix<double> A(size, size);
    // now, create a workspace of the right size
    vctDynamicVector<double> workspace;
    workspace.SetSize(nmrSVDDynamicData::WorkspaceSize(A));
    // Allocate U, Vt, S and use the workspace for nmrSVD ...
}

void ExampleSVDEconomyWorkspaceSize(void) {
    const unsigned int sizerows = 20;
    const unsigned int sizecols = 6;
    // create the input matrix with the correct size
    vctDynamicMatrix<double> A(sizerows, sizecols);
    // now, create a workspace of the right size
    vctDynamicVector<double> workspace;
    workspace.SetSize(nmrSVDEconomyDynamicData::WorkspaceSize(A));
    // Allocate U, Vt, S and use the workspace for nmrSVD ...
}
```
This method simplifies the allocation of the workspace but doesn't solve another problem that we have ignored so far: if the input matrix is not square, the sizes of the different output containers are a bit trickier to determine, i.e. if the input matrix is m x n, U must be m x m, Vt must be n x n and S must hold min(m, n) elements.
2.4 Using nmrSVDDynamicData to allocate everything
```cpp
void ExampleSVDDynamicData(void) {
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(10, 3, VCT_COL_MAJOR);
    vctRandom(A, -10.0, 10.0);
    // create a data object
    nmrSVDDynamicData svdData(A);
    // and we can finally call the nmrSVD function
    vctDynamicMatrix<double> Acopy = A;
    nmrSVD(Acopy, svdData);
    // display the result
    std::cout << "A:\n" << A
              << "\nU:\n" << svdData.U()
              << "\nS:\n" << svdData.S()
              << "\nV:\n" << svdData.Vt().TransposeRef() << std::endl;
}

void ExampleSVDEconomyDynamicData(void) {
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(10, 3, VCT_COL_MAJOR);
    vctRandom(A, -10.0, 10.0);
    // create a data object
    nmrSVDEconomyDynamicData svdData(A);
    // and we can finally call the nmrSVD function
    vctDynamicMatrix<double> Acopy = A;
    nmrSVDEconomy(Acopy, svdData);
    // display the result
    std::cout << "A:\n" << A
              << "\nU:\n" << svdData.U()
              << "\nS:\n" << svdData.S()
              << "\nV:\n" << svdData.Vt().TransposeRef() << std::endl;
}
```

In this example we have declared a data object based on the input matrix A. The constructor of nmrSVDDynamicData allocates the required memory for all the output containers (U, Vt and S) as well as the workspace. Another overloaded version of nmrSVD takes the matrix A and the object svdData to perform the singular value decomposition.
This data object performs a one time memory allocation and can be used multiple times (in a loop for example) without performing any new memory allocation.
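As a sketch of that reuse pattern (only calls already shown above are used; the loop count and matrix sizes are arbitrary):

```cpp
void ExampleSVDDynamicDataReuse(void) {
    vctDynamicMatrix<double> A(10, 3, VCT_COL_MAJOR);
    vctDynamicMatrix<double> Acopy(10, 3, VCT_COL_MAJOR);
    // one time allocation of U, Vt, S and the workspace
    nmrSVDDynamicData svdData(A);
    for (unsigned int iteration = 0; iteration < 100; iteration++) {
        vctRandom(A, -10.0, 10.0);
        Acopy = A;               // nmrSVD modifies its input
        nmrSVD(Acopy, svdData);  // no memory allocation inside the loop
        // ... use svdData.U(), svdData.S(), svdData.Vt() ...
    }
}
```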
2.5 Using nmrSVDDynamicData::UpdateMatrixS
Once the decomposition has been performed, nmrSVD stores all the singular values in decreasing order in a vector. This might be convenient for some but one might need a diagonal matrix instead. To update this matrix, the class nmrSVDDynamicData provides the method UpdateMatrixS.
```cpp
void ExampleSVDUpdateMatrixS(void) {
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(5, 7, VCT_COL_MAJOR);
    vctRandom(A, -10.0, 10.0);
    // create a data object
    nmrSVDDynamicData svdData(A);
    // and we can finally call the nmrSVD function
    vctDynamicMatrix<double> Acopy = A;
    nmrSVD(Acopy, svdData);
    // compute the matrix S
    vctDynamicMatrix<double> S(5, 7);
    nmrSVDDynamicData::UpdateMatrixS(A, svdData.S(), S);
    // display the initial matrix as well as U * S * V
    std::cout << "A:\n" << A
              << "\nU * S * Vt:\n"
              << svdData.U() * S * svdData.Vt() << std::endl;
}

void ExampleSVDEconomyUpdateMatrixS(void) {
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(15, 2, VCT_COL_MAJOR);
    vctRandom(A, -10.0, 10.0);
    // create a data object
    nmrSVDEconomyDynamicData svdData(A);
    // and we can finally call the nmrSVD function
    vctDynamicMatrix<double> Acopy = A;
    nmrSVDEconomy(Acopy, svdData);
    // compute the matrix S
    vctDynamicMatrix<double> S(2, 2, VCT_COL_MAJOR);
    nmrSVDEconomyDynamicData::UpdateMatrixS(A, svdData.S(), S);
    // display the initial matrix as well as U * S * V
    std::cout << "A:\n" << A
              << "\nU * S * Vt:\n"
              << svdData.U() * S * svdData.Vt() << std::endl;
}
```

Note that the method UpdateMatrixS is a static method which can be called even if no data object has been created. Since the method is static, it needs the input matrix A to determine the correct size of the matrix S.
2.6 Using fixed size matrices without a data object
```cpp
void ExampleSVDFixedSize(void) {
    // fill a matrix with random numbers
    vctFixedSizeMatrix<double, 5, 5> A, Acopy;
    vctRandom(A, -10.0, 10.0);
    Acopy = A;
    // create U, S, Vt and a workspace
    vctFixedSizeMatrix<double, 5, 5> U, Vt;
    vctFixedSizeVector<double, 5> S;
    vctFixedSizeVector<double, 50> workspace;
    // and we can finally call the nmrSVD function
    nmrSVD(Acopy, U, S, Vt, workspace);
    // display the result
    std::cout << "U:\n" << U << "\nS:\n" << S << "\nV:\n"
              << Vt.TransposeRef() << std::endl;
}
```

This example is very similar to the first one, i.e. the user has to create all the containers with the correct sizes before calling nmrSVD.
As for the dynamic matrices, nmrSVD has an overloaded version which doesn't require a user allocated workspace. For the fixed size matrices, the function nmrSVD will create a workspace on the stack if none was provided (not a dynamic memory allocation as seen for dynamic matrices!).
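A minimal sketch of that overload, using the same 5 x 5 setup as the previous example but without the workspace argument:

```cpp
void ExampleSVDFixedSizeImplicitWorkspace(void) {
    vctFixedSizeMatrix<double, 5, 5> A, Acopy;
    vctRandom(A, -10.0, 10.0);
    Acopy = A;
    vctFixedSizeMatrix<double, 5, 5> U, Vt;
    vctFixedSizeVector<double, 5> S;
    // no workspace argument: nmrSVD creates one on the stack
    nmrSVD(Acopy, U, S, Vt);
    std::cout << "U:\n" << U << "\nS:\n" << S << std::endl;
}
```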
2.7 Using fixed size matrices with nmrSVDFixedSizeData
```cpp
void ExampleSVDFixedSizeData(void) {
    // fill a matrix with random numbers
    vctFixedSizeMatrix<double, 5, 7, VCT_COL_MAJOR> A, Acopy;
    vctRandom(A, -10.0, 10.0);
    Acopy = A;
    // create a data object
    typedef nmrSVDFixedSizeData<5, 7, VCT_COL_MAJOR> SVDDataType;
    SVDDataType svdData;
    // and we can finally call the nmrSVD function
    nmrSVD(Acopy, svdData);
    // compute the matrix S
    SVDDataType::MatrixTypeS S;
    SVDDataType::UpdateMatrixS(svdData.S(), S);
    // display the initial matrix as well as U * S * V
    std::cout << "A:\n" << A
              << "\nU * S * Vt:\n"
              << svdData.U() * S * svdData.Vt() << std::endl;
}
```

The interface of nmrSVDFixedSizeData is pretty much the same as nmrSVDDynamicData, except that the size and storage order are now specified using template parameters. To simplify the example, we introduced a typedef, SVDDataType. This approach is strongly recommended whenever one uses the cisst fixed size vectors and matrices.
3 Functions with data object
Besides the function nmrSVD, cisstNumerical includes more numerical functions which can be used with either a data object or some vectors and matrices provided by the caller.
3.1 FORTRAN based functions
The cisstNumerical FORTRAN wrappers are all written using the approach used for nmrSVD and share the different properties listed in section 5.
3.1.1 nmrInverse
This function computes the inverse of a matrix using an LU decomposition. It can be used for dynamic and fixed size matrices with any storage order. Nevertheless, for fixed size matrices of size 2, 3 or 4, we recommend using nmrGaussJordanInverse (see section 4.1).
```cpp
void ExampleInverse(void) {
    // Start with a fixed size matrix
    vctFixedSizeMatrix<double, 6, 6> A, AInverse;
    // Fill with random values
    vctRandom(A, -10.0, 10.0);
    AInverse = A;
    // Compute inverse and check result
    nmrInverse(AInverse);
    std::cout << A * AInverse << std::endl;

    // Continue with a dynamic matrix
    vctDynamicMatrix<double> B, BInverse;
    // Fill with random values
    B.SetSize(8, 8, VCT_COL_MAJOR);
    vctRandom(B, -10.0, 10.0);
    BInverse = B;
    // Compute inverse and check result
    nmrInverse(BInverse);
    std::cout << B * BInverse << std::endl;
}
```

In this example, we used the overloaded version of nmrInverse which doesn't require a data object. This is possible since the data object doesn't provide any useful information or result.
As for most wrappers, using the function nmrInverse without a data object is not optimal if the function is going to be called multiple times. To optimize the memory allocation, one should use nmrInverseFixedSizeData or nmrInverseDynamicData.
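Here is a possible sketch with nmrInverseDynamicData. The constructor and the overload taking the data object are assumed to follow the same conventions as the other data objects in this tutorial, so verify them against the nmrInverse header before relying on this code:

```cpp
void ExampleInverseDynamicData(void) {
    vctDynamicMatrix<double> B(8, 8, VCT_COL_MAJOR), BInverse;
    // allocate the data object once, based on the matrix size
    // (assumed constructor, following the nmrSVDDynamicData pattern)
    nmrInverseDynamicData inverseData(B);
    for (unsigned int i = 0; i < 10; i++) {
        vctRandom(B, -10.0, 10.0);
        BInverse = B;
        // the inverse is computed in place, as in the previous example
        // (assumed overload taking the data object)
        nmrInverse(BInverse, inverseData);
        std::cout << B * BInverse << std::endl;
    }
}
```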
3.1.2 nmrLU
The goal of the LU decomposition is to find the factorization of a matrix A such that A = P * L * U, where P is a permutation matrix, L is lower triangular with a unit diagonal and U is upper triangular.
```cpp
void ExampleLUDynamicData(void) {
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(5, 7, VCT_COL_MAJOR);
    vctRandom(A, -10.0, 10.0);
    // create a data object
    nmrLUDynamicData luData(A);
    // and we can finally call the nmrLU function
    vctDynamicMatrix<double> Acopy = A;
    nmrLU(Acopy, luData);
    // the LAPACK routine stores the LU in the input A and uses
    // a vector to store the permutations P
    vctDynamicMatrix<double> P, L, U;
    P.SetSize(nmrLUDynamicData::MatrixPSize(A));
    L.SetSize(nmrLUDynamicData::MatrixLSize(A));
    U.SetSize(nmrLUDynamicData::MatrixUSize(A));
    nmrLUDynamicData::UpdateMatrixP(Acopy, luData.PivotIndices(), P);
    nmrLUDynamicData::UpdateMatrixLU(Acopy, L, U);
    std::cout << "A:\n" << A
              << "\nP * L * U:\n" << (P * L * U) << std::endl;
}
```

It is important to notice that in this example we explicitly created the input using VCT_COL_MAJOR. As it is, nmrLU doesn't support the row first storage order.
Besides this constraint, the LU decomposition routine provided by LAPACK stores the result in one single matrix, replacing the input. This is perfectly good for most applications but one can also use the helper methods of nmrLUDynamicData to determine the size and compute the matrices P, L and U.
3.1.3 nmrPInverse
This function actually relies on nmrSVD. The corresponding data objects nmrPInverseDynamicData and nmrPInverseFixedSizeData allocate a workspace large enough for nmrSVD.
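By analogy with the other data objects in this tutorial, a typical call could look like the sketch below. The constructor and the PInverse() accessor are assumptions based on the naming conventions seen so far (U(), S(), Vt()), so verify them against the nmrPInverse header:

```cpp
void ExamplePInverse(void) {
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(5, 7, VCT_COL_MAJOR);
    vctRandom(A, -10.0, 10.0);
    // the data object sizes the output and the workspace needed by nmrSVD
    nmrPInverseDynamicData pinvData(A);          // assumed constructor
    vctDynamicMatrix<double> Acopy = A;
    nmrPInverse(Acopy, pinvData);
    // for a full rank A, A * A+ * A should reproduce A
    // (PInverse() is an assumed accessor name)
    std::cout << "A:\n" << A
              << "\nA * A+ * A:\n" << A * pinvData.PInverse() * A << std::endl;
}
```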
3.2 Native cisst functions
The cisst native functions are more flexible than the FORTRAN wrappers, mostly because the restrictions regarding storage order and compactness are lifted. The element type can also be different, i.e. one can use single precision floating point numbers if this makes sense for his/her application.
3.2.1 nmrIsOrthonormal
In this example, we are using nmrSVD to create a couple of orthonormal matrices.
```cpp
void ExampleIsOrthonormal(void) {
    // fill a matrix with random numbers
    vctDynamicMatrix<double> A(5, 7);
    vctRandom(A, -10.0, 10.0);
    // create a workspace and use it for the SVD data
    vctDynamicVector<double>
        workspace(nmrSVDDynamicData::WorkspaceSize(A));
    nmrSVDDynamicData svdData(A, workspace);
    // we can call the nmrSVD function
    vctDynamicMatrix<double> Acopy = A;
    nmrSVD(Acopy, svdData);
    // check that the output is correct using our workspace
    if (nmrIsOrthonormal(svdData.U(), workspace)) {
        std::cout << "U is orthonormal" << std::endl;
    }
    // same with dynamic creation of a workspace
    if (nmrIsOrthonormal(svdData.Vt())) {
        std::cout << "Vt is orthonormal" << std::endl;
    }
}
```

This example demonstrates two different ways to use the function nmrIsOrthonormal, one with a user defined workspace and one with no workspace at all (i.e. the function will allocate and free memory on the fly).
Please note in this example how we created a single workspace used by different routines. This is very convenient to avoid any unnecessary memory allocation but one must make sure that this workspace is not being used by two different threads.
It is also possible to create a data object for this problem (see nmrIsOrthonormalDynamicData and nmrIsOrthonormalFixedSizeData).
4 Others
4.1 nmrGaussJordanInverse
The Gauss-Jordan inverse functions are implemented natively for fixed size matrices of size 2x2, 3x3 and 4x4.
```cpp
void ExampleGaussJordanInverse(void) {
    vctFixedSizeMatrix<double, 4, 4> A, AInverse;
    vctRandom(A, -10.0, 10.0);
    bool nonSingular;
    double tolerance = 10E-6;
    // call nmrGaussJordanInverse4x4
    nmrGaussJordanInverse4x4(A, nonSingular, AInverse, tolerance);
    if (nonSingular) {
        std::cout << "A * AInverse:\n" << A * AInverse << std::endl;
    } else {
        std::cout << "A is a singular matrix" << std::endl;
    }
}
```

Note that since these functions are fully implemented using the cisst package, any storage order or stride can be used (i.e. the matrices don't need to be compact).
5 FORTRAN specifics
5.1 Compilation
Most of the FORTRAN routines we are using come from the on-line code repository netlib.org. There is no standard binary distribution of these routines, therefore we decided to provide a binary version of these routines (library and header files). We have two different versions for historical reasons:
- CNetlib: this is the oldest version, soon to be deprecated. Its main drawback is that it is not thread safe.
- cisstNetlib: this version is based on the LAPACK3E routines and is thread safe. We strongly recommend using this version.

The cisstNumerical API is the same (i.e. your code will be the same) for both binary distributions, but you will need to configure your build differently using CMake: you will have to activate either CISST_HAS_CISSTNETLIB or CISST_HAS_CNETLIB.
For more details and to download these libraries, see www.cisst.org/cnetlib.
5.2 Common properties
All our wrappers for FORTRAN routines share the following properties:
- The default storage order for matrices is column first in FORTRAN while it is row first in C/C++. Since cisstVector supports both formats, the user has to remember to create his/her matrices column first (using VCT_COL_MAJOR). This is the default, but there are some exceptions. For example, nmrSVD can be used with any storage order. In that case, cisstNumerical uses the fact that changing the storage order amounts to a transpose: for the SVD, the decomposition is performed on the transposed problem and the roles of U and Vt are swapped accordingly.
- Most FORTRAN routines were not written with the concept of stride in mind. This means that all matrices and vectors which are ultimately used by a FORTRAN routine must be compact (i.e. use a contiguous block of memory).
- Most LAPACK routines will modify the input to avoid unnecessary memory allocation. Since cisstNumerical has also been designed to avoid implicit memory allocations and copies, it is the caller's responsibility to create a copy of the input for future use.
- These functions can only operate on matrices and vectors of doubles. This is because each function is actually a wrapper for a LAPACK routine which requires double precision floating point numbers.
- For integers (e.g. the vector of pivot indices), FORTRAN uses the equivalent of a C/C++ long int. To enforce this and remind the caller of this subtlety, the cisstNumerical interface defines and uses F_INTEGER.
- If the matrices or vectors provided by the user are not correct (size, storage order, compactness), an exception will be thrown (std::runtime_error). Since these exceptions are logged, the user might want to look at the file cisstLog.txt if his/her application quits unexpectedly. A sketch of catching this exception follows the list.
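The sketch below catches the exception explicitly; the mismatched sizes are chosen on purpose to trigger the parameter check:

```cpp
void ExampleCatchWrapperException(void) {
    vctDynamicMatrix<double> A(6, 6);
    vctRandom(A, -10.0, 10.0);
    // U is intentionally too small, so the size check should fail
    vctDynamicMatrix<double> U(3, 3), Vt(6, 6);
    vctDynamicVector<double> S(6);
    try {
        nmrSVD(A, U, S, Vt);
    } catch (std::runtime_error & e) {
        // the exception is also logged (see cisstLog.txt)
        std::cerr << "nmrSVD rejected the parameters: " << e.what() << std::endl;
    }
}
```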