diff --git a/Readme.md b/Readme.md
index 11f905ef7..6b7cc4dc5 100644
--- a/Readme.md
+++ b/Readme.md
@@ -3,21 +3,24 @@

![alt text](./Icons/Icon_small.png)

-## Install
-See The following user guide for installation instructions and an introduction to Cytnx:
+## What is Cytnx (pronounced as *sci-tens*)?

-[https://kaihsinwu.gitlab.io/Cytnx_doc/install.html](https://kaihsinwu.gitlab.io/Cytnx_doc/install.html)
-
-## Intro slide
-[Cytnx_v0.5.pdf (dated 07/25/2020)](https://drive.google.com/file/d/1vuc_fTbwkL5t52glzvJ0nNRLPZxj5en6/view?usp=sharing)
-
-## News
-    [v0.9.x]
-      - Implementation of new data structure for symmetric UniTensor, which differs from previous versions
+Cytnx is a tensor network library designed for quantum physics simulations based on tensor network algorithms. It offers the following features:
+* Most of the APIs are identical in C++ and Python, enabling seamless transitions between the two for prototyping and production.
+* Cytnx APIs share very similar interfaces with popular libraries such as NumPy, SciPy, and PyTorch, minimizing the learning curve for new users.
+* These easy-to-use, Python-style interfaces are also implemented on the C++ side, so users can bring their Python programming experience to C++ and speed up their programs.
+* Cytnx supports multi-device operations (CPUs/GPUs) directly at the base container level. Both the containers and the linear algebra functions share consistent APIs regardless of the devices on which the input tensors are stored, similar to PyTorch.
+* For algorithms in physics, Cytnx provides powerful tools such as UniTensor, Network, and Symmetry. These objects are built on top of Tensor objects and specifically aim to reduce the development effort for tensor network algorithms by simplifying the user interfaces.
+**Intro slides**
+>[Cytnx_v0.5.pdf (dated 07/25/2020)](https://drive.google.com/file/d/1vuc_fTbwkL5t52glzvJ0nNRLPZxj5en6/view?usp=sharing)
+## News
+    [v1.0.0]
+    This is the release of v1.0.0, the stable version of the project.
+    **See also**
+[Release Note](misc_doc/version.log).

## API Documentation:

@@ -30,271 +33,248 @@ See The following user guide for installation instructions and an introduction t

## Objects:
-    * Storage   [binded]
-    * Tensor    [binded]
-    * Accessor  [C++ only]
-    * Bond      [binded]
-    * Symmetry  [binded]
-    * CyTensor  [binded]
-    * Network   [binded]
+* Storage [Python binded]
+* Tensor [Python binded]
+* Accessor [C++ only]
+* Bond [Python binded]
+* Symmetry [Python binded]
+* UniTensor [Python binded]
+* Network [Python binded]

-## Feature:
+## Features:

-### Python x C++
-    Benefit from both side.
-    One can do simple prototype on Python side
-    and easy transfer to C++ with small effort!
+### Python & C++
+>Benefit from both sides!
+    One can do simple prototyping in Python
+    and easily transfer the code to C++ with little effort!

```c++
-    // C++ version:
-    #include "cytnx.hpp"
-    cytnx::Tensor A({3,4,5},cytnx::Type.Double,cytnx::Device.cpu)
+// C++ version:
+#include "cytnx.hpp"
+cytnx::Tensor A({3,4,5},cytnx::Type.Double,cytnx::Device.cpu);
```

```python
-    # Python version:
-    import cytnx
-    A = cytnx.Tensor((3,4,5),dtype=cytnx.Type.Double,device=cytnx.Device.cpu)
+# Python version:
+import cytnx
+A = cytnx.Tensor((3,4,5),dtype=cytnx.Type.Double,device=cytnx.Device.cpu)
```

-### 1. All the Storage and Tensor can now have mulitple type support.
-    The avaliable types are :
+### 1. All Storage and Tensor objects support multiple types.
+Available types are :

-    | cytnx type       | c++ type             | Type object
-    |------------------|----------------------|--------------------
-    | cytnx_double     | double               | Type.Double
-    | cytnx_float      | float                | Type.Float
-    | cytnx_uint64     | uint64_t             | Type.Uint64
-    | cytnx_uint32     | uint32_t             | Type.Uint32
-    | cytnx_uint16     | uint16_t             | Type.Uint16
-    | cytnx_int64      | int64_t              | Type.Int64
-    | cytnx_int32      | int32_t              | Type.Int32
-    | cytnx_int16      | int16_t              | Type.Int16
-    | cytnx_complex128 | std::complex<double> | Type.ComplexDouble
-    | cytnx_complex64  | std::complex<float>  | Type.ComplexFloat
-    | cytnx_bool       | bool                 | Type.Bool
+| cytnx type       | c++ type             | Type object
+|------------------|----------------------|--------------------
+| cytnx_double     | double               | Type.Double
+| cytnx_float      | float                | Type.Float
+| cytnx_uint64     | uint64_t             | Type.Uint64
+| cytnx_uint32     | uint32_t             | Type.Uint32
+| cytnx_uint16     | uint16_t             | Type.Uint16
+| cytnx_int64      | int64_t              | Type.Int64
+| cytnx_int32      | int32_t              | Type.Int32
+| cytnx_int16      | int16_t              | Type.Int16
+| cytnx_complex128 | std::complex<double> | Type.ComplexDouble
+| cytnx_complex64  | std::complex<float>  | Type.ComplexFloat
+| cytnx_bool       | bool                 | Type.Bool
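+
+For example (a minimal sketch; `dtype_str()`, which returns the dtype name, is assumed from the API documentation):
+```c++
+Tensor A({3,4},Type.Int64);         // a 64-bit integer tensor
+Tensor B({3,4},Type.ComplexDouble); // a complex128 tensor
+cout << A.dtype_str() << endl;      // print the dtype names
+cout << B.dtype_str() << endl;
+```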

### 2. Storage
-    * Memory container with GPU/CPU support.
-      maintain type conversions (type casting btwn Storages)
-      and moving btwn devices.
-    * Generic type object, the behavior is very similar to python.
+* Memory container with GPU/CPU support.
+  Type conversions (type casting between Storages)
+  and moving between devices are easily possible.
+* Generic type object, the behavior is very similar to Python.

```c++
-    Storage A(400,Type.Double);
-    for(int i=0;i<400;i++)
-        A.at<double>(i) = i;
+Storage A(400,Type.Double);
+for(int i=0;i<400;i++)
+  A.at<double>(i) = i;

-    Storage B = A; // A and B share same memory, this is similar as python
+Storage B = A; // A and B share same memory, this is similar to Python

-    Storage C = A.to(Device.cuda+0);
+Storage C = A.to(Device.cuda+0);
```

### 3. Tensor
-    * A tensor, API very similar to numpy and pytorch.
-    * simple moving btwn CPU and GPU:
+* A tensor, API very similar to numpy and pytorch.
+* Simple moving between CPU and GPU:

```c++
-    Tensor A({3,4},Type.Double,Device.cpu); // create tensor on CPU (default)
-    Tensor B({3,4},Type.Double,Device.cuda+0); // create tensor on GPU with gpu-id=0
+Tensor A({3,4},Type.Double,Device.cpu); // create tensor on CPU (default)
+Tensor B({3,4},Type.Double,Device.cuda+0); // create tensor on GPU with gpu-id=0
+Tensor C = B; // C and B share same memory.

-    Tensor C = B; // C and B share same memory.

+// move A to GPU
+Tensor D = A.to(Device.cuda+0);

-    // move A to gpu
-    Tensor D = A.to(Device.cuda+0);
-
-    // inplace move A to gpu
-    A.to_(Device.cuda+0);
+// inplace move A to GPU
+A.to_(Device.cuda+0);
```

-    * Type conversion in between avaliable:
+* Type conversion possible:

```c++
-    Tensor A({3,4},Type.Double);
-    Tensor B = A.astype(Type.Uint64); // cast double to uint64_t
+Tensor A({3,4},Type.Double);
+Tensor B = A.astype(Type.Uint64); // cast double to uint64_t
```

-    * vitual swap and permute. All the permute and swap will not change the underlying memory
-    * Use Contiguous() when needed to actual moving the memory layout.
+* Virtual swap and permute. All permute and swap operations do not change the underlying memory immediately. This minimizes the cost of moving elements.
+* Use `contiguous()` when needed to actually move the memory layout.

```c++
-    Tensor A({3,4,5,2},Type.Double);
-    A.permute_(0,3,1,2); // this will not change the memory, only the shape info is changed.
- cout << A.is_contiguous() << endl; // this will be false! +Tensor A({3,4,5,2},Type.Double); +A.permute_(0,3,1,2); // this will not change the memory, only the shape info is changed. +cout << A.is_contiguous() << endl; // false - A.contiguous_(); // call Configuous() to actually move the memory. - cout << A.is_contiguous() << endl; // this will be true! +A.contiguous_(); // call Contiguous() to actually move the memory. +cout << A.is_contiguous() << endl; // true ``` - * access single element using .at +* Access a single element using `.at` ```c++ - Tensor A({3,4,5},Type.Double); - double val = A.at(0,2,2); +Tensor A({3,4,5},Type.Double); +double val = A.at(0,2,2); ``` - * access elements with python slices similarity: +* Access elements similar to Python slices: ```c++ - typedef Accessor ac; - Tensor A({3,4,5},Type.Double); - Tensor out = A(0,":","1:4"); - // equivalent to python: out = A[0,:,1:4] +typedef Accessor ac; +Tensor A({3,4,5},Type.Double); +Tensor out = A(0,":","1:4"); +// equivalent to Python: out = A[0,:,1:4] ``` ### 4. UniTensor - * extension of Tensor, specifically design for Tensor network simulation. - - * See Intro slide for more details +* Extension of Tensor, specifically designed for Tensor network simulations. +* `UniTensor` is a tensor with additional information such as `Bond`, `Symmetry` and `labels`. With these information, one can easily implement the tensor contraction. ```c++ - Tensor A({3,4,5},Type.Double); - UniTensor tA = UniTensor(A,2); // convert directly. +Tensor A({3,4,5},Type.Double); +UniTensor tA = UniTensor(A); // convert directly. +UniTensor tB = UniTensor({Bond(3),Bond(4),Bond(5)},{}); // init from scratch. +// Relabel the tensor and then contract. +tA.relabels_({"common_1", "common_2", "out_a"}); +tB.relabels_({"common_1", "common_2", "out_b"}); +UniTensor out = cytnx::Contract(tA,tB); +tA.print_diagram(); +tB.print_diagram(); +out.print_diagram(); +``` +Output: +``` +----------------------- +tensor Name : +tensor Rank : 3 +block_form : False +is_diag : False +on device : cytnx device: CPU + --------- + / \ + common_1 ____| 3 4 |____ common_2 + | | + | 5 |____ out_a + \ / + --------- +----------------------- +tensor Name : +tensor Rank : 3 +block_form : False +is_diag : False +on device : cytnx device: CPU + --------- + / \ + common_1 ____| 3 4 |____ common_2 + | | + | 5 |____ out_b + \ / + --------- +----------------------- +tensor Name : +tensor Rank : 2 +block_form : False +is_diag : False +on device : cytnx device: CPU + -------- + / \ + | 5 |____ out_a + | | + | 5 |____ out_b + \ / + -------- - UniTensor tB = UniTensor({Bond(3),Bond(4),Bond(5)},{},2); // init from scratch. ``` +* `UniTensor` supports `Block` form, which is useful if the physical system has a symmetry. See [user guide](https://kaihsinwu.gitlab.io/Cytnx_doc/) for more details. +## Linear Algebra +Cytnx provides a set of linear algebra functions. +* For instance, one can perform SVD, Eig, Eigh decomposition, etc. on a `Tensor` or `UniTensor`. +* Iterative methods such as Lanczos, Arnoldi are also available. +* The linear algebra functions are implemented in the `linalg` namespace. +For more details, see the [API documentation](https://kaihsinwu.gitlab.io/cytnx_api/). 
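+For example, the following snippet (using the `random::normal` and `linalg::Svd` functions mentioned above) draws a normally distributed random matrix and computes its singular value decomposition: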
+```c++ +auto mean = 0.0; +auto std = 1.0; +Tensor A = cytnx::random::normal({3, 4}, mean, std); +auto svds = cytnx::linalg::Svd(A); // SVD decomposition +``` ## Examples - - A repository with the following examples will be released soon under the Cytnx organization on github: +See the examples in the folder `example` See example/ folder or documentation for how to use API See example/iTEBD folder for implementation on iTEBD algo. See example/DMRG folder for implementation on DMRG algo. - See example/iDMRG folder for implementation on iDMRG algo. - See example/HOTRG folder for implementation on HOTRG algo for classical system. - See example/ED folder for implementation using LinOp & Lanczos. - - -## Avaliable linear-algebra function (Keep updating): - - func | inplace | CPU | GPU | callby tn | Tn | CyTn (xlinalg) - --------------|-----------|-----|------|-------------|----|---------------- - Add | x | Y | Y | Y | Y | Y - Sub | x | Y | Y | Y | Y | Y - Mul | x | Y | Y | Y | Y | Y - Div | x | Y | Y | Y | Y | Y - Cpr | x | Y | Y | Y | Y | x - --------------|-----------|-----|------|-------------|----|---------------- - +,+=[tn] | x | Y | Y | Y (Add_) | Y | Y - -,-=[tn] | x | Y | Y | Y (Sub_) | Y | Y - *,*=[tn] | x | Y | Y | Y (Mul_) | Y | Y - /,/=[tn] | x | Y | Y | Y (Div_) | Y | Y - ==[tn] | x | Y | Y | Y (Cpr_) | Y | x - --------------|-----------|-----|------|-------------|----|---------------- - Svd | x | Y | Y | Y | Y | Y - *Svd_truncate| x | Y | Y | N | Y | Y - InvM | InvM_ | Y | Y | Y | Y | N - Inv | Inv _ | Y | Y | Y | Y | N - Conj | Conj_ | Y | Y | Y | Y | Y - --------------|-----------|-----|------|-------------|----|---------------- - Exp | Exp_ | Y | Y | Y | Y | N - Expf | Expf_ | Y | Y | Y | Y | N - Eigh | x | Y | Y | Y | Y | N - *ExpH | x | Y | Y | N | Y | Y - *ExpM | x | Y | N | N | Y | Y - --------------|-----------|-----|------|-------------|----|---------------- - Matmul | x | Y | Y | N | Y | N - Diag | x | Y | Y | N | Y | N - *Tensordot | x | Y | Y | N | Y | N - Outer | x | Y | Y | N | Y | N - Vectordot | x | Y | .Y | N | Y | N - --------------|-----------|-----|------|-------------|----|---------------- - Tridiag | x | Y | N | N | Y | N - Kron | x | Y | N | N | Y | N - Norm | x | Y | Y | Y | Y | N - *Dot | x | Y | Y | N | Y | N - Eig | x | Y | N | N | Y | N - --------------|-----------|-----|------|-------------|----|---------------- - Pow | Pow_ | Y | Y | Y | Y | Y - Abs | Abs_ | Y | N | Y | Y | N - Qr | x | Y | N | N | Y | Y - Qdr | x | Y | N | N | Y | Y - Det | x | Y | N | N | Y | N - --------------|-----------|-----|------|-------------|----|---------------- - Min | x | Y | N | Y | Y | N - Max | x | Y | N | Y | Y | N - *Trace | x | Y | N | Y | Y | Y - Mod | x | Y | Y | Y | Y | Y - Matmul_dg | x | Y | Y | N | Y | N - --------------|-----------|-----|------|-------------|----|---------------- - *Tensordot_dg | x | Y | Y | N | Y | N - - iterative solver: - - Lanczos_ER - - - * this is a high level linalg - - ^ this is temporary disable - - . 
this is floating point type only

-## Container Generators
-
-    Tensor: zeros(), ones(), arange(), identity(), eye()
-
-## Physics category
-
-    Tensor: pauli(), spin()
-
-
-## Random
-    func            | Tn | Stor | CPU | GPU
-    -----------------------------------------
-    *Make_normal()  | Y  | Y    | Y   | Y
-    *Make_uniform() | Y  | Y    | Y   | N
-    ^normal()       | Y  | x    | Y   | Y
-    ^uniform()      | Y  | x    | Y   | N
-
-    * this is initializer
-    ^ this is generator
-
-    [Note] The difference between initializer and generator is that the initializer is used to initialize the Tensor, and the generator creates a new Tensor.
+    See example/TDVP folder for implementation on TDVP algo.
+    See example/LinOp and example/ED folder for implementation using LinOp & Lanczos; a minimal sketch of this pattern follows.
+
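+A rough sketch of the LinOp + Lanczos pattern used in those examples (assumptions: the `LinOp` constructor arguments and `matvec` override follow example/LinOp, `linalg::Dot` gives the matrix-vector product, and "Gnd" selects the ground state in `linalg::Lanczos`; defaults may differ between versions):
+```c++
+// A custom operator defined only through its action on a vector.
+class MyOp : public LinOp {
+ public:
+  MyOp() : LinOp("mv", 4, Type.Double, Device.cpu) {}
+  Tensor matvec(const Tensor &v) override {
+    Tensor H = zeros({4, 4});
+    for (int i = 0; i < 3; i++) {
+      H.at<double>(i, i + 1) = 1.;  // nearest-neighbor hopping
+      H.at<double>(i + 1, i) = 1.;
+    }
+    return linalg::Dot(H, v);  // H acting on v
+  }
+};
+
+MyOp op;
+Tensor v0 = random::normal({4}, 0., 1.);     // random initial vector
+auto res = linalg::Lanczos(&op, v0, "Gnd");  // lowest eigenpair (sketch)
+```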
## How to contribute & get in contact
-    If you want to contribute to the development of the library, you are more than welocome. No matter if you want to dig deep into the technical details of the library, help improving the documentation and make the library more accessible to new users, or if you want to contribute to the project with high level algorithms - we are happy to keep improving Cytnx together.
-    Also, if you have any questions or suggestions, feel free to reach out to us.
+If you want to contribute to the development of the library, you are more than welcome. Whether you want to dig deep into the technical details of the library, help improve the documentation and make the library more accessible to new users, or contribute high-level algorithms to the project, we are happy to keep improving Cytnx together.
+Also, if you have any questions or suggestions, feel free to reach out to us.

-    You can contact us by:
-    * Discord:
+You can contact us by:
+* Discord:
[https://discord.gg/dyhF7CCE9D](https://discord.gg/dyhF7CCE9D)
-    * Creating an issue on github if you find a bug or have a suggestion:
-
+* Creating an issue on GitHub if you find a bug or have a suggestion:
[https://github.com/Cytnx-dev/Cytnx/issues](https://github.com/Cytnx-dev/Cytnx/issues)
-    * Email, see below
+* Email, see below

## Developers & Maintainers
-
-    [Creator and Project manager]
-    Kai-Hsin Wu (Boston Univ., USA) kaihsinwu@gmail.com
-
-    Chang Teng Lin (NTU, Taiwan): major maintainer and developer
-    Ke Hsu (NTU, Taiwan): major maintainer and developer
-    Hao Ti (NTU, Taiwan): documentation and linalg
-    Ying-Jer Kao (NTU, Taiwan): setuptool, cmake
-
+Creator and Project manager | Affiliation | Email
+----------------------------|-----------------|---------
+Kai-Hsin Wu                 |Boston Univ., USA|kaihsinwu@gmail.com
+
+Developers      | Affiliation | Roles
+----------------|-------------|---------
+Chang-Teng Lin  |NTU, Taiwan  |major maintainer and developer
+Ke Hsu          |NTU, Taiwan  |major maintainer and developer
+Ivana Gyro      |NTU, Taiwan  |major maintainer and developer
+Hao-Ti Hung     |NTU, Taiwan  |documentation and linalg
+Ying-Jer Kao    |NTU, Taiwan  |setuptool, cmake

## Contributors
-
-    PoChung Chen (NCHU, Taiwan)
-    Chia-Min Chung (NSYSU, Taiwan)
-    Manuel Schneider (NYCU, Taiwan)
-    Yen-Hsin Wu (NTU, Taiwan)
-    Po-Kwan Wu (OSU, USA)
-    Wen-Han Kao (UMN, USA)
-    Yu-Hsueh Chen (NTU, Taiwan)
+Contributors    | Affiliation
+----------------|-----------------
+PoChung Chen    | NTHU, Taiwan
+Chia-Min Chung  | NSYSU, Taiwan
+Ian McCulloch   | NTHU, Taiwan
+Manuel Schneider| NYCU, Taiwan
+Yen-Hsin Wu     | NTU, Taiwan
+Po-Kwan Wu      | OSU, USA
+Wen-Han Kao     | UMN, USA
+Yu-Hsueh Chen   | NTU, Taiwan
+Yu-Cheng Lin    | NTU, Taiwan

## References
+* Paper:
+[https://arxiv.org/abs/2401.01921](https://arxiv.org/abs/2401.01921)

-    * example/DMRG:
+* Example/DMRG:
[https://www.tensors.net/dmrg](https://www.tensors.net/dmrg)

-    * hptt library:
+* hptt library:
[https://github.com/springer13/hptt](https://github.com/springer13/hptt)
diff --git a/docs.doxygen b/docs.doxygen
index c72855521..a0a0ea022 100644
--- a/docs.doxygen
+++ b/docs.doxygen
@@ -38,7 +38,7 @@ PROJECT_NAME = "Cytnx"
# could be handy for archiving the generated documentation or if some version
# control system is used.

-PROJECT_NUMBER = "v0.9.7"
+PROJECT_NUMBER = "v1.0.0"

# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
@@ -51,7 +51,7 @@ PROJECT_BRIEF =
# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
# the logo to the output directory.

-PROJECT_LOGO = "./Icon_small.png"
+PROJECT_LOGO = "./Icons/Icon_small.png"

# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
@@ -793,7 +793,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.

-INPUT = "./python_doc" "./dox.md" "./include" "./include/tn_algo" "./include/linalg.hpp" "./include/utils/utils.hpp" "./include/UniTensor.hpp" "./misc_doc" #"./src/linalg" "./src/utils"
+INPUT = "./python_doc" "./dox.md" "./include" "./misc_doc"

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
@@ -877,7 +877,7 @@ RECURSIVE = NO
# Note that relative paths are relative to the directory from which doxygen is
# run.

-EXCLUDE = "./include/Gncon.hpp"
+EXCLUDE = "./include/Gncon.hpp" "./include/contraction_tree" "./include/search_tree.hpp" "./include/stat.hpp" "./include/sp.py"

# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix file system feature) are excluded
diff --git a/dox.md b/dox.md
index c37df548d..ccfb7b156 100644
--- a/dox.md
+++ b/dox.md
@@ -29,8 +29,8 @@

 ```

-### 1. All the Storage and Tensor can now have mulitple type support.
-    The avaliable types are :
+### 1. All the Storage and Tensor have multiple type support.
+    Available types are : (please refer to \link Type Type \endlink)

    | cytnx type       | c++ type             | Type object
    |------------------|----------------------|--------------------
    | cytnx_double     | double               | Type.Double
    | cytnx_float      | float                | Type.Float
    | cytnx_uint64     | uint64_t             | Type.Uint64
    | cytnx_uint32     | uint32_t             | Type.Uint32
    | cytnx_uint16     | uint16_t             | Type.Uint16
    | cytnx_int64      | int64_t              | Type.Int64
    | cytnx_int32      | int32_t              | Type.Int32
    | cytnx_int16      | int16_t              | Type.Int16
    | cytnx_complex128 | std::complex<double> | Type.ComplexDouble
    | cytnx_complex64  | std::complex<float>  | Type.ComplexFloat
    | cytnx_bool       | bool                 | Type.Bool

### 2. Multiple devices support.
- * simple moving btwn CPU and GPU (see below) + * simple moving btwn CPU and GPU (see \link cytnx::Device Device \endlink and below) ## Objects: - * \link cytnx::Storage Storage \endlink [binded] - * \link cytnx::Tensor Tensor \endlink [binded] - * \link cytnx::Bond Bond \endlink [binded] - * \link cytnx::Accessor Accessor \endlink [c++ only] - * \link cytnx::Symmetry Symmetry \endlink [binded] - * \link cytnx::UniTensor UniTensor \endlink [binded] - * \link cytnx::Network Network \endlink [binded] + * Storage [Python binded] + * \link cytnx::Tensor Tensor \endlink [Python binded] + * \link cytnx::Bond Bond \endlink [Python binded] + * \link cytnx::Accessor Accessor \endlink [C++ only] + * \link cytnx::Symmetry Symmetry \endlink [Python binded] + * \link cytnx::UniTensor UniTensor \endlink [Python binded] + * \link cytnx::Network Network \endlink [Python binded] ## linear algebra functions: - See \link cytnx::linalg cytnx::linalg \endlink for further details - - func | inplace | CPU | GPU | callby tn - ----------|-----------|-----|------|----------- - \link cytnx::linalg::Add Add\endlink | x | Y | Y | Y - \link cytnx::linalg::Sub Sub\endlink | x | Y | Y | Y - \link cytnx::linalg::Mul Mul\endlink | x | Y | Y | Y - \link cytnx::linalg::Div Div\endlink | x | Y | Y | Y - \link cytnx::linalg::Cpr Cpr\endlink | x | Y | Y | Y - \link cytnx::linalg::Mod Mod\endlink | x | Y | Y | Y - +,+=[tn]| x | Y | Y | Y (\link cytnx::Tensor::Add_ Tensor.Add_\endlink) - -,-=[tn]| x | Y | Y | Y (\link cytnx::Tensor::Sub_ Tensor.Sub_\endlink) - *,*=[tn]| x | Y | Y | Y (\link cytnx::Tensor::Mul_ Tensor.Mul_\endlink) - /,/=[tn]| x | Y | Y | Y (\link cytnx::Tensor::Div_ Tensor.Div_\endlink) - == [tn]| x | Y | Y | Y (\link cytnx::Tensor::Cpr_ Tensor.Cpr_\endlink) - \link cytnx::linalg::Svd Svd\endlink | x | Y | Y | Y - *\link cytnx::linalg::Svd_truncate Svd_truncate\endlink | x | Y | Y | N - \link cytnx::linalg::InvM InvM\endlink | \link cytnx::linalg::InvM_ InvM_\endlink | Y | Y | Y - \link cytnx::linalg::Inv Inv\endlink | \link cytnx::linalg::Inv_ Inv_\endlink | Y | Y | Y - \link cytnx::linalg::Conj Conj\endlink | \link cytnx::linalg::Conj_ Conj_\endlink | Y | Y | Y - \link cytnx::linalg::Exp Exp\endlink | \link cytnx::linalg::Exp_ Exp_\endlink | Y | Y | Y - \link cytnx::linalg::Expf Expf\endlink | \link cytnx::linalg::Expf_ Expf_\endlink | Y | Y | Y - *\link cytnx::linalg::ExpH ExpH\endlink | x | Y | Y | N - *\link cytnx::linalg::ExpM ExpM\endlink | x | Y | Y | N - \link cytnx::linalg::Eigh Eigh\endlink | x | Y | Y | Y - \link cytnx::linalg::Matmul Matmul\endlink | x | Y | Y | N - \link cytnx::linalg::Diag Diag\endlink | x | Y | Y | N - *\link cytnx::linalg::Tensordot Tensordot\endlink | x | Y | Y | N - \link cytnx::linalg::Outer Outer\endlink | x | Y | Y | N - \link cytnx::linalg::Kron Kron\endlink | x | Y | N | N - \link cytnx::linalg::Norm Norm\endlink | x | Y | Y | Y - \link cytnx::linalg::Vectordot Vectordot\endlink | x | Y | .Y | N - \link cytnx::linalg::Tridiag Tridiag\endlink | x | Y | N | N - *\link cytnx::linalg::Dot Dot\endlink | x | Y | Y | N - \link cytnx::linalg::Eig Eig\endlink | x | Y | N | Y - \link cytnx::linalg::Pow Pow\endlink | \link cytnx::linalg::Pow_ Pow_\endlink | Y | Y | Y - \link cytnx::linalg::Abs Abs\endlink | \link cytnx::linalg::Abs_ Abs_\endlink | Y | N | Y - \link cytnx::linalg::Qr Qr\endlink | x | Y | N | N - \link cytnx::linalg::Qdr Qdr\endlink | x | Y | N | N - \link cytnx::linalg::Min Min\endlink | x | Y | N | Y - \link cytnx::linalg::Max Max\endlink | x | Y | N | Y - *\link 
cytnx::linalg::Trace Trace\endlink | x | Y | N | N - - - iterative solver - - \link cytnx::linalg::Lanczos_ER Lanczos_ER\endlink - - - * this is a high level linalg - - ^ this is temporary disable - - . this is floating point type only + +See \link cytnx::linalg cytnx::linalg \endlink for further details + |func | inplace | CPU | GPU | callby Tensor | Tensor | UniTensor| + |-------------|-----------|-----|------|-----------------|--------|-----------| + Add | \link cytnx::Tensor::Add_( const T& rhs) Add_\endlink |✓ | ✓ | \link cytnx::Tensor::Add( const T& rhs) Add\endlink | \link cytnx::linalg::Add(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) Add\endlink | \link cytnx::linalg::Add( const cytnx::UniTensor& Lt, const cytnx::UniTensor& Rt ) Add\endlink + Sub | \link cytnx::Tensor::Sub_( const T& rhs) Sub_\endlink |✓ | ✓ | \link cytnx::Tensor::Sub( const T& rhs) Sub\endlink | \link cytnx::linalg::Sub(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) Sub\endlink | \link cytnx::linalg::Sub( const cytnx::UniTensor& Lt, const cytnx::UniTensor& Rt ) Sub\endlink + Mul | \link cytnx::Tensor::Mul_( const T& rhs) Mul_\endlink |✓ | ✓ | \link cytnx::Tensor::Mul( const T& rhs) Mul\endlink | \link cytnx::linalg::Mul(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) Mul\endlink | \link cytnx::linalg::Mul( const cytnx::UniTensor& Lt, const cytnx::UniTensor& Rt ) Mul\endlink + Div | \link cytnx::Tensor::Div_( const T& rhs) Div_\endlink |✓ | ✓ | \link cytnx::Tensor::Div( const T& rhs) Div\endlink | \link cytnx::linalg::Div(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) Div\endlink | \link cytnx::linalg::Div( const cytnx::UniTensor& Lt, const cytnx::UniTensor& Rt ) Div\endlink + Mod | x |✓ | ✓ | \link cytnx::Tensor::Mod(const T& rhs) Mod\endlink | \link cytnx::linalg::Mod(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) Mod\endlink | \link cytnx::linalg::Mod( const cytnx::UniTensor& Lt, const cytnx::UniTensor& Rt ) Mod\endlink + Cpr | x |✓ | ✓ | \link cytnx::Tensor::Cpr( const T& rhs) Cpr\endlink | \link cytnx::linalg::Cpr(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) Cpr\endlink | x + +,+=| \link cytnx::Tensor::operator+=(const T& rc) +=\endlink |✓ | ✓ | \link cytnx::Tensor::operator+=(const T& rc) +=\endlink | \link cytnx::operator+(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) +\endlink,\link cytnx::Tensor::operator+=(const T& rc) +=\endlink| \link cytnx::operator+(const cytnx::UniTensor& Lt, const cytnx::UniTensor& Rt) +\endlink,\link cytnx::UniTensor::operator+=(const cytnx::UniTensor& rhs) +=\endlink + -,-=| \link cytnx::Tensor::operator-=(const T& rc) -=\endlink |✓ | ✓ | \link cytnx::Tensor::operator-=(const T& rc) -=\endlink | \link cytnx::operator-(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) -\endlink,\link cytnx::Tensor::operator-=(const T& rc) -=\endlink| \link cytnx::operator-(const cytnx::UniTensor& Lt, const cytnx::UniTensor& Rt) -\endlink,\link cytnx::UniTensor::operator-=(const cytnx::UniTensor& rhs) -=\endlink + *,*=| \link cytnx::Tensor::operator*=(const T& rc) *=\endlink |✓ | ✓ | \link cytnx::Tensor::operator*=(const T& rc) *=\endlink | \link cytnx::operator*(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) *\endlink,\link cytnx::Tensor::operator*=(const T& rc) *=\endlink| \link cytnx::operator*(const cytnx::UniTensor& Lt, const cytnx::UniTensor& Rt) *\endlink,\link cytnx::UniTensor::operator*=(const cytnx::UniTensor& rhs) *=\endlink + /,/=| \link cytnx::Tensor::operator/=(const T& rc) /=\endlink |✓ | ✓ | \link cytnx::Tensor::operator/=(const T& rc) /=\endlink | \link 
cytnx::operator/(const cytnx::Tensor& Lt, const cytnx::Tensor& Rt) /\endlink,\link cytnx::Tensor::operator/=(const T& rc) /=\endlink| \link cytnx::operator/(const cytnx::UniTensor& Lt, const cytnx::UniTensor& Rt) /\endlink,\link cytnx::UniTensor::operator/=(const cytnx::UniTensor& rhs) /=\endlink + Svd | x |✓ | ✓ | \link cytnx::Tensor::Svd(const bool& is_UvT) const Svd\endlink|\link cytnx::linalg::Svd(const cytnx::Tensor & Tin, const bool & is_UvT) Svd\endlink|\link cytnx::linalg::Svd(const cytnx::UniTensor & Tin, const bool & is_UvT ) Svd\endlink + Gesvd | x |✓ | ✓ | x |\link cytnx::linalg::Gesvd(const cytnx::Tensor & Tin, const bool & is_U, const bool& is_vT) Gesvd\endlink|\link cytnx::linalg::Gesvd(const cytnx::UniTensor & Tin, const bool & is_U, const bool& is_vT) Gesvd\endlink + Svd_truncate | x |✓ | ✓ | x |\link cytnx::linalg::Svd_truncate(const cytnx::Tensor& Tin, const cytnx_uint64& keepdim, const double& err, const bool& is_UvT, const unsigned int& return_err, const cytnx_uint64& mindim) Svd_truncate\endlink|\link cytnx::linalg::Svd_truncate(const cytnx::UniTensor& Tin, const cytnx_uint64& keepdim, const double& err, const bool& is_UvT, const unsigned int& return_err, const cytnx_uint64& mindim) Svd_truncate\endlink + Gesvd_truncate | x |✓ | ✓ | x |\link cytnx::linalg::Gesvd_truncate(const cytnx::Tensor& Tin, const cytnx_uint64& keepdim, const double& err, const bool& is_U, const bool& is_vT, const unsigned int& return_err, const cytnx_uint64& mindim) Gesvd_truncate\endlink|\link cytnx::linalg::Gesvd_truncate(const cytnx::UniTensor& Tin, const cytnx_uint64& keepdim, const double& err, const bool& is_U, const bool& is_vT, const unsigned int& return_err, const cytnx_uint64& mindim) Gesvd_truncate\endlink| + InvM | \link cytnx::linalg::InvM_(cytnx::Tensor& Tin) InvM_ \endlink |✓ | ✓ | \link cytnx::Tensor::InvM()const InvM \endlink |\link cytnx::linalg::InvM(const cytnx::Tensor& Tin) InvM \endlink|\link cytnx::linalg::InvM(const cytnx::UniTensor& Tin) InvM \endlink + Inv | \link cytnx::linalg::Inv_(cytnx::Tensor& Tin, const double& clip) Inv_ \endlink |✓ | ✓ | \link cytnx::Tensor::Inv(const double& clip)const Inv \endlink |\link cytnx::linalg::Inv(const cytnx::Tensor& Tin, const double& clip) Inv \endlink|x + Conj | \link cytnx::linalg::Conj_(cytnx::Tensor& Tin) Conj_ \endlink |✓ | ✓ | \link cytnx::Tensor::Conj() Conj \endlink |\link cytnx::linalg::Conj(const cytnx::Tensor& Tin) Conj \endlink|\link cytnx::linalg::Conj(const cytnx::UniTensor& Tin) Conj \endlink + Exp | \link cytnx::linalg::Exp_(cytnx::Tensor& Tin) Exp_ \endlink |✓ | ✓ | \link cytnx::Tensor::Exp() Exp \endlink |\link cytnx::linalg::Exp(const cytnx::Tensor& Tin) Exp \endlink|x + Expf | \link cytnx::linalg::Expf_(cytnx::Tensor& Tin) Expf_ \endlink |✓ | ✓ | x |\link cytnx::linalg::Expf(const cytnx::Tensor& Tin) Expf \endlink|x + Eigh | x |✓ | ✓ | \link cytnx::Tensor::Eigh(const bool& is_V, const bool& row_v)const Eigh\endlink |\link cytnx::linalg::Eigh(const cytnx::Tensor& Tin, const bool& is_V, const bool& row_v) Eigh\endlink|\link cytnx::linalg::Eigh(const cytnx::UniTensor& Tin, const bool& is_V, const bool& row_v) Eigh\endlink + ExpH | x |✓ | ✓ | x |\link cytnx::linalg::ExpH(const cytnx::Tensor& Tin, const T& a, const T& b) ExpH \endlink|\link cytnx::linalg::ExpH(const cytnx::UniTensor& Tin, const T& a, const T& b) ExpH \endlink + ExpM | x |✓ | x | x |\link cytnx::linalg::ExpM(const cytnx::Tensor& Tin, const T& a, const T& b) ExpM \endlink|\link cytnx::linalg::ExpM(const cytnx::UniTensor& Tin, const T& a, const T& b) 
ExpM \endlink + Matmul | x |✓ | ✓ | x |\link cytnx::linalg::Matmul(const cytnx::Tensor& TL, const cytnx::Tensor& TR) Matmul \endlink|x + Diag | x |✓ | ✓ | x |\link cytnx::linalg::Diag(const cytnx::Tensor& Tin) Diag \endlink|x + Tensordot | x |✓ | ✓ | x |\link cytnx::linalg::Tensordot(const cytnx::Tensor& Tl, const cytnx::Tensor& Tr, const std::vector& idxl, const std::vector& idxr, const bool& cacheL, const bool& cacheR) Tensordot \endlink|x + Outer | x |✓ | ✓ | x |\link cytnx::linalg::Outer(const cytnx::Tensor& Tl, const cytnx::Tensor& Tr) Outer \endlink|x + Vectordot | x |✓ | ✓ | x |\link cytnx::linalg::Vectordot(const cytnx::Tensor& Tl, const cytnx::Tensor& Tr, const bool& is_conj) Vectordot \endlink|x + Tridiag | x |✓ | ✓ | x |\link cytnx::linalg::Tridiag(const cytnx::Tensor& Diag, const cytnx::Tensor& Sub_diag, const bool& is_V, const bool& is_row, bool throw_excp) Tridiag \endlink|x + Kron | x |✓ | ✓ | x |\link cytnx::linalg::Kron(const cytnx::Tensor& Tl, const cytnx::Tensor& Tr, const bool& Tl_pad_left, const bool& Tr_pad_left) Kron \endlink|x + Norm | x |✓ | ✓ | \link cytnx::Tensor::Norm() Norm\endlink |\link cytnx::linalg::Norm(const cytnx::Tensor& Tin) Norm \endlink|\link cytnx::linalg::Norm(const cytnx::UniTensor& Tin) Norm \endlink + Dot | x |✓ | ✓ | x |\link cytnx::linalg::Dot(const cytnx::Tensor& Tl, const cytnx::Tensor& Tr) Dot \endlink|x + Eig | x |✓ | x | x |\link cytnx::linalg::Eig(const cytnx::Tensor& Tin, const bool& is_V, const bool& row_v) Eig\endlink|\link cytnx::linalg::Eig(const cytnx::UniTensor& Tin, const bool& is_V, const bool& row_v) Eig\endlink + Pow | \link cytnx::linalg::Pow_(cytnx::Tensor& Tin, const double& p) Pow_ \endlink |✓ | ✓ | \link cytnx::Tensor::Pow(const cytnx_double& p)const Pow \endlink |\link cytnx::linalg::Pow(const cytnx::Tensor& Tin, const double& p) Pow \endlink|\link cytnx::linalg::Pow(const cytnx::UniTensor& Tin, const double& p) Pow \endlink + Abs | \link cytnx::linalg::Abs_(cytnx::Tensor& Tin) Abs_ \endlink |✓ | ✓ | \link cytnx::Tensor::Abs()const Abs \endlink |\link cytnx::linalg::Abs(const cytnx::Tensor& Tin) Abs \endlink|x + Qr | x |✓ | ✓ | x |\link cytnx::linalg::Qr(const cytnx::Tensor& Tin, const bool& is_tau) Qr \endlink|\link cytnx::linalg::Qr(const cytnx::UniTensor& Tin, const bool& is_tau) Qr \endlink + Qdr | x |✓ | x | x |\link cytnx::linalg::Qdr(const cytnx::Tensor& Tin, const bool& is_tau) Qdr \endlink|\link cytnx::linalg::Qdr(const cytnx::UniTensor& Tin, const bool& is_tau) Qdr \endlink + Det | x |✓ | ✓ | x |\link cytnx::linalg::Det(const cytnx::Tensor& Tin) Det \endlink|x + Min | x |✓ | ✓ | \link cytnx::Tensor::Min()const Min\endlink |\link cytnx::linalg::Min(const cytnx::Tensor& Tn) Min\endlink|x + Max | x |✓ | ✓ | \link cytnx::Tensor::Max()const Max\endlink |\link cytnx::linalg::Max(const cytnx::Tensor& Tn) Max\endlink|x + Sum | x |✓ | ✓ | x |\link cytnx::linalg::Sum(const cytnx::Tensor& Tn) Sum\endlink|x + Trace | x |✓ | x | \link cytnx::Tensor::Trace(const cytnx_uint64& a, const cytnx_uint64& b)const Trace\endlink |\link cytnx::linalg::Trace(const cytnx::Tensor& Tn, const cytnx_uint64& axisA, const cytnx_uint64& axisB) Trace\endlink|\link cytnx::linalg::Trace(const cytnx::UniTensor& Tn, const std::string& a, const std::string& b) Trace \endlink + Matmul_dg | x |✓ | ✓ | x |\link cytnx::linalg::Matmul_dg(const cytnx::Tensor& TL, const cytnx::Tensor& TR) Matmul_dg \endlink|x + Tensordot_dg | x |✓ | x | x |\link cytnx::linalg::Tensordot_dg(const cytnx::Tensor& Tl, const cytnx::Tensor& Tr, const std::vector& idxl, const 
std::vector& idxr, const bool& diag_L) Tensordot_dg \endlink|x
+ Lstsq | x |✓ | x | x |\link cytnx::linalg::Lstsq(const cytnx::Tensor &A, const cytnx::Tensor &b, const float &rcond) Lstsq\endlink|x
+ Axpy | \link cytnx::linalg::Axpy_(const Scalar &a, const cytnx::Tensor &x, cytnx::Tensor &y) Axpy_ \endlink |✓ | x | x |\link cytnx::linalg::Axpy(const Scalar &a, const cytnx::Tensor &x, const cytnx::Tensor &y) Axpy \endlink|x
+ Ger | x |✓ | ✓ | x |\link cytnx::linalg::Ger(const cytnx::Tensor &x, const cytnx::Tensor &y, const Scalar &a) Ger\endlink|x
+ Gemm | \link cytnx::linalg::Gemm_(const Scalar &a, const cytnx::Tensor &x, const cytnx::Tensor &y, const Scalar& b, cytnx::Tensor& c) Gemm_\endlink |✓ | ✓ | x |\link cytnx::linalg::Gemm(const Scalar &a, const cytnx::Tensor &x, const cytnx::Tensor &y) Gemm\endlink|x
+ Gemm_Batch | x |✓ | ✓ | x |\link cytnx::linalg::Gemm_Batch(const std::vector< cytnx_int64 >& m_array, const std::vector< cytnx_int64 >& n_array, const std::vector< cytnx_int64 >& k_array, const std::vector< Scalar >& alpha_array, const std::vector< cytnx::Tensor >& a_tensors, const std::vector< cytnx::Tensor >& b_tensors, const std::vector< Scalar >& beta_array, std::vector< cytnx::Tensor >& c_tensors, const cytnx_int64 group_count, const std::vector< cytnx_int64 >& group_size ) Gemm_Batch\endlink|x
+
+**iterative solver:**
+ |func | CPU | GPU | Tensor | UniTensor|
+ |------------|-----|------|--------|----------|
+ |Lanczos |✓ | ✓ | \link cytnx::linalg::Lanczos(cytnx::LinOp *Hop, const cytnx::Tensor& Tin, const std::string method, const double &CvgCrit, const unsigned int &Maxiter, const cytnx_uint64 &k, const bool &is_V, const bool &is_row, const cytnx_uint32 &max_krydim, const bool &verbose) Lanczos\endlink|\link cytnx::linalg::Lanczos(cytnx::LinOp *Hop, const cytnx::UniTensor& Tin, const std::string method, const double &CvgCrit, const unsigned int &Maxiter, const cytnx_uint64 &k, const bool &is_V, const bool &is_row, const cytnx_uint32 &max_krydim, const bool &verbose) Lanczos\endlink
+ |Lanczos_Exp |✓ | x | x |\link cytnx::linalg::Lanczos_Exp(cytnx::LinOp *Hop, const cytnx::UniTensor& Tin, const Scalar& tau, const double &CvgCrit, const unsigned int &Maxiter, const bool &verbose) Lanczos_Exp\endlink
+ |Arnoldi |✓ | x | \link cytnx::linalg::Arnoldi(cytnx::LinOp *Hop, const cytnx::Tensor& Tin, const std::string which, const cytnx_uint64& maxiter, const cytnx_double &cvg_crit, const cytnx_uint64& k, const bool& is_V, const bool &verbose) Arnoldi\endlink |\link cytnx::linalg::Arnoldi(cytnx::LinOp *Hop, const cytnx::UniTensor& Tin, const std::string which, const cytnx_uint64& maxiter, const cytnx_double &cvg_crit, const cytnx_uint64& k, const bool& is_V, const bool &verbose) Arnoldi\endlink

## Container Generators
    Tensor: \link cytnx::zeros zeros()\endlink, \link cytnx::ones ones()\endlink, \link cytnx::arange arange()\endlink, \link cytnx::identity identity()\endlink, \link cytnx::eye eye()\endlink,

## Physics Category
-    Tensor: \link cytnx::physics::spin spin()\endlink \link cytnx::physics::pauli pauli()\endlink
+    Tensor: \link cytnx::physics::spin spin()\endlink, \link cytnx::physics::pauli pauli()\endlink

## Random
    See \link cytnx::random cytnx::random \endlink for further details

-    func        | Tn | Stor | CPU | GPU
-    ----------|-----|------|-----|-----------
-    *\link cytnx::random::Make_normal Make_normal\endlink | Y | Y | Y | Y
-    ^\link cytnx::random::normal normal\endlink | Y | x | Y | Y
+    func      | UniTensor | Tensor | Storage | CPU | GPU
----------|-----------|--------|---------|-----|----- + ^normal | x | \link cytnx::random::normal(const std::vector< cytnx_uint64 > &Nelem, const double &mean, const double &std, const int &device, const unsigned int &seed, const unsigned int &dtype) normal\endlink | x | ✓ | ✓ + ^uniform | x | \link cytnx::random::uniform(const std::vector< cytnx_uint64 > &Nelem, const double &low, const double &high, const int &device, const unsigned int &seed, const unsigned int &dtype) uniform\endlink | x | ✓ | ✓ + *normal_ | \link cytnx::random::normal_(cytnx::UniTensor& Tin, const double& mean, const double& std, const unsigned int& seed) normal_ \endlink | \link cytnx::random::normal_(cytnx::Tensor& Tin, const double& mean, const double& std, const unsigned int& seed) normal_ \endlink | \link cytnx::random::normal_(cytnx::Storage& Sin, const double& mean, const double& std, const unsigned int& seed) normal_ \endlink | ✓ | ✓ + *uniform_ | \link cytnx::random::uniform_(cytnx::UniTensor& Tin, const double& low, const double& high, const unsigned int& seed) uniform_ \endlink | \link cytnx::random::uniform_(cytnx::Tensor& Tin, const double& low, const double& high, const unsigned int& seed) uniform_ \endlink | \link cytnx::random::uniform_(cytnx::Storage& Sin, const double& low, const double& high, const unsigned int& seed) uniform_ \endlink | ✓ | ✓ - * this is initializer + `*` this is initializer - ^ this is generator + `^` this is generator - [Note] The difference of initializer and generator is that initializer is used to initialize the Tensor, and generator generates a new Tensor. + \note The difference of initializer and generator is that initializer is used to initialize the Tensor, and generator generates a new Tensor. ## conda install - [Currently Linux only] + **[Currently Linux only]** without CUDA * python 3.6/3.7/3.8: conda install -c kaihsinwu cytnx @@ -148,16 +155,16 @@ ### Storage * Memory container with GPU/CPU support. - maintain type conversions (type casting btwn Storages) - and moving btwn devices. - * Generic type object, the behavior is very similar to python. + Type conversions (type casting between Storages) + and moving between devices easily possible. + * Generic type object, the behavior is very similar to Python. ```{.cpp} Storage A(400,Type.Double); for(int i=0;i<400;i++) A.at(i) = i; - Storage B = A; // A and B share same memory, this is similar as python + Storage B = A; // A and B share same memory, this is similar to Python Storage C = A.to(Device.cuda+0); @@ -165,7 +172,7 @@ ### Tensor * A tensor, API very similar to numpy and pytorch. - * simple moving btwn CPU and GPU: + * Simple moving btwn CPU and GPU: ```{.cpp} Tensor A({3,4},Type.Double,Device.cpu); // create tensor on CPU (default) @@ -174,77 +181,146 @@ Tensor C = B; // C and B share same memory. - // move A to gpu + // move A to GPU Tensor D = A.to(Device.cuda+0); - // inplace move A to gpu + // inplace move A to GPU A.to_(Device.cuda+0); ``` - * Type conversion in between avaliable: + * Type conversion possible: ```{.cpp} Tensor A({3,4},Type.Double); Tensor B = A.astype(Type.Uint64); // cast double to uint64_t ``` - * vitual swap and permute. All the permute and swap will not change the underlying memory - * Use Contiguous() when needed to actual moving the memory layout. + * Virtual swap and permute. All the permute and swap operations do not change the underlying memory immediately. Minimized cost of moving elements. + * Use `contiguous()` when needed to actually move the memory layout. 
```{.cpp} Tensor A({3,4,5,2},Type.Double); A.permute_(0,3,1,2); // this will not change the memory, only the shape info is changed. - cout << A.is_contiguous() << endl; // this will be false! + cout << A.is_contiguous() << endl; // false - A.contiguous_(); // call Configuous() to actually move the memory. + A.contiguous_(); // call contiguous() to actually move the memory. cout << A.is_contiguous() << endl; // this will be true! ``` - * access single element using .at + * Access single element using `.at` ```{.cpp} Tensor A({3,4,5},Type.Double); double val = A.at(0,2,2); ``` - * access elements with python slices similarity: + * Access elements similar to Python slices: ```{.cpp} typedef Accessor ac; Tensor A({3,4,5},Type.Double); Tensor out = A(0,":","1:4"); - // equivalent to python: out = A[0,:,1:4] + // equivalent to Python: out = A[0,:,1:4] ``` +### UniTensor +* Extension of Tensor, specifically designed for Tensor network simulations. +* `UniTensor` is a tensor with additional information such as `Bond`, `Symmetry` and `labels`. With these information, one can easily implement the tensor contraction. +```c++ +Tensor A({3,4,5},Type.Double); +UniTensor tA = UniTensor(A); // convert directly. +UniTensor tB = UniTensor({Bond(3),Bond(4),Bond(5)},{}); // init from scratch. +// Relabel the tensor and then contract. +tA.relabels_({"common_1", "common_2", "out_a"}); +tB.relabels_({"common_1", "common_2", "out_b"}); +UniTensor out = cytnx::Contract(tA,tB); +tA.print_diagram(); +tB.print_diagram(); +out.print_diagram(); +``` +Output: +``` +----------------------- +tensor Name : +tensor Rank : 3 +block_form : False +is_diag : False +on device : cytnx device: CPU + --------- + / \ + common_1 ____| 3 4 |____ common_2 + | | + | 5 |____ out_a + \ / + --------- +----------------------- +tensor Name : +tensor Rank : 3 +block_form : False +is_diag : False +on device : cytnx device: CPU + --------- + / \ + common_1 ____| 3 4 |____ common_2 + | | + | 5 |____ out_b + \ / + --------- +----------------------- +tensor Name : +tensor Rank : 2 +block_form : False +is_diag : False +on device : cytnx device: CPU + -------- + / \ + | 5 |____ out_a + | | + | 5 |____ out_b + \ / + -------- +``` -## Fast Examples +* `UniTensor` supports `Block` form, which is useful if the physical system has a symmetry. See [user guide](https://kaihsinwu.gitlab.io/Cytnx_doc/) for more details. - See test.cpp for using C++ . - See test.py for using python +------------------------------ ## Developers & Maintainers - - [Creator and Project manager] - Kai-Hsin Wu (Boston Univ.) 
kaihsinwu@gmail.com
-
-    Chang Teng Lin (NTU, Taiwan): major maintainer and developer
-    Ke Hsu (NTU, Taiwan): major maintainer and developer
-    Hao Ti (NTU, Taiwan): documentation and linalg
-    Ying-Jer Kao (NTU, Taiwan): setuptool, cmake
-
+Creator and Project manager | Affiliation | Email
+----------------------------|-----------------|---------
+Kai-Hsin Wu                 |Boston Univ., USA|kaihsinwu@gmail.com
+\n
+
+Developers      | Affiliation | Roles
+----------------|-------------|---------
+Chang-Teng Lin  |NTU, Taiwan  |major maintainer and developer
+Ke Hsu          |NTU, Taiwan  |major maintainer and developer
+Ivana Gyro      |NTU, Taiwan  |major maintainer and developer
+Hao-Ti Hung     |NTU, Taiwan  |documentation and linalg
+Ying-Jer Kao    |NTU, Taiwan  |setuptool, cmake

## Contributors
-
-    Yen-Hsin Wu (NTU, Taiwan)
-    Po-Kwan Wu (OSU)
-    Wen-Han Kao (UMN, USA)
-    Yu-Hsueh Chen (NTU, Taiwan)
-    PoChung Chen (NCHU, Taiwan)
-
-
-## Refereces:
-
-    * example/DMRG:
-        https://www.tensors.net/dmrg
+Contributors    | Affiliation
+----------------|-----------------
+PoChung Chen    | NTHU, Taiwan
+Chia-Min Chung  | NSYSU, Taiwan
+Ian McCulloch   | NTHU, Taiwan
+Manuel Schneider| NYCU, Taiwan
+Yen-Hsin Wu     | NTU, Taiwan
+Po-Kwan Wu      | OSU, USA
+Wen-Han Kao     | UMN, USA
+Yu-Hsueh Chen   | NTU, Taiwan
+Yu-Cheng Lin    | NTU, Taiwan
+
+
+## References
+* Paper:
+[https://arxiv.org/abs/2401.01921](https://arxiv.org/abs/2401.01921)
+
+* Example/DMRG:
+[https://www.tensors.net/dmrg](https://www.tensors.net/dmrg)
+
+* hptt library:
+[https://github.com/springer13/hptt](https://github.com/springer13/hptt)
diff --git a/example/TDVP/tdvp1_dense.py b/example/TDVP/tdvp1_dense.py
index ac6b7ac31..3f3b95f56 100644
--- a/example/TDVP/tdvp1_dense.py
+++ b/example/TDVP/tdvp1_dense.py
@@ -263,14 +263,14 @@ def Local_meas(A, B, Op, site):
 def prepare_rand_init_MPS(Nsites, chi, d):
     lbls = []
     A = [None for i in range(Nsites)]
-    A[0] = cytnx.UniTensor(cytnx.random.normal([1, d, min(chi, d)], 0., 1.), rowrank = 2)
+    A[0] = cytnx.UniTensor(cytnx.random.normal([1, d, min(chi, d)], 0., 1., seed=0), rowrank = 2)
     A[0].relabels_(["0","1","2"])
     lbls.append(["0","1","2"]) # store the labels for later convinience.

     for k in range(1,Nsites):
         dim1 = A[k-1].shape()[2]; dim2 = d
         dim3 = min(min(chi, A[k-1].shape()[2] * d), d ** (Nsites - k - 1))
-        A[k] = cytnx.UniTensor(cytnx.random.normal([dim1, dim2, dim3],0.,1.), rowrank = 2)
+        A[k] = cytnx.UniTensor(cytnx.random.normal([dim1, dim2, dim3],0.,1., seed=0), rowrank = 2)
         lbl = [str(2*k),str(2*k+1),str(2*k+2)]
         A[k].relabels_(lbl)
diff --git a/example/Tensor/at.cpp b/example/Tensor/at.cpp
index f99147c0a..4d7e6515e 100644
--- a/example/Tensor/at.cpp
+++ b/example/Tensor/at.cpp
@@ -4,24 +4,24 @@ using namespace cytnx;
 using namespace std;
 int main() {
-  Tensor A = arange(30, Type.Float).reshape(2, 3, 5);
+  Tensor A = arange(30).reshape(2, 3, 5);
   cout << A << endl;

   // note that type resolver should be consist with the dtype
-  cout << A.at<float>(0, 0, 2) << endl;
+  cout << A.at<double>(0, 0, 2) << endl;  // the return is a ref., can be modified directly.
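+  // at<T>() returns a reference into the tensor's storage, so assigning
+  // through it modifies A in place; T must match the dtype (Double here,
+  // since arange() defaults to Type.Double).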
-  A.at<float>(0, 0, 2) = 999;
+  A.at<double>(0, 0, 2) = 999;

-  cout << A.at<float>(0, 0, 2) << endl;
+  cout << A.at<double>(0, 0, 2) << endl;

   // [Note] there are two way to give argument:
   // Method 1: more like 'c++' way:
   // (alternatively, you can also simply give a std::vector)
-  A.at<float>({0, 0, 2});  // note the braket{}
+  A.at<double>({0, 0, 2});  // note the braket{}

   // Method 2: more like 'python' way:
-  A.at<float>(0, 0, 2);
+  A.at<double>(0, 0, 2);

   return 0;
 }
diff --git a/example/Tensor/at.cpp.out b/example/Tensor/at.cpp.out
index e69de29bb..4b0362cde 100644
--- a/example/Tensor/at.cpp.out
+++ b/example/Tensor/at.cpp.out
@@ -0,0 +1,14 @@
+Total elem: 30
+type : Double (Float64)
+cytnx device: CPU
+Shape : (2,3,5)
+[[[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 4.00000e+00 ]
+  [5.00000e+00 6.00000e+00 7.00000e+00 8.00000e+00 9.00000e+00 ]
+  [1.00000e+01 1.10000e+01 1.20000e+01 1.30000e+01 1.40000e+01 ]]
+ [[1.50000e+01 1.60000e+01 1.70000e+01 1.80000e+01 1.90000e+01 ]
+  [2.00000e+01 2.10000e+01 2.20000e+01 2.30000e+01 2.40000e+01 ]
+  [2.50000e+01 2.60000e+01 2.70000e+01 2.80000e+01 2.90000e+01 ]]]
+
+
+2
+999
diff --git a/example/Tensor/to.cpp.out b/example/Tensor/to.cpp.out
index e69de29bb..ad0e81484 100644
--- a/example/Tensor/to.cpp.out
+++ b/example/Tensor/to.cpp.out
@@ -0,0 +1,2 @@
+cytnx device: CUDA/GPU-id:0
+cytnx device: CPU
diff --git a/example/Tensor/to_.cpp.out b/example/Tensor/to_.cpp.out
index e69de29bb..76641bd38 100644
--- a/example/Tensor/to_.cpp.out
+++ b/example/Tensor/to_.cpp.out
@@ -0,0 +1 @@
+cytnx device: CUDA/GPU-id:0
diff --git a/include/LinOp.hpp b/include/LinOp.hpp
index 21a310efc..a8000d039 100644
--- a/include/LinOp.hpp
+++ b/include/LinOp.hpp
@@ -62,10 +62,6 @@ namespace cytnx {
     examples for how to use them.

     ## Example:
-    ### c++ API:
-    \include example/LinOp/init.cpp
-    #### output>
-    \verbinclude example/LinOp/init.cpp.out
     ### python API:
     \include example/LinOp/init.py
     #### output>
diff --git a/include/Network.hpp b/include/Network.hpp
index 2a240d9df..75aa82db1 100644
--- a/include/Network.hpp
+++ b/include/Network.hpp
@@ -278,10 +278,10 @@ namespace cytnx {

     Currently, only Regular Network is support!

-    ##note:
+    @note
         1. each network file cannot have more than 1024 lines.

-    ##detail:
+    @details
         Format of a network file:

         - each line defines a UniTensor, that takes the format '[name] : [Labels]'
@@ -336,10 +336,10 @@ namespace cytnx {

     Currently, only Regular Network is support!

-    ##note:
+    @note
         1. contents cannot have more than 1024 lines/strings.

-    ##detail:
+    @details
         Format of each string follows the same policy as Fromfile.
diff --git a/include/Tensor.hpp b/include/Tensor.hpp
index 46cf3e325..9ceeb573d 100644
--- a/include/Tensor.hpp
+++ b/include/Tensor.hpp
@@ -495,6 +495,8 @@ namespace cytnx {

     // This mechanism is to remove the 'void' type from Type_list. Taking advantage of it
     // appearing first ...
+
+    /// @cond
     struct internal {
       template struct exclude_first;
@@ -504,6 +506,7 @@
         using type = std::variant;
       };
     };  // internal
+    /// @endcond

     // std::variant of pointers to Type_list, without void ....
     using pointer_types =
diff --git a/include/linalg.hpp b/include/linalg.hpp
index def6767ba..42bc417b7 100644
--- a/include/linalg.hpp
+++ b/include/linalg.hpp
@@ -27,7 +27,7 @@ namespace cytnx {
   * @param[in] Rt The right UniTensor.
   * @return [UniTensor] The result of the addition.
   * @pre \p Lt and \p Rt must have the same shape.
- * @see `linalg::Add(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * @see linalg::Add(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ cytnx::UniTensor operator+(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt); @@ -38,7 +38,7 @@ namespace cytnx { * @param[in] lc The left template type. * @param[in] Rt The right UniTensor. * @return [UniTensor] The result of the addition. - * @see `linalg::Add(const T &lc, const cytnx::UniTensor &Rt)` + * @see linalg::Add(const T &lc, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor operator+(const T &lc, const cytnx::UniTensor &Rt); @@ -50,7 +50,7 @@ namespace cytnx { * @param[in] Lt The left UniTensor. * @param[in] rc The right template type. * @return [UniTensor] The result of the addition. - * @see `linalg::Add(const cytnx::UniTensor &Lt, const T &rc)` + * @see linalg::Add(const cytnx::UniTensor &Lt, const T &rc) */ template cytnx::UniTensor operator+(const cytnx::UniTensor &Lt, const T &rc); @@ -63,7 +63,7 @@ namespace cytnx { * @param[in] Rt The right UniTensor. * @return [UniTensor] The result of the subtraction. * @pre \p Lt and \p Rt must have the same shape. - * @see `linalg::Sub(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * @see linalg::Sub(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ cytnx::UniTensor operator-(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt); @@ -74,7 +74,7 @@ namespace cytnx { * @param[in] lc The left template type. * @param[in] Rt The right UniTensor. * @return [UniTensor] The result of the subtraction. - * @see `linalg::Sub(const T &lc, const cytnx::UniTensor &Rt)` + * @see linalg::Sub(const T &lc, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor operator-(const T &lc, const cytnx::UniTensor &Rt); @@ -86,7 +86,7 @@ namespace cytnx { * @param[in] Lt The left UniTensor. * @param[in] rc The right template type. * @return [UniTensor] The result of the subtraction. - * @see `linalg::Sub(const cytnx::UniTensor &Lt, const T &rc)` + * @see linalg::Sub(const cytnx::UniTensor &Lt, const T &rc) */ template cytnx::UniTensor operator-(const cytnx::UniTensor &Lt, const T &rc); @@ -99,7 +99,7 @@ namespace cytnx { * @param[in] Rt The right UniTensor. * @return [UniTensor] The result of the multiplication. * @pre \p Lt and \p Rt must have the same shape. - * @see `linalg::Mul(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * @see linalg::Mul(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ cytnx::UniTensor operator*(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt); @@ -110,7 +110,7 @@ namespace cytnx { * @param[in] lc The left template type. * @param[in] Rt The right UniTensor. * @return [UniTensor] The result of the multiplication. - * @see `linalg::Mul(const T &lc, const cytnx::UniTensor &Rt)` + * @see linalg::Mul(const T &lc, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor operator*(const T &lc, const cytnx::UniTensor &Rt); @@ -122,7 +122,7 @@ namespace cytnx { * @param[in] Lt The left UniTensor. * @param[in] rc The right template type. * @return [UniTensor] The result of the multiplication. - * @see `linalg::Mul(const cytnx::UniTensor &Lt, const T &rc)` + * @see linalg::Mul(const cytnx::UniTensor &Lt, const T &rc) */ template cytnx::UniTensor operator*(const cytnx::UniTensor &Lt, const T &rc); @@ -135,7 +135,7 @@ namespace cytnx { * @param[in] Rt The right UniTensor. * @return [UniTensor] The result of the division. * @pre \p Lt and \p Rt must have the same shape. 
- * @see `linalg::Div(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * @see linalg::Div(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ cytnx::UniTensor operator/(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt); @@ -146,7 +146,7 @@ namespace cytnx { * @param[in] lc The left template type. * @param[in] Rt The right UniTensor. * @return [UniTensor] The result of the division. - * @see `linalg::Div(const T &lc, const cytnx::UniTensor &Rt)` + * @see linalg::Div(const T &lc, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor operator/(const T &lc, const cytnx::UniTensor &Rt); @@ -158,7 +158,7 @@ namespace cytnx { * @param[in] Lt The left UniTensor. * @param[in] rc The right template type. * @return [UniTensor] The result of the division. - * @see `linalg::Div(const cytnx::UniTensor &Lt, const T &rc)` + * @see linalg::Div(const cytnx::UniTensor &Lt, const T &rc) */ template cytnx::UniTensor operator/(const cytnx::UniTensor &Lt, const T &rc); @@ -171,7 +171,7 @@ namespace cytnx { * @param[in] Rt The right UniTensor. * @return [UniTensor] The result of the modulo. * @pre \p Lt and \p Rt must have the same shape. - * @see `linalg::Mod(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * @see linalg::Mod(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ cytnx::UniTensor operator%(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt); @@ -182,7 +182,7 @@ namespace cytnx { * @param[in] lc The left template type. * @param[in] Rt The right UniTensor. * @return [UniTensor] The result of the modulo. - * @see `linalg::Mod(const T &lc, const cytnx::UniTensor &Rt)` + * @see linalg::Mod(const T &lc, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor operator%(const T &lc, const cytnx::UniTensor &Rt); @@ -194,7 +194,7 @@ namespace cytnx { * @param[in] Lt The left UniTensor. * @param[in] rc The right template type. * @return [UniTensor] The result of the modulo. - * @see `linalg::Mod(const cytnx::UniTensor &Lt, const T &rc)` + * @see linalg::Mod(const cytnx::UniTensor &Lt, const T &rc) */ template cytnx::UniTensor operator%(const cytnx::UniTensor &Lt, const T &rc); @@ -231,8 +231,8 @@ namespace cytnx { * @return The result UniTensor. * @pre \p Lt and \p Rt must have the same shape. * @see - * `UniTensor::Add(const cytnx::UniTensor &Rt) const, - * operator+(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * UniTensor::Add(const cytnx::UniTensor &Rt) const, + * operator+(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ cytnx::UniTensor Add(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt); @@ -265,8 +265,8 @@ namespace cytnx { * The inpute template type \p lc will be casted to the same type as * the UniTensor \p Rt. * @see - * `operator+(const T &lc, const cytnx::UniTensor &Rt), - * Add(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * operator+(const T &lc, const cytnx::UniTensor &Rt), + * Add(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor Add(const T &lc, const cytnx::UniTensor &Rt); @@ -301,8 +301,8 @@ namespace cytnx { * The inpute template type \p rc will be casted to the same type as * the UniTensor \p Lt. * @see - * `operator+(const cytnx::UniTensor &Lt, const T &rc), - * Add(const T &lc, const cytnx::UniTensor &Rt)` + * operator+(const cytnx::UniTensor &Lt, const T &rc), + * Add(const T &lc, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor Add(const cytnx::UniTensor &Lt, const T &rc); @@ -328,8 +328,8 @@ namespace cytnx { * @return The result UniTensor. 
* @pre \p Lt and \p Rt must have the same shape. * @see - * `UniTensor::Sub(const cytnx::UniTensor &Rt) const, - * operator-(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * UniTensor::Sub(const cytnx::UniTensor &Rt) const, + * operator-(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ cytnx::UniTensor Sub(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt); @@ -362,8 +362,8 @@ namespace cytnx { * The inpute template type \p lc will be casted to the same type as * the UniTensor \p Rt. * @see - * `operator-(const T &lc, const cytnx::UniTensor &Rt), - * Sub(const T &lc, const cytnx::UniTensor &Rt)` + * operator-(const T &lc, const cytnx::UniTensor &Rt), + * Sub(const T &lc, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor Sub(const T &lc, const cytnx::UniTensor &Rt); @@ -397,8 +397,8 @@ namespace cytnx { * The inpute template type \p rc will be casted to the same type as * the UniTensor \p Lt. * @see - * `operator-(const cytnx::UniTensor &Lt, const T &rc), - * Sub(const cytnx::UniTensor &Lt, const T &rc)` + * operator-(const cytnx::UniTensor &Lt, const T &rc), + * Sub(const cytnx::UniTensor &Lt, const T &rc) */ template cytnx::UniTensor Sub(const cytnx::UniTensor &Lt, const T &rc); @@ -424,8 +424,8 @@ namespace cytnx { * @return The result UniTensor. * @pre \p Lt and \p Rt must have the same shape. * @see - * `UniTensor::Mul(const cytnx::UniTensor &Rt) const, - * operator*(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * UniTensor::Mul(const cytnx::UniTensor &Rt) const, + * operator*(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ cytnx::UniTensor Mul(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt); @@ -458,8 +458,8 @@ namespace cytnx { * The inpute template type \p lc will be casted to the same type as * the UniTensor \p Rt. * @see - * `operator*(const T &lc, const cytnx::UniTensor &Rt), - * Mul(const T &lc, const cytnx::UniTensor &Rt)` + * operator*(const T &lc, const cytnx::UniTensor &Rt), + * Mul(const T &lc, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor Mul(const T &lc, const cytnx::UniTensor &Rt); @@ -493,8 +493,8 @@ namespace cytnx { * The inpute template type \p rc will be casted to the same type as * the UniTensor \p Lt. * @see - * `operator*(const cytnx::UniTensor &Lt, const T &rc), - * Mul(const cytnx::UniTensor &Lt, const T &rc)` + * operator*(const cytnx::UniTensor &Lt, const T &rc), + * Mul(const cytnx::UniTensor &Lt, const T &rc) */ template cytnx::UniTensor Mul(const cytnx::UniTensor &Lt, const T &rc); @@ -520,8 +520,8 @@ namespace cytnx { * @return The result UniTensor. * @pre \p Lt and \p Rt must have the same shape. * @see - * `UniTensor::Div(const cytnx::UniTensor &Rt) const, - * operator/(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)` + * UniTensor::Div(const cytnx::UniTensor &Rt) const, + * operator/(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt) */ cytnx::UniTensor Div(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt); @@ -555,8 +555,8 @@ namespace cytnx { * the UniTensor \p Rt. * 2. The division by zero is not allowed. * @see - * `operator/(const T &lc, const cytnx::UniTensor &Rt), - * Div(const T &lc, const cytnx::UniTensor &Rt)` + * operator/(const T &lc, const cytnx::UniTensor &Rt), + * Div(const T &lc, const cytnx::UniTensor &Rt) */ template cytnx::UniTensor Div(const T &lc, const cytnx::UniTensor &Rt); @@ -591,8 +591,8 @@ namespace cytnx { * the UniTensor \p Lt. * 2. The division by zero is not allowed. 
@@ -591,8 +591,8 @@ namespace cytnx {
 * the UniTensor \p Lt.
 * 2. The division by zero is not allowed.
 * @see
- * `operator/(const cytnx::UniTensor &Lt, const T &rc),
- * Div(const cytnx::UniTensor &Lt, const T &rc)`
+ * operator/(const cytnx::UniTensor &Lt, const T &rc),
+ * Div(const cytnx::UniTensor &Lt, const T &rc)
 */
 template <class T>
 cytnx::UniTensor Div(const cytnx::UniTensor &Lt, const T &rc);
@@ -620,8 +620,8 @@ namespace cytnx {
 * 1. \p Lt and \p Rt must have the same shape.
 * 2. The input UniTensor \p Lt and \p Rt need to be integer type.
 * @see
- * `UniTensor::Mod(const cytnx::UniTensor &Rt) const,
- * operator%(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)`
+ * Tensor::Mod(const T &rhs),
+ * operator%(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt)
 */
 cytnx::UniTensor Mod(const cytnx::UniTensor &Lt, const cytnx::UniTensor &Rt);
@@ -652,8 +652,8 @@ namespace cytnx {
 * The input template type \p lc will be cast to the same type as
 * the UniTensor \p Rt.
 * @see
- * `operator%(const cytnx::UniTensor &Lt, const T &rc),
- * Mod(const cytnx::UniTensor &Lt, const T &rc)`
+ * operator%(const cytnx::UniTensor &Lt, const T &rc),
+ * Mod(const cytnx::UniTensor &Lt, const T &rc)
 */
 template <class T>
 cytnx::UniTensor Mod(const T &lc, const cytnx::UniTensor &Rt);
@@ -685,8 +685,8 @@ namespace cytnx {
 * The input template type \p rc will be cast to the same type as
 * the UniTensor \p Lt.
 * @see
- * `operator%(const cytnx::UniTensor &Lt, const T &rc),
- * Mod(const cytnx::UniTensor &Lt, const T &rc)`
+ * operator%(const cytnx::UniTensor &Lt, const T &rc),
+ * Mod(const cytnx::UniTensor &Lt, const T &rc)
 */
 template <class T>
 cytnx::UniTensor Mod(const cytnx::UniTensor &Lt, const T &rc);
@@ -696,7 +696,7 @@ namespace cytnx {
 @details This function performs the Singular-Value decomposition on a UniTensor \p Tin.
 The result will depend on the rowrank of the UniTensor \p Tin. For more details, please
 refer to the documentation of the function Svd(const Tensor &Tin, const bool &is_UvT).
- @see `Svd(const Tensor &Tin, const bool &is_UvT)`
+ @see Svd(const Tensor &Tin, const bool &is_UvT)
 */
 std::vector<cytnx::UniTensor> Svd(const cytnx::UniTensor &Tin, const bool &is_UvT = true);
@@ -706,8 +706,8 @@ namespace cytnx {
 The result will depend on the rowrank of the UniTensor \p Tin. For more details, please
 refer to the documentation of the functions Gesvd(const Tensor &Tin, const bool &is_U, const
 bool &is_vT) and Svd(const Tensor &Tin, const bool &is_UvT).
- @see `Gesvd(const Tensor &Tin, const bool &is_U, const bool
- &is_vT), Svd(const Tensor &Tin, const bool &is_UvT)`.
+ @see Gesvd(const Tensor &Tin, const bool &is_U, const bool
+ &is_vT), Svd(const Tensor &Tin, const bool &is_UvT).
 */
 std::vector<cytnx::UniTensor> Gesvd(const cytnx::UniTensor &Tin, const bool &is_U = true,
                                     const bool &is_vT = true);
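Since these UniTensor-level Svd/Gesvd entries defer to the Tensor-level documentation, a usage sketch may save a lookup. The return ordering {S, U, vT} is an assumption here; consult the referenced docs for the authoritative convention.

```c++
// Sketch (assumed return order {S, U, vT}): SVD of a rank-2 UniTensor.
#include "cytnx.hpp"
using namespace cytnx;

int main() {
  UniTensor uT(arange(12).reshape(3, 4));  // a 3x4 matrix wrapped as a UniTensor
  uT.set_rowrank(1);                       // one leg on the row side, one on the column side

  std::vector<UniTensor> out = linalg::Svd(uT, /*is_UvT=*/true);
  UniTensor S = out[0];  // singular values (assumed to come first)
  UniTensor U = out[1];
  UniTensor vT = out[2];
  S.print_diagram();
  return 0;
}
```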
@@ -717,11 +717,11 @@ namespace cytnx {
 * @details This function performs the Singular-Value decomposition of a UniTensor \p Tin and
 * truncates the singular values. The result will depend on the rowrank of the
 * UniTensor \p Tin. For more details, please refer to the references below.
- * @see `Svd_truncate(const cytnx::UniTensor &Tin, const
- cytnx_uint64 &keepdim, const std::vector<cytnx_uint64> min_blockdim, const double &err = 0.,
- const bool &is_UvT = true, const unsigned int &return_err = 0, const cytnx_uint64 &mindim = 1),
+ * @see Svd_truncate(const cytnx::UniTensor &Tin, const
+ cytnx_uint64 &keepdim, const std::vector<cytnx_uint64> min_blockdim, const double &err,
+ const bool &is_UvT, const unsigned int &return_err, const cytnx_uint64 &mindim),
 Svd_truncate(const Tensor &Tin, const cytnx_uint64 &keepdim, const double &err, const bool
- &is_UvT, const unsigned int &return_err)`
+ &is_UvT, const unsigned int &return_err, const cytnx_uint64& mindim)
 */
 std::vector<cytnx::UniTensor> Svd_truncate(const cytnx::UniTensor &Tin,
                                            const cytnx_uint64 &keepdim, const double &err = 0.,
@@ -768,8 +768,8 @@ namespace cytnx {
 * 4. If \p return_err is true, then the error will be pushed back to the vector.
 * @endparblock
 * @pre This function assumes a BlockUniTensor as input for \p Tin.
- * @see `Svd_truncate(const Tensor &Tin, const cytnx_uint64 &keepdim, const double &err,
- * const bool &is_UvT, const unsigned int &return_err)`
+ * @see Svd_truncate(const Tensor &Tin, const cytnx_uint64 &keepdim, const double &err, const
+ * bool &is_UvT, const unsigned int &return_err, const cytnx_uint64& mindim)
 * @note The truncated bond dimension can be larger than \p keepdim for degenerate singular
 * values: if the largest \f$ n \f$ truncated singular values would be exactly equal to the
 * smallest kept singular value, then the bond dimension is enlarged to \p keepdim \f$ + n \f$.
@@ -789,10 +789,10 @@ namespace cytnx {
 * truncates the singular values. The result will depend on the rowrank of the
 * UniTensor \p Tin. This version uses the ?gesvd method. See references below for
 * more details.
- * @see `Svd_truncate(const cytnx::UniTensor &Tin, const std::vector<cytnx_uint64> min_blockdim,
- * const double &err, const bool &is_UvT, const unsigned int &return_err, const cytnx_uint64
- * &mindim), Gesvd(const cytnx::UniTensor &Tin, const bool &is_U = true, const bool &is_vT =
- * true)`
+ * @see Svd_truncate(const cytnx::UniTensor &Tin, const cytnx_uint64 &keepdim, const
+ * std::vector<cytnx_uint64> min_blockdim, const double &err, const bool &is_UvT, const unsigned
+ * int &return_err, const cytnx_uint64 &mindim), Gesvd(const cytnx::UniTensor &Tin, const bool
+ * &is_U, const bool &is_vT)
 */
 std::vector<cytnx::UniTensor> Gesvd_truncate(const cytnx::UniTensor &Tin,
                                              const cytnx_uint64 &keepdim,
@@ -807,10 +807,10 @@ namespace cytnx {
 * @details This function performs the Singular-Value decomposition of a UniTensor \p Tin and
 * truncates the singular values. This version uses the ?gesvd method. See references below for
 * more details.
- * @see `Svd_truncate(const cytnx::UniTensor &Tin, const cytnx_uint64 &keepdim, const
+ * @see Svd_truncate(const cytnx::UniTensor &Tin, const cytnx_uint64 &keepdim, const
 * std::vector<cytnx_uint64> min_blockdim, const double &err, const bool &is_UvT,
 * const unsigned int &return_err, const cytnx_uint64 &mindim), Gesvd(const
- * cytnx::UniTensor &Tin, const bool &is_U = true, const bool &is_vT = true)`
+ * cytnx::UniTensor &Tin, const bool &is_U, const bool &is_vT)
 */
 std::vector<cytnx::UniTensor> Gesvd_truncate(
   const cytnx::UniTensor &Tin, const cytnx_uint64 &keepdim,
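The truncated variants above are what tensor network algorithms typically call when compressing a bond. A sketch with arbitrary cutoffs, under the same assumed {S, U, vT} return ordering:

```c++
// Sketch (assumed API): truncated SVD keeping at most 4 singular values.
#include "cytnx.hpp"
using namespace cytnx;

int main() {
  UniTensor uT(arange(64).reshape(8, 8));
  uT.set_rowrank(1);

  // Keep at most keepdim = 4 singular values; additionally discard values whose
  // contribution to the truncation error falls below err = 1e-12.
  std::vector<UniTensor> out = linalg::Svd_truncate(uT, /*keepdim=*/4, /*err=*/1e-12);
  UniTensor S = out[0];  // the truncated spectrum (assumed first)
  S.print_diagram();
  return 0;
}
```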
@@ -828,7 +828,7 @@ namespace cytnx {
 * @details This function performs the exponential function on a UniTensor \p Tin, whose
 * blocks are Hermitian matrices. For more details, please refer to the documentation of the
 * function ExpH(const Tensor &Tin, const T &a, const T &b).
- * @see `ExpH(const Tensor &Tin, const T &a, const T &b)`
+ * @see ExpH(const Tensor &Tin, const T &a, const T &b)
 */
 template <class T>
 cytnx::UniTensor ExpH(const cytnx::UniTensor &Tin, const T &a, const T &b = 0);
@@ -838,7 +838,7 @@ namespace cytnx {
 * @details This function performs the exponential function on a UniTensor \p Tin.
 * For more details, please refer to the documentation of the
 * function ExpM(const Tensor &Tin, const T &a, const T &b).
- * @see `ExpM(const Tensor &Tin, const T &a, const T &b)`
+ * @see ExpM(const Tensor &Tin, const T &a, const T &b)
 */
 template <class T>
 cytnx::UniTensor ExpM(const cytnx::UniTensor &Tin, const T &a, const T &b = 0);
@@ -849,7 +849,7 @@ namespace cytnx {
 * @details This function performs the exponential function on a UniTensor \p Tin, whose
 * blocks are Hermitian matrices. For more details, please refer to the documentation of the
 * function ExpH(const Tensor &Tin)
- * @see `ExpH(const Tensor &Tin)`
+ * @see ExpH(const Tensor &Tin)
 */
 cytnx::UniTensor ExpH(const cytnx::UniTensor &Tin);
@@ -858,13 +858,13 @@ namespace cytnx {
 * @details This function performs the exponential function on a UniTensor \p Tin.
 * For more details, please refer to the documentation of the
 * function ExpM(const Tensor &Tin)
- * @see `ExpM(const Tensor &Tin)`
+ * @see ExpM(const Tensor &Tin)
 */
 cytnx::UniTensor ExpM(const cytnx::UniTensor &Tin);

 /**
 * @deprecated This function is deprecated, please use
- * Trace(const cytnx::UniTensor &Tin, const string &a, const string &b) instead.
+ * Trace(const cytnx::UniTensor &Tin, const std::string &a, const std::string &b) instead.
 */
 cytnx::UniTensor Trace(const cytnx::UniTensor &Tin, const cytnx_int64 &a = 0,
                        const cytnx_int64 &b = 1);
@@ -874,7 +874,7 @@ namespace cytnx {
 * @details This function performs trace over two legs of a UniTensor \p Tin. The two legs
 * are specified by \p a and \p b. For more details, please refer to the documentation of the
 * function Trace(const Tensor &Tin, const cytnx_int64 &a, const cytnx_int64 &b).
- * @see `Trace(const Tensor &Tin, const cytnx_int64 &a, const cytnx_int64 &b)`
+ * @see Trace(const Tensor &Tin, const cytnx_uint64 &a, const cytnx_uint64 &b)
 */
 cytnx::UniTensor Trace(const cytnx::UniTensor &Tin, const std::string &a, const std::string &b);
@@ -884,7 +884,7 @@ namespace cytnx {
 * The result will depend on the rowrank of the UniTensor \p Tin. For more details,
 * please refer to the documentation of the function
 * Qr(const Tensor &Tin, const bool &is_tau).
- * @see `Qr(const Tensor &Tin, const bool &is_tau)`
+ * @see Qr(const Tensor &Tin, const bool &is_tau)
 */
 std::vector<cytnx::UniTensor> Qr(const cytnx::UniTensor &Tin, const bool &is_tau = false);
@@ -894,7 +894,7 @@ namespace cytnx {
 * The result will depend on the rowrank of the UniTensor \p Tin. For more details,
 * please refer to the documentation of the function
 * Qdr(const Tensor &Tin, const bool &is_tau).
- * @see `Qdr(const Tensor &Tin, const bool &is_tau)`
+ * @see Qdr(const Tensor &Tin, const bool &is_tau)
 */
 std::vector<cytnx::UniTensor> Qdr(const cytnx::UniTensor &Tin, const bool &is_tau = false);
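ExpH is the usual route to evolution gates built from Hermitian blocks. The sketch below works at the Tensor level and assumes that ExpH(Tin, a) evaluates exp(a·Tin); the element accessor `at<double>` is likewise an assumption about the API, so verify both against the released headers.

```c++
// Sketch (assumed semantics): an imaginary-time gate U = exp(-dt * H).
#include "cytnx.hpp"
#include <iostream>
using namespace cytnx;

int main() {
  Tensor H = zeros({2, 2});
  H.at<double>({0, 1}) = 1.0;  // Pauli-X, Hermitian by construction
  H.at<double>({1, 0}) = 1.0;

  double dt = 0.05;
  Tensor U = linalg::ExpH(H, -dt);  // assumed to compute exp(-dt * H)
  std::cout << U << std::endl;
  return 0;
}
```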
@@ -910,7 +910,7 @@ namespace cytnx {
 @return UniTensor with the same shape as Tin, but with each element raised to the power \p p.
 @note Compared to the Pow_(UniTensor &Tin, const double &p) function, this function will not
 modify the input UniTensor and returns a new UniTensor.
- @see `Pow_(UniTensor &Tin, const double &p)`
+ @see Pow_(UniTensor &Tin, const double &p)
 */
 UniTensor Pow(const cytnx::UniTensor &Tin, const double &p);
@@ -923,7 +923,7 @@ namespace cytnx {
 * then \p p must be an integer.
 * @note Compared to the Pow function, this is an in-place function, which
 * will modify the input UniTensor.
- * @see `Pow(const cytnx::UniTensor &Tin, const double &p)`
+ * @see Pow(const cytnx::UniTensor &Tin, const double &p)
 */
 void Pow_(UniTensor &Tin, const double &p);
@@ -931,14 +931,14 @@ namespace cytnx {
 * @brief Elementwise conjugate of the UniTensor
 * @param[in] UT The input UniTensor.
 * @return [UniTensor] The UniTensor with all elements being conjugated
- * @see See `UniTensor.Conj()` for further details
+ * @see See UniTensor.Conj() for further details
 */
 cytnx::UniTensor Conj(const cytnx::UniTensor &UT);

 /**
 * @brief Inplace elementwise conjugate of the UniTensor
 * @param[in] UT The input UniTensor.
- * @see See `UniTensor.Conj_()` for further details
+ * @see See UniTensor.Conj_() for further details
 */
 void Conj_(cytnx::UniTensor &UT);
@@ -948,7 +948,7 @@ namespace cytnx {
 //====================================================================================
 /**
- * @bridf The addition function for Tensor.
+ * @brief The addition function for Tensor.
 * @details This is the addition function between two Tensors. It will perform
 * the element-wise addition. That means if the left Tensor \p Lt
 * is given as \f$ T_L \f$ and the right Tensor \p Rt is given as \f$ T_R \f$,
@@ -965,10 +965,10 @@ namespace cytnx {
 * @return The result Tensor.
 * @pre The shape of \p Lt and \p Rt must be the same.
 * @see
- * `Add(const T &lc, const Tensor &Rt),
+ * Add(const T &lc, const Tensor &Rt),
 * Add(const Tensor &Lt, const T &rc),
 * iAdd(Tensor &Lt, const Tensor &Rt),
- * operator+(const Tensor &Lt, const Tensor &Rt)`
+ * operator+(const Tensor &Lt, const Tensor &Rt)
 */
 Tensor Add(const Tensor &Lt, const Tensor &Rt);
@@ -988,10 +988,10 @@ namespace cytnx {
 * @param[in] Rt The right Tensor.
 * @return The result Tensor.
 * @see
- * `Add(const Tensor &Lt, const Tensor &Rt),
+ * Add(const Tensor &Lt, const Tensor &Rt),
 * Add(const Tensor &Lt, const T &rc),
 * iAdd(Tensor &Lt, const Tensor &Rt),
- * operator+(const Tensor &Lt, const Tensor &Rt)`
+ * operator+(const Tensor &Lt, const Tensor &Rt)
 */
 template <class T>
 Tensor Add(const T &lc, const Tensor &Rt);
@@ -1012,10 +1012,10 @@ namespace cytnx {
 * @param[in] rc The right template type.
 * @return The result Tensor.
 * @see
- * `Add(const Tensor &Lt, const Tensor &Rt),
+ * Add(const Tensor &Lt, const Tensor &Rt),
 * Add(const T &lc, const Tensor &Rt),
 * iAdd(Tensor &Lt, const Tensor &Rt),
- * operator+(const Tensor &Lt, const Tensor &Rt)`
+ * operator+(const Tensor &Lt, const Tensor &Rt)
 */
 template <class T>
 Tensor Add(const Tensor &Lt, const T &rc);
@@ -1039,10 +1039,10 @@ namespace cytnx {
 * @note Compared to the function Add(const Tensor &Lt, const Tensor &Rt),
 * this is an in-place function and it will modify the left Tensor \p Lt.
 * @see
- * `Add(const Tensor &Lt, const Tensor &Rt),
+ * Add(const Tensor &Lt, const Tensor &Rt),
 * Add(const T &lc, const Tensor &Rt),
 * Add(const Tensor &Lt, const T &rc),
- * operator+(const Tensor &Lt, const Tensor &Rt)`
+ * operator+(const Tensor &Lt, const Tensor &Rt)
 */
 void iAdd(Tensor &Lt, const Tensor &Rt);
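What distinguishes iAdd from Add in practice is allocation: Add returns a fresh Tensor, while iAdd accumulates into its first argument. A short sketch, assuming the `ones`/`arange` factories and the namespace placement implied above:

```c++
// Sketch: out-of-place Add vs. in-place iAdd.
#include "cytnx.hpp"
#include <iostream>
using namespace cytnx;

int main() {
  Tensor A = ones({2, 3});
  Tensor B = arange(6).reshape(2, 3);

  Tensor C = linalg::Add(A, B);  // allocates a new Tensor; A is left untouched
  linalg::iAdd(A, B);            // A += B in place, no extra allocation
  std::cout << C << A << std::endl;
  return 0;
}
```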
@@ -1065,10 +1065,10 @@ namespace cytnx {
 * @param[in] Rt The right Tensor.
 * @return The result Tensor.
 * @see
- * `Sub(const T &lc, const Tensor &Rt),
+ * Sub(const T &lc, const Tensor &Rt),
 * Sub(const Tensor &Lt, const T &rc),
 * iSub(Tensor &Lt, const Tensor &Rt),
- * operator-(const Tensor &Lt, const Tensor &Rt)`
+ * operator-(const Tensor &Lt, const Tensor &Rt)
 */
 Tensor Sub(const Tensor &Lt, const Tensor &Rt);
@@ -1088,10 +1088,10 @@ namespace cytnx {
 * @param[in] Rt The right Tensor.
 * @return The result Tensor.
 * @see
- * `Sub(const Tensor &Lt, const Tensor &Rt),
+ * Sub(const Tensor &Lt, const Tensor &Rt),
 * Sub(const Tensor &Lt, const T &rc),
 * iSub(Tensor &Lt, const Tensor &Rt),
- * operator-(const Tensor &Lt, const Tensor &Rt)`
+ * operator-(const Tensor &Lt, const Tensor &Rt)
 */
 template <class T>
 Tensor Sub(const T &lc, const Tensor &Rt);
@@ -1112,10 +1112,10 @@ namespace cytnx {
 * @param[in] rc The right template type.
 * @return The result Tensor.
 * @see
- * `Sub(const Tensor &Lt, const Tensor &Rt),
+ * Sub(const Tensor &Lt, const Tensor &Rt),
 * Sub(const T &lc, const Tensor &Rt),
 * iSub(Tensor &Lt, const Tensor &Rt),
- * operator-(const Tensor &Lt, const Tensor &Rt)`
+ * operator-(const Tensor &Lt, const Tensor &Rt)
 */
 template <class T>
 Tensor Sub(const Tensor &Lt, const T &rc);
@@ -1139,10 +1139,10 @@ namespace cytnx {
 * @note Compared to the function Sub(const Tensor &Lt, const Tensor &Rt),
 * this is an in-place function and it will modify the left Tensor \p Lt.
 * @see
- * `Sub(const Tensor &Lt, const Tensor &Rt),
+ * Sub(const Tensor &Lt, const Tensor &Rt),
 * Sub(const T &lc, const Tensor &Rt),
 * Sub(const Tensor &Lt, const T &rc),
- * operator-(const Tensor &Lt, const Tensor &Rt)`
+ * operator-(const Tensor &Lt, const Tensor &Rt)
 */
 void iSub(Tensor &Lt, const Tensor &Rt);
@@ -1165,10 +1165,10 @@ namespace cytnx {
 * @param[in] Rt The right Tensor.
 * @return The result Tensor.
 * @see
- * `Mul(const T &lc, const Tensor &Rt),
+ * Mul(const T &lc, const Tensor &Rt),
 * Mul(const Tensor &Lt, const T &rc),
 * iMul(Tensor &Lt, const Tensor &Rt),
- * operator*(const Tensor &Lt, const Tensor &Rt)`
+ * operator*(const Tensor &Lt, const Tensor &Rt)
 */
 Tensor Mul(const Tensor &Lt, const Tensor &Rt);
@@ -1188,10 +1188,10 @@ namespace cytnx {
 * @param[in] rc The right template type.
 * @return The result Tensor.
 * @see
- * `Mul(const Tensor &Lt, const Tensor &Rt),
+ * Mul(const Tensor &Lt, const Tensor &Rt),
 * Mul(const T &lc, const Tensor &Rt),
 * iMul(Tensor &Lt, const Tensor &Rt),
- * operator*(const Tensor &Lt, const Tensor &Rt)`
+ * operator*(const Tensor &Lt, const Tensor &Rt)
 */
 template <class T>
 Tensor Mul(const T &lc, const Tensor &Rt);
@@ -1212,10 +1212,10 @@ namespace cytnx {
 * @param[in] rc The right template type.
 * @return The result Tensor.
 * @see
- * `Mul(const Tensor &Lt, const Tensor &Rt),
+ * Mul(const Tensor &Lt, const Tensor &Rt),
 * Mul(const T &lc, const Tensor &Rt),
 * iMul(Tensor &Lt, const Tensor &Rt),
- * operator*(const Tensor &Lt, const Tensor &Rt)`
+ * operator*(const Tensor &Lt, const Tensor &Rt)
 */
 template <class T>
 Tensor Mul(const Tensor &Lt, const T &rc);
@@ -1239,10 +1239,10 @@ namespace cytnx {
 * Compared to Mul(const Tensor &Lt, const Tensor &Rt), this is an in-place function
 * and will modify the left Tensor \p Lt.
 * @see
- * `Mul(const Tensor &Lt, const Tensor &Rt),
+ * Mul(const Tensor &Lt, const Tensor &Rt),
 * Mul(const T &lc, const Tensor &Rt),
 * Mul(const Tensor &Lt, const T &rc),
- * operator*(const Tensor &Lt, const Tensor &Rt)`
+ * operator*(const Tensor &Lt, const Tensor &Rt)
 */
 void iMul(Tensor &Lt, const Tensor &Rt);
@@ -1266,10 +1266,10 @@ namespace cytnx {
 * @return The result Tensor.
 * @pre the right Tensor \p Rt should not contain any zero element.
 * @see
- * `Div(const T &lc, const Tensor &Rt),
+ * Div(const T &lc, const Tensor &Rt),
 * Div(const Tensor &Lt, const T &rc),
 * iDiv(Tensor &Lt, const Tensor &Rt),
- * operator/(const Tensor &Lt, const Tensor &Rt)`
+ * operator/(const Tensor &Lt, const Tensor &Rt)
 */
 Tensor Div(const Tensor &Lt, const Tensor &Rt);
@@ -1290,10 +1290,10 @@ namespace cytnx {
 * @return The result Tensor.
 * @pre the right tensor \p Rt should not contain any zero element.
 * @see
- * `Div(const Tensor &Lt, const Tensor &Rt),
+ * Div(const Tensor &Lt, const Tensor &Rt),
 * Div(const Tensor &Lt, const T &rc),
 * iDiv(Tensor &Lt, const Tensor &Rt),
- * operator/(const Tensor &Lt, const Tensor &Rt)`
+ * operator/(const Tensor &Lt, const Tensor &Rt)
 */
 template <class T>
 Tensor Div(const T &lc, const Tensor &Rt);
@@ -1315,10 +1315,10 @@ namespace cytnx {
 * @return The result Tensor.
 * @pre the right template type \p rc should not be zero.
 * @see
- * `Div(const Tensor &Lt, const Tensor &Rt),
+ * Div(const Tensor &Lt, const Tensor &Rt),
 * Div(const T &lc, const Tensor &Rt),
 * iDiv(Tensor &Lt, const Tensor &Rt),
- * operator/(const Tensor &Lt, const Tensor &Rt)`
+ * operator/(const Tensor &Lt, const Tensor &Rt)
 */
 template <class T>
 Tensor Div(const Tensor &Lt, const T &rc);
@@ -1342,10 +1342,10 @@ namespace cytnx {
 * @note compared to the Div(const Tensor &Lt, const Tensor &Rt) function,
 * this is an in-place function, which will modify the left Tensor \p Lt.
 * @see
- * `Div(const Tensor &Lt, const Tensor &Rt),
+ * Div(const Tensor &Lt, const Tensor &Rt),
 * Div(const T &lc, const Tensor &Rt),
 * Div(const Tensor &Lt, const T &rc),
- * operator/(const Tensor &Lt, const Tensor &Rt)`
+ * operator/(const Tensor &Lt, const Tensor &Rt)
 */
 void iDiv(Tensor &Lt, const Tensor &Rt);
@@ -1371,8 +1371,8 @@ namespace cytnx {
 * @pre The input tensors \p Lt and \p Rt should have the same shape and
 * need to be integer type.
 * @see
- * `Mod(const T &lc, const Tensor &Rt),
- * Mod(const Tensor &Lt, const T &rc)`
+ * Mod(const T &lc, const Tensor &Rt),
+ * Mod(const Tensor &Lt, const T &rc)
 */
 Tensor Mod(const Tensor &Lt, const Tensor &Rt);
@@ -1393,8 +1393,8 @@ namespace cytnx {
 * @return The result Tensor.
 * @pre the right template type \p rc should be integer type.
 * @see
- * `Mod(const Tensor &Lt, const Tensor &Rt),
- * Mod(const Tensor &Lt, const T &rc)`
+ * Mod(const Tensor &Lt, const Tensor &Rt),
+ * Mod(const Tensor &Lt, const T &rc)
 */
 template <class T>
 Tensor Mod(const T &lc, const Tensor &Rt);
@@ -1416,8 +1416,8 @@ namespace cytnx {
 * @return The result Tensor.
 * @pre the right template type \p rc should be integer type.
 * @see
- * `Mod(const Tensor &Lt, const Tensor &Rt),
- * Mod(const T &lc, const Tensor &Rt)`
+ * Mod(const Tensor &Lt, const Tensor &Rt),
+ * Mod(const T &lc, const Tensor &Rt)
 */
 template <class T>
 Tensor Mod(const Tensor &Lt, const T &rc);
@@ -1446,8 +1446,8 @@ namespace cytnx {
 * @return The result Tensor.
 * @pre The input tensors \p Lt and \p Rt should have the same shape.
 * @see
- * `Cpr(const T &lc, const Tensor &Rt),
- * Cpr(const Tensor &Lt, const T &rc)`
+ * Cpr(const T &lc, const Tensor &Rt),
+ * Cpr(const Tensor &Lt, const T &rc)
 */
 Tensor Cpr(const Tensor &Lt, const Tensor &Rt);
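Mod and Cpr mirror the % and == operators documented further below. A sketch with integer tensors, which Mod requires; the Bool result dtype of Cpr is an assumption:

```c++
// Sketch: elementwise modulo and comparison on integer Tensors.
#include "cytnx.hpp"
#include <iostream>
using namespace cytnx;

int main() {
  Tensor I = arange(6).astype(Type.Int64);  // 0,1,2,3,4,5
  Tensor R = linalg::Mod(I, 4);             // 0,1,2,3,0,1 -- integer dtype required
  Tensor M = linalg::Cpr(R, I % 4);         // elementwise equality, presumably Type.Bool
  std::cout << M << std::endl;
  return 0;
}
```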
@@ -1472,8 +1472,8 @@ namespace cytnx {
 * @param[in] Rt The right Tensor.
 * @return The result Tensor.
 * @see
- * `Cpr(const Tensor &Lt, const Tensor &Rt),
- * Cpr(const Tensor &Lt, const T &rc)`
+ * Cpr(const Tensor &Lt, const Tensor &Rt),
+ * Cpr(const Tensor &Lt, const T &rc)
 */
 template <class T>
 Tensor Cpr(const T &lc, const Tensor &Rt);
@@ -1499,8 +1499,8 @@ namespace cytnx {
 * @param[in] rc The right template type.
 * @return The result Tensor.
 * @see
- * `Cpr(const Tensor &Lt, const Tensor &Rt),
- * Cpr(const T &lc, const Tensor &Rt)`
+ * Cpr(const Tensor &Lt, const Tensor &Rt),
+ * Cpr(const T &lc, const Tensor &Rt)
 */
 template <class T>
 Tensor Cpr(const Tensor &Lt, const T &rc);
@@ -1548,7 +1548,7 @@ namespace cytnx {
 /**
 @brief Perform Singular-Value decomposition on a rank-2 Tensor (a @em matrix).
 @details This function will perform Singular-Value decomposition on a matrix (a rank-2
- Tensor). That means givent a matrix \p Tin as \f$ M \f$, then the result will be:
+ Tensor). That means given a matrix \p Tin as \f$ M \f$, then the result will be:
 \f[
 M = U S V^\dagger,
 \f]
@@ -1567,8 +1567,8 @@ namespace cytnx {
 2. If \p is_UvT is true, then the tensors \f$ U,V^\dagger \f$ will be pushed back to the vector.
 @endparblock
 @pre The input tensor should be a rank-2 tensor (matrix).
- @see \ref `Svd_truncate(const Tensor &Tin, const cytnx_uint64 &keepdim, const double &err, const
- bool &is_UvT, const unsigned int &return_err)`
+ @see Svd_truncate(const Tensor &Tin, const cytnx_uint64 &keepdim, const double &err, const
+ bool &is_UvT, const unsigned int &return_err, const cytnx_uint64& mindim)
 */
 std::vector<Tensor> Svd(const Tensor &Tin, const bool &is_UvT = true);
@@ -1577,7 +1577,7 @@ namespace cytnx {
 /**
 @brief Perform Singular-Value decomposition on a rank-2 Tensor (a @em matrix).
 @details This function will perform Singular-Value decomposition on a matrix (a rank-2
- Tensor). That means givent a matrix \p Tin as \f$ M \f$, then the result will be:
+ Tensor). That means given a matrix \p Tin as \f$ M \f$, then the result will be:
 \f[
 M = U S V^\dagger,
 \f]
@@ -1598,8 +1598,8 @@ namespace cytnx {
 is_vT is true, \f$ V^\dagger \f$ will be pushed back to the vector.
 @endparblock
 @pre The input tensor should be a rank-2 tensor (matrix).
- @see `Gesvd_truncate(const Tensor &Tin, const cytnx_uint64 &keepdim, const double &err,
- const bool &is_U, const bool &is_vT, const unsigned int &return_err)`
+ @see Gesvd_truncate(const Tensor &Tin, const cytnx_uint64 &keepdim, const double &err,
+ const bool &is_U, const bool &is_vT, const unsigned int &return_err, const cytnx_uint64& mindim)
 */
 std::vector<Tensor> Gesvd(const Tensor &Tin, const bool &is_U = true, const bool &is_vT = true);
@@ -1642,7 +1642,7 @@ namespace cytnx {
 4. If \p return_err is true, then the error will be pushed back to the vector.
 @endparblock
 @pre The input tensor should be a rank-2 tensor (matrix).
- @see `Svd(const Tensor &Tin, const bool &is_U, const bool &is_vT)`
+ @see Svd(const Tensor &Tin, const bool &is_UvT)
 @note The truncated bond dimension can be larger than \p keepdim for degenerate singular values:
 if the largest \f$ n \f$ truncated singular values would be exactly equal to the smallest kept
 singular value, then the bond dimension is enlarged to \p keepdim \f$ + n \f$. Example: if the
@@ -1690,7 +1690,7 @@ namespace cytnx {
 4. If \p return_err is true, then the error will be pushed back to the vector.
 @endparblock
 @pre The input tensor should be a rank-2 tensor (matrix).
- @see `Gesvd(const Tensor &Tin, const bool &is_U, const bool &is_vT)`
+ @see Gesvd(const Tensor &Tin, const bool &is_U, const bool &is_vT)
 @note The truncated bond dimension can be larger than \p keepdim for degenerate singular values:
 if the largest \f$ n \f$ truncated singular values would be exactly equal to the smallest kept
 singular value, then the bond dimension is enlarged to \p keepdim \f$ + n \f$. Example: if the
@@ -1713,7 +1713,7 @@ namespace cytnx {
 /**
 @brief Perform QR decomposition on a rank-2 Tensor.
 @details This function will perform QR decomposition on a matrix (a rank-2 Tensor). That means
- givent a matrix \p Tin as \f$ M \f$, then the result will be:
+ given a matrix \p Tin as \f$ M \f$, then the result will be:
 \f[
 M = Q R,
 \f]
@@ -1733,7 +1733,7 @@ namespace cytnx {
 This tensor will only return when \p is_tau = @em true.
 @endparblock
 @pre The input tensor should be a rank-2 tensor (matrix).
- @see `Qdr(const Tensor &Tin, const bool &is_tau)`
+ @see Qdr(const Tensor &Tin, const bool &is_tau)
 */
 std::vector<Tensor> Qr(const Tensor &Tin, const bool &is_tau = false);
@@ -1756,7 +1756,7 @@ namespace cytnx {
 This tensor will only return when \p is_tau = @em true.
 @endparblock
 @pre The input tensor should be a rank-2 tensor (matrix).
- @see `Qr(const Tensor &Tin, const bool &is_tau)`
+ @see Qr(const Tensor &Tin, const bool &is_tau)
 */
 std::vector<Tensor> Qdr(const Tensor &Tin, const bool &is_tau = false);
@@ -2266,7 +2266,7 @@ namespace cytnx {
 *@warning If \p in is not a Hermitian matrix, only the lower triangular part will be used.
 (This is strongly discouraged; please use ExpM(const Tensor &in) instead).
- * @see `ExpH(const Tensor &in, const T &a, const T &b = 0)`
+ * @see ExpH(const Tensor &in, const T &a, const T &b)
 */
 Tensor ExpH(const Tensor &in);
@@ -2295,7 +2295,7 @@ namespace cytnx {
 * \f]
 * @param[in] in input Tensor, should be a square rank-2.
 * @return [Tensor]
- * @see `ExpM(const Tensor &in, const T &a, const T &b = 0)`
+ * @see ExpM(const Tensor &in, const T &a, const T &b)
 */
 Tensor ExpM(const Tensor &in);
@@ -2366,7 +2366,7 @@ namespace cytnx {
 To use, define a linear operator with the LinOp class, either by assigning a custom function
 or by creating a class that inherits LinOp (see LinOp for further details)
- @pre
+ @pre
 1. The initial UniTensor cannot be empty.
 2. The UniTensor version of the Arnoldi does not support \p which = 'SM'.
 */
@@ -2744,7 +2744,7 @@ namespace cytnx {
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of addition.
 * @pre \p Lt and \p Rt must have the same shape.
- * @see `linalg::Add(const Tensor &Lt, const Tensor &Rt)`
+ * @see linalg::Add(const Tensor &Lt, const Tensor &Rt)
 */
 Tensor operator+(const Tensor &Lt, const Tensor &Rt);
@@ -2755,7 +2755,7 @@ namespace cytnx {
 * @param[in] lc Left template type.
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of addition.
- * @see `linalg::Add(const T &lc, const Tensor &Rt)`
+ * @see linalg::Add(const T &lc, const Tensor &Rt)
 */
 template <class T>
 Tensor operator+(const T &lc, const Tensor &Rt);
@@ -2767,7 +2767,7 @@ namespace cytnx {
 * @param[in] Lt Left Tensor.
 * @param[in] rc Right template type.
 * @return [Tensor] the result of addition.
- * @see `linalg::Add(const Tensor &Lt, const T &rc)`
+ * @see linalg::Add(const Tensor &Lt, const T &rc)
 */
 template <class T>
 Tensor operator+(const Tensor &Lt, const T &rc);
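These operator overloads simply forward to the linalg functions named in their @see lines, so scalars and Tensors mix freely in expressions. Illustrative only:

```c++
// Sketch: scalar/Tensor mixing via the overloaded operators.
#include "cytnx.hpp"
#include <iostream>
using namespace cytnx;

int main() {
  Tensor x = arange(4);      // 0,1,2,3 as doubles
  Tensor y = 0.5 * x + 1.0;  // dispatches to linalg::Mul and linalg::Add
  Tensor z = (y - x) / 2.0;  // likewise linalg::Sub and linalg::Div
  std::cout << z << std::endl;
  return 0;
}
```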
@@ -2781,7 +2781,7 @@ namespace cytnx {
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of subtraction.
 * @pre \p Lt and \p Rt must have the same shape.
- * @see `linalg::Sub(const Tensor &Lt, const Tensor &Rt)`
+ * @see linalg::Sub(const Tensor &Lt, const Tensor &Rt)
 */
 Tensor operator-(const Tensor &Lt, const Tensor &Rt);
@@ -2792,7 +2792,7 @@ namespace cytnx {
 * @param[in] lc Left template type.
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of subtraction.
- * @see `linalg::Sub(const T &lc, const Tensor &Rt)`
+ * @see linalg::Sub(const T &lc, const Tensor &Rt)
 */
 template <class T>
 Tensor operator-(const T &lc, const Tensor &Rt);
@@ -2804,7 +2804,7 @@ namespace cytnx {
 * @param[in] Lt Left Tensor.
 * @param[in] rc Right template type.
 * @return [Tensor] the result of subtraction.
- * @see `linalg::Sub(const Tensor &Lt, const T &rc)`
+ * @see linalg::Sub(const Tensor &Lt, const T &rc)
 */
 template <class T>
 Tensor operator-(const Tensor &Lt, const T &rc);
@@ -2818,7 +2818,7 @@ namespace cytnx {
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of multiplication.
 * @pre \p Lt and \p Rt must have the same shape.
- * @see `linalg::Mul(const Tensor &Lt, const Tensor &Rt)`
+ * @see linalg::Mul(const Tensor &Lt, const Tensor &Rt)
 */
 Tensor operator*(const Tensor &Lt, const Tensor &Rt);
@@ -2829,7 +2829,7 @@ namespace cytnx {
 * @param[in] lc Left template type.
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of multiplication.
- * @see `linalg::Mul(const T &lc, const Tensor &Rt)`
+ * @see linalg::Mul(const T &lc, const Tensor &Rt)
 */
 template <class T>
 Tensor operator*(const T &lc, const Tensor &Rt);
@@ -2841,7 +2841,7 @@ namespace cytnx {
 * @param[in] Lt Left Tensor.
 * @param[in] rc Right template type.
 * @return [Tensor] the result of multiplication.
- * @see `linalg::Mul(const Tensor &Lt, const T &rc)`
+ * @see linalg::Mul(const Tensor &Lt, const T &rc)
 */
 template <class T>
 Tensor operator*(const Tensor &Lt, const T &rc);
@@ -2854,7 +2854,7 @@ namespace cytnx {
 * @param[in] Lt Left Tensor.
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of division.
- * @see `linalg::Div(const Tensor &Lt, const Tensor &Rt)`
+ * @see linalg::Div(const Tensor &Lt, const Tensor &Rt)
 * @pre
 * 1. The divisor cannot be zero.
 * 2. \p Lt and \p Rt must have the same shape.
@@ -2868,7 +2868,7 @@ namespace cytnx {
 * @param[in] lc Left template type.
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of division.
- * @see `linalg::Div(const T &lc, const Tensor &Rt)`
+ * @see linalg::Div(const T &lc, const Tensor &Rt)
 * @pre The divisor cannot be zero.
 */
 template <class T>
@@ -2881,7 +2881,7 @@ namespace cytnx {
 * @param[in] Lt Left Tensor.
 * @param[in] rc Right template type.
 * @return [Tensor] the result of division.
- * @see `linalg::Div(const Tensor &Lt, const T &rc)`
+ * @see linalg::Div(const Tensor &Lt, const T &rc)
 * @pre The divisor cannot be zero.
 */
 template <class T>
@@ -2896,7 +2896,7 @@ namespace cytnx {
 * @param[in] Lt Left Tensor.
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of modulo.
 * @pre \p Lt and \p Rt must have the same shape.
- * @see `linalg::Mod(const Tensor &Lt, const Tensor &Rt)`
+ * @see linalg::Mod(const Tensor &Lt, const Tensor &Rt)
 */
 Tensor operator%(const Tensor &Lt, const Tensor &Rt);
@@ -2907,7 +2907,7 @@ namespace cytnx {
 * @param[in] lc Left template type.
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of modulo.
- * @see `linalg::Mod(const T &lc, const Tensor &Rt)`
+ * @see linalg::Mod(const T &lc, const Tensor &Rt)
 */
 template <class T>
 Tensor operator%(const T &lc, const Tensor &Rt);
@@ -2919,7 +2919,7 @@ namespace cytnx {
 * @param[in] Lt Left Tensor.
 * @param[in] rc Right template type.
 * @return [Tensor] the result of modulo.
- * @see `linalg::Mod(const Tensor &Lt, const T &rc)`
+ * @see linalg::Mod(const Tensor &Lt, const T &rc)
 */
 template <class T>
 Tensor operator%(const Tensor &Lt, const T &rc);
@@ -2932,7 +2932,7 @@ namespace cytnx {
 * @param[in] Lt Left Tensor.
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of comparison.
- * @see `linalg::Cpr(const Tensor &Lt, const Tensor &Rt)`
+ * @see linalg::Cpr(const Tensor &Lt, const Tensor &Rt)
 */
 Tensor operator==(const Tensor &Lt, const Tensor &Rt);
@@ -2943,7 +2943,7 @@ namespace cytnx {
 * @param[in] lc Left template type.
 * @param[in] Rt Right Tensor.
 * @return [Tensor] the result of comparison.
- * @see `linalg::Cpr(const T &lc, const Tensor &Rt)`
+ * @see linalg::Cpr(const T &lc, const Tensor &Rt)
 */
 template <class T>
 Tensor operator==(const T &lc, const Tensor &Rt);
@@ -2955,7 +2955,7 @@ namespace cytnx {
 * @param[in] Lt Left Tensor.
 * @param[in] rc Right template type.
 * @return [Tensor] the result of comparison.
- * @see `linalg::Cpr(const Tensor &Lt, const T &rc)`
+ * @see linalg::Cpr(const Tensor &Lt, const T &rc)
 */
 template <class T>
 Tensor operator==(const Tensor &Lt, const T &rc);
diff --git a/misc_doc/version.log b/misc_doc/version.log
index aacdc5d93..3dabaa2b0 100644
--- a/misc_doc/version.log
+++ b/misc_doc/version.log
@@ -1,5 +1,56 @@
-v0.7.8
-
+v1.0.0
+1. [Important] This is the first stable release of the project.
+2. [Change] Merge Contract and Contracts into Contract, and Contract_ and Contracts_ into Contract_.
+3. [Change] Merge relabel and relabels into relabel, and relabel_ and relabels_ into relabel_.
+4. [New] Add an optional argument min_blockdim to Svd_truncate to define a minimum dimension for each block.
+5. [New] Add Eig/Eigh functions for Block UniTensor.
+6. [New] Add a Lanczos-like algorithm, Lanczos_Exp, to approximate an exponential operator acting on a state.
+7. [Change] Migrate cuTENSOR APIs to version 2.
+8. [Change] reshape_ and permute_ now return the object itself instead of None.
+9. [Change] Remove the magma dependency.
+10. [Enhance] Optimize the contraction-order finding algorithm.
+
+v0.9.7
+1. [New] Add the identity/eye functions for UniTensor to generate an identity UniTensor.
+
+v0.9.6
+1. [Fix] Resolve an issue where Trace did not work for is_diag=True.
+2. [New] Add support for the UniTensor type for the linear algebra functions Eig, Eigh and InvM.
+3. [Fix] Fix incorrect behavior in uniform random and isub.
+4. [New] Add beartype checking for the ovld wrapper.
+5. [Fix] Fix CI/CD tools.
+
+v0.9.5
+Same content as v0.9.4; fixes the CD to conda-forge.
+
+v0.9.4
+This has the same content as v0.9.3.
+
+v0.9.3
+1. Improvements for GPUs, integrated with cutensor/cuquantum.
+2. Extra Svd methods for both CPU and GPU.
+3. Other improvements of the user API, including conversion between symmetric and non-sym UniTensors.
+
+v0.9.2
+1. [Important] [Change] Remove all deprecated APIs and the old SparseUniTensor data structure.
+2. [Fix] Bugs in batch_matmul when MKL is not available.
+3. [Update] Update examples to match new APIs.
+4. [New] Add labels options when creating UniTensor from Tensor.
+5. [New] Change MKL to mkl_rt instead of the fixed interface ilp64/lp64.
+
+v0.9.1
+1. [New] Add additional argument share_mem for the Tensor.numpy() python API.
+2. [Fix] UniTensor.at() python API not properly wrapped.
+3. [Fix] Bug in testing for BlockUniTensor.
+4. [Fix] Bug in UniTensor print info (duplicate name, is_diag=true BlockUniTensor dimension display).
+5. [Change] Svd now uses gesdd instead of gesvd.
+6. [New] Add linalg.Gesvd function, along with Gesvd_truncate.
+7. [Fix] Strict casting rules caused a compile failure when compiling with icpc.
+8. [New] Add additional argument for Network.PutUniTensor to match the label.
+9. [Fix] Network TOUT string label bug.
+10. [Fix] #156: storage python wrapper does not return.
+11. [Add] linalg.Gemm/Gemm_()
+12. [Add] UniTensor.normalize()/normalize_()
 v0.7.7
 1. [Enhance][WARNING] rowrank option now has a default value when converting from Tensor, which is half
 the number of the bonds. Notice that the order of the arguments (rowrank) and (is_diag) has changed!
@@ -7,8 +58,6 @@ v0.7.7
 3. [Enhance] Internal syntax format changed to clang format.
 4. [Change] USE_OMP option gives openmp access only for the in-house implementation. Any linalg function
 calling MKL will be parallel.
-
-
 v0.7.6
 1. [Enhance] Adding alias BD_IN=BD_KET, BD_BRA=BD_OUT, BD_NONE=BD_REG.
 2. [New] Add Contracts for multiple UniTensors contraction.
@@ -89,7 +138,6 @@ v0.7.4
 55. [Enhance] Add additional feature Svd_truncate with truncation_err (err) and return_err option for DUTen
 56. [Enhance] Add python dmrg example for using tn_algo
-
 v0.7.3
 1. [Fix] bug for Get slice does not reduce when dim=1.
 2. [Enhance] checking the memory alloc failing for EL.
@@ -123,13 +171,11 @@ v0.7.2
 4. [Fix] bug for set partial elements on Tensor with slicing issue.
 5. [Fix][DenseUniTensor] set_rowrank cannot set full rank issue #24
-
 v0.7.1
 1. [Enhance] Finish UniTensor arithmetic.
 2. [Fix] bug when using Tensor.get() accessing only single element
 3. [Enhance] Add default argument is_U = True and is_vT = True for Svd_truncate() python API
-
 v0.7
 1. [Enhance] add binary op. -Tensor.
 2. [Enhance] New introduce Scalar class, generic scalar placeholder.
@@ -218,7 +264,6 @@ v0.6.0
 6. [Fix] reshape() does not share memory
 7. [Fix] BoolStorage print_elem does not show the first element in shape
-
 v0.5.6a
 1. [Enhance] change linalg::QR -> linalg::Qr for unify the function call
 2. Fix bug in UniTensor Qr, R UniTensor labels bug.
@@ -239,7 +284,6 @@ v0.5.5a
 6. Fix small bug in return ref of Tproxy
 7. Fix bug in buffer size allocation in Svd_internal
-
 v0.5.4a-build1
 1. [Important] Fix Subtraction real - complex bug.
@@ -273,13 +317,11 @@ v0.5.3a
 15. Fix bug in diagonal CyTensor reshape/reshape_ cause mismatch.
 16. Add a is_diag option for convert Tensor to CyTensor.
-
 v0.5.2a-build1
 1. example/iTEBD, please modify the argument rowrank->Rowrank if you encounter error in running them.
 2. Fix bug in cytnx.linalg.Abs truncate floating point part. ---> v0.5.2a-build1
 3. Fix bug in mkl blas package import bug with numpy. ---> v0.5.2a-build1
-
 v0.5.2a
 1. add Trace and Trace_ for CyTensor.
 2. fix bug in Network.Launch does not return the output CyTensor
@@ -335,7 +377,6 @@ v0.5.2a
 52. Add Symmetry.Save/Load
 53. Symmetry/Tensor/Storage/Bond/CyTensor Save/Load re-invented for more simple usage
-
 v0.5.1a
 1. add Norm() for CPU and GPU, add to call by Tn
 2. add Dot() for CPU and GPU, with unify API for Vec-Vec/Mat-Vec/Mat-Mat/Ten-Vec product.
diff --git a/version.cmake b/version.cmake
index 1b18621b9..d4d48fbe4 100644
--- a/version.cmake
+++ b/version.cmake
@@ -1,3 +1,3 @@
-set(CYTNX_VERSION_MAJOR 0)
-set(CYTNX_VERSION_MINOR 9)
-set(CYTNX_VERSION_PATCH 7)
+set(CYTNX_VERSION_MAJOR 1)
+set(CYTNX_VERSION_MINOR 0)
+set(CYTNX_VERSION_PATCH 0)