linear algebra md -> notebook
JohnnyGOX17 committed Dec 21, 2024
1 parent b19252c commit 6558063
Showing 4 changed files with 149 additions and 83 deletions.
2 changes: 2 additions & 0 deletions kb/math_and_signal_processing/.gitignore
@@ -3,3 +3,5 @@ PNT.md
diff_eq.md
diff_eq_files/
statistics.md
linear_algebra.md
linear_algebra_files/
147 changes: 147 additions & 0 deletions kb/math_and_signal_processing/linear_algebra.ipynb
@@ -0,0 +1,147 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "eebf3f5a-d1a1-4ca9-bf79-483638e304a9",
"metadata": {},
"source": [
"# Linear Algebra"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d515268e-5ac3-4ce7-b04e-c39473b80d8f",
"metadata": {},
"outputs": [],
"source": [
"from sympy import *\n",
"init_printing()"
]
},
{
"cell_type": "markdown",
"id": "c83ff986-c53f-4b5c-b82d-9f165ce97612",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"### Notation and Types\n",
"\n",
"- **Scalars:** a single value/number, usually denoted with lowercase variable letters, also specified with what type of number; for instance, a real-valued scalar $s$ can be shown as: $s \\in \\mathbb{R}$\n",
"\n",
"- **Vectors:** a one-dimensional array of numbers, typically shown in bold lowercase variables like:\n",
"$ \\boldsymbol{x} = \\begin{bmatrix}\n",
" x_{1} \\\\\n",
" x_{2} \\\\\n",
" \\vdots \\\\\n",
" x_{n}\n",
"\\end{bmatrix} $\n",
" + To say each element of the above vector is real, with $n$ elements, the vector lies in the set formed by $n$ times the Cartesian product or $ \\boldsymbol{x} \\in \\mathbb{R}^{n} $\n",
" + To index a set of elements of the vector- for instance elements $x_{1}, x_{3} \\text{ and } x_{6}$- we can define a set $ S = {1,3,6}$ and denote the subset of $\\boldsymbol{x}$ for those elements as $\\boldsymbol{x}_{S}$. As an inverse, to exclude a set of indices of a vector with a given set, one can denote $\\boldsymbol{x}_{-S}$ which is the vector containing all elements except $x_{1}, x_{3} \\text{ and } x_{6}$.\n",
"- **Matrices:** a 2-D array of numbers, denoted by a variable with bold & capitalized typeface such as:\n",
"$\\boldsymbol{A}= \\begin{bmatrix}\n",
" A_{1,1} & A_{1,2} \\\\\n",
" A_{2,1} & A_{2,2}\n",
"\\end{bmatrix}$\n",
"where the subscript numbers of each element identify the 2-D position within the matrix.\n",
" + A real-valued matrix $\\boldsymbol{A}$ with a height of $m$ and width $n$ can be shown as $\\boldsymbol{A} \\in \\mathbb{R}^{m\\times n}$\n",
" + A \"$:$\" symbol can be used to identify a cross section of a matrix in one dimension; for instance, to index all elements in the first row of the matrix $\\boldsymbol{A}$, we can use the notation $\\boldsymbol{A}_{1,:}$\n",
" + $f\\left( \\boldsymbol{A} \\right)_{i,j}$ gives element $(i,j)$ of the matrix computed by applying the function $f$ to $\\boldsymbol{A}$\n",
"- **Tensors:** in arrays with more than two dimensions/axes, tensors can be used with the typeface $\\boldsymbol{\\mathsf{A}}$\n",
" + The element at coordinates $(i,j,k)$ of the tensor $\\boldsymbol{\\mathsf{A}}$ is $\\mathsf{A}_{i,j,k}$"
]
},
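{
"cell_type": "markdown",
"id": "3f9c2a7e-1b4d-4c8a-9e02-5a6b7c8d9e01",
"metadata": {},
"source": [
"A minimal sketch of this notation in SymPy, using arbitrary illustrative values (`Matrix` construction and the `A[0, :]` row slice are standard SymPy):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f9c2a7e-1b4d-4c8a-9e02-5a6b7c8d9e02",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative example values: a real-valued column vector x in R^3\n",
"# and a matrix A in R^(2x2), indexed with the notation above\n",
"x = Matrix([1, 2, 3])  # column vector, x in R^3\n",
"A = Matrix([[1, 2], [3, 4]])\n",
"A[0, :]  # the cross section A_{1,:}: all elements of the first row"
]
},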
{
"cell_type": "markdown",
"id": "cf1b557a-629b-4e42-8415-aee381b12227",
"metadata": {},
"source": [
"### Vector and Matrix Operations\n",
"\n",
"- **Transpose:** for a matrix, the transpose is the mirror image of the matrix across the _main diagonal_ line, running from the top left corner to the bottom right. The transpose operation of a matrix $\\boldsymbol{A}$ is shown as $\\boldsymbol{A}^\\intercal$\n",
" + Since vectors are essentially 1-D matrices with one column, the transpose of a vector is a matrix with only one row. Similarly, we can write a vector as a row vector, and then turn it into a column vector with a transpose, as $\\boldsymbol{x} = \\left[ x_{1}, x_{2}, x_{3} \\right]^\\intercal$\n",
"<center><img src=\"Matrix_transpose.gif\" width=\"200\"></center>\n",
"<center><i><a href=\"https://en.wikipedia.org/wiki/Transpose\">Source: Transpose- Wikipedia</a></i></center>\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "04fed89a-bcb4-4813-8f53-2a74eef932d0",
"metadata": {},
"outputs": [],
"source": [
"M = Matrix([[1,2,3], [4,5,6]])\n",
"M"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bef00d9f-a41c-43db-868c-f3a97f535ce8",
"metadata": {},
"outputs": [],
"source": [
"M.T"
]
},
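{
"cell_type": "markdown",
"id": "7a1d4e9b-2c5f-4d0a-8b13-6c7d8e9f0a11",
"metadata": {},
"source": [
"Similarly, a short sketch (arbitrary values) of writing a row vector and transposing it into a column vector:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7a1d4e9b-2c5f-4d0a-8b13-6c7d8e9f0a12",
"metadata": {},
"outputs": [],
"source": [
"# Transpose turns this 1x3 row vector into a 3x1 column vector,\n",
"# i.e. x = [x1, x2, x3]^T\n",
"x = Matrix([[1, 2, 3]])  # row vector\n",
"x.T"
]
},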
{
"cell_type": "markdown",
"id": "0fc6c631-3a82-47cc-b44b-0b7dab4984b9",
"metadata": {},
"source": [
"- **Scalar Addition/Multiplication:** scalars can be simply added, subtracted, multiplied or divided into a matrix by performing the operation with the scalar on each element of the matrix. For instance, to multiply a matrix $\\boldsymbol{D}$ by scalar $a$ and add by scalar $c$:\n",
"\n",
"$ a \\cdot D_{i,j} + c $\n",
"\n",
"- **Matrix Addition:** matrices can be added together if they have the same dimensions/shape by simply adding their corresponding elements:\n",
"\n",
"$ \\boldsymbol{C} = \\boldsymbol{A} + \\boldsymbol{B} \\text{ where } C_{i,j} = A_{i,j} + B_{i,j}$\n",
"\n",
"- **Vector Broadcasting:** a simplification found in some deep learning writings shows the simple addition of a matrix and a vector, yielding another matrix, for instance $ \\boldsymbol{C} = \\boldsymbol{A} + \\boldsymbol{b} \\text{, where } C_{i,j} = A_{i,j} + b_{j}$. This denotes the vector $\\boldsymbol{b}$ is added to each row of the matrix $\\boldsymbol{A}$, which is shorthand for an implicit step of defining another intermediate matrix with vector $\\boldsymbol{b}$ copied in each row before performing the addition.\n",
"\n",
"- **Dot Product:**\n",
"\n",
"- **Vector & Matrix Multiplication:**\n",
" + In matrix multiplication, the order of operations matter, as well as the matrix sizes; given two matrices, $\\boldsymbol{A} \\in \\mathbb{R}^{m\\times n}$ and $\\boldsymbol{B} \\in \\mathbb{R}^{n\\times p}$, the matrix multiplication $\\boldsymbol{C} = \\boldsymbol{A} \\times \\boldsymbol{B}$ results in output matrix $\\boldsymbol{C} \\in \\mathbb{R}^{m\\times p}$. This shows two important properties of matrix multiplication:\n",
" 1. The inner dimensions of both matrices (number of columns in first matrix operand, and number of rows in second matrix operand) must be equal.\n",
" 2. The output matrix size matches the dimensions of the outer sizes of both matrix arguments (the number of rows in first matrix operand by the number of columns in the second matrix operand).\n",
" + As another example, when multiplying a 1x3 row vector with a 3x1 column vector, a scalar (1x1) is the result, but when the operands are reversed, a 3x3 matrix is the result:\n",
"\n",
"$\\begin{bmatrix} 2 & 3 & 4 \\end{bmatrix} \\times \\begin{bmatrix} 6 \\\\ 4 \\\\ 3 \\\\ \\end{bmatrix} = 36, \\quad \\begin{bmatrix} 2 \\\\ 3 \\\\ 4 \\end{bmatrix} \\times \\begin{bmatrix} 6 & 4 & 3 \\end{bmatrix} = \\begin{bmatrix} 12 & 8 & 6 \\\\ 18 & 12 & 9 \\\\ 24 & 16 & 12 \\end{bmatrix}$\n",
"\n",
"<center><img src=\"mat_mul.png\" width=\"400\"></center>\n",
"<center><i><a href=\"https://www.khanacademy.org/math/precalculus/x9e81a4f98389efdf:matrices/x9e81a4f98389efdf:multiplying-matrices-by-matrices/a/multiplying-matrices\">Source: Multiplying matrices- Khan Academy</a></i></center>"
]
},
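{
"cell_type": "markdown",
"id": "9b3e5f0c-4d6a-4e1b-8c24-7d8e9f0a1b21",
"metadata": {},
"source": [
"A sketch of these operations in SymPy with arbitrary example values. Note that SymPy does not broadcast automatically (adding a bare scalar or vector to a `Matrix` raises an error), so the scalar operation is applied elementwise with `applyfunc` and the vector broadcast is written out with an explicit `ones` replication:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b3e5f0c-4d6a-4e1b-8c24-7d8e9f0a1b22",
"metadata": {},
"outputs": [],
"source": [
"# Arbitrary example values throughout\n",
"a, c = 2, 1\n",
"A = Matrix([[1, 2, 3], [4, 5, 6]])\n",
"B = Matrix([[6, 5, 4], [3, 2, 1]])\n",
"b = Matrix([10, 20, 30])  # column vector, length equals the number of columns of A\n",
"\n",
"display(A.applyfunc(lambda e: a*e + c))  # scalar multiply and add, elementwise\n",
"display(A + B)                           # matrix addition: same shapes, elementwise\n",
"display(A + ones(A.rows, 1) * b.T)       # broadcast: b copied into each row of an intermediate matrix\n",
"\n",
"u, v = Matrix([2, 3, 4]), Matrix([6, 4, 3])\n",
"display(u.dot(v))  # dot product: sum of elementwise products -> scalar 36\n",
"display(u.T * v)   # (1x3)(3x1) -> 1x1 matrix\n",
"u * v.T            # (3x1)(1x3) -> 3x3 matrix"
]
},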
{
"cell_type": "markdown",
"id": "11b3d1d1-e90c-4ff4-b83f-2f2761c25370",
"metadata": {},
"source": [
"## References\n",
"\n",
"* [3Blue1Brown](https://www.3blue1brown.com/)\n",
"* [Linear Algebra Done Right](https://linear.axler.net/)\n",
"* [Linear Algebra Done Wrong - Sergei Treil](https://www.math.brown.edu/streil/papers/LADW/LADW_2017-09-04.pdf)\n",
"* [kenjihiranabe/The-Art-of-Linear-Algebra](https://github.com/kenjihiranabe/The-Art-of-Linear-Algebra): Graphic notes on Gilbert Strang's \"Linear Algebra for Everyone\"\n",
"* [Deep Learning Book- Linear Algebra](https://www.deeplearningbook.org/contents/linear_algebra.html)\n",
"* [The Matrix Cookbook](https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf)\n",
"* [Toeoplitz matrix for correlation](https://dsp.stackexchange.com/questions/52726/correlation-of-a-signal/52735#52735)\n",
"* [Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning](https://www.cis.upenn.edu/~jean/math-deep.pdf)\n",
"* [Making sense of principal component analysis, eigenvectors & eigenvalues](https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues)\n",
"* [Matrix Methods in Data Analysis, Signal Processing, and Machine Learning- MIT OpenCourseWare](https://ocw.mit.edu/courses/mathematics/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/)\n",
"* [The matrix calculus you need for deep learning](http://explained.ai/matrix-calculus/index.html)\n",
"* [Introduction to Linear Algebra for Applied Machine Learning with Python](https://pabloinsente.github.io/intro-linear-algebra)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
71 changes: 0 additions & 71 deletions kb/math_and_signal_processing/linear_algebra.md

This file was deleted.

12 changes: 0 additions & 12 deletions learning_bookmarks.md
@@ -804,18 +804,6 @@
* [rwnobrega/komm: An open-source library for Python 3 providing tools for analysis and simulation of analog and digital communication systems.](https://github.com/rwnobrega/komm)
* [CCSDS - TM SYNCHRONIZATION AND CHANNEL CODING— SUMMARY OF CONCEPT AND RATIONALE](https://public.ccsds.org/Pubs/130x1g3.pdf)

## Linear Algebra, Statistics and Other Math
* [3Blue1Brown](https://www.3blue1brown.com/)
* [Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning](https://www.cis.upenn.edu/~jean/math-deep.pdf)
* [Making sense of principal component analysis, eigenvectors & eigenvalues](https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues)
* [Matrix Methods in Data Analysis, Signal Processing, and Machine Learning | Mathematics | MIT OpenCourseWare](https://ocw.mit.edu/courses/mathematics/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/)
* [The matrix calculus you need for deep learning](http://explained.ai/matrix-calculus/index.html)
* [Introduction to Linear Algebra for Applied Machine Learning with Python](https://pabloinsente.github.io/intro-linear-algebra)
* [Linear Algebra Done Wrong - Sergei Treil](https://www.math.brown.edu/streil/papers/LADW/LADW_2017-09-04.pdf)
* [kenjihiranabe/The-Art-of-Linear-Algebra](https://github.com/kenjihiranabe/The-Art-of-Linear-Algebra): Graphic notes on Gilbert Strang's "Linear Algebra for Everyone"
* [The Matrix Cookbook](https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf)
* [Linear Algebra Done Right](https://linear.axler.net/)

# rust

* [Rust for professionals](https://overexact.com/rust-for-professionals/)
