= before midterm 1 =
Definitions:
vector space: a set V along with an addition and scalar multiplication on V such that
commutativity, associativity, additive identity, additive inverse, multiplicative identity, distributive properties
P_m(F): degree at most m
subspace: additive identity, closed under addition, closed under scalar multiplication
direct sum: V is the direct sum of subspaces U_1, ..., U_m if each element of V can be written uniquely as a sum u_1 + ... + u_m (u_i in U_i)
linearly independent (including empty list)
basis: a list of linearly independent vectors that spans V
linear map: additivity, homogeneity (T(av) = aT(v))
prop: V = U1 direct U2 direct ... direct Um iff
V = U1 + ... + Um
and the only way to write 0 is by taking all ui 0
prop: V = U1 direct U2 iff V = U1 + U2 and U1 intersect U2 = {0}
Linear Dependence Lemma: If (v1, ... ,vm) is linearly dependent in V and v1 != 0, then exists j in {2, ..., m} such that:
vj in Span(v1, ..., v_{j-1})
if the jth term is removed from (v1, ..., vm), the span of the remaining list still equals span(v1, ..., vm)
Theorem: In a finite-dimensional vector space, the length of every linearly independent list of vectors is less than or equal to the length of every spanning list of vectors.
Proof by adjoining the v's one at a time to the spanning list of w's, removing a w at each step via the Linear Dependence Lemma
Prop: Every subspace of a finite-dimensional vector space is finite dimensional
Prop: A list (v1, ..., vn) of vectors in V is a basis of V iff every v in V can be written uniquely as a1v1 + ... + anvn.
Theorem: Every spanning list in a vector space can be reduced to a basis of the vector space.
Proof by discarding vectors not in the span of previous vectors
Theorem: Every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space.
Prop: Suppose V is finite dimensional and U is a subspace of V. Then there is a subspace W of V such that V = U direct W.
Proof by extending a basis of U to a basis of V
Prop: If V is finite dimensional, then every spanning list of vectors in V with length dim V is a basis for V
Prop: If V is finite dimensional, then every linearly independent list of vectors in V with length dim V is a basis of V.
Theorem: If U1 U2 subspaces, then dim(U1+U2) = dim U1 + dim U2 - dim(U1 intersect U2)
Prop: Suppose U1, ..., Um subspaces s.t. V = U1 + ... + Um and dim V = dim U1 + ... + dim Um, then V = U1 direct ... direct Um
Prop: Linear map T is injective iff null T = {0}
Theorem: V finite dimensional and T a linear map from V to W, then dim V = dim null T + dim range T
Corollary: If V and W are finite-dimensional vector spaces such that dim V > dim W , then no linear map from V to W is injective.
Corollary: If V and W are finite-dimensional vector spaces such that dim V < dim W , then no linear map from V to W is surjective.
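The rank-nullity relation above can be checked on a concrete map. A minimal sketch in Python; the map T: R^3 -> R^2 and the small row-reduction helper are illustrative choices, not from the notes:

```python
# Sketch: verify dim V = dim null T + dim range T for a sample linear map
# T(x, y, z) = (x + y, y + z), written as a 2x3 matrix.

def rank(rows, eps=1e-12):
    """Rank of a small matrix (list of row lists) via Gaussian elimination."""
    m = [row[:] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > eps), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > eps:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

T = [[1, 1, 0],
     [0, 1, 1]]
dim_V = 3
dim_range = rank(T)           # dim range T = column rank = row rank = 2
dim_null = dim_V - dim_range  # here null T = span((1, -1, 1)), dimension 1
assert dim_V == dim_null + dim_range
```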
Prop: a linear map is invertible (exists S such that ST = TS = I) iff it is injective and surjective
Prop: Suppose that (v1, ... , vn) is a basis of V and (w1, ... ,wm) is a basis of W. Then M is an invertible linear map between L(V , W ) and Mat(m, n, F).
Proof by injectivity and surjectivity
dim L(V,W) = (dimV)(dimW)
Suppose V is finite dimensional. If T in L(V), then the following are equivalent:
(a) T is invertible; (b) T is injective; (c) T is surjective.
General note:
Consider the special case of zero space.
= before midterm 2 =
Definition:
linear map: additivity, homogeneity (T(av) = aT(v))
matrix of vector: basis coefficients representation
operator: a linear map from a vector space to itself L(V)
invertible: T linear map invertible if exists S s.t. ST = TS = I
invariant subspace under T: T(U) subset of U (condition that restriction is an operator)
eigenvalue: exists a *nonzero* vector s.t. ...
M(T, (v1, ..., vn)): default: standard basis
P_{U,W}(v) = u, where v = u + w with u in U, w in W; defined when V = U oplus W
inner product: positivity (v,v), definiteness (v,v), additivity in first slot, homogeneity in first slot, conjugate symmetry
Euclidean inner product on F^n: sum ui * bar(wi)
orthonormal list of vectors, orthonormal basis
Prop: Linear map T is injective iff null T = {0}
Theorem: V finite dimensional and T a linear map from V to W, then dim V = dim null T + dim range T
Prop: Suppose that (v1, ... , vn) is a basis of V and (w1, ... ,wm) is a basis of W. Then M is an invertible linear map between L(V , W ) and Mat(m, n, F).
dim L(V,W) = (dimV)(dimW)
Suppose V is finite dimensional. If T in L(V), then the following are equivalent:
(a) T is invertible; (b) T is injective; (c) T is surjective.
λ root of p (deg m) iff exists q (deg m-1) s.t. p(z) = (z-λ) * q(z)
p polynomial deg m => at most m roots
Fundamental Theorem of Algebra: every nonconstant polynomial with complex coefficients has a root
p real polynomial: λ root => complex conjugate of λ also a root
p real polynomial nonconstant => unique factorization into deg <= 2 polynomials
l1, ..., lm distinct eigenvalues, v1, ..., vm nonzero eigenvectors => linearly independent
proof technique: contradiction. pick the first vk in span(v1,...,vk-1)
corollary: operator at most dim V eigenvalues
Every operator on a finite dimensional, nonzero, complex vector space has an eigenvalue
proof technique: pick nonzero v, construct (v, Tv, ..., T^n(v)). must be linearly dependent
factor the operator polynomial (at least one factor not injective) and get a root => eigenvalue
Prop: T in L(V), (v1, ..., vn) basis then
matrix upper triangular
<=> Tvk in Span(v1, ..., vk)
<=> span(v1, ..., vk) invariant under T
V complex vector space, T in L(V), then T has an upper triangular matrix w.r.t some basis of V
proof technique: use induction. Let U = range(T - λI) for an eigenvalue λ; U is invariant under T
and dim U < dim V. Extend a basis of U (w.r.t. which T|U is upper triangular) to a basis of V and verify the upper-triangular property
Suppose T in L(V) has an upper triangular matrix w.r.t some basis of V. T invertible <=> all entries on diagonal of u.t.m. nonzero.
proof technique: prove contrapositive.
<=: if exists zero entry, Tvk in Span(v1, ..., vk-1). Construct linear map S: Span(v1,...,vk) -> Span(v1,...,vk-1) by Sv=Tv. not injective => exists v Tv = 0 => T not invertible
=>: if not invertible, exists v, Tv = 0 => T(a1v1+...+akvk) = 0, ak!=0 => Tvk in Span(v1,...,vk-1) => λ_k must be 0
corollary: T in L(V) has u.t.m. Then eigenvalues precisely diagonal entries
if T has dim V distinct eigenvalues, then T has a diagonal matrix w.r.t. some basis (of eigenvectors)
T in L(V), l1, ..., lm distinct eigenvalues.
T has diagonal matrix <=>
V has a basis of eigenvectors <=>
exist 1-dim invariant subspaces U1,...,Un s.t. V = U1 oplus ... oplus Un <=>
V = null(T-l1I) + ... + null(T-lmI) <=>
dim V = dim null(T-l1I) + ... + dim null(T-lmI)
prove (last => second): concatenate a basis of each null(T-ljI); a dependence groups into a sum of eigenvectors (one per eigenvalue) equal to 0, but eigenvectors with distinct eigenvalues are l.i., so each group vanishes
Every operator on a finite-dimensional, nonzero, real vector space has an invariant subspace of dimension 1 or 2
proof technique: if quadratic, consider Span(u, Tu)
Every operator on an odd-dimensional real vector space has an eigenvalue
proof technique: induction on dimension. Pick an invariant subspace U of dim 1 or 2; if dim U = 1 done. If dim U = 2, write V = U oplus W (W need not be invariant)
define S in L(W) by Sw = P_{W,U}T(w). dim W odd => S has an eigenvalue λ with eigenvector w. goal: find eigenvector of T in U + span(w)
consider u + aw: (T-λI)(u+aw) = Tu-λu + a(Tw-λw) = Tu-λu + a(P_{U,W}T(w)+P_{W,U}T(w)-λw) = Tu-λu + aP_{U,W}(Tw) in U (using Sw = λw). So T-λI maps U+span(w) into U, a smaller subspace => not injective => λ is an eigenvalue of T
M(ST, (u1,...), (w1...)) = M(S, (v1...), (w1...))M(T, (u1...), (v1...))
Theorem: Suppose T in L(V). (u1..), (v1..) bases of V. Let A = M(I, (u1..), (v1..)). Then M(T, (u1..)) = A^(-1) M(T, (v1..)) A
ST = I => T injective & S surjective
Triangle Inequality (prove by squaring both sides and apply Cauchy), Pythagorean Theorem
Cauchy-Schwarz Inequality (|<u,v>| <= |u||v|, eq holds iff one is scalar multiple of the other) prove by orthogonal decomposition
parallelogram equality: |u+v|^2 + |u-v|^2 = 2(|u|^2 + |v|^2)
orthonormal list of vectors => linearly independent (prove by checking norm)
(e1...) orthonormal basis => v = <v,e1>e1 + ..., |v|^2 = |<v,e1>|^2 + ...
Gram-Schmidt: (v1,...) l.i. list of vectors => exists orthonormal (e1,...) s.t. span equal
ej = orthonormalize(vj - <vj,e1>e1 - ... - <vj, e{j-1}>e{j-1})
every orthonormal list of vectors can be extended to an orthonormal basis (first extend & then G-S)
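The Gram-Schmidt recipe above translates almost line-for-line into code. A minimal sketch in pure Python (the input vectors are an illustrative choice):

```python
import math

# Sketch of Gram-Schmidt: e_j = normalize(v_j - <v_j,e_1>e_1 - ... - <v_j,e_{j-1}>e_{j-1})
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vs):
    """Orthonormalize a linearly independent list of real vectors."""
    es = []
    for v in vs:
        w = list(v)
        for e in es:                       # subtract projection onto each earlier e
            c = dot(v, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        n = math.sqrt(dot(w, w))           # normalize what remains
        es.append([wi / n for wi in w])
    return es

e1, e2 = gram_schmidt([[3, 4], [1, 0]])    # e1 = (0.6, 0.8), e2 = (0.8, -0.6)
assert abs(dot(e1, e1) - 1) < 1e-12 and abs(dot(e2, e2) - 1) < 1e-12
assert abs(dot(e1, e2)) < 1e-12            # orthonormal, same span as the input
```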
T has u.t.m => T has u.t.m. w.r.t. some orthonormal basis
orthogonal projection: |v - P_U v| <= |v - u| for all u in U, eq holds iff u = P_U v
Example of real linear operator with no real eigenvalue (R^4): T(z1,z2,z3,z4) = (z2,-z1,z4,-z3)
Prove / disprove existence of inner product: try parallelogram equality
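The "try parallelogram equality" tactic in action: a small sketch showing that the sup norm on R^2 violates the equality, so no inner product induces it (the test vectors u, v are an illustrative choice):

```python
# The max (sup) norm on R^2 violates |u+v|^2 + |u-v|^2 = 2(|u|^2 + |v|^2),
# so it cannot arise from an inner product.
def sup_norm(v):
    return max(abs(x) for x in v)

u, v = (1, 0), (0, 1)
upv = (u[0] + v[0], u[1] + v[1])                  # u + v = (1, 1)
umv = (u[0] - v[0], u[1] - v[1])                  # u - v = (1, -1)
lhs = sup_norm(upv) ** 2 + sup_norm(umv) ** 2     # 1 + 1 = 2
rhs = 2 * (sup_norm(u) ** 2 + sup_norm(v) ** 2)   # 2 * (1 + 1) = 4
assert lhs != rhs   # parallelogram equality fails => not an inner-product norm
```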
= Chapter 6 =
inner product
positivity (v,v), definiteness (v,v), additivity in first slot, homogeneity in first slot, conjugate symmetry
Euclidean inner product on F^n: sum{ui * conjugate(wi)}
Triangle Inequality (prove by squaring both sides and apply Cauchy), Pythagorean Theorem
Cauchy-Schwarz Inequality (|<u,v>| <= |u||v|, eq holds iff one is scalar multiple of the other) prove by orthogonal decomposition
parallelogram equality: |u+v|^2 + |u-v|^2 = 2(|u|^2 + |v|^2)
orthonormal list of vectors => linearly independent (prove by checking norm)
(e1...) orthonormal basis => v = <v,e1>e1 + ..., |v|^2 = |<v,e1>|^2 + ...
Gram-Schmidt: (v1,...) l.i. list of vectors => exists orthonormal (e1,...) s.t. span equal
ej = orthonormalize(vj - <vj,e1>e1 - ... - <vj, e{j-1}>e{j-1})
every orthonormal list of vectors can be extended to an orthonormal basis (first extend & then G-S)
T has u.t.m => T has u.t.m. w.r.t. some orthonormal basis
orthogonal complement: U^perp
[U subspace of V] V = U oplus U^perp
U = U^perp^perp
P_U is called the orthogonal projection of V onto U
orthogonal projection: |v - P_U v| <= |v - u| for all u in U, eq holds iff u = P_U v
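The minimizing property of orthogonal projection can be seen numerically. A minimal sketch, with v, U, and the sample points in U all illustrative choices:

```python
import math

# Sketch: P_U v = <v, e> e for U = span(e), e a unit vector; then
# |v - P_U v| <= |v - u| for every u in U, with equality only at u = P_U v.
v = (3.0, 4.0)
e = (1.0, 0.0)                  # orthonormal basis of U = the x-axis
c = v[0] * e[0] + v[1] * e[1]   # <v, e> = 3
Pv = (c * e[0], c * e[1])       # P_U v = (3, 0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

best = dist(v, Pv)              # |v - P_U v| = 4
for t in [-2.0, 0.0, 1.5, 3.0, 5.0]:   # sample points u = (t, 0) in U
    assert best <= dist(v, (t, 0.0)) + 1e-12
```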
unitary / orthogonal matrix: matrix over C / R whose columns form an orthonormal basis of C^n / R^n with the Euclidean inner product
Q unitary / orthogonal <=> T(x) = Qx isometry <=> QQ* = I <=> Q* unitary / orthogonal <=> rows of Q form an orthonormal basis
adjoint: T in L(V, W), T^* maps from W to V s.t. <Tv, w> = <v, T^*w>.
(aT)* = conjugate(a)T^*
null T^* = (range T)^perp
range T^* = (null T)^perp
Suppose (e1, .., en) orthonormal basis of V, (f1, ..., fn) orthonormal basis of W, then M(T^*, (f1, ..., fn), (e1, ..., en)) = conjugate_transpose(M(T, (e1, ..., en), (f1, ..., fn)))
T injective <=> T* surjective
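The conjugate-transpose rule for the adjoint's matrix can be sanity-checked numerically. A minimal sketch; the 2x2 complex matrix A and the test vectors are illustrative choices:

```python
# Sketch: with orthonormal bases, M(T^*) is the conjugate transpose of M(T),
# and <Tv, w> = <v, T^*w> under the Euclidean inner product on C^2.
A = [[1 + 2j, 3j],
     [2, 1 - 1j]]
A_star = [[A[j][i].conjugate() for j in range(2)] for i in range(2)]  # conj transpose

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

def inner(x, y):   # Euclidean inner product: sum x_i * conjugate(y_i)
    return sum(a * b.conjugate() for a, b in zip(x, y))

v, w = [1 + 1j, 2], [3, -1j]
assert abs(inner(matvec(A, v), w) - inner(v, matvec(A_star, w))) < 1e-12
```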
= Chapter 7 =
self-adjoint: operator T in L(V) s.t. T = T^*
every eigenvalue real
[V complex, <Tv, v> = 0 for all v in V] => T = 0
proof technique: <Tu, w> = (<T(u+w), u+w> - <T(u-w), u-w>)/4 + i(<T(u+iw), u+iw> - <T(u-iw), u-iw>)/4
[V real, T self-adjoint, <Tv, v> = 0 for all v in V] => T = 0
proof technique: <Tu, w> = (<T(u+w), u+w> - <T(u-w), u-w>) / 4
[V complex] T self-adjoint <=> <Tv, v> in R for all v in V
proof technique: <Tv, v> - conjugate(<Tv, v>) = <(T - T^*)v, v>
<u, v> = 0 iff norm(u) <= norm(u+av) for all a in F
[P in L(V), P^2 = P]
P orthogonal projection <=> P self-adjoint
norm(Pv) <= norm(v) for every v in V => P orthogonal projection
every vector in null P orthogonal to every vector in range P => P orthogonal projection
normal: T operator in L(V), TT^* = T^*T
T normal <=> norm(Tv) = norm(T^*v) for all v in V
T normal => [v in V eigenvector of T with eigenvalue λ <=> eigenvector of T^* with eigenvalue conjugate(λ)]
T normal => eigenvectors of T corresponding to distinct eigenvalues are orthogonal
[T normal, U invariant subspace] =>
U^perp invariant under T
U invariant under T^*
(T|U)^* = (T^*)|U
T|U normal operator on U
T|U^perp normal operator on U^perp
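The norm(Tv) = norm(T^*v) characterization of normality can be checked on a concrete normal operator. A minimal sketch; the 90-degree rotation and sample vectors are illustrative choices:

```python
import math

# Sketch: for a normal operator, norm(Tv) = norm(T^*v) for every v.
# The 90-degree rotation below is normal: it commutes with its adjoint.
T      = [[0.0, -1.0], [1.0, 0.0]]
T_star = [[0.0, 1.0], [-1.0, 0.0]]   # adjoint = transpose in the real case

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

assert matmul(T, T_star) == matmul(T_star, T)   # TT^* = T^*T: T is normal
for v in [[1.0, 0.0], [2.0, -3.0], [0.5, 0.5]]:
    assert abs(math.hypot(*matvec(T, v)) - math.hypot(*matvec(T_star, v))) < 1e-12
```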
[T normal, V complex inner-product space] T self-adjoint <=> eigenvalues all real
Spectral Theorem:
[V complex / real], V has an orthonormal basis consisting of eigenvectors of T <=> T is normal / self-adjoint
proof technique:
if V complex, exists orthonormal basis s.t. M(T) upper triangular, argue actually diagonal
if V real, prove by induction on dimension of V, pick any eigenvector with norm 1, argue T|_({v}^perp) self-adjoint
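The real case of the Spectral Theorem can be seen on a 2x2 symmetric matrix. A minimal sketch; the matrix A is an illustrative choice, with eigenvalues obtained from the quadratic formula:

```python
import math

# Sketch: A = [[2,1],[1,2]] is self-adjoint; its eigenvalues are 3 and 1,
# and its eigenvectors can be chosen to form an orthonormal basis of R^2.
A = [[2.0, 1.0], [1.0, 2.0]]
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
disc = math.sqrt(tr*tr - 4*det)
l1, l2 = (tr + disc)/2, (tr - disc)/2      # eigenvalues 3.0 and 1.0

s = 1/math.sqrt(2)
e1, e2 = (s, s), (s, -s)                   # unit eigenvectors for l1, l2

def matvec(M, x):
    return (M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1])

Ae1, Ae2 = matvec(A, e1), matvec(A, e2)
assert abs(Ae1[0] - l1*e1[0]) < 1e-12 and abs(Ae1[1] - l1*e1[1]) < 1e-12
assert abs(Ae2[0] - l2*e2[0]) < 1e-12 and abs(Ae2[1] - l2*e2[1]) < 1e-12
assert abs(e1[0]*e2[0] + e1[1]*e2[1]) < 1e-12   # orthonormal eigenbasis
```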
Lemma: [T in L(V) self-adjoint], a, b in R s.t. a^2 < 4b => T^2 + aT + bI invertible
Lemma: [T in L(V) self-adjoint] => T has an eigenvalue
Corollary:
[F = R && T self-adjoint || F = C && T normal], λ_1 .. λ_n distinct eigenvalues of T => V = null(T-λ_1 I) directsum ... directsum null(T-λ_n I). Furthermore, each vector in each null(T-λ_j I) is orthogonal to all vectors in the other subspaces of this decomposition
positive operator: (V real && T self-adjoint || V complex) and <Tv, v> >= 0 for all v in V
T is positive <=>
T is self-adjoint and all the eigenvalues of T are nonnegative <=>
T has a positive square root <=>
T has a self-adjoint square root <=>
exists S in L(V) s.t. T = S*S
Has unique positive square root
isometry: norm(Tv) = norm(v) for all v in V
S is an isometry <=> <Su, Sv> = <u, v> for all u, v in V <=> S*S = I <=> (Se1, ..., Sen) is orthonormal whenever (e1, ..., en) orthonormal in V <=> exists (e1, ..., en) s.t. (Se1, ..., Sen) orthonormal <=> S* isometry
isometry => normal
[V complex, S in L(V)]. S isometry <=> exists orthonormal basis of V consisting of eigenvectors of S all of whose corresponding eigenvalues have absolute value 1
//[V real, S in L(V)]. S isometry <=> exists orthonormal basis of V w.r.t. which S has a block diagonal matrix where each block is a 1x1 matrix containing 1 or -1, or a 2x2 matrix of the form ((cos θ, -sin θ), (sin θ, cos θ)) with θ in (0, π)
polar decomposition: [T in L(V)]. exists isometry S in L(V) s.t. T = S sqrt(T^*T)
proof technique: observe that norm(Tv) = norm(sqrt(T^*T)v) for all v. define S1: range(sqrt(T^*T)) -> range(T), S2: range(sqrt(T^*T))^perp -> range(T)^perp. define S: Sv = S1u + S2w where v = u + w
singular value: eigenvalue of sqrt(T^*T)
singular-value decomposition: suppose T in L(V) has singular values s1, ..., sn. Then there exist orthonormal bases (e1, ..., en) and (f1, ..., fn) of V such that Tv = s1<v, e1>f1 + ... + sn<v, en>fn for every v in V
example of singular_value(T^2) != square(singular_value(T))
T = ((0,1),(2,0))
T1, T2 same singular values <=> exists isometries S1, S2 s.t. T1 = S1T2S2
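The T = ((0,1),(2,0)) example above can be worked out directly. A minimal sketch; here T^*T and (T^2)^*(T^2) happen to be diagonal, so the singular values are just square roots of the diagonal entries:

```python
import math

# Sketch: singular values of T^2 are not the squares of the singular values of T.
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

T = [[0, 1], [2, 0]]
TstarT = matmul(transpose(T), T)                              # [[4, 0], [0, 1]]
sv_T = sorted(math.sqrt(TstarT[i][i]) for i in range(2))      # [1.0, 2.0]

T2 = matmul(T, T)                                             # [[2, 0], [0, 2]]
T2starT2 = matmul(transpose(T2), T2)                          # [[4, 0], [0, 4]]
sv_T2 = sorted(math.sqrt(T2starT2[i][i]) for i in range(2))   # [2.0, 2.0]

assert sv_T2 != sorted(s*s for s in sv_T)   # {2, 2} vs {1, 4}: not the squares
```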
= Chapter 8 =
generalized eigenvector: v generalized eigenvector if (T-λ I)^j v = 0 for some positive integer j
//if T in L(V) and m is a nonnegative integer s.t. null T^m = null T^(m+1), then null T^0 subset null T^1 subset ... subset null T^m = null T^(m+1) = null T^(m+2) = ...
//[T in L(V)] null T^(dimV) = null T^(dimV + 1) = ...
//[T in L(V), λ eigenvalue of T]. {v | (T-λ I)^j v = 0 for some j > 0} = null(T-λ I)^(dim V)
nilpotent: N^j = 0 for some j > 0
N in L(V) nilpotent => N^(dim V) = 0
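The N^(dim V) = 0 bound is easy to see for a strictly upper-triangular matrix. A minimal sketch with an illustrative 3x3 shift matrix:

```python
# Sketch: a strictly upper-triangular N on R^3 is nilpotent with N^(dim V) = 0,
# even though lower powers need not vanish.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

N = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
N2 = matmul(N, N)                        # still nonzero: N2[0][2] == 1
N3 = matmul(N2, N)
assert N2 != [[0]*3 for _ in range(3)]
assert N3 == [[0]*3 for _ in range(3)]   # N^(dim V) = N^3 = 0
```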
[T in L(V), λ in F]. foreach basis s.t. T has upper-tri matrix, λ appears on diagonal dim(null((T - λ I)^(dim V))) times.
proof technique: WLOG only consider λ = 0. induction on dimension. break down into cases λ_n != 0, λ_n = 0
multiplicity of λ: dim null((T-λI)^(dim V))
V complex vector space: sum of multiplicities of all the eigenvalues of T equals dim V
characteristic polynomial: p(z) = (z-λ_1)^d1 * ... * (z-λ_m)^dm [V complex, T in L(V), λ_1, ..., λ_m distinct eigenvalues of T, dj = multiplicity of λ_j].
also applies to diagonal block matrix
Cayley-Hamilton Theorem: [V complex, T in L(V)]. q := characteristic polynomial of T. => q(T) = 0
prove by induction on j: (T-l1I)...(T-ljI)vj = 0. Note that (T-ljI)vj in span(v1,...,vj-1)
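Cayley-Hamilton can be verified by hand in the 2x2 case, where the characteristic polynomial is q(z) = z^2 - (tr A)z + det A. A minimal sketch; the matrix A is an illustrative choice:

```python
# Sketch: for a 2x2 matrix, q(z) = z^2 - (tr A) z + (det A), and q(A) = 0.
A = [[1, 2],
     [3, 4]]
tr = A[0][0] + A[1][1]                        # 5
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]       # -2

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A2 = matmul(A, A)                             # [[7, 10], [15, 22]]
qA = [[A2[i][j] - tr*A[i][j] + det*(1 if i == j else 0) for j in range(2)]
      for i in range(2)]
assert qA == [[0, 0], [0, 0]]                 # q(A) = A^2 - 5A - 2I = 0
```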
[T in L(V). p in P(F)]. => null p(T) invariant under T
[V complex, T in L(V), λ_1, ..., λ_m distinct eigenvalues of T, U_1...U_m corresponding subspaces of generalized eigenvectors]
V = U_1 oplus ... oplus U_m <=>
each U_j is invariant under T <=>
each (T - λ_j I)|U_j is nilpotent
remark: null T^n = generalized eigenspace of eigenval 0; range T^n rest.
[V complex, T in L(V)] => exists basis of V consisting of generalized eigenvectors of T
Lemma: [N nilpotent operator on V] => exists basis of V with respect to which the matrix of N has all entries on and below diagonal 0's.
Theorem: [V complex, T in L(V), λ_1, ..., λ_m distinct eigenvalues of T] => exists basis of V w.r.t which T has a block diagonal matrix of the form Diagonal(A1, ..., Am) where each A_j is an upper triangular matrix, with λ_j along the diagonal.
Jordan basis: a basis w.r.t. which T has a block diagonal matrix Diagonal(A_1, ..., A_m), where each A_j is an upper-triangular matrix with diagonal filled with some eigenvalue λ_j of T, the line directly above the diagonal filled with 1's, and all other entries 0.
m(v): the largest nonnegative integer such that N^(m(v)) v != 0
Lemma: If N in L(V) is nilpotent, then there exists vectors v1, ..., vk s.t.
(a) (v1, Nv1, ..., N^(m(v1))v1, ..., vk, Nvk, ..., N^(m(vk))vk) is a basis of V
(b) (N^(m(v1))v1, ..., N^(m(vk))vk) is a basis of null N
proof technique: induction on dim. find basis for range(N) and null(N) cap range(N). choose basis for W where null(N) = (null(N) cap range(N)) oplus W
Theorem: [V complex, T in L(V)]. exists a basis of V that is a Jordan basis for T
ST nilpotent <=> TS nilpotent
N nilpotent => only eigenvalue 0
V complex, N only eigenvalue 0 => N nilpotent
N self-adjoint & nilpotent => N = 0
det(-A) = (-1)^n det(A) for A an n×n matrix