\chapter{Curvature of space}
\pagebreak[4]
\section{p82 - Exercise}
\begin{tcolorbox}
Explain why the surfaces of an ordinary cylinder and an ordinary cone are to be regarded as ``flat'' in the sense of our definition.
\end{tcolorbox}
The reason is that these surfaces can be ``unwrapped'' onto a plane, as the figure below shows.
\begin{figure}[h]
%\includegraphics[scale=.5]{Conemapping.jpg}
\input{./images/fig_p82_21_a.tex}
\caption{Unwrapping of a cone}
\end{figure}\\
For the cone, we can associate with each point $P$ on the cone, lying at a distance $h$ from the apex and making an angle $\theta$, a point $P^*$ on a plane tangent to the cone, lying at the same distance $h$ from the apex (taken as origin of the coordinate system) and making an angle $\phi = \theta \sin{\alpha}$, with $\alpha$ the half-angle of the cone. These are polar coordinates with $r \in [0, + \infty)$ and $\phi \in [0, 2\pi\sin{\alpha})$.
The same reasoning applies to a cylinder, which is a cone with its apex at $\infty$; in that case the coordinate system becomes a Cartesian coordinate system. \\
As a continuous mapping exists from polar to rectangular Cartesian coordinates, both coordinate systems can be written in the required form (3.101), and so both surfaces can be called ``flat''.
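As a numerical sanity check of this development, the sketch below (the values of $h$, $\alpha$ and the azimuths are illustrative assumptions) verifies that the mapping $P \mapsto P^*$ preserves arc length along a circular section of the cone.

```python
import math

def develop(h, theta, alpha):
    """Map the cone point (slant distance h, azimuth theta) to polar
    coordinates (r, phi) on the unwrapped plane: phi = theta*sin(alpha)."""
    return h, theta * math.sin(alpha)

# Arc length between two points of the same circular section:
# on the cone that circle has radius h*sin(alpha); on the plane it is
# an arc of radius h subtending the developed angle phi2 - phi1.
alpha = 0.4                    # half-angle of the cone (illustrative value)
h, t1, t2 = 2.5, 0.3, 1.7      # illustrative slant distance and azimuths
arc_cone = h * math.sin(alpha) * (t2 - t1)
(_, p1), (_, p2) = develop(h, t1, alpha), develop(h, t2, alpha)
arc_plane = h * (p2 - p1)
assert abs(arc_cone - arc_plane) < 1e-12
```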
$$\blacklozenge$$
\newpage
\section{p83 - Exercise}
\begin{tcolorbox}
What are the values of $R^s_{.rmn}$ in an Euclidean plane, the coordinates being rectangular Cartesians? Deduce the values of the components of this tensor for polar coordinates from its tensor character, or else by direct calculation.
\end{tcolorbox}
$$R^s_{.rmn}=0$$
also in polar coordinates (see exercise 18 of Chapter $\RomanNumeralCaps{2}$).
$$\blacklozenge$$
\newpage
\section{p86 - Exercise}
\begin{tcolorbox}
Show that in a $V_2$ all the components of the covariant curvature tensor either vanish or are expressible in terms of $R_{1212}$.
\end{tcolorbox}
We have (3.115) and (3.116)
\begin{align}
\left \{ \begin{array}{l}
\ R_{rsmn} = - R_{srmn}\\
\ R_{rsmn} = - R_{rsnm}\\
\ R_{rsmn} = R_{mnrs}\\
\ R_{rsmn} + R_{rmns}+R_{rnsm}=0
\end{array} \right.
\end{align}
It is clear from the first two identities that in the pairs $(r,s)$ and $(m,n)$ both indices must differ for a nonzero component. So we only have to consider $R_{1212}$, $R_{1221}$, $R_{2112}$ and $R_{2121}$.\\
The first two identities give us:
\begin{align}
R_{1221}= -R_{1212}\\
R_{2112}= -R_{1212}\\
R_{2121}= -R_{2112} = R_{1212}
\end{align}
The third identity doesn't give us any additional information.
The fourth identity gives us only trivial statements:
\begin{align}
R_{1212} + \underbrace{R_{1122}}_{=0}+\underbrace{R_{1221}}_{=- R_{1212} } = 0\\
\underbrace{R_{1221}}_{=- R_{1212}} + R_{1212}+\underbrace{R_{1122}}_{=0 } = 0\\
\underbrace{R_{2112}}_{=- R_{1212}} + \underbrace{R_{2121}}_{=R_{1212}}+\underbrace{R_{2211}}_{=0} = 0\\
\underbrace{R_{2121}}_{= R_{1212}} + \underbrace{R_{2211}}_{=0}+\underbrace{R_{2112}}_{= -R_{1212}} = 0
\end{align}
$\textbf{Conclusion:}$\\
We obtain relations (2), (3) and (4) in terms of $R_{1212}$, and all components vanish if $R_{1212} = 0$.
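This conclusion can be checked by brute force with sympy. The sketch below computes all $16$ covariant curvature components for a generic diagonal $2$-metric (the restriction to a diagonal metric is an assumption made for brevity; the symmetries hold for any metric) and verifies that each is $0$ or $\pm R_{1212}$.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)
g = sp.Function('g')(x, y)
coords = [x, y]
a = sp.diag(f, g)      # generic diagonal 2-metric (illustrative assumption)
ainv = a.inv()

def christoffel(r, m, n):
    """Second-kind Christoffel symbol Gamma^r_{mn} of the metric a."""
    return sp.Rational(1, 2) * sum(
        ainv[r, p] * (sp.diff(a[p, m], coords[n]) + sp.diff(a[p, n], coords[m])
                      - sp.diff(a[m, n], coords[p]))
        for p in range(2))

def riemann_down(r, s, m, n):
    """Covariant curvature component R_{rsmn}, lowered with the metric."""
    def up(k):
        # R^k_{smn} = d_m Gamma^k_{sn} - d_n Gamma^k_{sm} + Gamma*Gamma terms
        val = sp.diff(christoffel(k, s, n), coords[m]) \
            - sp.diff(christoffel(k, s, m), coords[n])
        for p in range(2):
            val += christoffel(k, p, m) * christoffel(p, s, n) \
                 - christoffel(k, p, n) * christoffel(p, s, m)
        return val
    return sp.simplify(sum(a[r, k] * up(k) for k in range(2)))

R1212 = riemann_down(0, 1, 0, 1)
# every component is 0 or +/- R_{1212}
for r in range(2):
    for s in range(2):
        for m in range(2):
            for n in range(2):
                comp = riemann_down(r, s, m, n)
                assert (sp.simplify(comp) == 0
                        or sp.simplify(comp - R1212) == 0
                        or sp.simplify(comp + R1212) == 0)
```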
$$\blacklozenge$$
\newpage
\section{p86-87 - clarification}
\begin{tcolorbox}
\textit{The number of independent components of the covariant curvature tensor in a space of N dimensions is} $$\frac{1}{12}N^2\left(N^2-1\right)$$
\end{tcolorbox}
We have (3.115) and (3.116)
\begin{align}
\left \{ \begin{array}{l}
R_{rsmn} = - R_{srmn}\\
R_{rsmn} = - R_{rsnm}\\
R_{rsmn} = R_{mnrs}\\
R_{rsmn} + R_{rmns}+R_{rnsm}=0
\end{array}\right.
\end{align}
It is clear from the first two identities that in the pairs $(r,s)$ and $(m,n)$ both indices must differ for a nonzero component. So we only have to consider components with pairs $(r,s)$ and $(m,n)$ such that $r \neq s$ and $m \neq n$.
For the pair $(r,s)$ we have $N$ possibilities for the index $r$, but for $s$ only $N-1$ indices remain as $r \neq s$. So for the pair $(r,s)$ we get $N(N-1)$ possibilities. Note that, by the first identity $R_{rsmn} = - R_{srmn}$, we only have to consider half of this quantity: once we have chosen a pair $(r,s)$ we also know the component for the pair $(s,r)$. So the total number of possibilities for $(r,s)$ is $M = \half N(N-1)$. The same holds for the pair $(m,n)$. In total we get $M^2$ possibilities according to the first two identities.\\
The third identity $R_{rsmn} = R_{mnrs}$ puts an extra constraint on this number, as we have to subtract from $M^2$ the number of relations it supplies. Note that, once we have chosen a pair $(r,s)$, we have to exclude the pair $(m,n) = (r,s)$, as the identity $R_{rsrs} = R_{rsrs}$ is trivial. So for the first pair we have $M$ possibilities, but once it is chosen, only $M-1$ remain for the second pair, giving $M(M-1)$ relations. Again we only have to take half of these, as the identities $R_{rsmn} = R_{mnrs}$ and $ R_{mnrs} = R_{rsmn}$ are equivalent.\\
So the total number of possibilities reduces to $$ M^2 - \half M(M-1) \quad \text{with} \quad M=\half N(N-1) $$
What about the fourth identity $$R_{rsmn} + R_{rmns}+R_{rnsm}=0$$
First we note that this identity implies that all indices are different as it becomes trivial in the other cases. This is a consequence of the first 3 identities. Indeed, we know already that
\begin{align}
\left \{ \begin{array}{l}
\ r \neq s\\
\ m \neq n\\
\ (r,s) \neq (m,n)\\
\end{array} \right.
\end{align}
Let's consider the following cases
\begin{align*}
\left \{ \begin{array}{llll}
\ r = m&\rightarrow m\neq s \ m\neq n \ r\neq n&\rightarrow & R_{rsrn} + \underbrace{R_{rrns}}_{=0}+\underbrace{R_{rnsr}}_{= -R_{rnrs}= -R_{rsrn}}=0\\
\ r = n&\rightarrow n\neq s \ m\neq n \ r\neq s&\rightarrow & R_{rsmr} + \underbrace{R_{rmrs}}_{= -R_{mrrs}= -R_{rsmr}}+\underbrace{R_{rrsm}}_{=0}=0\\
\ s = m&\rightarrow m\neq r \ n\neq s \ r\neq s&\rightarrow & R_{rssn} + \underbrace{R_{rsns}}_{= -R_{rssn}}+\underbrace{R_{rnss}}_{=0}=0\\
\ s = n&\rightarrow r\neq s \ m\neq s \ m\neq n&\rightarrow & R_{rsms} + \underbrace{R_{rmss}}_{=0}+\underbrace{R_{rssm}}_{= -R_{rsms}}=0
\end{array} \right.
\end{align*}
So indeed, once two indices are equal, the fourth identity becomes trivial and puts no extra constraint on the number of possibilities. For the quadruple $(r,s,m,n)$ we have $N$ possibilities for the index $r$; for $s$ only $N-1$, for $m$ only $N-2$ and for $n$ only $N-3$ indices remain, as all four indices differ. The maximum number of constraints generated by the fourth identity is thus $$N(N-1)(N-2)(N-3)$$
But here again double counts occur. Indeed, the fourth identity with first index $r$ holds for the $6$ quadruples $$(rsmn),(rmsn) ,(rmns),(rsnm),(rnsm),(rnms)$$ and the same reasoning is valid for the quadruples starting with $s$, $m$ or $n$. \\So in total we get $6\times4 = 24$ equivalent identities, and the number of constraints generated by the fourth identity reduces to $$\frac{1}{24}N(N-1)(N-2)(N-3)$$
Note that this number of constraints vanishes for $N \leq 3$.\\
Putting it all together the number of independent components of $R_{rsmn}$ becomes
\begin{align*}
\mho &= M^2 - \half M(M-1)-\frac{1}{24}N(N-1)(N-2)(N-3)\\
&= \half M(M+1)-\frac{1}{24}N(N-1)(N-2)(N-3)\\
&= \frac{1}{8} N(N-1) \left(N(N-1)+2\right)-\frac{1}{24}N(N-1)(N-2)(N-3)\\
&= \frac{N}{24} \left( 3N^3-6N^2+9N-6-N^3+6N^2-11N+6 \right)\\
&= \frac{1}{12}N^2 \left( N^2-1\right)
\end{align*}
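The counting argument above can be checked numerically. The helper below (the function name is ours) reproduces each step of the argument and compares it with the closed formula for a range of $N$.

```python
from math import comb

def independent_components(N):
    """Count independent components of R_{rsmn} by the argument above."""
    M = N * (N - 1) // 2            # antisymmetric index pairs (r,s)
    pair_count = M * (M + 1) // 2   # M^2 - M(M-1)/2 from the first three identities
    cyclic = comb(N, 4)             # N(N-1)(N-2)(N-3)/24 from the fourth identity
    return pair_count - cyclic

for N in range(1, 12):
    assert independent_components(N) == N**2 * (N**2 - 1) // 12
```

Note that `comb(N, 4)` is zero for $N \leq 3$, matching the remark that the cyclic identity adds no constraint in low dimensions.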
$$\blacklozenge$$
\newpage
\section{p87 - Exercise}
\begin{tcolorbox}
Using the fact that the absolute derivative of the fundamental tensor vanishes, prove that 3.107 may be written $$ \frac{\delta^2 T_r}{\delta u \delta v} - \frac{\delta^2 T_r}{\delta v \delta u} = R_{rpmn}T^p\partial_u x^m \partial_v x^n $$
\end{tcolorbox}
By 2.519 and 2.619 we have
\begin{align*}
\fdv{T_r}{u} &= \fdv{(a_{rk}T^k)}{u}
= \underbrace{\fdv{(a_{rk})}{u}}_{=0}T^k+a_{rk}\fdv{(T^k)}{u}\\
&= a_{rk}T^k_{|n} \partial_u x^n\\
\Rightarrow\quad \frac{\delta^2 T_r}{\delta u \delta v} &= \frac{\delta (a_{rk}T^k_{|n} \partial_u x^n)}{\delta v}\\
& = \underbrace{\frac{\delta (a_{rk})}{\delta v}}_{=0}T^k_{|n} \partial_u x^n+a_{rk}\frac{\delta (T^k_{|n} )}{\delta v}\partial_u x^n+a_{rk}T^k_{|n} \frac{\delta (\partial_u x^n)}{\delta v}\\
&= a_{rk}\underbrace{\frac{\delta (T^k_{|n} )}{\delta v}}_{=T^k_{|nm}\partial_v x^m} \partial_u x^n+a_{rk}T^k_{|n} \underbrace{\frac{\delta (\partial_u x^n)}{\delta v}}_{=\left(\partial_u x^n \right)_{|m}\partial_v x^m}\\
&= a_{rk}T^k_{|nm}\partial_v x^m \partial_u x^n+a_{rk}T^k_{|n} \underbrace{\left(\partial_u x^n \right)_{|m}}_{ = \partial_m \left(\partial_u x^n \right) + \Gamma^n_{pm}\partial_u x^p}\partial_v x^m\\
&= a_{rk}T^k_{|nm}\partial_v x^m \partial_u x^n+a_{rk}T^k_{|n} \left( \underbrace{\partial_m \left(\partial_u x^n \right)}_{=\partial_u \left( \delta_m^n \right)=0} + \Gamma^n_{pm}\partial_u x^p \right)\partial_v x^m\\
&= a_{rk}T^k_{|nm}\partial_v x^m \partial_u x^n+a_{rk}T^k_{|n} \Gamma^n_{pm}\partial_u x^p \partial_v x^m
\end{align*}
Hence we have
\begin{align*}
\frac{\delta^2 T_r}{\delta v \delta u} &=a_{rk}T^k_{|nm}\partial_u x^m \partial_v x^n+a_{rk}T^k_{|n} \Gamma^n_{pm}\partial_v x^p \partial_u x^m\\
&=a_{rk}T^k_{|mn}\partial_u x^n \partial_v x^m+a_{rk}T^k_{|n} \Gamma^n_{mp}\partial_v x^m \partial_u x^p\\
\Rightarrow\quad \frac{\delta^2 T_r}{\delta u \delta v} - \frac{\delta^2 T_r}{\delta v \delta u} &= a_{rk}T^k_{|nm}\partial_v x^m \partial_u x^n - a_{rk}T^k_{|mn}\partial_u x^n \partial_v x^m\\
&= \left(a_{rk}T^k_{|nm} - a_{rk}T^k_{|mn}\right)\partial_u x^n \partial_v x^m\\
&= \left(\underbrace{T_{r|nm} - T_{r|mn}}_{= -R_{rpmn}T^p}\right)\partial_u x^n \partial_v x^m\\
&= -\underbrace{R_{rpmn}}_{=-R_{rpnm}} T^p\partial_u x^n \partial_v x^m =
R_{rpmn} T^p\partial_u x^m \partial_v x^n
\end{align*}
$$\blacklozenge$$
\newpage
\section{p91 - Exercise}
\begin{tcolorbox}
Would the study of geodesic deviation enable us to distinguish between a plane and a right circular cylinder?
\end{tcolorbox}
For geodesic lines we have
\begin{align*}
\dv[2]{x^r}{u} + \Gamma^r_{mn}\dv{x^m}{u}\dv{x^n}{u} = 0\\
\end{align*}
with the fundamental tensor for a cylinder (see exercise page 27)
\begin{align*}
(a_{mn}) = \begin{pmatrix}
1& 0 \\
0 & r^2 \\
\end{pmatrix}
\end{align*}
As no element of this tensor is a function of the coordinates, it is clear that all Christoffel symbols vanish and the geodesic curves are solutions of the simple system of $2^{nd}$ order differential equations
\begin{align*}
\dv[2]{x^r}{u} = 0
\end{align*}
Hence
\begin{align}
x^r &= \kappa^r u + \mu^r\\
\text{or}\quad x^r &= \kappa^r u
\end{align}
by choosing the origin of the coordinate system at the initial position of the point.
Choosing $\phi, z$ as coordinates, the distance one walks when following a geodesic is given by
\begin{align*}
s= \int_{u_0}^{u_1}\sqrt{r^2(d\phi)^2 + (dz)^2}
\end{align*}
and if we take the angle $\psi \equiv \phi$ as independent parameter, by (2) we get
\begin{align*}
s-s_0 &= \int_{\psi_0}^{\psi_1}\sqrt{(r^2 + \kappa^2)}d\psi \equiv g(\psi - \psi_0)\\
\text{or}\quad s&= \int_{\psi_0}^{\psi_1}\sqrt{(r^2 + \kappa^2)}d\psi \equiv g\psi
\end{align*}
by introducing transformed coordinates $s^{'}= s-s_0$ and $\psi^{'} =\psi - \psi_0$, and thus we get
\begin{align}
z = m s^{'}
\end{align}
\begin{figure}[H]
%\centering
\begin{minipage}[t]{.4\textwidth}
%\centering
\vspace{0pt}
%\includegraphics[scale=.5]{p85_ex1.png}
\input{./images/fig_p91_153_a.tex}
\end{minipage}\hfill
\caption{Geodesics on a cylinder}
\label{fig:fig_p91_153_a}
\end{figure}
The above figure illustrates what an observer living in the manifold $\Omega$ sees when walking along geodesics on the cylinder. He can only measure the distance $s^{'}$ and the displacement along $z$, and by (3) can only draw a chart like the one seen to the right of the cylinder. A ``flatlander'' living in the mapped manifold $M(\Omega)$ would see the same chart when walking along geodesics in his plane.\\\\
\textbf{Conclusion:} No, studying the geodesic deviation on a right circular cylinder does not enable us to say on which surface we are.
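The claim that all Christoffel symbols of the cylinder metric vanish can be confirmed with a short sympy sketch (with $r$ treated as a fixed radius, not a coordinate):

```python
import sympy as sp

z, phi = sp.symbols('z phi')
r = sp.symbols('r', positive=True)   # fixed radius, NOT a coordinate
coords = [z, phi]
a = sp.diag(1, r**2)                 # fundamental tensor of the cylinder
ainv = a.inv()

def christoffel(k, m, n):
    """Second-kind Christoffel symbol Gamma^k_{mn} of the metric a."""
    return sp.Rational(1, 2) * sum(
        ainv[k, p] * (sp.diff(a[p, m], coords[n]) + sp.diff(a[p, n], coords[m])
                      - sp.diff(a[m, n], coords[p]))
        for p in range(2))

# no metric component depends on z or phi, so every symbol vanishes
assert all(christoffel(k, m, n) == 0
           for k in range(2) for m in range(2) for n in range(2))
```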
$$\blacklozenge$$
\newpage
\section{p93 - Exercise}
\begin{tcolorbox}
For rectangular Cartesians in Euclidean 3-space, show that the general solution of 3.311 is $\eta^r = A^r s+B^r $, where $A^r, \ B^r$ are constants. Verify this by elementary geometry.
\end{tcolorbox}
We have equation 3.311
\begin{align}
\fdv[2]{\eta^r}{s}+ R^r_{.smn} p^s \eta^m p^n=0
\end{align}
From the exercise on page 83 we know that $R^r_{.smn}=0$ in a Euclidean space. Also, in such spaces the Christoffel symbols vanish for rectangular Cartesian coordinates, and equation (1) reduces to $ \dv[2]{\eta^r}{s} = 0$. And so, $$\eta^r = A^r s+B^r $$
This is also easily deduced from a geometrical point of view. In a Euclidean space the geodesics are straight lines. For an infinitesimal change in the geodesic family parameter $v$, we can assume that a vector, going perpendicularly from a point on one geodesic with parameter $v$ to an infinitesimally close geodesic with parameter $v+dv$, will also be perpendicular to this geodesic. This situation is depicted in fig. ~\ref{fig:fig_p93_16}(a). We conclude that $\overrightarrow{AA^{'}} \ \parallel \ \overrightarrow{PP^{'}}$. This can also be deduced from Thales' theorem (see fig.~\ref{fig:fig_p93_16} (b)).
\begin{figure}[H]%
\centering
\subfloat[]{\input{./images/fig_p93_16_a.tex}}
\qquad
\subfloat[]{\input{./images/fig_p93_16_b.tex}}
\caption{Geometrical deduction of the geodesic deviation equation in a Euclidean space.}
\label{fig:fig_p93_16}
\end{figure}
Hence, we can then say that,
$$ \frac{\left|AA^{'}\right|}{u_0}=\frac{\left|PP^{'}\right|}{u} $$
or, $$ \eta^r = A^r u + B^r $$ as the reference point $A$ can be chosen arbitrarily on the line $AP$.
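The reduced deviation equation can also be handed to a computer algebra system; a minimal sketch:

```python
import sympy as sp

s = sp.symbols('s')
eta = sp.Function('eta')

# geodesic deviation with R^r_{.smn} = 0 and vanishing Christoffel
# symbols reduces to: d^2 eta / ds^2 = 0
sol = sp.dsolve(sp.Eq(eta(s).diff(s, 2), 0), eta(s))

# the general solution is linear in s, i.e. eta^r = A^r s + B^r
assert sp.Poly(sol.rhs, s).degree() == 1
```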
$$\blacklozenge$$
\newpage
\section{p96 - Clarification}
\begin{tcolorbox}
... But under parallel propagation along a geodesic, a vector makes a constant angle with the geodesic; following the vector round the small quadrilateral, it is easy to see that the angle through which the vector has turned on completion of the circuit is $E$, the excess of the angle-sum of four right angles...
\end{tcolorbox}
\begin{figure}[h]
\input{./images/fig_p96_3415_a.tex}
\caption{Parallel transportation along a closed path}
\label{fig:fig_p96_3415_a}
\end{figure}
Consider 4 geodesics $\gamma_{1}, \gamma_{2},\gamma_{3},\gamma_{4}$ close to each other, so that they form a small quadrilateral. At each intersection they form an angle $\Psi_{i\rightarrow i+1}$. A vector $\overrightarrow{u_{0}}$ is propagated parallelly along the path starting at the intersection $S$ of $\gamma_{1},\gamma_{4}$ and ends as the vector $\overrightarrow{u_{t}}$ at the same point $S$. In general $\overrightarrow{u_{0}}\ne \overrightarrow{u_{t}}$; they will differ by a small angle $\delta\theta$. Let's investigate the relationship between $\delta\theta$ and the $\Psi_{i\rightarrow i+1}$.\\
\begin{figure}[h]
\input{./images/fig_p96_3415_b.tex}
\caption{Relationship between parallel transportation along a closed path and the excess of the angle-sum over four right angles of a quadrilateral.}
\label{fig:fig_p96_3415_b}
\end{figure}\\
Let $\xi_{i}$ and $\xi^{o}_{i} \ (i=1,2,3,4)$ be, respectively, the unit tangent vectors to the geodesics at the beginning and at the end of each arc between intersections. Let $\widehat{\xi}_{i}$ and $\widehat{\xi}^{o}_{i} \ (i=1,2,3,4)$ be the angles of these vectors relative to an arbitrary reference vector, and let $\widehat{\tau}_{i} \ (i=0,1,2,3,4)$ be the angles of the transported vector (relative to this arbitrary reference vector) at the intersections of the geodesics.
Then,
\begin{align}
\left \{ \begin{array}{ll}
\widehat{\tau}_{0} = \widehat{\xi}_{1}+\theta_{1}&\\
\widehat{\tau}_{1} = \widehat{\xi}^{o}_{1}+\theta_{1}&\widehat{\tau}_{1} = \widehat{\xi}_{2}+\theta_{2}\\
\widehat{\tau}_{2} = \widehat{\xi}^{o}_{2}+\theta_{2}&\widehat{\tau}_{2} = \widehat{\xi}_{3}+\theta_{3}\\
\widehat{\tau}_{3} = \widehat{\xi}^{o}_{3}+\theta_{3}&\widehat{\tau}_{3} = \widehat{\xi}_{4}+\theta_{4}\\
\widehat{\tau}_{4} = \widehat{\xi}^{o}_{4}+\theta_{4}&\\
\end{array} \right.
\end{align}
We have also
\begin{align}
\left \{ \begin{array}{l}
\widehat{\xi}_{2} - \widehat{\xi}^{o}_{1} = \Psi_{1 \rightarrow 2}\\
\widehat{\xi}_{3} - \widehat{\xi}^{o}_{2} = \Psi_{2 \rightarrow 3}\\
\widehat{\xi}_{4} - \widehat{\xi}^{o}_{3} = \Psi_{3 \rightarrow 4}\\
\widehat{\xi}_{1} - \widehat{\xi}^{o}_{4} = \Psi_{4 \rightarrow 1}\\
\widehat{\tau}_{4}-\widehat{\tau}_{0} = \delta \theta\\
\end{array} \right.
\end{align}
Combining (1) and (2)
\begin{align}
\left \{ \begin{array}{l}
\Psi_{1 \rightarrow 2} = \theta_{1}-\theta_{2}\\
\Psi_{2 \rightarrow 3}= \theta_{2}-\theta_{3}\\
\Psi_{3 \rightarrow 4}= \theta_{3}-\theta_{4}\\
\Psi_{4 \rightarrow 1}= \theta_{4}-\theta_{1}- \delta \theta\\
\end{array} \right.
\end{align}
and so
\begin{align}
\delta \theta = -(\Psi_{1 \rightarrow 2}+\Psi_{2 \rightarrow 3}+\Psi_{3 \rightarrow 4}+\Psi_{4 \rightarrow 1})
\end{align}\\
Note that these relationships are valid on the (curved) $V_{2}$ manifold. In order to go further we map the quadrilateral on the manifold onto its tangent plane (see fig.~\ref{fig:fig_p96_3415_d} (a) below), supposing that the quadrilateral is infinitesimally small and that we can find a conformal map from $\gamma$ to $T(\gamma)$.
\begin{figure}[h]%
\centering
\subfloat[]{\input{./images/fig_p96_3415_c.tex}}
\qquad
\subfloat[]{\input{./images/fig_p96_3415_d.tex}}
\caption{Relationship between parallel transportation along a closed path and the excess of the angle-sum over four right angles of a quadrilateral.}
\label{fig:fig_p96_3415_d}
\end{figure}\\
\\ Let's look at the point $p_2$ on $\partial \Omega$. We have $\nu_2 = \epsilon^{-}+\Psi_{1 \rightarrow 2} +\epsilon^{+}= \Psi_{1 \rightarrow 2}+\epsilon$. In general,
\begin{align}
\sum_{i=1}^{4} \nu_i &= 2\pi \\
\underbrace{\sum_{i=1}^{4} \Psi_{i \rightarrow i+1}}_{= - \delta \theta} +\sum_{i=1}^{4} \epsilon_{i} &= \sum_{i=1}^{4} \frac{\pi}{2} \\
\Rightarrow \quad - \delta \theta &= \sum_{i=1}^{4}(\frac{\pi}{2} - \epsilon _{i})
\end{align}
Calling $E = \sum_{i=1}^{4}\left(\epsilon_{i}-\frac{\pi}{2}\right)$ the excess of the angle-sum over four right angles, the last equation gives $\delta \theta = E$, the assertion made.
$$\blacklozenge$$
\newpage
\section{p98 - Clarification}
\begin{tcolorbox}
... it is easy to see that the expansion takes the form $$\mathbf{3.425.}\spatie \eta=\theta\left(s-\frac{1}{6}\epsilon K s^3 + \dots\right)$$
\end{tcolorbox}
Expanding $\eta$ in a power series gives
\begin{align}
\eta &= \underbrace{\left.\eta\right|_0}_{=0} + \underbrace{\left.\dv{\eta}{s}\right|_0 }_{=\theta}s +\half\underbrace{\left.\dv[2]{\eta}{s}\right|_0}_{=0}s^2 + \frac{1}{6}\left.\dv[3]{\eta}{s}\right|_0s^3 + \dots \\
\dv[2]{(1)}{s} \quad \Rightarrow\quad \dv[2]{\eta}{s} &=\left.\dv[3]{\eta}{s}\right|_0s+ \dots\\
\text{for } \lim_{ s \to 0} \ \text{ we have } \eta \approx \theta s \quad \text{so (2)} \quad \Rightarrow\quad \dv[2]{\eta}{s} &=\left.\dv[3]{\eta}{s}\right|_0\frac{\eta}{\theta}+ \dots\\
\lim_{ s \to 0}\quad \Rightarrow \quad \left.\dv[3]{\eta}{s}\right|_0 &= \theta\underbrace{\lim_{ s \to 0}\frac{1}{\eta}\dv[2]{\eta}{s}}_{= -\epsilon K}\\
\Rightarrow \quad \eta&=\theta\left(s-\frac{1}{6}\epsilon K s^3 + \dots\right)
\end{align}
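The expansion can be cross-checked by solving $\eta'' + \epsilon K \eta = 0$ with $\eta(0)=0$, $\eta'(0)=\theta$ exactly and expanding (assuming $\epsilon K > 0$, so the solution is a sine; the symbol `eK` below stands for the product $\epsilon K$):

```python
import sympy as sp

s, theta = sp.symbols('s theta')
eK = sp.symbols('epsilon_K', positive=True)  # product epsilon*K, assumed > 0

# exact solution of  eta'' + (epsilon K) eta = 0,  eta(0)=0,  eta'(0)=theta
eta = theta * sp.sin(sp.sqrt(eK) * s) / sp.sqrt(eK)

series = sp.series(eta, s, 0, 5).removeO()
expected = theta * (s - sp.Rational(1, 6) * eK * s**3)
assert sp.simplify(series - expected) == 0
```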
$$\blacklozenge$$
\newpage
\section{p102 - Clarification}
\begin{tcolorbox}
Still using the same notation and Fig. 7, we have at B the three vectors $(T^r)_1,\ (T^r)_2,\ Y_r$ ... it follows that $$\textbf{3.516.}\quad \quad (\Delta T^r)_A(Y_r)_{A'2}= - (\Delta T^r)_B(Y_r)_B$$
\end{tcolorbox}
First we note that for an invariant ``propagated parallelly'' along a curve we have $$\fdv{(T^r Y_r)}{u} = \fdv{T^r} {u}Y_r+T^r\fdv{Y_r}{u}=0$$
\begin{figure}[h]
\input{./images/fig_p102_3516_a.tex}
\caption{Parallel transportation along a closed path}
\label{fig:fig_p102_3516_a}
\end{figure}
In fig. ~\ref{fig:fig_p102_3516_a} we use the following convention:\\
- a line with a single black arrow means a forward propagation from $A$ to $B$ along $C_1$\\
- a line with a double black arrow means a forward propagation from $A$ to $B$ along $C_2$\\
- a line with a double open arrow means a backward propagation from $B$ to $A$ along $C_2$\\\\
In order to find the angular displacement of the vector $(T^r)_0$ when propagated parallelly from $A$ to $B$ and back to $A$ along different paths, we follow the dash-dotted line in Fig. 1.7. Following that path, we end with the vector $(T^r)_0+ (\Delta T^r)_{A}$ in $A$, the vector $(T^r)_1$ being the intermediate result of the forward propagation from $A$ to $B$ along $C_1$, which is then transported backwards to $A$ along $C_2$.\\\\
Note that for a vector the forward and backward propagation along the same curve is a null operation:$$(T^r)_0 \underset{\underset{C_2}{A\rightarrow B}}{\rightarrow} (T^r)_2 \underset{\underset{C_2}{B\rightarrow A}}{\rightarrow} (T^r)_0$$
Also, we have in Fig.1.7.
$$(T^r)_0 \underset{\underset{C_1}{A\rightarrow B}}{\rightarrow} (T^r)_1 \underset{\underset{C_2}{B\rightarrow A}}{\rightarrow} (T^r)_0+ (\Delta T^r)_{A}$$
$$(Y_r)_{B} \underset{\underset{C_2}{B\rightarrow A}}{\rightarrow} (Y_r)_{A,2}$$
At $A$ we form the following invariants
\begin{align}
\left \{ \begin{array}{l}
((T^r)_0+ (\Delta T^r)_{A})\ (Y_r)_{A,2}\\
(T^r)_0 \ (Y_r)_{A,2}
\end{array} \right.
\end{align}\\
and at $B$
\begin{align}
\left \{ \begin{array}{l}
(T^r)_1\ (Y_r)_{B}\\
(T^r)_2 \ (Y_r)_{B} = ((T^r)_1+(\Delta T^r)_B )\ (Y_r)_{B}
\end{array} \right.
\end{align}\\
Due to the null effect of parallel propagation on invariants, we get
\begin{align}
(T^r)_1\ (Y_r)_{B} &= ((T^r)_0+ (\Delta T^r)_{A})\ (Y_r)_{A,2}\\
((T^r)_1+(\Delta T^r)_B )\ (Y_r)_{B}&=(T^r)_0 \ (Y_r)_{A,2}\\
\text{(3)-(4)}\quad \Rightarrow \quad -(\Delta T^r)_B \ (Y_r)_{B}&=(\Delta T^r)_{A}\ (Y_r)_{A,2}
\end{align}\\
$$\blacklozenge$$
\newpage
\section{p105 - Clarification}
\begin{tcolorbox}
\begin{align*}
\textbf{3.521.} \spatie \dv{I}{v} &= \int_{u_1}^{u_2}\partial_v\left(T_n\partial_u x^n\right)du\\
&= \int_{u_1}^{u_2}\fdv{T_n}{v}\partial_u x^n du + \int_{u_1}^{u_2}T_n\fdv{\left(\partial_u x^n\right)}{v} du.
\end{align*}
Now $\fdv{T_n}{v}=0$, since $T_r$ is propagated along $\textit{all}$ curves in $V_n$.
\end{tcolorbox}
To better understand this last statement recall that from 3.515, we have
\begin{align*}
\left( \Delta T^r \right)_B \left( Y_r \right)_B &= \int \int Y_rR^r_{.pmn}T^p\partial _u x^m \partial _v x^n du dv\\
&= 0 \quad \text{as} \spatie R^r_{.pmn}=0
\end{align*}
As $\left( Y_r \right)_B$ is arbitrary we have $\left( \Delta T^r \right)_B =0$. Consider fig. ~\ref{fig:fig_p105_3521_a} below.\\
\begin{figure}[H]
\center
\input{./images/fig_p105_3521_a.tex}
\caption{Parallel transportation along a path in a space with zero curvature tensor}
\label{fig:fig_p105_3521_a}
\end{figure}
Consider the path $A\rightarrow P \rightarrow P^{'}$, $ P$ being situated at the parametric coordinates $(u, v)$ and $P^{'}$ at $(u, v + dv)$. For this path we also have $\left( \Delta T^r \right)_{P,P^{'}} =0$. So $(T^r)_{u,v} = (T^r)_A$ and $(T^r)_{u,v+dv} = (T^r)_{u,v}$, and thus $\fdv{T_r}{v}=0$.
$$\blacklozenge$$
\newpage
\section{p108 - Exercise 1}
\begin{tcolorbox}
Taking polar coordinates on a sphere of radius a, calculate the curvature tensor, the Ricci tensor, and the curvature invariant.
\end{tcolorbox}
We have
\begin{align}
\Phi = a^2d\theta^2 + a^2\sin^2 \theta d\phi^2
\end{align}
We only have to calculate $R_{1212}$ (see exercise page 86).
\begin{align}
R_{\theta\phi\theta\phi}&= \partial_{\theta}\underbrace{[\phi \phi,\theta]}_{= -a^2\sin\theta\cos\theta} -\partial_{\phi}\underbrace{[\phi \theta,\theta]}_{=0} + \underbrace{\Gamma^{\theta}_{\phi\theta}}_{=0}[\theta\phi,\theta]+ \underbrace{\Gamma^{\phi}_{\phi\theta}[\theta\phi,\phi]}_{=a^2\cos^2\theta} - \Gamma^{\theta}_{\phi\phi}\underbrace{[\theta\theta,\theta]}_{=0}- \underbrace{\Gamma^{\phi}_{\phi\phi}}_{=0}[\theta\theta,\phi]\\
&= a^2\sin^2\theta\\
\text{3.208. : }& \quad \frac{R_{11}}{a_{11}}=\frac{R_{22}}{a_{22}}=-\frac{R_{\theta\phi\theta\phi}}{det(a_{mn})}\quad\Rightarrow \quad \left \{ \begin{array}{l}
R_{11} = -1\\
R_{12} = 0\\
R_{22} = -\sin^2\theta
\end{array} \right.\\
\text{3.210. : }& \quad R=-\frac{2}{det(a_{mn})}R_{\theta\phi\theta\phi}\quad\Rightarrow \quad R= -\frac{2}{a^2}
\end{align}
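These results can be reproduced with sympy. The sketch below computes $R_{\theta\phi\theta\phi}$ from the Christoffel symbols with the standard sign convention $R^k_{.smn} = \partial_m\Gamma^k_{sn} - \partial_n\Gamma^k_{sm} + \Gamma^k_{pm}\Gamma^p_{sn} - \Gamma^k_{pn}\Gamma^p_{sm}$ (an assumption matching the sign of the result above), then applies relations 3.208 and 3.210 for the Ricci tensor and the curvature invariant:

```python
import sympy as sp

theta, phi, a = sp.symbols('theta phi a', positive=True)
coords = [theta, phi]
g = sp.diag(a**2, a**2 * sp.sin(theta)**2)   # metric of a sphere of radius a
ginv = g.inv()

def gamma(k, m, n):
    """Second-kind Christoffel symbol Gamma^k_{mn}."""
    return sp.Rational(1, 2) * sum(
        ginv[k, p] * (sp.diff(g[p, m], coords[n]) + sp.diff(g[p, n], coords[m])
                      - sp.diff(g[m, n], coords[p]))
        for p in range(2))

def riemann_down(r, s, m, n):
    """Covariant component R_{rsmn}, lowered with the metric."""
    def up(k):
        val = sp.diff(gamma(k, s, n), coords[m]) - sp.diff(gamma(k, s, m), coords[n])
        for p in range(2):
            val += gamma(k, p, m) * gamma(p, s, n) - gamma(k, p, n) * gamma(p, s, m)
        return val
    return sp.simplify(sum(g[r, k] * up(k) for k in range(2)))

R1212 = riemann_down(0, 1, 0, 1)             # R_{theta phi theta phi}
assert sp.simplify(R1212 - a**2 * sp.sin(theta)**2) == 0

det = sp.simplify(g.det())                   # a^4 sin^2(theta)
R11 = sp.simplify(-g[0, 0] * R1212 / det)    # via 3.208
R22 = sp.simplify(-g[1, 1] * R1212 / det)
Rinv = sp.simplify(-2 * R1212 / det)         # via 3.210
assert R11 == -1
assert sp.simplify(R22 + sp.sin(theta)**2) == 0
assert sp.simplify(Rinv + 2 / a**2) == 0
```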
$$\blacklozenge$$
\newpage
\section{p108 - Exercise 2}
\begin{tcolorbox}
Take as manifold $V_2$ the surface of an ordinary right circular cone, and consider one of the circular sections. A vector in $V_2$ is propagated parallely round this circle. Show that its direction is changed on completion of the circuit. Can you reconcile this result with the fact that $V_2$ is flat?
\end{tcolorbox}
\begin{figure}[h]
\center
\input{./images/fig_p108_Ex2_a.tex}
\caption{Intrinsic coordinates on a cone}
\label{fig:fig_p108_Ex2_a}
\end{figure}
We take as coordinate system $\left(u,\theta \right)$, embedded in the manifold, $u$ being the distance along the generator from the considered point to the apex of the cone, and $\theta$ the angle with an arbitrary vector lying in a plane perpendicular to the axis of the cone.\\
It is not hard to see that the fundamental form for this manifold is $$ \Phi = du^2+ \underbrace{k}_{= \ \sin^2 \alpha}u^2 d\theta^2 $$
We have
\begin{align*}
(a_{mn})= \begin{pmatrix}
1&0 \\
0& k u^2 \\
\end{pmatrix}\quad
(a^{mn})= \begin{pmatrix}
1&0 \\
0& \frac{1}{k u^2} \\
\end{pmatrix}\\
\begin{pmatrix}
\left[ mn,u \right] \\
\left[ mn,\theta \right] \\
\end{pmatrix}=\begin{pmatrix}
0&0&-k u \\
k u&0&0 \\
\end{pmatrix}\\
\begin{pmatrix}
\Gamma^u_{mn} \\
\Gamma^{\theta}_{mn} \\
\end{pmatrix}=\begin{pmatrix}
0&0&-k u \\
0&\frac{1}{u}&0 \\
\end{pmatrix}
\end{align*}
Let's calculate the curvature tensor. From the exercise on page 86 we know that for a $V_2$ all components of the curvature tensor can be expressed in terms of $R_{1212}$. We have
\begin{align*}
R_{u\theta u\theta} &= \left \{ \begin{array}{l}
\frac{1}{2}\left(\partial_{\theta u}a_{u \theta}+\partial_{u \theta}a_{\theta u}-\partial_{\theta \theta}a_{uu}-\partial_{u u}a_{\theta \theta} \right) \\
+ a^{pq}\left([u \theta,p][\theta u,q] -[u u,p][\theta \theta,q] \right)
\end{array} \right.
&= \left \{ \begin{array}{l}
-k \\
+ a^{uu}\left(\underbrace{[u \theta,u][\theta u,u] -[u u,u][\theta \theta,u]}_{=0} \right)\\
+ \underbrace{a^{u \theta}}_{=0}\left([u \theta,u][\theta u,\theta] -[u u,u][\theta \theta,\theta] \right)\\
+ \underbrace{a^{\theta u}}_{=0}\left([u \theta,\theta][\theta u,u] -[u u,\theta][\theta \theta,u] \right)\\
+ \underbrace{a^{\theta \theta}}_{=\frac{1}{ku^2}}\left(\underbrace{[u \theta,\theta][\theta u,\theta]}_{= k^2u^2} -\underbrace{[u u,\theta][\theta \theta,\theta]}_{=0} \right)\\
\end{array} \right.
\end{align*}
So indeed all components of the curvature tensor vanish and hence $V_2$ is flat.
Let's now calculate the parallel transportation of a vector $T^r$ along a circle somewhere on the cone. Taking $\theta$ as the parameter of the curve, the equation of the curve is $(u=u_0,\ \theta) \quad \theta \in \ \left[0, 2\pi \right)$. We have for parallel transportation along that curve $\fdv{T^r}{\theta}=0$ and get
\begin{align}
& \left \{ \begin{array}{l}
\dv{T^u}{\theta} + \Gamma^u_{\theta \theta}T^{\theta}\dv{\theta}{\theta} =0\\\\
\dv{T^{\theta}}{\theta} + \Gamma^{\theta}_{u \theta}T^{u}\dv{\theta}{\theta}+ \Gamma^{\theta}_{\theta u}T^{\theta}\underbrace{\dv{u}{\theta}}_{=0} =0\quad \left(\dv{u}{\theta} = 0 \quad \text{as} \quad u= C^{st} \right)
\end{array} \right.\\
\Rightarrow \quad
& \left \{ \begin{array}{l}
\dot{T^u} -ku_0 T^{\theta}=0\\\\
\dot{T}^{\theta} + \frac{1}{u_0}T^{u}=0
\end{array} \right.\\
\Rightarrow \quad
& \left \{ \begin{array}{l}
\frac{\ddot{T}^u}{T^{u}} = -k\\\\
\dot{T}^{\theta} =- \frac{T^{u}}{u_0}
\end{array} \right.
\end{align}
From the first equation of (3) we deduce that a solution can be of the form\\ $T^u = p^{'}\left( e^{(a \theta + b^{'})}+ e^{-(a \theta + b^{'})}\right)$. Substituting in (3) we see that $a^2 = -k \rightarrow a = \pm i\sqrt{k}$. Replacing $b^{'}$ by $ib$, the solution of the system of differential equations (3) becomes
\begin{align}
T^u &= p^{'}\left( e^{i(\sqrt{k} \theta + b)}+ e^{-i(\sqrt{k} \theta + b)}\right)\\
&= p\cos{\left(\sqrt{k} \theta + b \right)}\\
&= C_1\sin{\sqrt{k}\theta}+ C_2\cos{\sqrt{k}\theta}\\
\text{(6) in (3) gives:}\quad & \left \{ \begin{array}{l}
T^u = C_1\sin{\sqrt{k}\theta}+ C_2\cos{\sqrt{k}\theta}\\\\
T^{\theta} = \frac{1}{\sqrt{k}u_0}\left(C_1\cos{\sqrt{k}\theta}- C_2\sin{\sqrt{k}\theta}\right)\\
\end{array} \right.\\
\text{with} \quad & \left \{ \begin{array}{l}
C_1=\sqrt{k}u_0\left.T^{\theta}\right|_{\theta=0} \\\\
C_2=\left.T^{u}\right|_{\theta=0}\\
\end{array} \right.\\
\Rightarrow \quad & \left \{ \begin{array}{l}
T^u = \sqrt{k} u_0 T^{\theta}_{0}\sin{\sqrt{k}\theta}+ T^{u}_{0}\cos{\sqrt{k}\theta}\\\\
T^{\theta} = T^{\theta}_{0}\cos{\sqrt{k}\theta}- \frac{T^{u}_{0}}{\sqrt{k}u_0} \sin{\sqrt{k}\theta}\\
\end{array}\right.
\end{align}\\
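As a numerical sanity check of this closed-form solution (a sketch with arbitrary sample values for $k$, $u_0$ and the starting vector; not part of the book's solution), we can integrate the transport equations with a Runge--Kutta step and compare:

```python
import math

# Sanity check of the closed-form parallel-transport solution.
# k, u0 and the starting vector are arbitrary sample values.
k, u0 = 0.25, 2.0
Tu0, Tth0 = 1.0, 0.3

def closed_form(theta):
    s = math.sqrt(k)
    Tu = s * u0 * Tth0 * math.sin(s * theta) + Tu0 * math.cos(s * theta)
    Tth = Tth0 * math.cos(s * theta) - Tu0 / (s * u0) * math.sin(s * theta)
    return Tu, Tth

def rk4(theta_end, n=20000):
    # integrate dTu/dtheta = k u0 Tth, dTth/dtheta = -Tu/u0
    f = lambda Tu, Tth: (k * u0 * Tth, -Tu / u0)
    h = theta_end / n
    Tu, Tth = Tu0, Tth0
    for _ in range(n):
        k1 = f(Tu, Tth)
        k2 = f(Tu + h / 2 * k1[0], Tth + h / 2 * k1[1])
        k3 = f(Tu + h / 2 * k2[0], Tth + h / 2 * k2[1])
        k4 = f(Tu + h * k3[0], Tth + h * k3[1])
        Tu += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        Tth += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return Tu, Tth

theta = 2 * math.pi
exact = closed_form(theta)
num = rk4(theta)
norm0 = Tu0**2 + k * u0**2 * Tth0**2          # |T_0|^2
norm = exact[0]**2 + k * u0**2 * exact[1]**2  # |T|^2, conserved
print(exact, num, norm0, norm)
```

The conserved norm is the metric length $\left(T^u\right)^2 + ku_0^2\left(T^\theta\right)^2$, as used in the angle computation below.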
Let's now compute the angle $\phi$ between the starting vector $T^r_{0}$ and the vector $T^r$ parallel-transported over an angle $\theta$ on the circle. We have (see $\textbf{(2.301.)}$ and $\textbf{(2.312.)}$):
\begin{align*}
\left \{ \begin{array}{l}
\left| T^r_0 \right|^2 = \left(T^u_0\right)^2 + k u^2_0\left(T^{\theta}_0\right)^2\\\\
\left| T^r \right|^2 = \left(T^u_0\right)^2 + k u^2_0\left(T^{\theta}_0\right)^2\\\\
\cos{\phi} = \frac{T^u_0 T^u+k u^2_0T^{\theta}_0 T^{\theta} }{\left(T^u_0\right)^2 + k u^2_0\left(T^{\theta}_0\right)^2}
\end{array} \right.
\end{align*}
The last equation of the system above becomes
\begin{align*}
\cos{\phi} &= \cos{\sqrt{k}\theta} \\
\Rightarrow \quad \phi &= \sqrt{k}\theta + 2m\pi\quad\quad (m= 0,\pm 1,\pm 2, \dots)
\end{align*}
So for $\theta = 2\pi$ the angle between the starting vector and the transported vector is $2\pi\sqrt{k}$, which in general is not a multiple of $2\pi$: after one full turn the transported vector does not coincide with the starting vector.
\begin{figure}[H]%
\centering
\subfloat[]{\input{./images/fig_p108_Ex2_b1.tex}}
\qquad
\subfloat[]{\input{./images/fig_p108_Ex2_b2.tex}}
\caption{Relationship between parallel transportation along a circle of a cone and unwrapping a cone.}
\label{fig:fig_p108_Ex2_b2}
\end{figure}
Fig.~\ref{fig:fig_p108_Ex2_b2} illustrates the analogy between\\
(a): the parallel transportation of a vector $P$ along a circular curve $\gamma$ on the cone over an angle $\theta$, giving a vector $P_t$ making an angle $\phi = \sqrt{k}\theta$ with the starting vector, and \\
(b): the result of "unwrapping" a cone. Consider two vectors $\overrightarrow{OP}$ and $\overrightarrow{OP_t}$, $P$ and $P_t$ being two points placed at a distance $r\theta$ along a circle with radius $r$. Placing the vector $\overrightarrow{OP}$ in the $XY$-plane and unwrapping the cone over an angle $\theta$ maps the vector $\overrightarrow{OP_t}$ to a vector $\overrightarrow{OP_t^{*}}$ in the $XY$-plane making an angle $\phi = \sqrt{k}\theta$ with the vector $\overrightarrow{OP}$.\\
The transported vector will only coincide with the initial vector for $\sqrt{k}=\sin{\alpha} = \frac{1}{n}\quad (n= 1, 2, \dots)$. Fig.~\ref{fig:fig_p108_Ex2_b3} illustrates this in the $XY$-plane for $n=3$: only after transportation over three full turns does the transported vector coincide with the initial vector. The dotted area represents the unwrapped cone.
\begin{figure}[H]%
\centering
{\input{./images/fig_p108_Ex2_b3.tex}}
\caption{Parallel transportation along a circle of a cone with $\sin{\alpha} = \frac{1}{3}$.}
\label{fig:fig_p108_Ex2_b3}
\end{figure}
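The closure condition can also be checked numerically (a sketch with arbitrary sample values for $u_0$ and the starting vector; $\sqrt{k}=1/3$ as in the figure): transporting over one turn does not restore the vector, transporting over three turns does.

```python
import math

# With sqrt(k) = 1/3 (cone with sin(alpha) = 1/3) the transported vector
# returns to its initial components only after three full turns.
# u0 and the starting vector are arbitrary sample values.
sqk, u0 = 1.0 / 3.0, 1.5
Tu0, Tth0 = 0.7, -0.4

def transport(theta):
    Tu = sqk * u0 * Tth0 * math.sin(sqk * theta) + Tu0 * math.cos(sqk * theta)
    Tth = Tth0 * math.cos(sqk * theta) - Tu0 / (sqk * u0) * math.sin(sqk * theta)
    return Tu, Tth

one_turn = transport(2 * math.pi)     # phi = 2*pi/3: vector rotated
three_turns = transport(6 * math.pi)  # phi = 2*pi: vector restored
print(one_turn, three_turns)
```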
$$\blacklozenge$$
\newpage
\section{p109 - Exercise 3 and 4}
$\mathbf{Exercise \ 3.}$\\
\begin{tcolorbox}
Consider the equations $$ \left( R_{mn} -\theta a_{mn}\right)X^n = 0$$ where $R_{mn}$ is the Ricci tensor in a $V_N \ (N>2)$, $\theta$ an invariant, and $X^n$ a vector. Show that, if these equations are to be consistent, $\theta$ must have one of a certain set of $N$ values, and that the vectors $X^n$ corresponding to different values of $\theta$ are perpendicular to one another. (The directions of these vectors are called the $\mathit{Ricci \ principal \ directions}$).
\end{tcolorbox}
\begin{align}
\left( R_{mn} -\theta a_{mn}\right)X^n = 0\\
(1)\times (a^{mp}) \quad \Rightarrow \quad a^{mp} R_{mn}X^n -\theta \underbrace{a^{mp}a_{mn}}_{=\delta^p_n}X^n =0\\
\Rightarrow \quad a^{mp} R_{mn}X^n -\theta X^p =0
\end{align}
Define $T^{p}_{n} = a^{mp} R_{mn} $, then (3) can be written in matrix form with $\mathbf{T}\equiv (T^{p}_{n})$ , $\mathbf{X}\equiv (X^p)$ and $\mathbf{I} \equiv (\delta^i_j)$
\begin{align}
\left(\mathbf{T}-\theta \mathbf{I}\right)\mathbf{X} =0
\end{align}
This is an eigenvector equation with $\mathbf{T}$ Hermitian, i.e. $\mathbf{T}^{\dag} =\mathbf{T}$. Indeed, obviously the complex conjugate $\mathbf{\overline{T}} = \textbf{T}$, and
\begin{align*}
\mathbf{T}^T &= \left(\mathbf{AR}\right)^T\\
&= \mathbf{R}^T \mathbf{A}^T\\
\Leftrightarrow \left(T^{j}_{i}\right) &= \left(R_{kj}\right)^T\left(a^{ik}\right)^T\\
&= \left(R_{jk}\right)\left(a^{ki}\right)
\end{align*}
as both $ R_{jk}, \ a^{ki}$ are symmetric we have
\begin{align*}
\left(T^{j}_{i}\right) &= \left(R_{kj}\right)\left(a^{ik}\right)\\
&= \left(T^{i}_{j}\right)\\
\Rightarrow \spatie \mathbf{T}^{\dag} &=\mathbf{T}
\end{align*}
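A minimal numerical illustration of why this symmetry matters (assuming, for simplicity, $a_{mn}=\delta_{mn}$ so that $T^p_{\ n}=R_{pn}$ is an ordinary symmetric matrix; the entries below are arbitrary sample values): the eigenvalues come out real and the eigenvectors of distinct eigenvalues are orthogonal.

```python
import math

# Illustration with a_mn = delta_mn, so T^p_n = R_pn is an ordinary
# symmetric matrix (entries are arbitrary sample values): its eigenvalues
# are real, and eigenvectors of distinct eigenvalues are orthogonal.
T = [[2.0, 1.0],
     [1.0, 3.0]]

# characteristic polynomial: theta^2 - tr*theta + det = 0
tr = T[0][0] + T[1][1]
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
disc = tr * tr - 4 * det   # = (T11 - T22)^2 + 4*T12^2 >= 0: real roots
th1 = (tr + math.sqrt(disc)) / 2
th2 = (tr - math.sqrt(disc)) / 2

# eigenvector of (T - theta I) X = 0: X = (T12, theta - T11)
X1 = (T[0][1], th1 - T[0][0])
X2 = (T[0][1], th2 - T[0][0])
dot = X1[0] * X2[0] + X1[1] * X2[1]
print(th1, th2, dot)   # dot ~ 0
```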
This means that the $N$ roots of $\det\left(\mathbf{T}-\theta \mathbf{I}\right)=0$, which is a necessary condition for equation (4) to be consistent, are real. Hence $\theta$ takes $N$ values, the eigenvalues of the transformation matrix $\textbf{T}$. If all eigenvalues have multiplicity one, the $N$ eigenvectors in (4) corresponding to the $N$ eigenvalues are orthogonal to each other. But can eigenvalues with algebraic multiplicity $m > 1$ occur? The answer is yes. Let's write $P(\theta) = \det\left(\mathbf{T}-\theta \mathbf{I}\right)$, up to an overall sign, as $\theta^{N} + \sum_{i=0}^{N-1} q_i \theta^{i}$ with the $q_i$ functions of the Ricci tensor components. The condition for an eigenvalue with algebraic multiplicity $m > 1$ to occur is that the determinant of the Sylvester matrix of the two following polynomials is zero.
\begin{align}
\left \{ \begin{array}{l}
P(\theta) = \theta^{N} + \sum_{i=0}^{N-1} q_i \theta^{i}\\
\dv{P(\theta)}{\theta} = N \theta^{N-1} + \sum_{i=1}^{N-1} i\, q_i \theta^{i-1}
\end{array} \right.
\end{align}
The Sylvester matrix associated with these two polynomials is of the form $$ S \left( P(\theta),\dv{P(\theta)}{\theta}\right) = \left( \begin{array}{cccccccc}
1&q_{N-1}&\dots&q_0&0&0&\dots&0\\
0&1&q_{N-1}&\dots&q_0&0&\dots&0\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
0&\dots&0&1&q_{N-1}&\dots &q_1&q_0\\
N&(N-1)q_{N-1}&\dots&q_1&0&0&\dots&0\\
0&N&(N-1)q_{N-1}&\dots&q_1&0&\dots&0\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
0&\dots&0&N&(N-1)q_{N-1}&\dots &2q_2&q_1\\
\end{array} \right)
$$
If the determinant of this matrix is not zero, no eigenvalue has algebraic multiplicity $m>1$. Otherwise, one has to check whether, in the eigenspace related to each eigenvalue with algebraic multiplicity $m > 1$, $m$ linearly independent eigenvectors can be found.\\\\
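The Sylvester-matrix criterion can be sketched concretely (the two cubics below are hypothetical sample polynomials, not tied to any particular Ricci tensor): the determinant, i.e. the resultant of $P$ and $\dv{P(\theta)}{\theta}$, vanishes exactly when $P$ has a repeated root.

```python
# Resultant via the Sylvester matrix: det S(P, P') = 0 exactly when P has
# a repeated root. The two cubics below are hypothetical sample polynomials.
def sylvester(p, q):
    # p, q: coefficient lists, highest degree first
    n, m = len(p) - 1, len(q) - 1
    S = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(m):                      # m shifted copies of p
        for j, c in enumerate(p):
            S[i][i + j] = float(c)
    for i in range(n):                      # n shifted copies of q
        for j, c in enumerate(q):
            S[m + i][i + j] = float(c)
    return S

def det(M):
    # Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[piv][i]) < 1e-12:
            return 0.0
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def dpoly(p):                               # derivative of a polynomial
    n = len(p) - 1
    return [c * (n - j) for j, c in enumerate(p[:-1])]

p_simple = [1, 0, -1, 0]   # theta^3 - theta: three distinct roots
p_double = [1, -2, 1, 0]   # theta*(theta - 1)^2: repeated root theta = 1
r1 = det(sylvester(p_simple, dpoly(p_simple)))
r2 = det(sylvester(p_double, dpoly(p_double)))
print(r1, r2)   # r1 nonzero, r2 ~ 0
```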
$\mathbf{Exercise \ 4.}$
\begin{tcolorbox}
What becomes of the Ricci principal directions (see above) if $N=2$?
\end{tcolorbox}
From $\mathbf{3.208.}$ we have
\begin{align*}
\frac{R_{11}}{a_{11}} &= \frac{R_{12}}{a_{12}}=\frac{R_{22}}{a_{22}}=-\frac{R_{1212}}{a}\\
\Rightarrow\quad & \left \{ \begin{array}{l}
T^{1}_{1} = a^{11}R_{11} + a^{12}R_{21}\\
T^{1}_{2} = a^{11}R_{12} + a^{12}R_{22}\\
T^{2}_{2} = a^{21}R_{12} + a^{22}R_{22}\\
\end{array} \right.\\
\text{put } K=-\frac{R_{1212}}{a} \quad \Rightarrow\quad & \left \{ \begin{array}{l}
T^{1}_{1} = K a^{11}a_{11} + K a^{12}a_{12}\\
T^{1}_{2} = Ka^{11}a_{12} + Ka^{12}a_{22}\\
T^{2}_{2} = Ka^{21}a_{12} + Ka^{22}a_{22}\\
\end{array} \right.\quad
\Rightarrow\quad & \left \{ \begin{array}{l}
T^{1}_{1} = K\delta^1_1 = K \\
T^{1}_{2} = K\delta^1_2 = 0\\
T^{2}_{2} = K\delta^2_2 = K\\
\end{array} \right.
\end{align*}
hence the characteristic equation $\det\left(\mathbf{T}-\theta \mathbf{I} \right)=0$ becomes $(K-\theta)^2=0$. So only one value of $\theta$ exists, namely $\theta = K$. Equation (4) becomes $\mathbf{0}\mathbf{X}=0$, so we can choose any pair of linearly independent vectors as eigenvectors and can make them perpendicular to one another.
$$\blacklozenge$$
\newpage
\section{p109 - Exercise 5}
\begin{tcolorbox}
Suppose that two spaces $V_N$, $V^{'}_N$ have metric tensors $a_{mn}, \ a^{'}_{mn}$ such that $\ a^{'}_{mn}=k\ a_{mn} $, where $k$ is a constant. Write down the relations between the curvature tensors, the Ricci tensors, and the curvature invariants of the two spaces.
\end{tcolorbox}
We have
\begin{align*}
ds^2 &= a_{mn}dx^{m} dx^{n}\\
ds^{'2}&= a^{'}_{mn} dx^{'m}dx^{'n}
\end{align*}
But let's be careful: there is no reason to assume that $dx^m = dx^{'m}$. Let's embed the two spaces in a space $V_{N+1}$. If an observer in that space sees two displacements $ds^2$ and $ds^{'2}$ which for him have the same magnitude, we have
\begin{figure}[H]
\centering
\begin{minipage}[t]{.4\textwidth}
%\centering
\vspace{0pt}
%\includegraphics[scale=.5]{p85_ex1.png}
\input{./images/fig_p109_Ex5_a.tex}
\end{minipage}\hfill
\caption{Embedded $V_N$ spaces}
\label{fig:p109_Ex5_a}
\end{figure}
\begin{align*}
ds^{'2} &= ds^2\\
\Rightarrow \spatie a^{'}_{mn} dx^{'m}dx^{'n} &= a_{mn}dx^{m} dx^{n}\\
\Rightarrow \spatie k a_{mn} dx^{'m}dx^{'n} &= a_{mn}dx^{m} dx^{n}\\
\Rightarrow \spatie dx^{'m} &= \frac{1}{\sqrt{k}}dx^{m}
\end{align*}
We have also
\begin{align*}
a^{'}_{mk}a^{'kn} &= \delta^n_m\\
\Rightarrow \spatie ka_{mp}a^{'pn} &= \delta^n_m\\
\Rightarrow \spatie a^{'pn} &= \frac{1}{k}a^{pn}
\end{align*}
From these we get the following relations
\begin{align*}
\left \{ \begin{array}{l}
[mn,r]^{'} = \sqrt{k}^3[mn,r]\\\\
\Gamma^{'r}_{.mn} =\sqrt{k}\Gamma^{r}_{.mn}
\end {array}\right.
\end{align*}
\begin{align*}
\Rightarrow \spatie &R^{'s}_{\ .rmn} = \frac{\partial \Gamma^{'s}_{\ .rn}}{\partial x^{'m}} + \dots\\
\Rightarrow \spatie &R^{'s}_{\ .rmn} = \frac{\sqrt{k}\partial \Gamma^{s}_{.rn}}{\partial \left(\frac{x^{m}}{\sqrt{k}}\right)} + \dots\\
\Rightarrow \spatie &R^{'s}_{\ .rmn} = k R^{s}_{.rmn}\\
\times \ a^{'}_{ks}\spatie \Rightarrow \spatie & a^{'}_{ks}R^{'s}_{\ .rmn} = k a^{'}_{ks}R^{s}_{.rmn}\\
\Rightarrow \spatie &R^{'}_{krmn} = k\, k\, a_{ks}R^{s}_{.rmn}\\
\Rightarrow \spatie & R^{'}_{krmn} = k^2 R_{krmn}\\
R_{rm} = R^n_{.rmn} \quad \Rightarrow \spatie &R^{'}_{rm} = k R_{rm}\\
R = a^{mn}R_{mn} \quad \Rightarrow \spatie &R^{'} = a^{'mn}R^{'}_{mn}\\
\Rightarrow \spatie &R^{'} = \frac{1}{k}a^{mn}\, k\, R_{mn}\\
\Rightarrow \spatie &R^{'} = R
\end{align*}
$\mathbf{Summary}$\\
\begin{align*}
R^{'s}_{\ .rmn} &= k R^{s}_{.rmn}\\
R^{'}_{krmn} &= k^2 R_{krmn}\\
R^{'}_{rm} &= k R_{rm}\\
R^{'} &= R
\end{align*}
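The scaling relation $[mn,r]^{'} = k^{3/2}[mn,r]$ for the Christoffel symbols of the first kind can be checked by finite differences (the $2\times 2$ metric and the evaluation point below are arbitrary sample choices for the test):

```python
import math

# Finite-difference check of [mn,r]' = k^(3/2) [mn,r] for a'_mn = k a_mn
# and x'^m = x^m / sqrt(k). The 2x2 metric and the point are sample choices.
k = 2.0
sqk = math.sqrt(k)

def a(x):                         # sample metric a_mn(x)
    x1, x2 = x
    return [[1 + x1 * x1, x1 * x2], [x1 * x2, 2 + x2 * x2]]

def a_prime(xp):                  # primed metric, as a function of x'
    return [[k * c for c in row] for row in a([sqk * xp[0], sqk * xp[1]])]

def christoffel1(metric, x, m, n, r, h=1e-5):
    # [mn,r] = 1/2 (d_m a_nr + d_n a_mr - d_r a_mn), by central differences
    def d(i, f):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (f(xp) - f(xm)) / (2 * h)
    return 0.5 * (d(m, lambda y: metric(y)[n][r])
                  + d(n, lambda y: metric(y)[m][r])
                  - d(r, lambda y: metric(y)[m][n]))

x0 = [0.3, 0.7]
xp0 = [x0[0] / sqk, x0[1] / sqk]
ratios = [christoffel1(a_prime, xp0, m, n, r) / christoffel1(a, x0, m, n, r)
          for m in range(2) for n in range(2) for r in range(2)
          if abs(christoffel1(a, x0, m, n, r)) > 1e-6]
print(ratios)   # each entry should be close to k**1.5
```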
$$\blacklozenge$$
\newpage
\section{p109 - Exercise 6}
\begin{tcolorbox}
For an orthogonal coordinate system in a $V_2$ we have $$ ds^2=a_{11}\left( dx^1\right)^2+a_{22}\left( dx^2\right)^2$$
Show that
$$\frac{1}{a}R_{1212}= -\half\frac{1}{\sqrt{a}}\left[\partial_1\left(\frac{1}{\sqrt{a}}\partial_1 a_{22}\right)+\partial_2\left(\frac{1}{\sqrt{a}}\partial_2 a_{11}\right)\right]$$
\end{tcolorbox}
We have
\begin{align}
\left(a_{mn}\right)= \begin{pmatrix}
a_{11}& 0 \\
0& a_{22} \\
\end{pmatrix}\quad \left(a^{mn}\right)= \frac{1}{a}\begin{pmatrix}
a_{22}& 0 \\
0& a_{11} \\
\end{pmatrix}\quad
a= a_{11}a_{22}
\end{align}
We have also \\
\begin{align}
R &= -\frac{2}{a}R_{1212}\\
R = a^{mn}R_{mn}\quad\Rightarrow\quad R &= a^{11}R_{11}+ a^{22}R_{22}
\end{align}
The pattern of equations $(2)$ and $(3)$ suggests that combining them could lead to the proposed equation. Let's have a try ...
\begin{align}
& \left \{ \begin{array}{ll}
\Gamma^1_{11} = \half\frac{a_{22}}{a}\partial_1 a_{11}&\Gamma^1_{22} =- \half\frac{a_{22}}{a}\partial_1 a_{22}\\\\
\Gamma^2_{11} =- \half\frac{a_{11}}{a}\partial_2 a_{11}&\Gamma^2_{22} = \half\frac{a_{11}}{a}\partial_2 a_{22}\\\\
\Gamma^1_{12} = \half\frac{a_{22}}{a}\partial_2 a_{11}&\Gamma^2_{12} = \half\frac{a_{11}}{a}\partial_1 a_{22}
\end {array}\right. \\
\text{3.205.}\quad\Rightarrow\quad & R_{rm} = \half\partial_{rm} \log a - \half \Gamma^p_{rm}\partial_p \log a-\partial_n\Gamma^n_{rm} + \Gamma^p_{rn}\Gamma^n_{pm}\\
\Rightarrow\quad &\left \{ \begin{array}{l}
R_{11} = \half\partial_{11} \log a - \half \Gamma^1_{11}\partial_1 \log a- \half \Gamma^2_{11}\partial_2 \log a \\
-\partial_1\Gamma^1_{11}-\partial_2\Gamma^2_{11} +
\\ \Gamma^1_{11}\Gamma^1_{11}+ \Gamma^1_{12}\Gamma^2_{11}+ \Gamma^2_{11}\Gamma^1_{21}+ \Gamma^2_{12}\Gamma^2_{21}\\\\
R_{22} = \half\partial_{22} \log a - \half \Gamma^1_{22}\partial_1 \log a- \half \Gamma^2_{22}\partial_2 \log a \\
-\partial_1\Gamma^1_{22}-\partial_2\Gamma^2_{22} +
\\ \Gamma^1_{21}\Gamma^1_{12}+ \Gamma^1_{22}\Gamma^2_{12}+ \Gamma^2_{21}\Gamma^1_{22}+ \Gamma^2_{22}\Gamma^2_{22}\\\\
\end {array}\right.
\end{align}
%**************************************************
\begin{align}
\Rightarrow\quad &\left \{ \begin{array}{l}
R_{11} = \half\partial_{11} \log a - \half \half\frac{a_{22}}{a}\partial_1 a_{11}\partial_1 \log a- \half (- \half\frac{a_{11}}{a}\partial_2 a_{11})\partial_2 \log a \\
-\partial_1(\half\frac{a_{22}}{a}\partial_1 a_{11})-\partial_2(- \half\frac{a_{11}}{a}\partial_2 a_{11}) +
\\ \half\frac{a_{22}}{a}\partial_1 a_{11}\half\frac{a_{22}}{a}\partial_1 a_{11}+ \half\frac{a_{22}}{a}\partial_2 a_{11}(- \half\frac{a_{11}}{a}\partial_2 a_{11})+ \\
(- \half\frac{a_{11}}{a}\partial_2 a_{11})\half\frac{a_{22}}{a}\partial_2 a_{11}+ \half\frac{a_{11}}{a}\partial_1 a_{22}\half\frac{a_{11}}{a}\partial_1 a_{22}\\\\
R_{22} = \half\partial_{22} \log a - \half (- \half\frac{a_{22}}{a}\partial_1 a_{22})\partial_1 \log a- \half \half\frac{a_{11}}{a}\partial_2 a_{22}\partial_2 \log a \\
-\partial_1(- \half\frac{a_{22}}{a}\partial_1 a_{22})-\partial_2(\half\frac{a_{11}}{a}\partial_2 a_{22}) +
\\ \half\frac{a_{22}}{a}\partial_2 a_{11}\half\frac{a_{22}}{a}\partial_2 a_{11}+ (- \half\frac{a_{22}}{a}\partial_1 a_{22})\half\frac{a_{11}}{a}\partial_1 a_{22}+ \\
\half\frac{a_{11}}{a}\partial_1 a_{22}(- \half\frac{a_{22}}{a}\partial_1 a_{22})+ \half\frac{a_{11}}{a}\partial_2 a_{22}\half\frac{a_{11}}{a}\partial_2 a_{22}\\\\
\end {array}\right.
\end{align}
To lighten the notation, replace $a_{11}$ by $\gamma$ and $a_{22}$ by $\eta$:
\begin{align}
\Rightarrow\quad &\left \{ \begin{array}{l}
R_{11} = \half\partial_{11} \log a - \half \half\frac{1}{\gamma}\partial_1 \gamma\partial_1 \log a+ \half \half\frac{1}{\eta}\partial_2 \gamma\partial_2 \log a \\
-\half\partial_1(\frac{1}{\gamma}\partial_1 \gamma)+\half\partial_2(\frac{1}{\eta}\partial_2 \gamma)
\\ + \half\half\frac{1}{\gamma}\frac{1}{\gamma}\partial_1 \gamma\partial_1 \gamma- \half\half\frac{1}{\gamma}\frac{1}{\eta}\partial_2 \gamma \partial_2 \gamma \\
- \half\half\frac{1}{\gamma}\frac{1}{\eta}\partial_2 \gamma\partial_2 \gamma+ \half\half\frac{1}{\eta}\frac{1}{\eta}\partial_1 \eta\partial_1 \eta\\\\
R_{22} = \half\partial_{22} \log a + \half\half\frac{1}{\gamma}\partial_1 \eta\partial_1 \log a- \half \half\frac{1}{\eta}\partial_2 \eta\partial_2 \log a \\
+\half\partial_1(\frac{1}{\gamma}\partial_1 \eta)-\half\partial_2(\frac{1}{\eta}\partial_2 \eta)
\\ + \half\half\frac{1}{\gamma}\frac{1}{\gamma}\partial_2 \gamma\partial_2 \gamma- \half\half\frac{1}{\gamma}\frac{1}{\eta}\partial_1 \eta\partial_1 \eta \\
- \half\half\frac{1}{\gamma}\frac{1}{\eta}\partial_1 \eta \partial_1 \eta+ \half\half\frac{1}{\eta}\frac{1}{\eta}\partial_2 \eta\partial_2 \eta\\\\
\end {array}\right.
\end{align}
%****************************************************
Noting that $\partial_{ii}\log a = \partial_{i}\left(\frac{1}{a_{11}}\partial_{i} a_{11}\right) + \partial_{i}\left(\frac{1}{a_{22}}\partial_{i} a_{22}\right)$ and $\partial_{i}\log a = \frac{1}{a_{11}}\partial_{i} a_{11} + \frac{1}{a_{22}}\partial_{i} a_{22}\quad (i=1,2)$, we get:\\
\begin{align}
%****************************
\left| \begin{array}{c}
2R_{11} = \\\\
\underbrace{\partial_{1}\left(\frac{1}{\gamma}\partial_{1} \gamma\right)}_{*}+\partial_{1}\left(\frac{1}{\eta}\partial_{1} \eta\right) \\\\
- \underbrace{\half \frac{1}{\gamma}\frac{1}{\gamma}(\partial_1 \gamma )^2}_{-} - \half \frac{1}{\gamma}\frac{1}{\eta}\partial_1 \gamma \partial_{1} \eta \\\\
+ \half\frac{1}{\gamma}\frac{1}{\eta}(\partial_2 \gamma)^2 +\half\frac{1}{\eta}\frac{1}{\eta}\partial_2 \gamma \partial_{2} \eta \\\\
-\underbrace{\partial_1(\frac{1}{\gamma}\partial_1 \gamma)}_{*}+\partial_2( \frac{1}{\eta}\partial_2 \gamma)
\\\\
+ \underbrace{\half\frac{1}{\gamma}\frac{1}{\gamma}(\partial_1 \gamma)^2}_{-}- \underbrace{\half \frac{1}{\gamma}\frac{1}{\eta}(\partial_2 \gamma )^2}_{+} \\\\
- \underbrace{\half \frac{1}{\gamma}\frac{1}{\eta}(\partial_2 \gamma)^2}_{+}+ \half\frac{1}{\eta}\frac{1}{\eta}(\partial_1 \eta)^2\\\\
\end {array}\quad
\right.
\left | \begin{array}{c}
2R_{22} = \\\\
\partial_{2}\left(\frac{1}{\gamma}\partial_{2} \gamma\right)+\underbrace{\partial_{2}\left(\frac{1}{\eta}\partial_{2} \eta\right)}_{*} \\\\
+ \half\frac{1}{\gamma}\frac{1}{\gamma}\partial_{1} \gamma\partial_1 \eta + \half\frac{1}{\gamma}\frac{1}{\eta}(\partial_1 \eta )^2 \\\\
- \half \frac{1}{\gamma}\frac{1}{\eta}\partial_{2} \gamma\partial_2 \eta - \underbrace{\half \frac{1}{\eta}\frac{1}{\eta}(\partial_2 \eta)^2 }_{-} \\\\
+\partial_1(\frac{1}{\gamma}\partial_1 \eta)-\underbrace{\partial_2( \frac{1}{\eta}\partial_2 \eta)}_{*}
\\\\
+ \half\frac{1}{\gamma}\frac{1}{\gamma}(\partial_2 \gamma)^2- \underbrace{\half\frac{1}{\gamma}\frac{1}{\eta}(\partial_1 \eta )^2}_{+}\\\\
- \underbrace{\half\frac{1}{\gamma}\frac{1}{\eta}(\partial_1 \eta)^2}_{+}+ \underbrace{\half \frac{1}{\eta}\frac{1}{\eta}(\partial_2 \eta)^2}_{-}\\\\
\end {array}\right|
\end{align}
\begin{align}
%****************************
\Rightarrow\quad &\left| \begin{array}{c}
2R_{11} = \\\\
\partial_{1}\left(\frac{1}{\eta}\partial_{1} \eta\right)+ \partial_2( \frac{1}{\eta}\partial_2 \gamma)\\\\
+ \half\frac{1}{\eta}\frac{1}{\eta}(\partial_1 \eta)^2 - \half\frac{1}{\gamma}\frac{1}{\eta}(\partial_2 \gamma)^2\\\\ - \half \frac{1}{\gamma}\frac{1}{\eta}\partial_1 \gamma \partial_{1} \eta +\half\frac{1}{\eta}\frac{1}{\eta}\partial_2 \gamma \partial_{2} \eta \\
\end{array}\right.\quad
\left|\begin{array}{c}
\spatie 2R_{22} = \\\\
\partial_1(\frac{1}{\gamma}\partial_1 \eta) +\partial_{2}\left(\frac{1}{\gamma}\partial_{2} \gamma\right)\\\\
- \half\frac{1}{\gamma}\frac{1}{\eta}(\partial_1 \eta )^2 + \half\frac{1}{\gamma}\frac{1}{\gamma}(\partial_2 \gamma)^2\\\\
+ \half\frac{1}{\gamma}\frac{1}{\gamma}\partial_{1} \gamma\partial_1 \eta - \half \frac{1}{\gamma}\frac{1}{\eta}\partial_{2} \gamma\partial_2 \eta \\
\end {array}\right|
\end{align}\\\\
%****************************
Forming $R = \frac{1}{\gamma}R_{11}+ \frac{1}{\eta}R_{22}$, all products of first-order derivatives cancel and we get\\\\
\begin{align}
\frac{1}{\gamma}R_{11}+ \frac{1}{\eta}R_{22} = &
\half \left[ \frac{1}{\eta}\partial_1(\frac{1}{\gamma}\partial_1 \eta)+ \frac{1}{\gamma} \partial_{1}(\frac{1}{\eta}\partial_{1} \eta)\right]
+ \half \left[\frac{1}{\gamma}\partial_{2}(\frac{1}{\gamma}\partial_{2} \gamma)+ \frac{1}{\eta}\partial_2( \frac{1}{\eta}\partial_2 \gamma) \right]
\end{align}\\
We can simplify this expression further. Given the symmetry of $(11)$, we only write out the calculations for the terms in $\partial_1$.
\begin{align}
\frac{1}{\eta}\partial_1(\frac{1}{\gamma}\partial_1 \eta)+ \frac{1}{\gamma} \partial_{1}(\frac{1}{\eta}\partial_{1} \eta)&=\frac{1}{\eta}\partial_1(\frac{1}{\sqrt{\gamma}}\frac{1}{\sqrt{\gamma}}\frac{\sqrt{\eta}}{\sqrt{\eta}}\partial_1 \eta)+ \frac{1}{\gamma} \partial_{1}(\frac{1}{\sqrt{\eta}}\frac{1}{\sqrt{\eta}}\frac{\sqrt{\gamma}}{\sqrt{\gamma}}\partial_{1} \eta)\\
&=\frac{1}{\eta}\partial_1\left[\left(\frac{\eta}{\gamma}\right)^{\half}\frac{1}{\sqrt{a}}\partial_1 \eta\right]+ \frac{1}{\gamma} \partial_{1}\left[\left(\frac{\eta}{\gamma}\right)^{-\half}\frac{1}{\sqrt{a}}\partial_{1} \eta\right]\\
&=\left \{ \begin{array}{l}
\underbrace{\frac{1}{\eta}\left(\frac{\eta}{\gamma}\right)^{\half}}_{= \frac{1}{\sqrt{a}}}\partial_1\left[\frac{1}{\sqrt{a}}\partial_1 \eta\right]+ \underbrace{\frac{1}{\gamma} \left(\frac{\eta}{\gamma}\right)^{-\half}}_{= \frac{1}{\sqrt{a}}}\partial_{1}\left[\frac{1}{\sqrt{a}}\partial_{1} \eta\right]\\\\
+\frac{1}{\sqrt{a}}\partial_1 \eta\underbrace{\left[\frac{1}{\eta}\partial_1\left(\frac{\eta}{\gamma}\right)^{\half}+ \frac{1}{\gamma} \partial_{1}\left(\frac{\eta}{\gamma}\right)^{-\half}\right]}_{=0}
\end {array}\right.\\
&= 2\frac{1}{\sqrt{a}}\partial_1\left[\frac{1}{\sqrt{a}}\partial_1 a_{22}\right]\\
\Rightarrow \quad \half \left[ \frac{1}{\eta}\partial_1(\frac{1}{\gamma}\partial_1 \eta) + \frac{1}{\gamma} \partial_{1}(\frac{1}{\eta}\partial_{1} \eta)\right] &= \frac{1}{\sqrt{a}}\partial_1\left[\frac{1}{\sqrt{a}}\partial_1 a_{22}\right]
\end{align}
Using $(16)$ and the same calculations for the terms in $\partial_2$ and using $(2)$ and $(3)$ we get $$\frac{1}{a}R_{1212}= -\half\frac{1}{\sqrt{a}}\left[\partial_1\left(\frac{1}{\sqrt{a}}\partial_1 a_{22}\right)+\partial_2\left(\frac{1}{\sqrt{a}}\partial_2 a_{11}\right)\right]$$
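As a check of the boxed formula on a concrete $V_2$ (the unit sphere, $a_{11}=1$, $a_{22}=\sin^2 x^1$, our choice of test metric), a finite-difference evaluation of the right-hand side should return the Gaussian curvature $1$ at any point:

```python
import math

# Evaluate the right-hand side of the boxed formula for the unit sphere
# (a11 = 1, a22 = sin^2 x^1) by central differences; the result should be
# the Gaussian curvature 1, independent of the point chosen.
def a11(x1, x2): return 1.0
def a22(x1, x2): return math.sin(x1) ** 2

def rhs(x1, x2, h=1e-4):
    sqa = lambda u, v: math.sqrt(a11(u, v) * a22(u, v))
    f1 = lambda u, v: (a22(u + h, v) - a22(u - h, v)) / (2 * h) / sqa(u, v)
    f2 = lambda u, v: (a11(u, v + h) - a11(u, v - h)) / (2 * h) / sqa(u, v)
    d1 = (f1(x1 + h, x2) - f1(x1 - h, x2)) / (2 * h)
    d2 = (f2(x1, x2 + h) - f2(x1, x2 - h)) / (2 * h)
    return -0.5 / sqa(x1, x2) * (d1 + d2)

val = rhs(1.0, 0.5)
print(val)   # ~ 1
```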
$$\blacklozenge$$
\newpage
\section{p109 - Exercise 7}
\begin{tcolorbox}
Suppose that in a $V_3$ the metric is :$$ ds^2= (h_1dx^1)^2+(h_2dx^2)^2+(h_3dx^3)^2$$ where $h_1, h_2, h_3$ are functions of the three coordinates. Calculate the curvature tensor in terms of the $h^{'}s$ and their derivatives. Check your result by noting that the curvature tensor will vanish if $h_1$ is a function of $x^1$ only, $h_2$ a function of $x^2$ only, and $h_3$ a function of $x^3$ only.
\end{tcolorbox}
From the identities $\mathbf{3.115.}$ we get for the non-vanishing components of the covariant curvature tensor ($6$ independent components to calculate):
\begin{align*}
R_{1212} =\left\{ \begin{array}{c}
- R_{1221} \\
- R_{2112} \\
R_{2121} \\
\end{array}\right. \quad R_{2323} =\left\{ \begin{array}{c}
- R_{2332} \\
- R_{3223} \\
R_{3232} \\
\end{array}\right. \quad R_{1313} =\left\{ \begin{array}{c}
- R_{1331} \\
- R_{3113} \\
R_{3131} \\
\end{array}\right. \\
R_{1213} =\left\{ \begin{array}{c}
- R_{1231} \\
R_{1312} \\
-R_{1321} \\
- R_{2113} \\
R_{2131} \\
-R_{3112} \\
R_{3121} \\
\end{array}\right. \quad
R_{1223} =\left\{ \begin{array}{c}
- R_{1232} \\
- R_{2123} \\
R_{2132} \\
R_{2312} \\
-R_{2321} \\
R_{3212} \\
-R_{3221} \\
\end{array}\right. \quad
R_{1323} =\left\{ \begin{array}{c}
- R_{1332} \\
R_{2313} \\
-R_{2331} \\
- R_{3123} \\
R_{3132} \\
-R_{3213} \\
R_{3231} \\
\end{array}\right.
\end{align*}
The metric tensors:
\begin{align*}
(a_{mn}) = \begin{pmatrix}
h^2_1& 0&0 \\
0& h^2_2&0 \\
0& 0&h^2_3
\end{pmatrix}\quad (a^{mn}) = \begin{pmatrix}
\frac{1}{h^2_1}& 0&0 \\
0& \frac{1}{h^2_2}&0 \\
0& 0&\frac{1}{h^2_3}
\end{pmatrix}
\end{align*}
The Christoffel symbols:
\begin{align*}
\begin{array}{lll}
\ [11,1]= h_{1} \partial_{1} h_{1} & [11,2]=- h_{1} \partial_{2} h_{1} & [11,3]= -h_{1} \partial_{3} h_{1}\\
\ [12,1]= h_{1} \partial_{2} h_{1} & [12,2]= h_{2} \partial_{1} h_{2} & [12,3]= 0\\
\ [22,1]= -h_{2} \partial_{1} h_{2} & [22,2]=h_{2} \partial_{2} h_{2} & [22,3]= -h_{2} \partial_{3} h_{2}\\
\ [23,1]= 0 & [23,2]= h_{2} \partial_{3} h_{2} & [23,3]= h_{3} \partial_{2} h_{3}\\
\ [33,1]= -h_{3} \partial_{1} h_{3} & [33,2]=- h_{3} \partial_{2} h_{3} & [33,3]= -h_{3} \partial_{3} h_{3}\\
\ [31,1]= h_{1} \partial_{3} h_{1} & [31,2]=0 & [31,3]= h_{3} \partial_{1} h_{3}\\
\end{array}
\end{align*}
\begin{align*}
\begin{array}{lll}
\Gamma^1_{11}=\frac{1}{h_{1}}\partial_{1}{h_{1}} &\Gamma^2_{11}=-\frac{h_{1}}{h_{2}^2}\partial_{2}{h_{1}} & \Gamma^3_{11}= -\frac{h_{1}}{h_{3}^2}\partial_{3}{h_{1}} \\
\Gamma^1_{12}= \frac{1}{h_{1}}\partial_{2}{h_{1}} &\Gamma^2_{12}=\frac{1}{h_{2}}\partial_{1}{h_{2}} & \Gamma^3_{12}=0\\
\Gamma^1_{22}= -\frac{h_{2}}{h_{1}^2}\partial_{1}{h_{2}} &\Gamma^2_{22}=\frac{1}{h_{2}}\partial_{2}{h_{2}} & \Gamma^3_{22}=-\frac{h_{2}}{h_{3}^2}\partial_{3}{h_{2}} \\
\Gamma^1_{23}=0 &\Gamma^2_{23}=\frac{1}{h_{2}}\partial_{3}{h_{2}} & \Gamma^3_{23}= \frac{1}{h_{3}}\partial_{2}{h_{3}}\\
\Gamma^1_{33}= -\frac{h_{3}}{h_{1}^2}\partial_{1}{h_{3}}&\Gamma^2_{33}=-\frac{h_{3}}{h_{2}^2}\partial_{2}{h_{3}} & \Gamma^3_{33}= \frac{1}{h_{3}}\partial_{3}{h_{3}}\\
\Gamma^1_{31}=\frac{1}{h_{1}}\partial_{3}{h_{1}} &\Gamma^2_{31}=0 & \Gamma^3_{31}=\frac{1}{h_{3}}\partial_{1}{h_{3}} \\
\end{array}
\end{align*}
We use $3.113.$ $$R_{rsmn}= \partial_m[sn,r] -\partial_n[sm,r]+\Gamma^p_{sm}[rn,p]-\Gamma^p_{sn}[rm,p]$$\\
Note that we only have to perform the full calculation for two components, e.g.\ $R_{1212}$ and $R_{1213}$, as the others can be retrieved by suitable index renaming and use of the identities $3.115.$
\begin{align*}
R_{1212}&=
-h_2\partial_{11}^2(h_2)-h_1\partial_{22}^2(h_1)
+\frac{h_2}{h_1}\partial_1 h_1\partial_1 h_2+\frac{h_1}{h_2}\partial_2 h_1\partial_2 h_2-\frac{h_1 h_2}{h_3^2}\partial_3 h_1\partial_3 h_2\\
R_{2323}&=
-h_3\partial_{22}^2(h_3)-h_2\partial_{33}^2(h_2)
+\frac{h_3}{h_2}\partial_2 h_2\partial_2 h_3+\frac{h_2}{h_3}\partial_3 h_2\partial_3 h_3-\frac{h_2 h_3}{h_1^2}\partial_1 h_2\partial_1 h_3\\
R_{1313}&=
-h_3\partial_{11}^2(h_3)-h_1\partial_{33}^2(h_1)
+\frac{h_3}{h_1}\partial_1 h_1\partial_1 h_3+\frac{h_1}{h_3}\partial_3 h_1\partial_3 h_3-\frac{h_1 h_3}{h_2^2}\partial_2 h_1\partial_2 h_3
\end{align*}
\begin{align*}
R_{1213}&=-h_1\partial_{32}^2(h_1)+\frac{h_1}{h_3}\partial_2 h_3\partial_3 h_1+\frac{h_1}{h_2}\partial_2 h_1\partial_3 h_2\\
R_{1223}&=h_2\partial_{31}^2(h_2)-\frac{h_2}{h_1}\partial_1 h_2\partial_3 h_1-\frac{h_2}{h_3}\partial_3 h_2\partial_1 h_3\\
R_{1323}&=
-h_3\partial_{21}^2(h_3)+\frac{h_3}{h_1}\partial_1 h_3\partial_3 h_1+\frac{h_3}{h_2}\partial_2 h_3\partial_1 h_2
\end{align*}
And, indeed, all curvature components vanish when each $h_i$ is a function of $x^i$ only.
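This check can also be done numerically (taking flat $3$-space in spherical coordinates, $h_1=1$, $h_2=x^1$, $h_3=x^1\sin x^2$, as our sample functions): the components above must vanish, through non-trivial cancellations in e.g. $R_{2323}$ and $R_{1323}$.

```python
import math

# Numerical check (a sketch, not part of the solution): flat 3-space in
# spherical coordinates, h1 = 1, h2 = x^1, h3 = x^1 sin(x^2); the
# curvature components computed above must then vanish.
h = [lambda x: 1.0,
     lambda x: x[0],
     lambda x: x[0] * math.sin(x[1])]

eps = 1e-4
def d(i, f, x):                       # central first partial derivative
    xp, xm = list(x), list(x)
    xp[i] += eps
    xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

def dd(i, j, f, x):                   # second partial derivative
    return d(i, lambda y: d(j, f, y), x)

def R2323(x):
    h1, h2, h3 = (f(x) for f in h)
    return (-h3 * dd(1, 1, h[2], x) - h2 * dd(2, 2, h[1], x)
            + h3 / h2 * d(1, h[1], x) * d(1, h[2], x)
            + h2 / h3 * d(2, h[1], x) * d(2, h[2], x)
            - h2 * h3 / h1**2 * d(0, h[1], x) * d(0, h[2], x))

def R1323(x):
    h1, h2, h3 = (f(x) for f in h)
    return (-h3 * dd(1, 0, h[2], x)
            + h3 / h1 * d(0, h[2], x) * d(2, h[0], x)
            + h3 / h2 * d(1, h[2], x) * d(0, h[1], x))

x0 = [1.3, 0.8, 0.4]
print(R2323(x0), R1323(x0))   # both ~ 0
```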
$$\blacklozenge$$
\newpage
\section{p109 - Exercise 8}
\begin{tcolorbox}
In relativity we encounter the metric form $$ \Phi= e^{\alpha}\left(dx^1\right)^2 + e^{x^1}\left[ \left( dx^2 \right)^2 + \sin^2 x^2 \left(dx^3 \right)^2 \right] - e^{\gamma}\left(dx^4\right)^2$$ where $\alpha$ and $\gamma$ are functions of $x^1$ and $x^4$ only.\\
Show that the complete set of non-zero components of the Einstein tensor (see equation (3.214)) for the form given above are as follows
\begin{align*}
G^1_{.1} &= e^{-\alpha}\left( -\kwart -\half\gamma_1\right) + e^{-x^1}\\
G^2_{.2} &= e^{-\alpha} \left(-\kwart - \half \gamma_{11} - \kwart \gamma_1^2 -\kwart\gamma_1+\kwart \alpha _1 +\kwart \alpha _1 \gamma_1 \right) \\
&+e^{-\gamma} \left(\half \alpha_{44} + \kwart\alpha_4^2 -\kwart\alpha_4\gamma_4\right)\\
G^3_{.3} &= G^2_{.2} \\
G^4_{.4} &= e^{-\alpha}\left( -\frac{3}{4} -\half\alpha_1\right) + e^{-x^1}\\
e^{\alpha} G^1_{.4} &= -e^{\gamma} G^4_{.1} = -\half \alpha_4
\end{align*}
The subscripts on $\alpha$ and $\gamma$ indicate partial derivatives with respect to $x^1$ and $x^4$.
\end{tcolorbox}
We have
\begin{align}
\left(a_{mn}\right)= \begin{pmatrix}
e^{\alpha}& 0&0&0 \\
0& e^{x^1} &0&0\\
0& 0 &e^{x^1}\sin^2 x^2&0\\
0& 0 &0&-e^{\gamma}\\
\end{pmatrix}\quad
\left(a^{mn}\right)= \begin{pmatrix}
{e^{-\alpha}}& 0&0&0 \\
0& {e^{-x^1}} &0&0\\
0& 0 &\frac{e^{-x^1}}{\sin^2 x^2}&0\\
0& 0 &0&-{e^{-\gamma}}\\
\end{pmatrix}
\end{align}
And will use the following definitions:
\begin{align}
G^n_{.t} &= R^n_{.t} - \half \delta^n_t R\\
R^n_{.t} &= a^{nk}R_{kt}\\
R_{kt} &= a^{sn}R_{sktn}\\
R &= a^{kt}R_{kt}
\end{align}
Considering that the non-diagonal components of $a_{mn}$ vanish, and that $R_{sktn}=0$ when $s=k$ or $t=n$, we can write:
\begin{align}
\begin{pmatrix}
R_{11}\\
R_{22}\\
R_{33}\\
R_{44}\\
\end{pmatrix}&=
\begin{pmatrix}
0& R_{2112} & R_{3113} & R_{4114} \\
R_{1221}& 0 & R_{3223}& R_{4224} \\
R_{1331}&R_{2332} & 0 & R_{4334} \\
R_{1441}& R_{2442} & R_{3443}& 0 \\
\end{pmatrix}\begin{pmatrix}
a^{11}\\
a^{22}\\
a^{33}\\
a^{44}\\
\end{pmatrix}
\end{align}
\begin{align}
\begin{pmatrix}
R_{12}\\
R_{13}\\
R_{14}\\
R_{23}\\
R_{24}\\
R_{34}\\
\end{pmatrix}&=
\begin{pmatrix}
0& 0& R_{3123} & R_{4124} \\
0& R_{2132} & 0& R_{4134} \\
0&R_{2142} &R_{3143} & 0 \\
R_{1231}& 0 &0& R_{4234} \\
R_{1241}& 0 &R_{3243}& 0 \\
R_{1341}& R_{2342} &0& 0 \\
\end{pmatrix}\begin{pmatrix}
a^{11}\\
a^{22}\\
a^{33}\\
a^{44}\\
\end{pmatrix}
\end{align}
The Christoffel symbols of the first kind are:
\begin{align}
\begin{array}{llll}
\ [11,1]=\half\alpha _1 e^{\alpha}& \ [11,2]=0& \ [11,3]= 0& \ [11,4]=-\half\alpha _4 e^{\alpha} \\
\ [12,1]=0& \ [12,2]=\half e^{x^1}& \ [12,3]= 0& \ [12,4]=0 \\
\ [13,1]=0& \ [13,2]=0& \ [13,3]= \half e^{x^1}\sin^2 x^2& \ [13,4]=0\\
\ [14,1]=\half\alpha _4 e^{\alpha}& \ [14,2]=0& \ [14,3]= 0& \ [14,4]=-\half\gamma _1 e^{\gamma} \\
\ [22,1]=-\half e^{x^1}& \ [22,2]=0& \ [22,3]= 0& \ [22,4]=0\\
\ [23,1]=0& \ [23,2]=0& \ [23,3]= \half e^{x^1}\sin 2x^2 & \ [23,4]=0\\
\ [24,1]=0& \ [24,2]=0& \ [24,3]= 0& \ [24,4]=0 \\
\ [33,1]=-\half e^{x^1}\sin^2 x^2& \ [33,2]=-\half e^{x^1}\sin 2x^2 & \ [33,3]= 0& \ [33,4]=0\\
\ [34,1]=0& \ [34,2]=0& \ [34,3]= 0& \ [34,4]=0 \\
\ [44,1]=\half \gamma _1 e^{\gamma}& \ [44,2]=0& \ [44,3]= 0& \ [44,4]=-\half\gamma _4 e^{\gamma} \\
\end{array}
\end{align}
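A spot check of this table by finite differences (with arbitrary sample choices for $\alpha$ and $\gamma$ as linear functions of $x^1$ and $x^4$, used only for the test):

```python
import math

# Spot check of the Christoffel table above (alpha and gamma here are
# arbitrary sample functions of x^1 and x^4): compare [23,3], [11,4] and
# [44,1] against finite differences of the diagonal metric components.
a1, g1, a4, g4 = 0.3, 0.2, 0.1, -0.4      # sample slopes
alpha = lambda x: a1 * x[0] + a4 * x[3]
gamma = lambda x: g1 * x[0] + g4 * x[3]

def diag(x):   # the four diagonal metric components a_11 .. a_44
    return [math.exp(alpha(x)),
            math.exp(x[0]),
            math.exp(x[0]) * math.sin(x[1]) ** 2,
            -math.exp(gamma(x))]

def d(i, f, x, h=1e-6):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def chr1(m, n, r, x):
    # [mn,r] = 1/2 (d_m a_nr + d_n a_mr - d_r a_mn), diagonal metric
    t1 = d(m, lambda y: diag(y)[n], x) if n == r else 0.0
    t2 = d(n, lambda y: diag(y)[m], x) if m == r else 0.0
    t3 = d(r, lambda y: diag(y)[m], x) if m == n else 0.0
    return 0.5 * (t1 + t2 - t3)

x0 = [0.5, 0.9, 0.2, 1.1]
checks = [
    (chr1(1, 2, 2, x0), 0.5 * math.exp(x0[0]) * math.sin(2 * x0[1])),  # [23,3]
    (chr1(0, 0, 3, x0), -0.5 * a4 * math.exp(alpha(x0))),              # [11,4]
    (chr1(3, 3, 0, x0), 0.5 * g1 * math.exp(gamma(x0))),               # [44,1]
]
print(checks)
```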
We use $3.114.$; since $a_{mn}= a^{mn}= 0$ for $m\ne n$, the second-derivative terms $-\half\partial^2_{sn}a_{rm}-\half\partial^2_{rm}a_{sn}$ vanish for all components needed below, and: $$R_{rsmn}= \left\{\begin{array}{l}
\half\left(\partial^2_{sm}a_{rn}+\partial^2_{rn}a_{sm}\right)\\
+ \frac{1}{e^{\alpha}}\left( [rn,1] [sm,1] -[rm,1] [sn,1] \right)\\
+ \frac{1}{e^{x^1}}\left([rn,2] [sm,2] -[rm,2][sn,2] \right)\\
+ \frac{1}{e^{x^1}\sin^2 x^2}\left( [rn,3][sm,3]-[rm,3][sn,3] \right)\\
-\frac{1}{e^{\gamma}}\left( [rn,4][sm,4] -[rm,4][sn,4] \right)
\end{array}\right.$$\\
%************************************************************
Giving:
\begin{align}
R_{2112}&= \left\{\begin{array}{l}
\half\left(\partial^2_{11} a_{22}+\partial^2_{22} a_{11}\right)\\\\