\appendix
\rohead{\headmark}
\lehead{\headmark}
\definecolor{prussianblue}{rgb}{0.0, 0.19, 0.33}
\renewcommand{\chapterformat}{\raggedleft \colorbox{prussianblue}{%
\centering\textit{\textcolor{white}{{\Large Appendix} {\Huge \thechapter}}}}}
\chapter{Supplementary Information for the Main Text}
\section{Chapter \ref*{chapter:invdet}}
\label{section:invdetappend}
\paragraph{Properties \ref*{proper:zerodet}}
Consider an $n \times n$ square matrix $A$ that has two identical, adjacent rows with indices $i_1$ and $i_2 = i_1 + 1$ (hence one of the indices is odd and the other is even). Cofactor expansion along the first of these rows, $i_1$, then gives
\begin{subequations}
\begin{align}
\abs{A} &= \sum_{k=1}^{n} A_{i_1k}C_{i_1k} \nonumber \\
&= \sum_{k=1}^{n} (-1)^{i_1+k}A_{i_1k}\det(M_{i_1k})
\end{align}
by Properties \ref{proper:cofactorex} and Definition \ref{defn:cofactor}. Similarly, by expanding along the row $i_2$, we have
\begin{align}
\abs{A} &= \sum_{k=1}^{n} A_{i_2k}C_{i_2k} \nonumber \\
&= \sum_{k=1}^{n} (-1)^{i_2+k}A_{i_2k}\det(M_{i_2k})
\end{align}
\end{subequations}
But since the $i_1$-th and $i_2$-th rows are identical, $A_{i_1k} = A_{i_2k}$. Furthermore, as these two identical rows are also adjacent, the minors are equal too: $M_{i_1k} = M_{i_2k}$. The only difference between the two expressions for $\abs{A}$ above is the sign factor $(-1)^{i+k}$, and because $i_1$ and $i_2$ have opposite parities, the two expressions differ by a minus sign only. Explicitly, we have
\begin{align}
\abs{A} &= \sum_{k=1}^{n} (-1)^{i_1+k}A_{i_1k}\det(M_{i_1k}) \nonumber \\
&= \sum_{k=1}^{n} (-1)^{(i_2-1)+k}A_{i_2k}\det(M_{i_2k}) \nonumber \\
&= (-1) \sum_{k=1}^{n} (-1)^{i_2+k}A_{i_2k}\det(M_{i_2k}) \nonumber \\
&= -\abs{A}
\end{align}
Therefore $\abs{A} = -\abs{A}$, and $\abs{A}$ must equal zero. We can now generalize the result to non-adjacent, proportional rows by performing the first and third kinds (multiplication and swapping) of elementary row operations where appropriate, with the aid of Properties \ref{proper:elementaryopdet}, and the second case in Properties \ref{proper:zerodet} is complete. Subsequently, for the addition/subtraction type of elementary row operation, suppose $R_{p} + cR_{q} \to R_{p}$ is applied on some matrix $A$ (not the same $A$ as in the first part) to produce $A'$; then
\begin{align*}
A' =
\begin{bmatrix}
\vdots & \vdots & & \vdots\\
A_{p1} + cA_{q1} & A_{p2} + cA_{q2} & \cdots & A_{pn} + cA_{qn} \\
\vdots & \vdots & & \vdots\\
A_{q1} & A_{q2} & \cdots & A_{qn} \\
\vdots & \vdots & & \vdots
\end{bmatrix}
\end{align*}
where we have only written out the rows $R_p$ and $R_q$. By applying cofactor expansion along $R_p$ following Properties \ref{proper:cofactorex}, we have
\begin{align}
\abs{A'} &= \sum_{k=1}^{n} [(A_{pk} + cA_{qk}) C_{pk}] \nonumber \\
&= \sum_{k=1}^{n} A_{pk}C_{pk} + c\sum_{k=1}^{n} A_{qk}C_{pk}
\end{align}
We identify the first term with $\abs{A}$, computed via cofactor expansion along the row $R_p$ of $A$. The second term can be thought of as the determinant of a matrix $\tilde{A}$ formed by replacing the $p$-th row $R_p$ of $A$ with the $q$-th row $R_q$, and then expanding along that row. $\tilde{A}$ thus has two identical rows $R_p = R_q$, and by the previous result $\abs{\tilde{A}}$ is zero. Therefore
\begin{align}
\abs{A'} = \abs{A} + c\abs{\tilde{A}} = \abs{A} + c(0) = \abs{A}
\end{align} implying that the addition/subtraction type of elementary row operations does not affect the value of the determinant.
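As a quick numerical check (a small example added for concreteness, not part of the original argument), take
\begin{align*}
A =
\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}, \qquad \abs{A} = (1)(4) - (2)(3) = -2
\end{align*}
Applying $R_2 + 2R_1 \to R_2$ produces
\begin{align*}
A' =
\begin{bmatrix}
1 & 2 \\
5 & 8
\end{bmatrix}, \qquad \abs{A'} = (1)(8) - (2)(5) = -2 = \abs{A}
\end{align*}
while the auxiliary matrix $\tilde{A}$, obtained by replacing $R_2$ of $A$ with $R_1$, has two identical rows and $\abs{\tilde{A}} = (1)(2) - (2)(1) = 0$, consistent with the decomposition $\abs{A'} = \abs{A} + c\abs{\tilde{A}}$ above.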
\section{Chapter \ref*{chap:DFT}}
\label{section:DFTappend}
\paragraph{Bessel's Inequality}
By part (d) of Theorem \ref{thm:spectralinner}, we can expand a function $f$ as
\begin{align}
f = \lim_{p \to \infty} \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}
\end{align}
where $\varphi^{(j)}$ are the orthonormal basis vectors/functions and the partial sum $S_p[f]$ is simply $\sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}$. In particular, the completeness of the orthonormal basis in Properties \ref{proper:hilbertorthosys} means that (see the footnote below)\footnote{Otherwise, if $\norm{\smash{f - \sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}}}^2 > 0$, then consider $\tilde{\varphi} = f - \sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}$, which is then a non-zero vector. It is orthogonal to every one of the original basis vectors $\varphi^{(j')}$, since
\begin{align*}
\langle \tilde{\varphi}, \varphi^{(j')} \rangle &= \langle f - \sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} , \varphi^{(j')} \rangle \\
&= \langle f, \varphi^{(j')} \rangle - \langle \sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} , \varphi^{(j')} \rangle \\
&= \langle f, \varphi^{(j')} \rangle - (\cdots + (0) +\langle f, \varphi^{(j')} \rangle (1) + (0) + \cdots) \\
& \quad \text{(Orthonormality of the basis: $\langle \varphi^{(j)}, \varphi^{(j')} \rangle = 1$ } \\
& \quad \text{only when $j = j'$ and $0$ if $j \neq j'$)} \\
&=\langle f, \varphi^{(j')} \rangle - \langle f, \varphi^{(j')} \rangle = 0
\end{align*}
which contradicts the premise of completeness, as $\tilde{\varphi}$ would then be another vector that is linearly independent of all the other basis vectors and could be added to the basis.}
\begin{align}
\norm{f - \sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}}^2 = 0 \label{eqn:fcomplete}
\end{align}
On the other hand,
\begin{align}
\norm{f - S_p[f]}^2 &= \norm{f - \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}}^2 \nonumber \\
&= \left\langle f - \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}, f - \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} \right\rangle \nonumber \\
&= \langle f, f \rangle - \left\langle f, \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} \right\rangle - \left\langle \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}, f \right\rangle \nonumber \\
&\quad + \left\langle \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}, \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} \right\rangle \nonumber \\
&= \norm{f}^2 - \left\langle f, \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} \right\rangle \nonumber \\
&\quad - \overline{\left\langle f, \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} \right\rangle} \quad \text{((1) of Definition \ref{defn:innerprod})} \nonumber \\
&\quad + \sum_{j=1}^{p} \abs{\langle f, \varphi^{(j)} \rangle}^2 \quad \text{(Orthonormality of the basis)} \nonumber \\
&= \norm{f}^2 - 2\Re{\langle f, \sum_{j=1}^{p} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} \rangle} + \sum_{j=1}^{p} \abs{\langle f, \varphi^{(j)} \rangle}^2 \nonumber \\
&= \norm{f}^2 - 2\Re{\sum_{j=1}^{p} (\overline{\langle f, \varphi^{(j)} \rangle} \langle f, \varphi^{(j)} \rangle)} \nonumber \\
& \quad \text{((4) of Properties \ref{proper:innerprod2})}\nonumber \\
&\quad + \sum_{j=1}^{p} \abs{\langle f, \varphi^{(j)} \rangle}^2 \nonumber \\
&= \norm{f}^2 - 2\sum_{j=1}^{p} \abs{\langle f, \varphi^{(j)} \rangle}^2 + \sum_{j=1}^{p} \abs{\langle f, \varphi^{(j)} \rangle}^2 \nonumber \\
&= \norm{f}^2 - \sum_{j=1}^{p} \abs{\langle f, \varphi^{(j)} \rangle}^2
\end{align}
and since $\norm{f - S_p[f]}^2$ is always non-negative, we arrive at \textit{Bessel's Inequality}:
\begin{align}
\sum_{j=1}^{p} \abs{\langle f, \varphi^{(j)} \rangle}^2 \leq \norm{f}^2 \label{eqn:Bessel}
\end{align}
From Exercise \ref{ex:triangular2}, we can apply the Triangle Inequality to get
\begin{align}
&\quad \norm{(f - \sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}) + (\sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} - S_p[f])} \nonumber \\
&= \norm{f - S_p[f]} \leq \norm{f - \sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}} + \norm{\sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} - S_p[f]}
\end{align}
By Equation (\ref{eqn:fcomplete}), the first term on the R.H.S. of the inequality is zero, so squaring both sides bounds $\norm{f - S_p[f]}^2$ by the square of the second term. And that squared second term
\begin{align}
&\quad \norm{\sum_{j=1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)} - S_p[f]}^2 \nonumber \\
&= \norm{\sum_{j=p+1}^{\infty} \langle f, \varphi^{(j)} \rangle \varphi^{(j)}}^2 = \sum_{j=p+1}^{\infty} \abs{\langle f, \varphi^{(j)} \rangle}^2
\end{align}
is a remainder that must tend to zero as $p \to \infty$, because the partial sums $\sum_{j=1}^{p} \abs{\langle f, \varphi^{(j)} \rangle}^2$ form a convergent sequence, as seen by applying the \textit{Monotone Convergence Theorem} from elementary Analysis to Bessel's Inequality (\ref{eqn:Bessel}). Therefore, the L.H.S.
\begin{align}
\lim_{p \to \infty}\norm{f - S_p[f]}^2 = 0
\end{align}
also tends to zero in the same limit, which establishes the $L^2$ convergence for a complete orthonormal basis.
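To make these statements concrete, here is a short worked example (an illustration supplied here, assuming the inner product $\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)\overline{g(x)} \, dx$ and re-indexing the orthonormal Fourier basis as $\varphi^{(n)}(x) = e^{inx}/\sqrt{2\pi}$ with $n \in \mathbb{Z}$). Take $f(x) = x$ on $[-\pi, \pi]$, so that $\norm{f}^2 = \int_{-\pi}^{\pi} x^2 \, dx = 2\pi^3/3$. Integration by parts gives, for $n \neq 0$,
\begin{align*}
\langle f, \varphi^{(n)} \rangle = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} x e^{-inx} \, dx = \frac{\sqrt{2\pi}\,(-1)^{n+1}}{in}
\end{align*}
so that $\abs{\langle f, \varphi^{(n)} \rangle}^2 = 2\pi/n^2$, while $\langle f, \varphi^{(0)} \rangle = 0$ by odd symmetry. Bessel's Inequality (\ref{eqn:Bessel}) then reads
\begin{align*}
\sum_{0 < \abs{n} \leq p} \frac{2\pi}{n^2} = 4\pi \sum_{n=1}^{p} \frac{1}{n^2} \leq 4\pi \cdot \frac{\pi^2}{6} = \frac{2\pi^3}{3} = \norm{f}^2
\end{align*}
for every $p$, and the bound is attained in the limit $p \to \infty$, exactly as the $L^2$ convergence just derived requires.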
\section{Chapter \ref*{chapter:Markov}}
\label{section:Markovappend}
\paragraph{Properties \ref*{proper:positivestoceig}}
\textit{Reference Materials: \cite{markov}}
Let $\vec{q}$ be any eigenvector of $A^T$ whose eigenvalue $\lambda$ has modulus $\abs{\lambda} = 1$. Then $A^T\vec{q} = \lambda \vec{q}$ by the definition of the eigenvalue-eigenvector problem, i.e.\
\begin{align}
\sum_{k=1}^{n} a_{ki}\vec{q}_k = \lambda \vec{q}_i
\end{align}
where $a_{ji}$ denotes the $(j,i)$ entry of the matrix $A$, which is also the $(i,j)$ entry of $A^T$. Then
\begin{align}
\abs{\lambda \vec{q}_i} = \abs{\lambda}\abs{\vec{q}_i} &= \abs{\sum_{k=1}^{n} a_{ki}\vec{q}_k} \nonumber \\
\abs{\vec{q}_i} &\leq \sum_{k=1}^{n} \abs{a_{ki}}\abs{\vec{q}_k} = \sum_{k=1}^{n} a_{ki}\abs{\vec{q}_k} \quad (\abs{\lambda} = 1)\\
&\text{(Triangle Inequality for complex numbers, and $\abs{a_{ki}} = a_{ki}$ since the entries are positive)} \nonumber
\end{align}
Now pick an index $i$ such that $\abs{\vec{q}_i} \geq \abs{\vec{q}_k}$ for all $1 \leq k \leq n$, i.e.\ a component with the largest modulus. Then the inequality above becomes
\begin{align}
\abs{\vec{q}_i} \leq \sum_{k=1}^{n} a_{ki}\abs{\vec{q}_k} &\leq \sum_{k=1}^{n} a_{ki}\abs{\vec{q}_i} = \abs{\vec{q}_i} & \text{(each column of $A$ sums to $1$: $\sum_{k=1}^{n} a_{ki} = 1$)}
\end{align}
By squeezing with $\abs{\vec{q}_i}$ at both ends, the part $\sum_{k=1}^{n} a_{ki}\abs{\vec{q}_k} \leq \sum_{k=1}^{n} a_{ki}\abs{\vec{q}_i}$ forces $\abs{\vec{q}_k} = \abs{\vec{q}_i}$ for all $k$, so all components of $\vec{q}$ share the same modulus. This is where the positivity of $A$ is needed; otherwise some $a_{ki}$ could be $0$ and the squeeze would become vacuous ($0\abs{\vec{q}_k} = 0\abs{\vec{q}_i} = 0$). Moreover, as $\abs{\vec{q}_i} = \abs{\sum_{k=1}^{n} a_{ki}\vec{q}_k}$, incorporating it into the inequality
\begin{align}
\abs{\vec{q}_i} = \abs{\sum_{k=1}^{n} a_{ki}\vec{q}_k} \leq \sum_{k=1}^{n} a_{ki}\abs{\vec{q}_k} &\leq \sum_{k=1}^{n} a_{ki}\abs{\vec{q}_i} = \abs{\vec{q}_i}
\end{align}
and applying the squeeze again, the Triangle Inequality part $\abs{\sum_{k=1}^{n} a_{ki}\vec{q}_k} \leq \sum_{k=1}^{n} a_{ki}\abs{\vec{q}_k}$ becomes an equality $\abs{\sum_{k=1}^{n} a_{ki}\vec{q}_k} = \sum_{k=1}^{n} a_{ki}\abs{\vec{q}_k}$, which means that all the components $\vec{q}_k$ have to lie along the same direction in the complex plane.\footnote{For any two complex numbers $z_1$ and $z_2$, if $\abs{z_1 + z_2} = \abs{z_1} + \abs{z_2}$, then
\begin{align*}
\abs{z_1 + z_2}^2 &= (z_1 + z_2) \overline{(z_1 + z_2)} \\
&= z_1 \overline{z_1} + z_1 \overline{z_2} + z_2 \overline{z_1} + z_2 \overline{z_2} \\
&= \abs{z_1}^2 + z_1 \overline{z_2} + z_2 \overline{z_1} + \abs{z_2}^2
\end{align*}
but also
$(\abs{z_1} + \abs{z_2})^2 = \abs{z_1}^2 + 2\abs{z_1}\abs{z_2} + \abs{z_2}^2$. Hence we have
\begin{align*}
z_1 \overline{z_2} + z_2 \overline{z_1} = 2\abs{z_1}\abs{z_2}
\end{align*}
Assume that $z_1$ and $z_2$ point in some directions so that $z_1 = \abs{z_1}e^{i\theta_1}$ and $z_2 = \abs{z_2}e^{i\theta_2}$, then
\begin{align*}
z_1 \overline{z_2} + z_2 \overline{z_1} &= \abs{z_1}e^{i\theta_1} \abs{z_2}e^{-i\theta_2} + \abs{z_2}e^{i\theta_2} \abs{z_1}e^{-i\theta_1} \\
&= \abs{z_1}\abs{z_2} e^{i(\theta_1-\theta_2)} + \abs{z_1}\abs{z_2} e^{-i(\theta_1-\theta_2)} \\
&= 2 \abs{z_1}\abs{z_2} \left(\frac{e^{i(\theta_1-\theta_2)} + e^{-i(\theta_1-\theta_2)}}{2}\right) \\
&= 2 \abs{z_1}\abs{z_2} \cos(\theta_1-\theta_2) & \text{(Properties \ref{proper:sincoscomplex})}
\end{align*}
but this has to equal $2\abs{z_1}\abs{z_2}$, thus $\cos(\theta_1-\theta_2) = 1$ and the arguments $\theta_1$ and $\theta_2$ must be the same. Now apply this logic repeatedly to $\abs{\sum_{k=1}^{n} a_{ki}\vec{q}_k} = \sum_{k=1}^{n} a_{ki}\abs{\vec{q}_k}$.} Therefore, $\vec{q}$ must be of the form $c(1,1,1,\ldots,1)^T$, where $c$ is any complex constant. This shows that over $\mathbb{C}$, the only linearly independent eigenvector of $A^T$ corresponding to $\abs{\lambda} = 1$ is $(1,1,1,\ldots,1)^T$, matching the derivation in Properties \ref{proper:markoveigen1} where the eigenvalue is exactly $\lambda = 1$, and the geometric multiplicity is hence $1$ as well. Again, by Properties \ref{proper:eigentransinv}, this result also holds for the original matrix $A$ (though the form of the eigenvector will be different).
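As a concrete illustration (a small example constructed here, not taken from \cite{markov}), consider the positive column-stochastic matrix
\begin{align*}
A =
\begin{bmatrix}
0.7 & 0.4 \\
0.3 & 0.6
\end{bmatrix}
\end{align*}
Its characteristic polynomial is $\lambda^2 - 1.3\lambda + 0.3 = (\lambda - 1)(\lambda - 0.3)$, so $\lambda = 1$ is the only eigenvalue of modulus $1$. Each row of $A^T$ sums to $1$, hence $A^T(1,1)^T = (1,1)^T$, and the eigenvectors of $A^T$ associated with $\abs{\lambda} = 1$ are exactly $c(1,1)^T$, as derived above.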
\section{Chapter \ref*{chapter:Tensor}}
\label{section:tensorappend}
\paragraph{Uniqueness of rank-$2$ and $3$ Isotropic Tensors} That the Kronecker delta is, up to a constant multiple, the only isotropic rank-$2$ tensor can be shown as follows. By a \SI{90}{\degree} positive rotation about the $3$-axis, we have
\begin{subequations}
\begin{align}
\textbf{e}'_1 &= \textbf{e}_2 \\
\textbf{e}'_2 &= -\textbf{e}_1 \\
\textbf{e}'_3 &= \textbf{e}_3
\end{align}
\end{subequations}
and according to (\ref{eqn:aij}), the change of coordinates matrix is
\begin{align}
a_{ij} =
\begin{bmatrix}
\textbf{e}_1 \cdot \textbf{e}'_1 & \textbf{e}_1 \cdot \textbf{e}'_2 & \textbf{e}_1 \cdot \textbf{e}'_3 \\
\textbf{e}_2 \cdot \textbf{e}'_1 & \textbf{e}_2 \cdot \textbf{e}'_2 & \textbf{e}_2 \cdot \textbf{e}'_3 \\
\textbf{e}_3 \cdot \textbf{e}'_1 & \textbf{e}_3 \cdot \textbf{e}'_2 & \textbf{e}_3 \cdot \textbf{e}'_3
\end{bmatrix} =
\begin{bmatrix}
0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{bmatrix}
\label{eqn:rotate903ax}
\end{align}
For a general rank-$2$ tensor $F$, by (\ref{eqn:rank2aF}) its transformation will then be
\begin{align}
\begin{bmatrix}
F'_{11} & F'_{12} & F'_{13} \\
F'_{21} & F'_{22} & F'_{23} \\
F'_{31} & F'_{32} & F'_{33}
\end{bmatrix} &=
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
F_{11} & F_{12} & F_{13} \\
F_{21} & F_{22} & F_{23} \\
F_{31} & F_{32} & F_{33}
\end{bmatrix}
\begin{bmatrix}
0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{bmatrix} \nonumber \\
&= \begin{bmatrix}
F_{22} & -F_{21} & F_{23} \\
-F_{12} & F_{11} & -F_{13} \\
F_{32} & -F_{31} & F_{33}
\end{bmatrix}
\end{align}
If the tensor is isotropic such that $F'_{ij} = F_{ij}$, then by comparing the entries on both sides, we must have
\begin{subequations}
\begin{align}
F_{11} &= F_{22} \\
F_{13} = F_{23} = -F_{13} \implies F_{13} &= F_{23} = 0 \\
F_{31} = F_{32} = -F_{31} \implies F_{31} &= F_{32} = 0
\end{align}
\end{subequations}
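For concreteness (the main text leaves this rotation implicit; the sign convention below is one consistent choice), the \SI{90}{\degree} positive rotation about the $2$-axis can be taken as $\textbf{e}'_1 = -\textbf{e}_3$, $\textbf{e}'_2 = \textbf{e}_2$, $\textbf{e}'_3 = \textbf{e}_1$, for which (\ref{eqn:aij}) gives the change of coordinates matrix
\begin{align*}
a_{ij} =
\begin{bmatrix}
0 & 0 & 1 \\
0 & 1 & 0 \\
-1 & 0 & 0
\end{bmatrix}
\end{align*}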
Carrying out this \SI{90}{\degree} positive rotation about the $2$-axis with the same technique, we obtain
\begin{subequations}
\begin{align}
F_{11} &= F_{33} \\
F_{12} &= F_{32} = 0 \\
F_{21} &= F_{23} = 0
\end{align}
\end{subequations}
Therefore, $F_{11} = F_{22} = F_{33}$ and $F_{12} = F_{21} = F_{13} = F_{31} = F_{23} = F_{32} = 0$: the diagonal entries share a common value and all off-diagonal entries are zero, so any isotropic rank-$2$ tensor must be of the form $\lambda \delta_{ij}$, a constant multiple of the Kronecker delta symbol (a direct invariance check for this form, and for the rank-$3$ form below, is sketched at the end of this section). We can show the same for the isotropic rank-$3$ tensor. First, perform a rotation about the $(1,1,1)^T$ axis so that the effect is the subscript permutation $1 \to 2 \to 3 \to 1$; then we have
\begin{subequations}
\begin{align}
T_{111} &= T_{222} = T_{333} \\
T_{112} &= T_{223} = T_{331} \\
T_{122} &= T_{233} = T_{311} \\
T_{212} &= T_{323} = T_{131} \\
T_{211} &= T_{322} = T_{133} \\
T_{121} &= T_{232} = T_{313} \\
T_{221} &= T_{332} = T_{113} \\
T_{123} &= T_{231} = T_{312} \\
T_{132} &= T_{321} = T_{213}
\end{align}
\end{subequations}
Next, apply a \SI{90}{\degree} positive rotation about the $3$-axis, with the change of coordinates matrix given in (\ref{eqn:rotate903ax}) earlier. After some tedious algebra, we can obtain the following relations:
\begin{subequations}
\begin{align}
T_{111} = -T_{222} &\implies T_{111} = T_{222} = T_{333} = 0 \\
T_{112} = T_{221} \quad \text{and}\quad T_{221} = -T_{112} &\implies T_{112} = T_{221} = 0 \\
T_{122} = T_{211} \quad \text{and}\quad T_{211} = -T_{122} &\implies T_{122} = T_{211} = 0 \\
T_{121} = T_{212} \quad \text{and}\quad T_{212} = -T_{121} &\implies T_{121} = T_{212} = 0 \\
T_{123} = -T_{213} &\implies \begin{aligned}
T_{123} &= T_{231} = T_{312} \\
= -T_{213} &= -T_{321} = -T_{132}
\end{aligned}
\end{align}
\end{subequations}
The first four of these, combined with the cyclic relations from the previous rotation, show that an isotropic rank-$3$ tensor must vanish whenever it has repeated subscripts. The last relation implies that its values carry opposite signs for even and odd permutations of the subscripts. This coincides with Definition \ref{defn:epsilon} for the epsilon symbol, and thus any isotropic rank-$3$ tensor must be of the form $\lambda\epsilon_{ijk}$, a constant multiple of the epsilon symbol.
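As a consistency check (a verification sketch added here, written with the index form $F'_{ij} = a_{pi}a_{qj}F_{pq}$ of the matrix transformation $F' = a^T F a$ used above, and its natural rank-$3$ analogue), the Kronecker delta is invariant under any rotation because rotation matrices are orthogonal:
\begin{align*}
\delta'_{ij} = a_{pi}a_{qj}\delta_{pq} = a_{pi}a_{pj} = (a^Ta)_{ij} = \delta_{ij}
\end{align*}
Similarly, using the standard determinant identity $a_{pi}a_{qj}a_{rk}\epsilon_{pqr} = \det(a)\,\epsilon_{ijk}$ together with $\det(a) = 1$ for rotations,
\begin{align*}
\epsilon'_{ijk} = a_{pi}a_{qj}a_{rk}\epsilon_{pqr} = \det(a)\,\epsilon_{ijk} = \epsilon_{ijk}
\end{align*}
so $\lambda\delta_{ij}$ and $\lambda\epsilon_{ijk}$ are indeed isotropic, complementing the uniqueness arguments above.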