Grocery on Distributed Algorithms

T1: What is a Determinant?

Reminder: This post contains 4022 words · 12 min read · by Xianbin

What is a Determinant?

Wikipedia says that

In mathematics, the determinant is a scalar-valued function of the entries of a square matrix.

In Geometry, the Determinant is a Signed Volume

Since it is a volume, the output must be a single number!

Let us put it more simply. The determinant is a function \(f: \mathbb{R}^n\times \cdots \times \mathbb{R}^n \to \mathbb{R}\), taking \(n\) vectors (the columns of the matrix) to a scalar.

Let \(\textbf{a}_1,\textbf{a}_2\) be vectors in \(\mathbb{R}^2\).

\[f(\textbf{a}_1,\textbf{a}_2) = \textup{Det}(\textbf{a}_1,\textbf{a}_2)\]

If we scale \(\textbf{a}_1\) to \(c\textbf{a}_1\) where \(c > 1\), then the volume increases by a factor of \(c\) (and the same scaling rule is taken to hold for any \(c\in\mathbb{R}\)).

Property (1):

\[1): f(c\textbf{a}_1,\textbf{a}_2) = cf(\textbf{a}_1,\textbf{a}_2)\]

That is simple, right?

By a similar volume argument, we have

\[2): f(\textbf{a}_1+\textbf{v},\textbf{a}_2) = f(\textbf{a}_1,\textbf{a}_2) +f(\textbf{v},\textbf{a}_2)\]
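Properties (1) and (2) together say the determinant is linear in each argument. As a quick numerical sanity check (not part of the original post; NumPy is assumed available, and `det` is a small hypothetical helper viewing the determinant as a function of column vectors):

```python
import numpy as np

def det(*cols):
    """Determinant viewed as a function of its column vectors."""
    return np.linalg.det(np.column_stack(cols))

a1 = np.array([1.0, 2.0])
a2 = np.array([3.0, 5.0])
v = np.array([-1.0, 4.0])
c = 3.0

# Property (1): scaling one argument scales the determinant.
assert np.isclose(det(c * a1, a2), c * det(a1, a2))

# Property (2): additivity in one argument.
assert np.isclose(det(a1 + v, a2), det(a1, a2) + det(v, a2))
```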

Now, let us consider exchanging two vectors in \(f\):

\[f(\textbf{a}_1,\ldots, \textbf{a}_k, \ldots,\textbf{a}_i,\ldots,\textbf{a}_n)\]

The absolute value (the volume) cannot change, but the determinant is a signed volume: swapping two vectors reverses orientation, so the sign flips.

Then,

\[3): f(\textbf{a}_1,\ldots, \textbf{a}_k, \ldots,\textbf{a}_i,\ldots,\textbf{a}_n) = - f(\textbf{a}_1,\ldots, \textbf{a}_i, \ldots,\textbf{a}_k,\ldots,\textbf{a}_n)\]

Finally, if the arguments are the standard basis vectors, we have

\((4)\ f(\textbf{e}_1,\ldots, \textbf{e}_i, \ldots,\textbf{e}_k,\ldots,\textbf{e}_n)=1\), where the \(\textbf{e}_i\) are unit vectors with \(\textbf{e}_i \perp \textbf{e}_j\) for any \(i\not = j\). That is because the unit cube has volume one.
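Properties (3) and (4) can also be checked numerically (again not from the post; NumPy assumed, with the same hypothetical column-wise `det` helper):

```python
import numpy as np

def det(*cols):
    """Determinant viewed as a function of its column vectors."""
    return np.linalg.det(np.column_stack(cols))

a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.0, 3.0, 1.0])
a3 = np.array([4.0, 1.0, 1.0])

# Property (3): swapping two arguments flips the sign.
assert np.isclose(det(a1, a2, a3), -det(a3, a2, a1))

# Property (4): the standard basis vectors give determinant 1
# (the unit cube has volume one).
e = np.eye(3)
assert np.isclose(det(e[:, 0], e[:, 1], e[:, 2]), 1.0)
```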

Now, let us compute \(\textup{Det}(\textbf{a}_1,\textbf{a}_2)\) using properties (1), (2), (3), and (4).

\[\textup{Det}(\textbf{a}_1,\textbf{a}_2) = \textup{Det}\left( {v_1 \choose v_2}, {v_3 \choose v_4} \right)\\= \textup{Det}\left( {v_1 \choose 0} + {0 \choose v_2}, {v_3 \choose v_4} \right)\\= \textup{Det}\left( {v_1 \choose 0} , {v_3 \choose v_4} \right) + \textup{Det}\left( {0 \choose v_2}, {v_3 \choose v_4} \right)\\= v_1\textup{Det}\left( {1 \choose 0} , {v_3 \choose v_4} \right) + v_2\textup{Det}\left( {0 \choose 1}, {v_3 \choose v_4} \right)\\ = v_1v_3 \textup{Det}\left( {1 \choose 0}, {1 \choose 0}\right) +v_1v_4\textup{Det}\left( {1 \choose 0}, {0 \choose 1}\right) \\+v_2v_3\textup{Det}\left( {0 \choose 1}, {1 \choose 0}\right) + v_2v_4\textup{Det}\left( {0 \choose 1}, {0 \choose 1}\right) \\= v_1 v_4 - v_2v_3\]
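As a sanity check (not in the original post; NumPy assumed), the \(2\times 2\) formula \(v_1v_4 - v_2v_3\) agrees with a library determinant when \(\textbf{a}_1=(v_1,v_2)\) and \(\textbf{a}_2=(v_3,v_4)\) are the columns:

```python
import numpy as np

# a1 = (v1, v2) and a2 = (v3, v4) placed as the columns of a 2x2 matrix
v1, v2, v3, v4 = 2.0, 5.0, -1.0, 3.0
M = np.array([[v1, v3],
              [v2, v4]])

assert np.isclose(np.linalg.det(M), v1 * v4 - v2 * v3)
```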

Leibniz Formula for Determinants

Let us calculate \(\textup{Det}(\textbf{a}_1,\ldots,\textbf{a}_n)\).

\[\textup{Det}\left(\left( \begin{matrix} v_{11}\\ \vdots\\ v_{n1} \end{matrix}\right), \left( \begin{matrix} v_{12}\\ \vdots\\ v_{n2} \end{matrix}\right), \cdots, \left( \begin{matrix} v_{1n}\\ \vdots\\ v_{nn} \end{matrix}\right) \right) = \\ \textup{Det}\left( v_{11}e_1+\cdots+v_{n1}e_n, \Delta_1 \right)=\\ v_{11}\textup{Det}(e_1,\Delta_1)+\cdots+v_{n1}\textup{Det}(e_n,\Delta_1)\\= \sum_{j_1=1}^n v_{j_1,1}\textup{Det}(e_{j_1},\Delta_1)\]

where \(\Delta_1 = \left(\left( \begin{matrix} v_{12}\\ \vdots\\ v_{n2} \end{matrix}\right), \cdots, \left( \begin{matrix} v_{1n}\\ \vdots\\ v_{nn} \end{matrix}\right)\right)\) denotes the remaining columns \(\textbf{a}_2,\ldots,\textbf{a}_n\).

Now we obtain a recursive identity.

\[\textup{Det}(\textbf{a}_1,\Delta_1) = \sum_{j_1=1}^n v_{j_1,1}\textup{Det}(e_{j_1},\Delta_1)\]

We can see the pattern by expanding the second argument. First, we swap \(\textbf{a}_2\) with \(e_{j_1}\), which flips the sign by property (3); then we have

\[\textup{Det}(\textbf{a}_1,\textbf{a}_2,\Delta_2) = \\ \sum_{j_1=1}^n v_{j_1,1} \textup{Det}(\textbf{a}_2,e_{j_1},\Delta_2) \cdot (-1) \\ =\sum_{j_1=1}^n \sum_{j_2=1}^n v_{j_1,1} v_{j_2,2} \textup{Det}(e_{j_2},e_{j_1},\Delta_2) \cdot (-1)\]

Then, we swap \(e_{j_1}\) and \(e_{j_2}\) back, which flips the sign once more.

\[\textup{Det}(\textbf{a}_1,\Delta_1) =\\ \textup{Det}(\textbf{a}_1,\textbf{a}_2,\Delta_2) = \sum_{j_1=1}^n \sum_{j_2=1}^n v_{j_1,1} v_{j_2,2} \textup{Det}(e_{j_1}, e_{j_2},\Delta_2) \cdot (-1) \cdot(-1) \\= \sum_{j_1=1}^n \sum_{j_2=1}^n v_{j_1,1} v_{j_2,2} \textup{Det}(e_{j_1}, e_{j_2},\Delta_2)\]
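The swap-and-expand step above is, in effect, expansion along the first column. A minimal recursive sketch (hypothetical helper, not from the post; the factor \((-1)^{j}\) for 0-indexed row \(j\) accounts for the \(j\) swaps of property (3) needed to move \(e_{j}\) to the front):

```python
import numpy as np

def det_expand(M):
    """Determinant by recursive expansion along the first column."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for j in range(n):
        # Delete row j and column 0 to get the minor.
        minor = np.delete(np.delete(M, j, axis=0), 0, axis=1)
        # (-1)**j: each swap moving e_j to the front flips the sign.
        total += (-1) ** j * M[j, 0] * det_expand(minor)
    return total

M = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
assert np.isclose(det_expand(M), np.linalg.det(M))
```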

We construct \(\Delta_{n-1}\subset\cdots\subset\Delta_{1}\) such that \(\Delta_{i+1} \cup \{\textbf{a}_{i+1}\} = \Delta_{i}\) for \(i\in [n-2]\); in particular, \(\Delta_{n-1} = (\textbf{a}_n)\). Repeating the swap-and-expand step for every column, the sign flips cancel in pairs.

Then, we have

\[\textup{Det}(\textbf{a}_1, \textbf{a}_2,\ldots, \textbf{a}_n) = \sum_{j_1=1}^n \sum_{j_2=1}^n \cdots \sum_{j_n=1}^n v_{j_1,1}\cdots v_{j_n,n}\textup{Det}(e_{j_1},\ldots,e_{j_n})\\= \sum_{\delta = (j_1,\ldots,j_n)\in \pi_n}\text{sgn}(\delta) \prod_{i=1}^n v_{\delta(i), i}\]

where \(\pi_n\) is the set of all permutations of \(\{1,\ldots,n\}\). Any term with a repeated index vanishes, since by property (3) a determinant with two equal arguments is zero; for a permutation \(\delta\), \(\textup{Det}(e_{j_1},\ldots,e_{j_n}) = \text{sgn}(\delta)\) is \(\pm 1\) according to whether an even or odd number of swaps sorts \((j_1,\ldots,j_n)\).
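The Leibniz formula can be implemented directly by summing over all permutations (a sketch, not from the post; NumPy and the standard library are assumed, and `sgn` computes the sign by counting inversions):

```python
import numpy as np
from itertools import permutations

def sgn(p):
    """Sign of a permutation: -1 to the number of inversions."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_leibniz(M):
    """Leibniz formula: sum over permutations delta of
    sgn(delta) * prod_i M[delta(i), i]."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    return sum(sgn(p) * np.prod([M[p[i], i] for i in range(n)])
               for p in permutations(range(n)))

M = np.random.default_rng(0).normal(size=(4, 4))
assert np.isclose(det_leibniz(M), np.linalg.det(M))
```

Note this runs in \(O(n!\cdot n)\) time, so it is only a correctness check, not a practical algorithm.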
