Matrix representation of tensors
Matrix representation of tensors is a way of representing tensors using matrices. The basic principle is: each upper (contravariant) index of the tensor is laid out vertically, as a column, and each lower (covariant) index is laid out horizontally, as a row.
1st-order tensor
A 1st-order tensor, i.e. a vector <math>\vec v</math>, is by default expressed using covariant coordinates (as a row vector, with a lower index)
- <math>
\vec v = [v_i] = \begin{bmatrix}v_1 & v_2 & \dots & v_n \end{bmatrix} </math>
or using contravariant coordinates (as a column vector, with an upper index)
- <math>
\vec v = [v^i] = \begin{bmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{bmatrix} </math>
For orthogonal coordinate systems, the covariant and contravariant coordinates are equal, i.e. <math>v_i=v^i</math>, and therefore only subscripts are typically used.
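The distinction between the two layouts can be sketched with NumPy arrays (an illustration assuming NumPy; the array shapes, not the library, are the point):

```python
import numpy as np

# Covariant coordinates: a row vector of shape (1, n).
v_co = np.array([[1.0, 2.0, 3.0]])
# Contravariant coordinates: a column vector of shape (n, 1).
v_contra = np.array([[1.0], [2.0], [3.0]])

print(v_co.shape)      # (1, 3)
print(v_contra.shape)  # (3, 1)
# In an orthogonal system the coordinates coincide, so the two
# layouts differ only by a transpose.
print(np.array_equal(v_co, v_contra.T))  # True
```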
2nd-order tensor
Orthogonal coordinate systems
Because in an orthogonal system the covariant and contravariant coordinates are equal, we write a 2nd-order tensor using only lower indices, and its matrix form can be as follows
- <math>
T=[T_{ij}]= \begin{bmatrix} T_{11} & T_{12} & \dots & T_{1n} \\ T_{21} & T_{22} & \dots & T_{2n} \\ \vdots \\ T_{n1} & T_{n2} & \dots & T_{nn} \\ \end{bmatrix} </math>
Non-orthogonal coordinate systems
Introduction
If the coordinate system is not orthogonal, then the matrix form used for orthogonal systems cannot be applied, because this form of the tensor "loses" information about variance, as illustrated in the example below: [1]
Let's take the metric tensor (found in relativity theory) <math>\eta=[\eta_{ij}]</math> and perform an inner multiplication with the contravariant vector <math>\vec v=[v^j]</math>. From the properties of the metric tensor it follows that we should get a covariant vector, that is, a row vector with a lower index, <math>[v_i]</math>. Using the summation convention and the inner product, in index notation we have
- <math>
\eta_{ij} v^j = v_i </math>
so on the right-hand side of the equality we get the expected covariant vector (a row vector, i.e. with a lower index). However, let's see what happens when we use the matrix notation customary for orthogonal systems
- <math>
\eta\cdot \vec v=[\eta_{ij}]\cdot [v^j] = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{bmatrix} t \\ x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -t \\ x \\ y \\ z \end{bmatrix} </math>
We obtained a column vector, but the result should have been a row vector. Thus the above matrix notation together with matrix multiplication did not correctly reproduce the inner multiplication, because it "lost" the information about the variance of the resulting vector. So this matrix form of the <math>\eta</math> tensor is not valid.
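This failure can be reproduced numerically. A minimal sketch, assuming NumPy and illustrative values t=2, x=3, y=5, z=7 (not from the article):

```python
import numpy as np

# Naive "orthogonal-style" matrix form of the metric tensor eta_ij.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
# Contravariant vector [v^j] = (t, x, y, z) as a column.
v = np.array([[2.0], [3.0], [5.0], [7.0]])

result = eta @ v
print(result.ravel())  # [-2.  3.  5.  7.]
# The shape is still (4, 1), a column, although eta_ij v^j = v_i
# should be covariant (a row): the variance information is lost.
print(result.shape)    # (4, 1)
```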
Notation
In the literature, the problem described above is often ignored, and an incorrect matrix representation of 2nd-order tensors is used for non-orthogonal systems. However, in non-orthogonal systems such a tensor can be represented correctly in matrix notation, in such a way that matrix multiplication with a vector correctly reproduces the inner multiplication, namely: [2]
- write a tensor with two covariant (lower) indices as a single-row matrix whose elements are row vectors
- <math>
[T_{ij}] = [ ~~[T_{1j}] ~~ [T_{2j}] ~~ \dots ~~ [T_{nj}] ~~] = [~~ [ T_{11} ~ T_{12} ~ \dots ~ T_{1n} ] ~~ [ T_{21} ~ T_{22} ~ \dots ~ T_{2n} ] ~~ \dots ~~ [ T_{n1} ~ T_{n2} ~ \dots ~ T_{nn} ] ~~] </math>
- This indexing should not be confused with standard matrix indexing: although there are two subscripts here, the first (left) subscript does not refer to a row number (the matrix has only one row).
Details - The inner product <math>T_{ij} v^i = P_j</math>, which returns a covariant (row) vector, agrees with the result of matrix multiplication for the tensor form adopted above (i.e. a row vector whose elements are row vectors), as the sketch after the formula also verifies
- <math>
T\cdot v = [T_{ij}]\cdot [v^i] = [T_{1j}] v^1 + [T_{2j}] v^2 + \dots + [T_{nj}] v^n </math>
- <math>
= \begin{bmatrix} (v^1T_{11}+v^2T_{21}+\dots+v^nT_{n1}) & (v^1T_{12}+v^2T_{22}+\dots+v^nT_{n2}) & \dots & (v^1T_{1n}+v^2T_{2n}+\dots+v^nT_{nn}) \end{bmatrix} = [P_j] </math>
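A minimal sketch of this row-of-row-vectors representation (assuming NumPy; random entries stand in for a concrete tensor):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# [T_ij]: a single "row" of n row vectors [T_1j], [T_2j], ..., [T_nj].
T = [rng.random((1, n)) for _ in range(n)]
# Components v^1 ... v^n of a contravariant vector.
v = rng.random(n)

# Inner product T_ij v^i = P_j: a weighted sum of the row-vector elements.
P = sum(v[i] * T[i] for i in range(n))
print(P.shape)  # (1, 3): a row vector, i.e. covariant, as expected

# Cross-check against plain index notation: P_j = sum_i v^i T_ij.
T_flat = np.vstack(T)                      # T_flat[i, j] = T_ij
print(np.allclose(P.ravel(), v @ T_flat))  # True
```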
- we should write the mixed tensor as a matrix in which the rows correspond to the contravariant (upper) index and the columns correspond to the covariant (lower) index (see the sketch after this item)
- <math>
[T_i^j] =\begin{bmatrix} T_1^1 & T_2^1 & \dots & T_n^1 \\ T_1^2 & T_2^2 & \dots & T_n^2 \\ \vdots \\ T_1^n & T_2^n & \dots & T_n^n \\ \end{bmatrix} </math>
- this indexing should not be confused with conventional matrix indexing, in which the first (left) lower index denotes the row and the second (right) lower index denotes the column; here the lower index denotes a column, the upper index denotes a row, and there is no second lower index at all.
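In this layout the mixed tensor acts on a contravariant vector exactly like an ordinary matrix acting on a column. A minimal sketch, assuming NumPy and hypothetical entries:

```python
import numpy as np

n = 3
# M[j-1, i-1] plays the role of T_i^j: upper index -> row, lower index -> column.
M = np.arange(1.0, n * n + 1).reshape(n, n)
# Contravariant vector [v^i] as a column.
v = np.array([[1.0], [2.0], [3.0]])

# The contraction T_i^j v^i = P^j is plain matrix-vector multiplication.
P = M @ v
print(P.ravel())  # [14. 32. 50.]
print(P.shape)    # (3, 1): a column, i.e. the contravariant vector [P^j]
```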
Details - The mixed tensor can also be represented in other ways
- columnar vector whose elements are row vectors
- <math>
[T_i^j] = \begin{bmatrix} {[T_i^1]} \\ {[T_i^2]} \\ \vdots \\ {[T_i^n]} \end{bmatrix} = \begin{bmatrix} {[T_1^1 ~ T_2^1 ~ \dots ~ T_n^1]} \\ {[T_1^2 ~ T_2^2 ~ \dots ~ T_n^2]} \\ \vdots \\ {[T_1^n ~ T_2^n ~ \dots ~ T_n^n]} \\ \end{bmatrix} </math>
- a row vector whose elements are column vectors
- <math>
[T_i^j] = [ ~~ [T_1^j] ~~ [T_2^j] ~~ \dots ~~ [T_n^j] ~~] =\begin{bmatrix} \begin{bmatrix}T_1^1 \\ T_1^2 \\ \vdots \\ T_1^n \end{bmatrix} & \begin{bmatrix}T_2^1 \\ T_2^2 \\ \vdots \\ T_2^n \end{bmatrix} & \dots & \begin{bmatrix}T_n^1 \\ T_n^2 \\ \vdots \\ T_n^n \end{bmatrix} & \end{bmatrix} </math>
However, these forms are usually not used because they contain redundant information (regarding the row/column structure of the elements).
- write a tensor with two contravariant (upper) indices as a single-column matrix whose elements are column vectors
- <math>
[T^{ij}]= \begin{bmatrix} \begin{bmatrix} T^{11} \\ T^{12} \\ \vdots \\ T^{1n} \end{bmatrix} \\ \begin{bmatrix} T^{21} \\ T^{22} \\ \vdots \\ T^{2n} \end{bmatrix} \\ \vdots \\ \begin{bmatrix} T^{n1} \\ T^{n2} \\ \vdots \\ T^{nn} \end{bmatrix} \\ \end{bmatrix} </math>
Details - The inner product <math>T^{ij} v_i = v_i T^{ij} = P^j</math>, which returns a contravariant (column) vector, agrees with the result of matrix multiplication, as the sketch after the formula also verifies (note that to perform it we must multiply by the row vector on the left)
- <math>
v\cdot T = [v_i]\cdot [T^{ij}] = v_1\cdot [T^{1j}] + v_2\cdot [T^{2j}] + \dots + v_n\cdot [T^{nj}] </math>
- <math>
= \begin{bmatrix} v_1T^{11}+v_2T^{21}+\dots+v_nT^{n1} \\ v_1T^{12}+v_2T^{22}+\dots+v_nT^{n2} \\ \vdots \\ v_1T^{1n}+v_2T^{2n}+\dots+v_nT^{nn} \\ \end{bmatrix} = [P^j] </math>
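A minimal sketch of the column-of-column-vectors representation and the left multiplication (assuming NumPy; random entries stand in for a concrete tensor):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# [T^ij]: a single "column" of n column vectors [T^1j], [T^2j], ..., [T^nj].
T = [rng.random((n, 1)) for _ in range(n)]
# Components v_1 ... v_n of a covariant (row) vector.
v = rng.random(n)

# Inner product v_i T^ij = P^j: the row vector multiplies from the LEFT.
P = sum(v[i] * T[i] for i in range(n))
print(P.shape)  # (3, 1): a column vector, i.e. contravariant, as expected

# Cross-check against plain index notation: P^j = sum_i v_i T^ij.
T_flat = np.hstack(T).T                    # T_flat[i, j] = T^ij
print(np.allclose(P.ravel(), v @ T_flat))  # True
```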
Higher-order tensors can be represented in a similar way. Also note that the inner multiplication of tensors involves a contraction, which allows summation only over indices of opposite variance (in harmony with matrix multiplication: a matrix can be multiplied by a row vector only from the left, and by a column vector only from the right).
Examples
2nd-order tensors
For the metric tensor <math>\eta=[\eta_{ij}]</math> mentioned earlier, using the above notation, we get
- <math>
\eta\cdot\vec v= \begin{bmatrix} \begin{bmatrix} -1 & 0 & 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix} \end{bmatrix} \cdot \begin{bmatrix} t \\ x \\ y \\ z \end{bmatrix} </math>
- <math>
= t\begin{bmatrix} -1 & 0 & 0 & 0 \end{bmatrix} + x\begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} + y\begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix} + z\begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -t & x & y & z \end{bmatrix} </math>
As you can see, we now obtain the expected row vector.
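The same computation as a minimal NumPy sketch, with illustrative values t=2, x=3, y=5, z=7:

```python
import numpy as np

# The single-row form of eta: four row vectors [eta_1j], ..., [eta_4j].
eta_rows = [np.array([[-1.0, 0.0, 0.0, 0.0]]),
            np.array([[ 0.0, 1.0, 0.0, 0.0]]),
            np.array([[ 0.0, 0.0, 1.0, 0.0]]),
            np.array([[ 0.0, 0.0, 0.0, 1.0]])]
t, x, y, z = 2.0, 3.0, 5.0, 7.0

# eta . v = t[eta_1j] + x[eta_2j] + y[eta_3j] + z[eta_4j]
P = t * eta_rows[0] + x * eta_rows[1] + y * eta_rows[2] + z * eta_rows[3]
print(P)        # [[-2.  3.  5.  7.]]
print(P.shape)  # (1, 4): a row vector, so the variance is now correct
```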
Higher-order tensors
In this way we can write higher-order tensors while preserving information about the variance of their indices, e.g.
- for the Christoffel symbols of the second kind <math>\Gamma_{ij}^k</math>, the form is a matrix whose elements are row vectors
- <math>
\begin{bmatrix} {[\Gamma_{00}^0~\Gamma_{01}^0~\Gamma_{02}^0~\Gamma_{03}^0]} & {[\Gamma_{10}^0~\Gamma_{11}^0~\Gamma_{12}^0~\Gamma_{13}^0]} & {[\Gamma_{20}^0~\Gamma_{21}^0~\Gamma_{22}^0~\Gamma_{23}^0]} & {[\Gamma_{30}^0~\Gamma_{31}^0~\Gamma_{32}^0~\Gamma_{33}^0]} \\ {[\Gamma_{00}^1~\Gamma_{01}^1~\Gamma_{02}^1~\Gamma_{03}^1]} & {[\Gamma_{10}^1~\Gamma_{11}^1~\Gamma_{12}^1~\Gamma_{13}^1]} & {[\Gamma_{20}^1~\Gamma_{21}^1~\Gamma_{22}^1~\Gamma_{23}^1]} & {[\Gamma_{30}^1~\Gamma_{31}^1~\Gamma_{32}^1~\Gamma_{33}^1]} \\ {[\Gamma_{00}^2~\Gamma_{01}^2~\Gamma_{02}^2~\Gamma_{03}^2]} & {[\Gamma_{10}^2~\Gamma_{11}^2~\Gamma_{12}^2~\Gamma_{13}^2]} & {[\Gamma_{20}^2~\Gamma_{21}^2~\Gamma_{22}^2~\Gamma_{23}^2]} & {[\Gamma_{30}^2~\Gamma_{31}^2~\Gamma_{32}^2~\Gamma_{33}^2]} \\ {[\Gamma_{00}^3~\Gamma_{01}^3~\Gamma_{02}^3~\Gamma_{03}^3]} & {[\Gamma_{10}^3~\Gamma_{11}^3~\Gamma_{12}^3~\Gamma_{13}^3]} & {[\Gamma_{20}^3~\Gamma_{21}^3~\Gamma_{22}^3~\Gamma_{23}^3]} & {[\Gamma_{30}^3~\Gamma_{31}^3~\Gamma_{32}^3~\Gamma_{33}^3]} \end{bmatrix} </math>
- for the Levi-Civita symbol <math>\epsilon_{ijk}</math>, the form is a row vector whose elements are row vectors whose elements are row vectors
- <math>
\epsilon_{ijk}=[~~ [[0,0,0],~[0,0,1],~[0,-1,0]], ~~ [[0,0,-1],~[0,0,0],~[1,0,0]], ~~ [[0,1,0],~[-1,0,0],~[0,0,0]] ~~] </math>
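Since every nesting level here is a row, the symbol maps directly onto nested Python lists. A minimal sketch (the cross-product check is an added illustration, not from the article):

```python
# eps[i][j][k] = epsilon_{ijk}, with 0-based indices.
eps = [[[0, 0, 0], [0, 0, 1], [0, -1, 0]],
       [[0, 0, -1], [0, 0, 0], [1, 0, 0]],
       [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]]

# Sanity check: contracting epsilon_{ijk} a^i b^j gives the cross product (a x b)_k.
a, b = [1, 0, 0], [0, 1, 0]
c = [sum(eps[i][j][k] * a[i] * b[j] for i in range(3) for j in range(3))
     for k in range(3)]
print(c)  # [0, 0, 1], i.e. e1 x e2 = e3
```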
References
1. Toth, Viktor (August 2005). On tensors and their matrix representations. https://www.vttoth.com/CMS/physics-notes/139-on-tensors-and-their-matrix-representations.
2. Zhang, Hongbing (2017). Matrix-Representations of Tensors. viXra. https://vixra.org/pdf/1710.0196v1.pdf.