# Matrix representation of tensors

This article was considered for deletion at Wikipedia on February 19, 2020. This is a backup of Wikipedia:Matrix_representation_of_tensors. All of its AfDs can be found at Wikipedia:Special:PrefixIndex/Wikipedia:Articles_for_deletion/Matrix_representation_of_tensors, the first at Wikipedia:Wikipedia:Articles_for_deletion/Matrix_representation_of_tensors.

The matrix representation of tensors is a way of representing tensors using matrices. The basic principle is: each superscript of the tensor is laid out along a column (the index runs vertically), and each subscript is laid out along a row (the index runs horizontally).

## 1st-order tensor

A 1st-order tensor, i.e. a vector $\vec v$, is by default expressed using covariant coordinates (as a row vector, with a lower index)

$\vec v = [v_i] = \begin{bmatrix}v_1 & v_2 & \dots & v_n \end{bmatrix}$

or using contravariant coordinates (as a column vector, with an upper index)

$\vec v = [v^i] = \begin{bmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{bmatrix}$

For orthogonal coordinate systems the covariant and contravariant coordinates coincide, i.e. $v_i=v^i$, and therefore only subscripts are typically used.
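A minimal NumPy sketch of this point (the non-orthogonal metric below is a hypothetical example, chosen only for illustration): in an orthonormal basis the metric is the identity, so lowering an index leaves the components unchanged, while a non-orthogonal metric mixes them.

```python
import numpy as np

# Orthonormal basis: the metric is the identity, so lowering an index
# leaves the components unchanged (v_i == v^i).
g_orth = np.eye(3)
v_upper = np.array([1.0, 2.0, 3.0])   # contravariant components v^i
v_lower = g_orth @ v_upper            # covariant components v_i = g_ij v^j
print(np.allclose(v_lower, v_upper))  # True

# Non-orthogonal basis (hypothetical metric): components get mixed,
# so v_i != v^i in general.
g_skew = np.array([[1.0, 0.5, 0.0],
                   [0.5, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
print(np.allclose(g_skew @ v_upper, v_upper))  # False
```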

## 2nd-order tensor

### Orthogonal coordinate systems

Because in an orthogonal system the covariant and contravariant coordinates coincide, a 2nd-order tensor can be written using only lower indices, and its matrix form is as follows

$T=[T_{ij}]= \begin{bmatrix} T_{11} & T_{12} & \dots & T_{1n} \\ T_{21} & T_{22} & \dots & T_{2n} \\ \vdots \\ T_{n1} & T_{n2} & \dots & T_{nn} \\ \end{bmatrix}$
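In this orthogonal case the tensor behaves as an ordinary matrix, and contracting it with a vector is ordinary matrix-vector multiplication. A small sketch (the numbers are hypothetical, chosen only for illustration):

```python
import numpy as np

# In an orthonormal basis a 2nd-order tensor is just an ordinary n x n
# matrix, and T_ij v_j is the usual matrix-vector product.
T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
v = np.array([1.0, 0.0, 2.0])

w = T @ v  # w_i = T_ij v_j (summation over j)
print(w)   # [2. 1. 2.]
```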

### Non-orthogonal coordinate systems

#### Introduction

If the coordinate system is not orthogonal, the matrix form used for orthogonal systems cannot be applied, because that form of the tensor "loses" information about variance, as illustrated in the example below: [1]

Let us take the metric tensor (found in relativity theory) $\eta=[\eta_{ij}]$ and perform an inner multiplication with the contravariant vector $\vec v=[v^j]$. From the properties of the metric tensor it follows that we should obtain a covariant vector, that is, a row vector with a lower index, $[v_i]$. Using the summation convention and the inner product, in index notation we have

$\eta_{ij} v^j = v_i$

so on the right-hand side we obtain the expected covariant vector (a row vector, with a lower index). However, let us see what happens when we use the matrix notation of orthogonal systems

$\eta\cdot \vec v=[\eta_{ij}]\cdot [v^j] = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{bmatrix} t \\ x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -t \\ x \\ y \\ z \end{bmatrix}$

The result is a column vector, but it should be a row vector. Thus the above matrix notation, together with ordinary matrix multiplication, did not correctly reproduce the inner multiplication, because it "lost" the information about the variance of the resulting vector. This matrix form of the tensor $\eta$ is therefore not valid.
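This loss of variance information can be sketched in NumPy (the component values $t,x,y,z$ are hypothetical sample numbers): ordinary matrix multiplication of the naive $\eta$ matrix with a column vector returns another column vector, although the index notation says the result is covariant.

```python
import numpy as np

# Naive matrix form of the Minkowski metric eta_ij, and a contravariant
# (column) vector with hypothetical components (t, x, y, z).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
v = np.array([[2.0], [3.0], [5.0], [7.0]])  # column vector

result = eta @ v
# Still a column (shape (4, 1)), although eta_ij v^j should be covariant:
# the "row vs column" variance information has been lost.
print(result.shape)  # (4, 1)
print(result.ravel())  # [-2.  3.  5.  7.]
```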

#### Notation

In the literature the problem described above is often ignored, and an incorrect matrix representation of 2nd-order tensors is used for non-orthogonal systems. However, in non-orthogonal systems such a tensor can be represented correctly in matrix notation, in such a way that matrix multiplication with a vector correctly reproduces inner multiplication, namely: [2]

• write a tensor with two covariant (lower) indices as a single-row matrix whose elements are row vectors
$[T_{ij}] = [ ~~[T_{1j}] ~~ [T_{2j}] ~~ \dots ~~ [T_{nj}] ~~] = [~~ [ T_{11} ~ T_{12} ~ \dots ~ T_{1n} ] ~~ [ T_{21} ~ T_{22} ~ \dots ~ T_{2n} ] ~~ \dots ~~ [ T_{n1} ~ T_{n2} ~ \dots ~ T_{nn} ] ~~]$

This indexing should not be confused with standard matrix indexing: although there are two subscripts here, the first subscript does not refer to a row number, since the matrix has only a single row.
• write a mixed tensor as a matrix in which the covariant (lower) index runs along the rows (it numbers the columns) and the contravariant (upper) index runs along the columns (it numbers the rows)
$[T_i^j] =\begin{bmatrix} T_1^1 & T_2^1 & \dots & T_n^1 \\ T_1^2 & T_2^2 & \dots & T_n^2 \\ \vdots \\ T_1^n & T_2^n & \dots & T_n^n \\ \end{bmatrix}$

This indexing should not be confused with conventional matrix indexing, in which the first lower index denotes the row and the second lower index denotes the column: here the lower index denotes the column, the upper index denotes the row, and there is no second lower index.
• write a tensor with two contravariant (upper) indices as a single-column matrix whose elements are column vectors
$[T^{ij}]= \begin{bmatrix} \begin{bmatrix} T^{11} \\ T^{12} \\ \vdots \\ T^{1n} \end{bmatrix} \\ \begin{bmatrix} T^{21} \\ T^{22} \\ \vdots \\ T^{2n} \end{bmatrix} \\ \vdots \\ \begin{bmatrix} T^{n1} \\ T^{n2} \\ \vdots \\ T^{nn} \end{bmatrix} \\ \end{bmatrix}$

Higher-order tensors can be represented in a similar way. Note also that inner multiplication of tensors involves a contraction, which sums only over pairs of indices of opposite variance; this is consistent with matrix multiplication, in which a matrix can be multiplied by a row vector only from the left and by a column vector only from the right.
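The correspondence between contraction and matrix multiplication can be sketched with `numpy.einsum` (the tensor values are hypothetical, for illustration only): with the convention "upper index = row, lower index = column", a mixed tensor $T^i_j$ is an ordinary matrix, and $T^i_j v^j$ is exactly the matrix-vector product.

```python
import numpy as np

# A hypothetical mixed tensor T^i_j (i = row, j = column) and a
# contravariant vector v^j.
T = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])

# Contraction over the opposite-variance pair (upper j of v, lower j of T):
w_einsum = np.einsum('ij,j->i', T, v)
# ...is exactly what matrix multiplication computes:
w_matmul = T @ v
print(np.allclose(w_einsum, w_matmul))  # True
```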

## Examples

### 2nd-order tensors

For the metric tensor $\eta=[\eta_{ij}]$ mentioned earlier, using the above notation we get

$\eta\cdot\vec v= \begin{bmatrix} \begin{bmatrix} -1 & 0 & 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix} \end{bmatrix} \cdot \begin{bmatrix} t \\ x \\ y \\ z \end{bmatrix}$

$= t\begin{bmatrix} -1 & 0 & 0 & 0 \end{bmatrix} + x\begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} + y\begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix} + z\begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -t & x & y & z \end{bmatrix}$

so we now obtain the expected row vector.
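The computation above can be sketched directly: represent $\eta$ as a list of row vectors and form the linear combination weighted by the column-vector components (the values $t,x,y,z$ are hypothetical sample numbers). The result is itself a row vector, i.e. covariant, as the index notation predicts.

```python
import numpy as np

# The "single-row matrix of row vectors" form of eta_ij.
eta_rows = [np.array([-1.0, 0.0, 0.0, 0.0]),
            np.array([ 0.0, 1.0, 0.0, 0.0]),
            np.array([ 0.0, 0.0, 1.0, 0.0]),
            np.array([ 0.0, 0.0, 0.0, 1.0])]
t, x, y, z = 2.0, 3.0, 5.0, 7.0  # hypothetical components of v^j

# Multiplying by the column vector (t, x, y, z) combines the row vectors
# linearly, so the result stays a row vector: (-t, x, y, z).
v_lower = t * eta_rows[0] + x * eta_rows[1] + y * eta_rows[2] + z * eta_rows[3]
print(v_lower)  # [-2.  3.  5.  7.]
```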

### Tensors of higher orders

In this way, we can write higher order tensors while maintaining information about the variance of their indexes, e.g.

• for the Christoffel symbols of the second kind $\Gamma_{ij}^k$, the form is a matrix whose elements are row vectors
$\begin{bmatrix} {[\Gamma_{00}^0~\Gamma_{01}^0~\Gamma_{02}^0~\Gamma_{03}^0]} & {[\Gamma_{10}^0~\Gamma_{11}^0~\Gamma_{12}^0~\Gamma_{13}^0]} & {[\Gamma_{20}^0~\Gamma_{21}^0~\Gamma_{22}^0~\Gamma_{23}^0]} & {[\Gamma_{30}^0~\Gamma_{31}^0~\Gamma_{32}^0~\Gamma_{33}^0]} \\ {[\Gamma_{00}^1~\Gamma_{01}^1~\Gamma_{02}^1~\Gamma_{03}^1]} & {[\Gamma_{10}^1~\Gamma_{11}^1~\Gamma_{12}^1~\Gamma_{13}^1]} & {[\Gamma_{20}^1~\Gamma_{21}^1~\Gamma_{22}^1~\Gamma_{23}^1]} & {[\Gamma_{30}^1~\Gamma_{31}^1~\Gamma_{32}^1~\Gamma_{33}^1]} \\ {[\Gamma_{00}^2~\Gamma_{01}^2~\Gamma_{02}^2~\Gamma_{03}^2]} & {[\Gamma_{10}^2~\Gamma_{11}^2~\Gamma_{12}^2~\Gamma_{13}^2]} & {[\Gamma_{20}^2~\Gamma_{21}^2~\Gamma_{22}^2~\Gamma_{23}^2]} & {[\Gamma_{30}^2~\Gamma_{31}^2~\Gamma_{32}^2~\Gamma_{33}^2]} \\ {[\Gamma_{00}^3~\Gamma_{01}^3~\Gamma_{02}^3~\Gamma_{03}^3]} & {[\Gamma_{10}^3~\Gamma_{11}^3~\Gamma_{12}^3~\Gamma_{13}^3]} & {[\Gamma_{20}^3~\Gamma_{21}^3~\Gamma_{22}^3~\Gamma_{23}^3]} & {[\Gamma_{30}^3~\Gamma_{31}^3~\Gamma_{32}^3~\Gamma_{33}^3]} \end{bmatrix}$

• for the Levi-Civita symbol $\epsilon_{ijk}$, the form is a row vector whose elements are row vectors whose elements are row vectors
$\epsilon_{ijk}=[~~ [[0,0,0],~[0,0,1],~[0,-1,0]], ~~ [[0,0,-1],~[0,0,0],~[1,0,0]], ~~ [[0,1,0],~[-1,0,0],~[0,0,0]] ~~]$
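The nested row-vector form of the Levi-Civita symbol translates directly into a nested list. As a sketch check (in an orthonormal basis, where variance can be ignored), contracting $\epsilon_{ijk}$ with two vectors reproduces the cross product:

```python
import numpy as np

# The "row vector of row vectors of row vectors" form of eps_ijk.
eps = [[[0, 0, 0], [0, 0, 1], [0, -1, 0]],
       [[0, 0, -1], [0, 0, 0], [1, 0, 0]],
       [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]]

# (a x b)_i = eps_ijk a^j b^k
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
cross = np.einsum('ijk,j,k->i', np.array(eps), a, b)
print(cross)  # [0. 0. 1.], i.e. a x b
```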

## References

1. Toth, Viktor (August 2005). On tensors and their matrix representations.
2. Zhang, Hongbing (2017). Matrix-Representations of Tensors. viXra.