Matrix representation of tensors

Matrix representation of tensors is a way of representing tensors using matrices. The basic principle of the matrix representation of tensors is: each superscript (contravariant index) of the tensor is associated with a column layout (its values are stacked vertically), and each subscript (covariant index) with a row layout (its values are laid out horizontally).

1st-order tensor

A 1st-order tensor, i.e. a vector <math>\vec v</math>, is expressed either using covariant coordinates (as a row vector, with a lower index)

<math>

\vec v = [v_i] = \begin{bmatrix}v_1 & v_2 & \dots & v_n \end{bmatrix} </math>

or contravariant (as a column vector, with an upper index)

<math>

\vec v = [v^i] = \begin{bmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{bmatrix} </math>

For orthogonal coordinate systems, covariant and contravariant coordinates are equal, i.e. <math>v_i=v^i</math>, and therefore only subscripts are typically used.
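
As a quick numerical illustration, here is a minimal NumPy sketch (the metric <math>g_{ij}</math> and all component values below are made-up examples, not taken from the article): lowering an index via <math>v_i = g_{ij} v^j</math> leaves the components unchanged exactly when the metric is the identity, i.e. in an orthonormal basis, while a non-orthogonal metric changes them.

<syntaxhighlight lang="python">
import numpy as np

v_up = np.array([1.0, 2.0, 3.0])          # contravariant components [v^j] (example values)

g_orthonormal = np.eye(3)                 # identity metric: orthonormal basis
g_skewed = np.array([[1.0, 0.5, 0.0],     # made-up metric of a non-orthogonal basis
                     [0.5, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

# index lowering: v_i = g_ij v^j
v_down_orthonormal = np.einsum('ij,j->i', g_orthonormal, v_up)
v_down_skewed = np.einsum('ij,j->i', g_skewed, v_up)

print(np.array_equal(v_down_orthonormal, v_up))  # True: v_i = v^i
print(np.array_equal(v_down_skewed, v_up))       # False: the components differ
</syntaxhighlight>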

2nd-order tensor

Orthogonal coordinate systems

Because in an orthogonal system covariant and contravariant coordinates are equal, we write a 2nd-order tensor using only lower indices, and its matrix form can be as follows

<math>

T=[T_{ij}]= \begin{bmatrix} T_{11} & T_{12} & \dots & T_{1n} \\ T_{21} & T_{22} & \dots & T_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ T_{n1} & T_{n2} & \dots & T_{nn} \\ \end{bmatrix} </math>
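
In this orthogonal setting the matrix form composes correctly with ordinary matrix multiplication: contracting <math>T_{ij}</math> with <math>v_j</math> in index notation gives the same components as a plain matrix-vector product. A minimal NumPy sketch (the component values of <math>T</math> and <math>v</math> are arbitrary examples):

<syntaxhighlight lang="python">
import numpy as np

T = np.arange(1.0, 10.0).reshape(3, 3)   # arbitrary example components [T_ij]
v = np.array([1.0, 2.0, 3.0])            # arbitrary example components [v_j]

contracted = np.einsum('ij,j->i', T, v)  # the contraction T_ij v_j in index notation
print(np.allclose(contracted, T @ v))    # True: agrees with the matrix-vector product
</syntaxhighlight>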

Non-orthogonal coordinate systems

Introduction

If the coordinate system is not orthogonal, then the matrix form used for orthogonal systems cannot be applied, because this form of the tensor "loses" the information about variance, as illustrated in the example below: [1]

Let's take the metric tensor (found in relativity theory) <math>\eta=[\eta_{ij}]</math> and perform an inner multiplication with a contravariant vector <math>\vec v=[v^j]</math>. From the properties of the metric tensor it follows that we should get a covariant vector (a row vector, with a lower index: <math>[v_i]</math>). Using the summation convention, the inner product in index notation is

<math>

\eta_{ij} v^j = v_i </math>

so on the right-hand side we get the expected covariant vector (a row vector, with a lower index). However, let's see what happens when we use the matrix notation of orthogonal systems

<math>

\eta\cdot \vec v=[\eta_{ij}]\cdot [v^j] = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{bmatrix} t \\ x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -t \\ x \\ y \\ z \end{bmatrix} </math>

We got a column vector as a result, but it should have come out as a row. Thus the above matrix notation, together with matrix multiplication, did not correctly reproduce the inner multiplication, because it "lost" the information about the variance of the resulting vector. So this matrix form of the <math>\eta</math> tensor is not valid.
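
The same mismatch can be seen numerically. In the NumPy sketch below, <math>t, x, y, z</math> are replaced by placeholder values: the plain matrix product produces the correct components, but returns them with the shape of a column, so the row/column layout no longer encodes the variance of the result.

<syntaxhighlight lang="python">
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric [eta_ij]
t, x, y, z = 2.0, 3.0, 5.0, 7.0        # placeholder values for the components of [v^j]
v_up = np.array([[t], [x], [y], [z]])  # column vector, shape (4, 1)

v_low = eta @ v_up                     # plain matrix product
print(v_low.ravel())                   # [-2. 3. 5. 7.]: the correct components ...
print(v_low.shape)                     # (4, 1): ... but still a column, so the variance was lost
</syntaxhighlight>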

Notation

In the literature, the problem described above is often not taken into account, and for non-orthogonal systems an incorrect representation of a 2nd-order tensor in the form of a matrix is used. However, such a tensor can be represented correctly in matrix notation even in non-orthogonal systems, in such a way that matrix multiplication with a vector correctly reproduces inner multiplication, namely: [2]

  • write a tensor with two covariant (lower) indices as a single-row matrix whose elements are row vectors
<math>

[T_{ij}] = [ ~~[T_{1j}] ~~ [T_{2j}] ~~ \dots ~~ [T_{nj}] ~~] = [~~ [ T_{11} ~ T_{12} ~ \dots ~ T_{1n} ] ~~ [ T_{21} ~ T_{22} ~ \dots ~ T_{2n} ] ~~ \dots ~~ [ T_{n1} ~ T_{n2} ~ \dots ~ T_{nn} ] ~~] </math>

This indexing should not be confused with standard matrix indexing: although we have two subscripts here, the first (left) subscript does not denote a row number, since the matrix is single-row.
  • write a mixed tensor as a matrix in which the columns are numbered by the covariant (lower) index and the rows by the contravariant (upper) index
<math>

[T_i^j] =\begin{bmatrix} T_1^1 & T_2^1 & \dots & T_n^1 \\ T_1^2 & T_2^2 & \dots & T_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ T_1^n & T_2^n & \dots & T_n^n \\ \end{bmatrix} </math>

This indexing should not be confused with conventional matrix indexing, in which the first (lower-left) index denotes the row and the second (lower-right) index the column: here the lower index denotes the column, the upper index denotes the row, and there is no second lower index at all.
  • write a tensor with two contravariant (upper) indices as a single-column matrix whose elements are column vectors
<math>

[T^{ij}]= \begin{bmatrix} \begin{bmatrix} T^{11} \\ T^{12} \\ \vdots \\ T^{1n} \end{bmatrix} \\ \begin{bmatrix} T^{21} \\ T^{22} \\ \vdots \\ T^{2n} \end{bmatrix} \\ \vdots \\ \begin{bmatrix} T^{n1} \\ T^{n2} \\ \vdots \\ T^{nn} \end{bmatrix} \\ \end{bmatrix} </math>

Higher-order tensors can be represented in a similar way. Note also that inner multiplication of tensors involves a contraction, which allows summation only over indices of opposite variance; this is in harmony with matrix multiplication, where a matrix can be multiplied by a row vector only from the left and by a column vector only from the right.
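
This convention can be mimicked in code. The sketch below (a minimal illustration, not taken from the article's references) stores a tensor with two lower indices as a single-row matrix whose elements are NumPy row vectors, and multiplies it by a column vector the way the next section does: the product is a linear combination of row vectors, so the result keeps the shape of a row.

<syntaxhighlight lang="python">
import numpy as np

def times_column(rows, v_up):
    """Multiply a single-row matrix whose elements are row vectors by a
    column vector [v^i]: the result is a linear combination of the row
    vectors, and therefore itself a row vector."""
    return sum(c * r for c, r in zip(v_up, rows))

# [eta_ij] stored as a single row of row vectors, one per value of the outer index
eta_rows = [np.array([-1.0, 0.0, 0.0, 0.0]),
            np.array([ 0.0, 1.0, 0.0, 0.0]),
            np.array([ 0.0, 0.0, 1.0, 0.0]),
            np.array([ 0.0, 0.0, 0.0, 1.0])]

t, x, y, z = 2.0, 3.0, 5.0, 7.0              # placeholder values
print(times_column(eta_rows, [t, x, y, z]))  # [-2. 3. 5. 7.] -> the row vector [-t x y z]
</syntaxhighlight>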

Examples

2nd-order tensors

For the metric tensor <math>\eta=[\eta_{ij}]</math> mentioned earlier, using the above notation we get

<math>

\eta\cdot\vec v= \begin{bmatrix} \begin{bmatrix} -1 & 0 & 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix} & \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix} \end{bmatrix} \cdot \begin{bmatrix} t \\ x \\ y \\ z \end{bmatrix} </math>

<math>

= t\begin{bmatrix} -1 & 0 & 0 & 0 \end{bmatrix} + x\begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} + y\begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix} + z\begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -t & x & y & z \end{bmatrix} </math>

As you can see, we now get the expected row vector.
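
As a cross-check (a small NumPy sketch with the same placeholder values used above), the row obtained from this notation agrees component-wise with the index-notation contraction <math>\eta_{ij} v^j = v_i</math>:

<syntaxhighlight lang="python">
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
v_up = np.array([2.0, 3.0, 5.0, 7.0])     # placeholder values for (t, x, y, z)

v_down = np.einsum('ij,j->i', eta, v_up)  # the contraction eta_ij v^j = v_i
print(v_down)                             # [-2. 3. 5. 7.], i.e. (-t, x, y, z)
</syntaxhighlight>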

Tensors of higher orders

In this way we can write higher-order tensors while preserving the information about the variance of their indices, e.g.

  • for the Christoffel symbols of the second kind <math>\Gamma_{ij}^k</math> the form will be a matrix whose elements are row vectors
<math>

\begin{bmatrix} {[\Gamma_{00}^0~\Gamma_{01}^0~\Gamma_{02}^0~\Gamma_{03}^0]} & {[\Gamma_{10}^0~\Gamma_{11}^0~\Gamma_{12}^0~\Gamma_{13}^0]} & {[\Gamma_{20}^0~\Gamma_{21}^0~\Gamma_{22}^0~\Gamma_{23}^0]} & {[\Gamma_{30}^0~\Gamma_{31}^0~\Gamma_{32}^0~\Gamma_{33}^0]} \\ {[\Gamma_{00}^1~\Gamma_{01}^1~\Gamma_{02}^1~\Gamma_{03}^1]} & {[\Gamma_{10}^1~\Gamma_{11}^1~\Gamma_{12}^1~\Gamma_{13}^1]} & {[\Gamma_{20}^1~\Gamma_{21}^1~\Gamma_{22}^1~\Gamma_{23}^1]} & {[\Gamma_{30}^1~\Gamma_{31}^1~\Gamma_{32}^1~\Gamma_{33}^1]} \\ {[\Gamma_{00}^2~\Gamma_{01}^2~\Gamma_{02}^2~\Gamma_{03}^2]} & {[\Gamma_{10}^2~\Gamma_{11}^2~\Gamma_{12}^2~\Gamma_{13}^2]} & {[\Gamma_{20}^2~\Gamma_{21}^2~\Gamma_{22}^2~\Gamma_{23}^2]} & {[\Gamma_{30}^2~\Gamma_{31}^2~\Gamma_{32}^2~\Gamma_{33}^2]} \\ {[\Gamma_{00}^3~\Gamma_{01}^3~\Gamma_{02}^3~\Gamma_{03}^3]} & {[\Gamma_{10}^3~\Gamma_{11}^3~\Gamma_{12}^3~\Gamma_{13}^3]} & {[\Gamma_{20}^3~\Gamma_{21}^3~\Gamma_{22}^3~\Gamma_{23}^3]} & {[\Gamma_{30}^3~\Gamma_{31}^3~\Gamma_{32}^3~\Gamma_{33}^3]} \end{bmatrix} </math>

  • for the Levi-Civita symbol <math>\epsilon_{ijk}</math> the form will be a row vector whose elements are row vectors whose elements are row vectors
<math>

\epsilon_{ijk}=[~ [[0,0,0],~ [0,0,1],~ [0,-1,0]],~~ [[0,0,-1],~ [0,0,0],~ [1,0,0]],~~ [[0,1,0],~ [-1,0,0],~ [0,0,0]] ~] </math>
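
For reference, these nested values match the standard Levi-Civita symbol. A minimal NumPy sketch (the check vectors a and b are arbitrary examples) builds <math>\epsilon_{ijk}</math> as a 3×3×3 array and verifies one defining use, the cross product via contraction:

<syntaxhighlight lang="python">
import numpy as np
from itertools import permutations

# epsilon_ijk = sign of the permutation (i, j, k); all other entries stay zero
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

print(eps[0].tolist())  # [[0,0,0],[0,0,1],[0,-1,0]] (as floats): the first inner block above

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])  # arbitrary example vectors
print(np.allclose(np.einsum('ijk,j,k->i', eps, a, b), np.cross(a, b)))  # True: (a x b)_i = eps_ijk a_j b_k
</syntaxhighlight>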

References