
Tensors and Tensor Notation

Let the Cartesian coordinates $x$, $y$, $z$ be written as the $x_i$, where $i$ runs from 1 to 3. In other words, $x=x_1$, $y= x_2$, and $z= x_3$. Incidentally, in the following, any lowercase roman subscript (e.g., $i$, $j$, $k$) is assumed to run from 1 to 3. We can also write the Cartesian components of a general vector ${\bf v}$ as the $v_i$. In other words, $v_x= v_1$, $v_y= v_2$, and $v_z= v_3$. By contrast, a scalar is represented as a variable without a subscript: e.g., $a$, $\phi$. Thus, a scalar--which is a tensor of order zero--is represented as a variable with zero subscripts, and a vector--which is a tensor of order one--is represented as a variable with one subscript. It stands to reason, therefore, that a tensor of order two is represented as a variable with two subscripts: e.g., $a_{ij}$, $\sigma_{ij}$. Moreover, an $n$th-order tensor is represented as a variable with $n$ subscripts: e.g., $a_{ijk}$ is a third-order tensor, and $b_{ijkl}$ a fourth-order tensor. Note that a general $n$th-order tensor has $3^n$ independent components.
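As a quick sketch of this notation in code (the component values below are hypothetical, chosen only for illustration), a vector's components $v_i$ can be stored in an ordinary list, and the count of independent components of an $n$th-order tensor is simply $3^n$:

```python
# Store x_1, x_2, x_3 in a 0-indexed Python list, so x[i-1] holds x_i.
# The values are hypothetical, not taken from the text.
x = [1.5, -2.0, 0.5]

# A general nth-order tensor carries n subscripts, each running from 1 to 3,
# so it has 3**n independent components.
components = {n: 3**n for n in range(4)}
print(components)  # scalar: 1, vector: 3, second order: 9, third order: 27
```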

Now, the components of a second-order tensor are conveniently visualized as a two-dimensional matrix, just as the components of a vector are sometimes visualized as a one-dimensional matrix. However, it is important to recognize that an $n$th-order tensor is not simply another name for an $n$-dimensional matrix. A matrix is just an ordered set of numbers. A tensor, on the other hand, is an ordered set of components that have specific transformation properties under rotation of the coordinate axes. (See Section B.3.)

Consider two vectors ${\bf a}$ and ${\bf b}$ that are represented as $a_i$ and $b_i$, respectively, in tensor notation. According to Section A.6, the scalar product of these two vectors takes the form

\begin{displaymath}
{\bf a}\cdot{\bf b} = a_1\,b_1+a_2\,b_2+a_3\,b_3.
\end{displaymath} (1485)

The above expression can be written more compactly as
\begin{displaymath}
{\bf a}\cdot{\bf b} = a_i\,b_i.
\end{displaymath} (1486)

Here, we have made use of the Einstein summation convention, according to which, in an expression containing lower case roman subscripts, any subscript that appears twice (and only twice) in any term of the expression is assumed to be summed from 1 to 3 (unless stated otherwise). Thus, $a_i\,b_i= a_1\,b_1+a_2\,b_2+a_3\,b_3$, and $a_{ij}\,b_j= a_{i1}\,b_1+a_{i2}\,b_2+a_{i3}\,b_3$. Note that when an index is summed it becomes a dummy index, and can be written as any (unique) symbol: i.e., $a_{ij}\,b_j$ and $a_{ip}\,b_p$ are equivalent. Moreover, only non-summed, or free, indices count toward the order of a tensor expression. Thus, $a_{ii}$ is a zeroth-order tensor (because there are no free indices), and $a_{ij}\,b_j$ is a first-order tensor (because there is only one free index). The process of reducing the order of a tensor expression by summing indices is known as contraction. For example, $a_{ii}$ is a zeroth-order contraction of the second-order tensor $a_{ij}$. Incidentally, when two tensors are multiplied together without contraction the resulting tensor is called an outer product: e.g., the second-order tensor $a_i\,b_j$ is the outer product of the two first-order tensors $a_i$ and $b_i$. Likewise, when two tensors are multiplied together in a manner that involves contraction then the resulting tensor is called an inner product: e.g., the first-order tensor $a_{ij}\,b_j$ is an inner product of the second-order tensor $a_{ij}$ and the first-order tensor $b_i$. Note, from Equation (1486), that the scalar product of two vectors is equivalent to the inner product of the corresponding first-order tensors.
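The summation convention, contraction, and the inner/outer product distinction can be sketched numerically in Python (the component values below are hypothetical, chosen only to make the sums concrete):

```python
# Hypothetical components of a_i, b_i, and a second-order tensor a_ij.
a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
A = [[1.0, 2.0, 0.0],
     [0.0, 3.0, 1.0],
     [2.0, 0.0, 1.0]]

# a_i b_i: the repeated index i is summed from 1 to 3 (0 to 2 in Python).
# No free indices remain, so the result is a zeroth-order tensor (a scalar).
inner = sum(a[i] * b[i] for i in range(3))

# a_ii: a zeroth-order contraction of the second-order tensor a_ij (its trace).
trace = sum(A[i][i] for i in range(3))

# a_ij b_j: j is summed, i stays free, so the result is first order.
Ab = [sum(A[i][j] * b[j] for j in range(3)) for i in range(3)]

# a_i b_j: an outer product -- no contraction, two free indices, second order.
outer = [[a[i] * b[j] for j in range(3)] for i in range(3)]

print(inner, trace, Ab)
```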

According to Section A.8, the vector product of two vectors ${\bf a}$ and ${\bf b}$ takes the form

$\displaystyle ({\bf a}\times {\bf b})_1$ $\textstyle =$ $\displaystyle a_2\,b_3-a_3\,b_2,$ (1487)
$\displaystyle ({\bf a}\times {\bf b})_2$ $\textstyle =$ $\displaystyle a_3\,b_1-a_1\,b_3,$ (1488)
$\displaystyle ({\bf a}\times {\bf b})_3$ $\textstyle =$ $\displaystyle a_1\,b_2-a_2\,b_1$ (1489)

in tensor notation. The above expression can be written more compactly as
\begin{displaymath}
({\bf a}\times {\bf b})_i = \epsilon_{ijk}\,a_j\,b_k.
\end{displaymath} (1490)

Here,
\begin{displaymath}
\epsilon_{ijk} = \left\{\begin{array}{rl}
+1 & \mbox{if $i$, $j$, $k$ is an even permutation of $1, 2, 3$}\\[0.5ex]
-1 & \mbox{if $i$, $j$, $k$ is an odd permutation of $1, 2, 3$}\\[0.5ex]
0 & \mbox{otherwise}
\end{array}\right.
\end{displaymath} (1491)

is known as the third-order permutation tensor (or, sometimes, the third-order Levi-Civita tensor). Note, in particular, that $\epsilon_{ijk}$ is zero if any of its indices are repeated: e.g., $\epsilon_{113}=\epsilon_{212}=0$. Furthermore, it follows from (1491) that
\begin{displaymath}
\epsilon_{ijk} = \epsilon_{jki} = \epsilon_{kij} = -\epsilon_{kji} = -\epsilon_{ikj} = -\epsilon_{jik}.
\end{displaymath} (1492)
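As a numerical check, the permutation tensor and the index form of the vector product, Equation (1490), can be sketched in Python. The closed-form expression for $\epsilon_{ijk}$ used below is a standard product formula, not one given in the text:

```python
def eps(i, j, k):
    """Permutation (Levi-Civita) tensor epsilon_ijk, 1-based indices.

    Uses the standard product formula: +1 for even permutations of 1,2,3,
    -1 for odd permutations, and 0 whenever any index is repeated.
    """
    return (i - j) * (j - k) * (k - i) // 2

def cross(a, b):
    # (a x b)_i = eps_ijk a_j b_k, with j and k summed from 1 to 3.
    return [sum(eps(i, j, k) * a[j - 1] * b[k - 1]
                for j in range(1, 4) for k in range(1, 4))
            for i in range(1, 4)]

print(cross([1, 0, 0], [0, 1, 0]))  # e1 x e2 = e3, i.e. [0, 0, 1]
```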

It is helpful to define the second-order identity tensor (also known as the Kronecker delta tensor),

\begin{displaymath}
\delta_{ij} = \left\{\begin{array}{rl}
1 & \mbox{if $i=j$}\\[0.5ex]
0 & \mbox{if $i\neq j$}
\end{array}\right. .
\end{displaymath} (1493)

It is easily seen that
$\displaystyle \delta_{ij}$ $\textstyle =$ $\displaystyle \delta_{ji},$ (1494)
$\displaystyle \delta_{ii}$ $\textstyle =$ $\displaystyle 3,$ (1495)
$\displaystyle \delta_{ik}\,\delta_{kj}$ $\textstyle =$ $\displaystyle \delta_{ij},$ (1496)
$\displaystyle \delta_{ij}\,a_j$ $\textstyle =$ $\displaystyle a_i,$ (1497)
$\displaystyle \delta_{ij}\,a_i\,b_j$ $\textstyle =$ $\displaystyle a_i\,b_i,$ (1498)
$\displaystyle \delta_{ij}\,a_{ki}\,b_j$ $\textstyle =$ $\displaystyle a_{ki}\,b_i.$ (1499)
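These properties of the Kronecker delta are easy to verify exhaustively in Python, since each free index only takes the values 1, 2, 3 (the vector components used below are hypothetical):

```python
from itertools import product

def delta(i, j):
    # Kronecker delta, Equation (1493): 1 if i = j, 0 otherwise.
    return 1 if i == j else 0

# delta_ii = 3, Equation (1495): the repeated index is summed.
assert sum(delta(i, i) for i in range(1, 4)) == 3

# delta_ik delta_kj = delta_ij, Equation (1496), for every free pair i, j.
assert all(sum(delta(i, k) * delta(k, j) for k in range(1, 4)) == delta(i, j)
           for i, j in product(range(1, 4), repeat=2))

# delta_ij a_j = a_i, Equation (1497): delta replaces a summed index.
a = [2.0, -1.0, 5.0]  # hypothetical components, not from the text
assert all(sum(delta(i, j) * a[j - 1] for j in range(1, 4)) == a[i - 1]
           for i in range(1, 4))

print("Kronecker delta identities verified")
```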


The following is a particularly important tensor identity:

\begin{displaymath}
\epsilon_{ijk}\,\epsilon_{ilm} = \delta_{jl}\,\delta_{km}-\delta_{jm}\,\delta_{kl}.
\end{displaymath} (1500)

In order to establish the validity of the above expression, let us consider the various cases that arise. As is easily seen, the right-hand side of (1500) takes the values
$\displaystyle +1$   $\displaystyle \mbox{if $j=l$ and $k=m\neq j$},$ (1501)
$\displaystyle -1$   $\displaystyle \mbox{if $j=m$ and $k=l\neq j$},$ (1502)
$\displaystyle 0$   $\displaystyle \mbox{otherwise}.$ (1503)

Moreover, in each product on the left-hand side, $i$ has the same value in both $\epsilon$ factors. Thus, for a non-zero contribution, none of $j$, $k$, $l$, and $m$ can have the same value as $i$ (because each $\epsilon$ factor is zero if any of its indices are repeated). Since a given subscript can only take one of three values ($1$, $2$, or $3$), the only possibilities that generate non-zero contributions are $j=l$ and $k=m$, or $j=m$ and $k=l$, excluding $j=k=l=m$ (since each $\epsilon$ factor would then have repeated indices, and so be zero). Thus, the left-hand side reproduces (1503), as well as the conditions on the indices in (1501) and (1502). The left-hand side also reproduces the values in (1501) and (1502) since if $j=l$ and $k=m$ then $\epsilon_{ijk}=\epsilon_{ilm}$ and the product $\epsilon_{ijk}\,\epsilon_{ilm}$ (no summation) is equal to $+1$, whereas if $j=m$ and $k=l$ then $\epsilon_{ijk}=\epsilon_{iml}=-\epsilon_{ilm}$ and the product $\epsilon_{ijk}\,\epsilon_{ilm}$ (no summation) is equal to $-1$. Here, use has been made of Equation (1492). Hence, the validity of the identity (1500) has been established.
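The case analysis above can also be confirmed by brute force: with only three values per index, the identity (1500) involves just $3^4 = 81$ combinations of the free indices. A short Python check (using the standard product formula for $\epsilon_{ijk}$, which is not from the text):

```python
from itertools import product

def eps(i, j, k):
    # Standard product formula for the permutation tensor (1-based indices).
    return (i - j) * (j - k) * (k - i) // 2

def delta(i, j):
    # Kronecker delta, Equation (1493).
    return 1 if i == j else 0

# Verify eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl
# for all 81 combinations of the free indices j, k, l, m (i is summed).
for j, k, l, m in product(range(1, 4), repeat=4):
    lhs = sum(eps(i, j, k) * eps(i, l, m) for i in range(1, 4))
    rhs = delta(j, l) * delta(k, m) - delta(j, m) * delta(k, l)
    assert lhs == rhs

print("identity (1500) verified for all index combinations")
```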

In order to illustrate the use of (1500), consider the vector triple product identity (see Section A.11)

\begin{displaymath}
{\bf a}\times ({\bf b}\times {\bf c}) = ({\bf a}\cdot{\bf c})\,{\bf b} - ({\bf a}\cdot{\bf b})\,{\bf c}.
\end{displaymath} (1504)

In tensor notation, the left-hand side of this identity is written
\begin{displaymath}[{\bf a}\times ({\bf b}\times {\bf c})]_i = \epsilon_{ijk}\,a_j\,(\epsilon_{klm}\,b_l\,c_m),
\end{displaymath} (1505)

where use has been made of Equation (1490). Employing Equations (1492) and (1500), this becomes
\begin{displaymath}[{\bf a}\times ({\bf b}\times {\bf c})]_i = \epsilon_{kij}\,\epsilon_{klm}\,a_j\,b_l\,c_m = \left(\delta_{il}\,\delta_{jm}-\delta_{im}\,\delta_{jl}\right)a_j\,b_l\,c_m,
\end{displaymath} (1506)

which, with the aid of Equations (1486) and (1497), reduces to
\begin{displaymath}[{\bf a}\times ({\bf b}\times {\bf c})]_i = a_j\,c_j\,b_i - a_j\,b_j\,c_i = \left[({\bf a}\cdot{\bf c})\,{\bf b} - ({\bf a}\cdot{\bf b})\,{\bf c}\right]_i.
\end{displaymath} (1507)

Thus, we have established the validity of the vector identity (1504). Moreover, our proof is much more rigorous than that given earlier (in Section A.11).
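The same chain of steps can be checked numerically: build the cross product from Equation (1490) and compare both sides of (1504) componentwise. The sample vectors below are hypothetical:

```python
def eps(i, j, k):
    # Standard product formula for the permutation tensor (1-based indices).
    return (i - j) * (j - k) * (k - i) // 2

def cross(a, b):
    # (a x b)_i = eps_ijk a_j b_k, Equation (1490).
    return [sum(eps(i, j, k) * a[j - 1] * b[k - 1]
                for j in range(1, 4) for k in range(1, 4))
            for i in range(1, 4)]

def dot(a, b):
    # a . b = a_i b_i, Equation (1486).
    return sum(x * y for x, y in zip(a, b))

# Hypothetical sample vectors, not taken from the text.
a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 9]

# a x (b x c)  versus  (a . c) b - (a . b) c, Equation (1504).
lhs = cross(a, cross(b, c))
rhs = [dot(a, c) * b[i] - dot(a, b) * c[i] for i in range(3)]
assert lhs == rhs
print(lhs)
```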

Richard Fitzpatrick 2012-04-27