
Tensors and Tensor Notation

Let the Cartesian coordinates $ x$ , $ y$ , $ z$ be written as the $ x_i$ , where $ i$ runs from 1 to 3. In other words, $ x=x_1$ , $ y= x_2$ , and $ z= x_3$ . Incidentally, in the following, any lowercase roman subscript (e.g., $ i$ , $ j$ , $ k$ ) is assumed to run from 1 to 3. We can also write the Cartesian components of a general vector $ {\bf v}$ as the $ v_i$ . In other words, $ v_x= v_1$ , $ v_y= v_2$ , and $ v_z= v_3$ . By contrast, a scalar is represented as a variable without a subscript: for instance, $ a$ , $ \phi$ . Thus, a scalar--which is a tensor of order zero--is represented as a variable with zero subscripts, and a vector--which is a tensor of order one--is represented as a variable with one subscript. It stands to reason, therefore, that a tensor of order two is represented as a variable with two subscripts: for instance, $ a_{ij}$ , $ \sigma_{ij}$ . Moreover, an $ n$ th-order tensor is represented as a variable with $ n$ subscripts: for instance, $ a_{ijk}$ is a third-order tensor, and $ b_{ijkl}$ a fourth-order tensor. Note that a general $ n$ th-order tensor has $ 3^n$ independent components.
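The bookkeeping above can be made concrete with NumPy arrays, used here purely as containers for tensor components (the variable names are ours): an $n$th-order tensor is stored as an array with $n$ axes, each of length 3, so it has $3^n$ components.

```python
import numpy as np

# A scalar (zeroth-order tensor) carries no subscripts,
# a vector (first-order tensor) carries one subscript, and so on.
a = np.float64(1.5)          # scalar: a
v = np.zeros(3)              # vector: v[i]
sigma = np.zeros((3, 3))     # second-order tensor: sigma[i, j]
b = np.zeros((3, 3, 3, 3))   # fourth-order tensor: b[i, j, k, l]

# An nth-order tensor has 3**n independent components.
for t, n in [(v, 1), (sigma, 2), (b, 4)]:
    assert t.size == 3**n
```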

The components of a second-order tensor are conveniently visualized as a two-dimensional matrix, just as the components of a vector are sometimes visualized as a one-dimensional matrix. However, it is important to recognize that an $ n$ th-order tensor is not simply another name for an $ n$ -dimensional matrix. A matrix is merely an ordered set of numbers. A tensor, on the other hand, is an ordered set of components that have specific transformation properties under rotation of the coordinate axes. (See Section B.3.)

Consider two vectors $ {\bf a}$ and $ {\bf b}$ that are represented as $ a_i$ and $ b_i$ , respectively, in tensor notation. According to Section A.6, the scalar product of these two vectors takes the form

$\displaystyle {\bf a}\cdot{\bf b} = a_1\,b_1+a_2\,b_2+a_3\,b_3.$ (B.1)

The previous expression can be written more compactly as

$\displaystyle {\bf a}\cdot{\bf b} = a_i\,b_i.$ (B.2)

Here, we have made use of the Einstein summation convention, according to which, in an expression containing lowercase roman subscripts, any subscript that appears twice (and only twice) in any term of the expression is assumed to be summed from 1 to 3 (unless stated otherwise). Thus, $ a_i\,b_i= a_1\,b_1+a_2\,b_2+a_3\,b_3$ , and $ a_{ij}\,b_j= a_{i1}\,b_1+a_{i2}\,b_2+a_{i3}\,b_3$ . Note that when an index is summed it becomes a dummy index and can be written as any (unique) symbol: that is, $ a_{ij}\,b_j$ and $ a_{ip}\,b_p$ are equivalent. Moreover, only non-summed, or free, indices count toward the order of a tensor expression. Thus, $ a_{ii}$ is a zeroth-order tensor (because there are no free indices), and $ a_{ij}\,b_j$ is a first-order tensor (because there is only one free index). The process of reducing the order of a tensor expression by summing indices is known as contraction. For example, $ a_{ii}$ is a zeroth-order contraction of the second-order tensor $ a_{ij}$ . Incidentally, when two tensors are multiplied together without contraction the resulting tensor is called an outer product: for instance, the second-order tensor $ a_i\,b_j$ is the outer product of the two first-order tensors $ a_i$ and $ b_i$ . Likewise, when two tensors are multiplied together in a manner that involves contraction then the resulting tensor is called an inner product: for instance, the first-order tensor $ a_{ij}\,b_j$ is an inner product of the second-order tensor $ a_{ij}$ and the first-order tensor $ b_i$ . It can be seen from Equation (B.2) that the scalar product of two vectors is equivalent to the inner product of the corresponding first-order tensors.
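The summation convention maps directly onto NumPy's `einsum`, whose subscript strings sum over repeated indices in exactly the same way. The following sketch (our own illustration, not part of the text) checks the contractions, inner products, and outer products described above against equivalent NumPy operations:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(3)
b = rng.standard_normal(3)
A = rng.standard_normal((3, 3))

# a_i b_i: repeated index i is summed -> zeroth order (a scalar)
assert np.isclose(np.einsum('i,i->', a, b), a @ b)

# a_ij b_j: j is summed, i remains free -> first order (a vector)
assert np.allclose(np.einsum('ij,j->i', A, b), A @ b)

# a_ii: zeroth-order contraction of a second-order tensor (the trace)
assert np.isclose(np.einsum('ii->', A), np.trace(A))

# a_i b_j: outer product, no contraction -> second order
assert np.allclose(np.einsum('i,j->ij', a, b), np.outer(a, b))
```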

According to Section A.8, the vector product of two vectors $ {\bf a}$ and $ {\bf b}$ takes the form

$\displaystyle ({\bf a}\times {\bf b})_1$ $\displaystyle = a_2\,b_3-a_3\,b_2,$ (B.3)
$\displaystyle ({\bf a}\times {\bf b})_2$ $\displaystyle = a_3\,b_1-a_1\,b_3,$ (B.4)
$\displaystyle ({\bf a}\times {\bf b})_3$ $\displaystyle =a_1\,b_2-a_2\,b_1$ (B.5)

in tensor notation. The previous expression can be written more compactly as

$\displaystyle ({\bf a}\times {\bf b})_i = \epsilon_{ijk}\,a_j\,b_k.$ (B.6)

Here,

$\displaystyle \epsilon_{ijk} = \left\{ \begin{array}{lll} +1&\mbox{\hspace{0.5cm}}&\mbox{if $i$, $j$, $k$ is an even permutation of $1, 2, 3$}\\ [0.5ex] -1&&\mbox{if $i$, $j$, $k$ is an odd permutation of $1, 2, 3$}\\ [0.5ex] 0&&\mbox{otherwise} \end{array}\right.$ (B.7)

is known as the third-order permutation tensor (or, sometimes, the third-order Levi-Civita tensor). Note, in particular, that $ \epsilon_{ijk}$ is zero if one of its indices is repeated: for instance, $ \epsilon_{113}=\epsilon_{212}=0$ . Furthermore, it follows from Equation (B.7) that

$\displaystyle \epsilon_{ijk}=\epsilon_{jki}=\epsilon_{kij}=-\epsilon_{kji}=-\epsilon_{jik}=-\epsilon_{ikj}.$ (B.8)
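The permutation tensor is small enough to tabulate explicitly. The sketch below (our own check, not part of the text) builds $ \epsilon_{ijk}$ from the definition (B.7), with indices shifted to 0, 1, 2 for array indexing, and verifies the symmetry properties (B.8) together with the cross-product formula (B.6):

```python
import numpy as np

# Build epsilon_{ijk} from its definition (B.7); indices run 0..2 here.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = +1  # even permutations
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1  # odd permutations

# Zero whenever an index is repeated, e.g. eps_113 = eps_212 = 0.
assert eps[0, 0, 2] == 0 and eps[1, 0, 1] == 0

# Cyclic symmetry and antisymmetry (B.8):
assert np.allclose(eps, np.einsum('jki->ijk', eps))   # eps_ijk = eps_jki
assert np.allclose(eps, -np.einsum('kji->ijk', eps))  # eps_ijk = -eps_kji

# (a x b)_i = eps_ijk a_j b_k   (B.6)
a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])
assert np.allclose(np.einsum('ijk,j,k->i', eps, a, b), np.cross(a, b))
```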

It is helpful to define the second-order identity tensor (also known as the Kronecker delta tensor),

$\displaystyle \delta_{ij} = \left\{ \begin{array}{lll} 1&\mbox{\hspace{0.5cm}}& \mbox{if $i=j$}\\ [0.5ex] 0&&\mbox{otherwise} \end{array}\right. .$ (B.9)

It is easily seen that

$\displaystyle \delta_{ij}$ $\displaystyle = \delta_{ji},$ (B.10)
$\displaystyle \delta_{ii}$ $\displaystyle =3,$ (B.11)
$\displaystyle \delta_{ik}\,\delta_{kj}$ $\displaystyle = \delta_{ij},$ (B.12)
$\displaystyle \delta_{ij}\,a_j$ $\displaystyle = a_i,$ (B.13)
$\displaystyle \delta_{ij}\,a_i\,b_j$ $\displaystyle = a_i\,b_i,$ (B.14)
$\displaystyle \delta_{ij}\,a_{ki}\,b_j$ $\displaystyle = a_{ki}\,b_i,$ (B.15)

et cetera.
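Each of the identities (B.10)-(B.15) can be verified numerically; $ \delta_{ij}$ is simply the $ 3\times 3$ identity matrix. A sketch (our own, with arbitrary test vectors):

```python
import numpy as np

delta = np.eye(3)  # Kronecker delta: 1 if i = j, 0 otherwise
rng = np.random.default_rng(1)
a = rng.standard_normal(3)
b = rng.standard_normal(3)
A = rng.standard_normal((3, 3))

assert np.allclose(delta, delta.T)                               # (B.10)
assert np.isclose(np.einsum('ii->', delta), 3)                   # (B.11)
assert np.allclose(np.einsum('ik,kj->ij', delta, delta), delta)  # (B.12)
assert np.allclose(np.einsum('ij,j->i', delta, a), a)            # (B.13)
assert np.isclose(np.einsum('ij,i,j->', delta, a, b), a @ b)     # (B.14)
assert np.allclose(np.einsum('ij,ki,j->k', delta, A, b), A @ b)  # (B.15)
```

In each case contracting with $ \delta_{ij}$ just renames an index, which is why it is called the identity tensor.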

The following is a particularly important tensor identity:

$\displaystyle \epsilon_{ijk}\,\epsilon_{ilm} = \delta_{jl}\,\delta_{km}-\delta_{jm}\,\delta_{kl}.$ (B.16)

In order to establish the validity of the previous expression, let us consider the various cases that arise. As is easily seen, the right-hand side of Equation (B.16) takes the values

$\displaystyle +1$   $\displaystyle \mbox{if $j=l$ and $k=m\neq j$},$ (B.17)
$\displaystyle -1$   $\displaystyle \mbox{if $j=m$ and $k=l\neq j$},$ (B.18)
$\displaystyle 0$   $\displaystyle \mbox{otherwise}.$ (B.19)

Moreover, in each product on the left-hand side of Equation (B.16), $ i$ has the same value in both $ \epsilon$ factors. Thus, for a non-zero contribution, none of $ j$ , $ k$ , $ l$ , and $ m$ can have the same value as $ i$ (because each $ \epsilon$ factor is zero if any of its indices are repeated). Because a given subscript can only take one of three values ($ 1$ , $ 2$ , or $ 3$ ), the only possibilities that generate non-zero contributions are $ j=l$ and $ k=m$ , or $ j=m$ and $ k=l$ , excluding $ j=k=l=m$ (as each $ \epsilon$ factor would then have repeated indices, and so be zero). Thus, the left-hand side of Equation (B.16) reproduces Equation (B.19), as well as the conditions on the indices in Equations (B.17) and (B.18). The left-hand side also reproduces the values in Equations (B.17) and (B.18) because if $ j=l$ and $ k=m$ then $ \epsilon_{ijk}=\epsilon_{ilm}$ and the product $ \epsilon_{ijk}\,\epsilon_{ilm}$ (no summation) is equal to $ +1$ , whereas if $ j=m$ and $ k=l$ then $ \epsilon_{ijk}=\epsilon_{iml}=-\epsilon_{ilm}$ and the product $ \epsilon_{ijk}\,\epsilon_{ilm}$ (no summation) is equal to $ -1$ . Here, use has been made of Equation (B.8). Hence, the validity of the identity (B.16) has been established.
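Because every index runs over only three values, the identity (B.16) can also be confirmed by brute force: both sides are fourth-order tensors with $ 3^4 = 81$ components, which can be compared directly. A sketch of that check (our own illustration):

```python
import numpy as np

# epsilon_{ijk} from the definition (B.7)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = +1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
delta = np.eye(3)

# Left-hand side of (B.16): eps_ijk eps_ilm, summed over i
lhs = np.einsum('ijk,ilm->jklm', eps, eps)
# Right-hand side: delta_jl delta_km - delta_jm delta_kl
rhs = (np.einsum('jl,km->jklm', delta, delta)
       - np.einsum('jm,kl->jklm', delta, delta))

assert np.allclose(lhs, rhs)  # all 81 components agree
```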

In order to illustrate the use of Equation (B.16), consider the vector triple product identity (see Section A.11)

$\displaystyle {\bf a}\times ({\bf b}\times {\bf c}) = ({\bf a}\cdot{\bf c})\,{\bf b} - ({\bf a}\cdot{\bf b})\,{\bf c}.$ (B.20)

In tensor notation, the left-hand side of this identity is written

$\displaystyle [{\bf a}\times ({\bf b}\times {\bf c})]_i = \epsilon_{ijk}\,a_j\,(\epsilon_{klm}\,b_l\,c_m),$ (B.21)

where use has been made of Equation (B.6). Employing Equations (B.8) and (B.16), this becomes

$\displaystyle [{\bf a}\times ({\bf b}\times {\bf c})]_i = \epsilon_{kij}\,\epsilon_{klm}\,a_j\,b_l\,c_m = \left(\delta_{il}\,\delta_{jm}-\delta_{im}\,\delta_{jl}\right)a_j\,b_l\,c_m,$ (B.22)

which, with the aid of Equations (B.2) and (B.13), reduces to

$\displaystyle [{\bf a}\times ({\bf b}\times {\bf c})]_i = a_j\,c_j\,b_i - a_j\,b_j\,c_i = \left[({\bf a}\cdot{\bf c})\,{\bf b} - ({\bf a}\cdot{\bf b})\,{\bf c}\right]_i.$ (B.23)

Thus, we have established the validity of the vector identity (B.20). Moreover, our proof is much more rigorous than that given earlier in Section A.11.
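As a final sanity check, the identity (B.20) holds component by component for arbitrary numerical vectors (a sketch of our own, with randomly chosen $ {\bf a}$ , $ {\bf b}$ , $ {\bf c}$ ):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.standard_normal((3, 3))  # three arbitrary 3-vectors

# a x (b x c) = (a . c) b - (a . b) c   (B.20)
lhs = np.cross(a, np.cross(b, c))
rhs = (a @ c) * b - (a @ b) * c
assert np.allclose(lhs, rhs)
```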


Richard Fitzpatrick 2016-03-31