Modern Theory of Vectors and Tensors in Mechanics and Engineering
2 Second-Order Tensors 25
2.1 Motivation and Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.1.1 Deformation state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.1.2 Stressed state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2 Second-Order Tensors as Linear Transformations . . . . . . . . . . . . . . . . . . . . . . 28
2.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2.2 Multiplication by number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.3 Addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.4 2nd-order tensor spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.5 Composite product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.6 Transposition, non-singularity and singularity . . . . . . . . . . . . . . . . . . . . 30
2.3 Algebraic Expressions: Standard Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.3.1 Dyadic Product of two vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.3.2 Standard expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.3.3 Transformation rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.4 Algebraic Expressions: Arbitrary Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.4.1 Co-, contra- and mixed-variant expressions . . . . . . . . . . . . . . . . . . . . . 33
2.4.2 Transformation rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.5 Characteristic Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.5.1 Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5 Tensor Functions 83
5.1 Motivation: constitutive functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.2 Material Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.2.1 Structural elements and their symmetries . . . . . . . . . . . . . . . . . . . . . . 84
5.2.2 Orthogonal groups and cylindrical groups . . . . . . . . . . . . . . . . . . . . . . 85
5.2.3 Crystal classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.2.4 Quasicrystal classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.2.5 Isotropic and anisotropic functions of vectors and 2nd-order tensors . . . . . . . . 90
when studying concepts and results. Do not try hard to simply bear them in mind before you know the
answers to the above questions!
Chapter 1
Vectors and Vector Spaces
The notion of vector itself characterizes and represents various kinds of physical and geometrical quan-
tities, as will be indicated below. Besides, as will be seen, vectors serve as building blocks in defining,
constructing and understanding more sophisticated objects, i.e., tensors. Here, we focus attention mainly
on vectors defined over a 3-dimensional Euclidean space.
There are different ways of introducing and defining vectors and their operations. In a 3-dimensional
Euclidean space, however, it is possible from the beginning to introduce vectors and their algebraic oper-
ations in a perhaps more straightforward and accessible manner. In our treatment, bases and components
will not be involved at all at the first stage, and they will be used in subsequent stages to facilitate repre-
sentations and operations of vectors from an algebraic standpoint.
1.1 Vectors
When we are invited to learn things new to us, naturally we may ask: What are they? Why are they
introduced and needed? How can we understand and use them? and so on. In what follows we attempt to
answer these questions concerning the basic objects discussed here, i.e., vectors and then tensors.
[Figure 1.1: Map of Germany and neighbouring countries, with magnitudes and directions (e.g., of winds) indicated at various cities such as Kiel, Rostock, Hamburg, Berlin, Hannover, Leipzig, Frankfurt, Stuttgart and München.]
[Figure 1.2: Map of Europe showing the capitals Stockholm, Moskva, London, Berlin, Warszawa, Paris, Madrid and Roma, indicating displacements between cities.]
A more familiar case is when you meet your friends at the Uni-Center. Perhaps you and your friends may
ask "where are you going?" among yourselves. Possibly, the answers may be like "I am going from here
to Plus", "I am going from here to Globus", "I am going from here to the library", etc. This case is shown
in Fig. 1.3.
[Figure 1.3: Displacements from the Uni-Center to the shopping mart, the cinema and the library.]
These examples involve quantities essentially different from scalar quantities. Yes, each of them has a
magnitude, like a scalar quantity, but it includes more than a magnitude: it simultaneously refers to a
direction. Because such quantities do not have the same meaning at all even when they have the same
magnitude, they cannot be represented and characterized by scalars. In mechanics, engineering and other
fields, there are a great number of basic physical and geometrical quantities which are endowed with both
magnitudes and directions. As a result, a new notion for their characterization and representation becomes
necessary.
This new notion, i.e., vectors, was invented by mathematicians, physicists and engineers from their
different backgrounds. In a number of scientific and technological fields, e.g., mechanics and engineering,
basic geometrical and physical features of the investigated objects and their behaviour are characterized and
represented by vectors and tensors, together with the usually known scalar quantities. To study and answer
meaningful and challenging questions in mechanics and various engineering fields, it is therefore, as one of
the coherent and constituent steps, instrumental and necessary to understand and use vectors and tensors
and the operations concerning them.
[Figure: A vector represented by an arrow, designated by a symbol (e.g., v ) and endowed with a length and a direction; a particular vector of zero length.]
This vector is called the null (zero) vector and will be designated by the particular notation 0 . It is a particular
yet important vector and will play a similar role in basic vector operations as the number zero does in basic
number operations, as will be seen below.
1.2.2 Addition
We know how to add up several numbers to produce a sum. We know this operation is basic and necessary
(imagine what if we do not know it at all) and we have been familiar with its simple rules since we learned
them in primary schools. For vectors, from physical considerations and others it is also basic and necessary
to have an operation called addition. A somewhat sweeping reason is that, without this and other basic
operations, one could not make any progress at all in using vectors and tensors as powerful notions and
tools to understand and solve meaningful questions in mechanics and engineering and other related fields.
Another reason is that these basic operations provide us with simple means to gain insight into certain
essential features of vectors.
There are physical backgrounds for introducing an addition operation for vectors. Let us consider a
body subjected to two or more forces (vectors), for example, the moon attracted by the universal gravitation
of the sun and the earth, as shown in Fig. 1.6. To know how it will move, Newton's second law
tells us that we have to know the composition (sum) of these force vectors (here two, but possibly more) in
Fig. 1.6.
Figure 1.6: The moon attracted by the sun and the earth
Another example is a small boat crossing a river. The river is constantly flowing at a velocity along its
course, and somebody sitting on the boat is always paddling it to the opposite bank at a constant velocity
in the direction normal to the water flow, as shown in Fig. 1.7.
To know in what direction the boat will move, we must find out the composition (sum) of the velocity
vectors (here two but possibly more) in Fig. 1.7.
Now we proceed to study the composition or sum of two vectors and its rules. To this end, a natural
way is to consider displacements. Let us imagine a displacement from point A to point B, and then an
immediately following displacement from point B to point C. In this process, the displacements from A to
B and from B to C may be represented by vectors u and v , respectively. It may be clear that the composition
(sum) of the two displacements from A to B and then from B to C is equivalent to the single displacement
directly from A to C, represented by a . As a result, the vector a may be called the sum of the vectors u and
v ; the latter is written as u + v . Then we have a = u + v . This is shown in Fig. 1.8.
Fig. 1.8 clearly shows how to add up two vectors u and v . First, we draw vector u starting from
point A and setting its arrowhead at B; then, starting from the arrowhead B of vector u , we draw vector v
with its arrowhead at point C; and finally we draw another vector a by connecting points A and C and
setting the arrowhead at point C. This process generates a triangle composed of the vectors u , v and a , where
vector a is just the sum of the vectors u and v , i.e., a = u + v .
The above process is called the triangle rule for vector addition. Another, equivalent process is also
shown in Fig. 1.8 and generates a parallelogram with vector a as one of its diagonals. This process is also
used and is called the parallelogram rule for vector addition.
Either of the foregoing two rules defines the vector addition. It is well defined from both geometrical
and physical viewpoints. Indeed, it is also in full agreement with observations on the composition of
force vectors and velocity vectors.
With either of the two aforementioned rules, it may be easy to confirm the following basic operation
rules for vector addition:
u + v = v + u (commutative rule) ; (1.1)
(u + v ) + w = u + (v + w ) = (u + w ) + v ≡ u + v + w (associative rule) ; (1.2)
u + 0 = u ; (1.3)
for any given vectors u , v and w .
On the other hand, given a vector u , we may obtain a new vector by simply reversing the arrowhead
to the opposite end. This new vector is called the opposed vector of u and is denoted −u . A vector and its
opposed vector have the same length, but their directions are opposed to each other. The following rule
holds:
−(−u ) = u ,    u + (−u ) = 0 . (1.4)
For scalars, we have the subtraction operation. Then what about vectors? The answer is simple: it is straight-
forward to define and carry out a subtraction operation by means of the addition operation for vectors. Indeed,
we can define
u − v ≡ u + (−v ) (1.5)
for any two vectors u and v . Namely, subtracting vector v from u is simply adding up the vector u and the
opposed vector −v of vector v . The following rule holds:
u + v = w ⇐⇒ w − u = v . (1.6)
To understand (1.3) and (1.4)2 , we may imagine two limiting processes, respectively. One is that the length
of vector v in Fig. 1.8 is constantly becoming smaller and smaller, and, accordingly, the sum a = u + v
is becoming closer and closer to vector u . The other is that vector v in Fig. 1.8 is assigned to have the
same length as vector u and then is constantly rotating about point B inside the triangle, and, accordingly,
the length of the sum a = u + v is becoming smaller and smaller. With these limiting processes, (1.3) is
produced as the length of vector v finally tends to vanish, while (1.4)2 is generated as point C finally arrives
at point A.
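For readers who like to see these rules at work numerically, the following minimal Python (NumPy) sketch checks (1.1)–(1.6). It represents vectors by component triplets relative to a fixed standard basis, a device introduced only in §1.3, and the sample values are arbitrary.

```python
import numpy as np

# Arbitrary sample vectors, given by component triplets (cf. Sec. 1.3)
u = np.array([1.0, 2.0, 0.5])
v = np.array([-0.3, 0.7, 2.0])
w = np.array([0.0, -1.5, 1.0])
o = np.zeros(3)                      # the null vector 0

# Commutative and associative rules (1.1), (1.2)
assert np.allclose(u + v, v + u)
assert np.allclose((u + v) + w, u + (v + w))

# The null vector and the opposed vector, rules (1.3) and (1.4)
assert np.allclose(u + o, u)
assert np.allclose(u + (-u), o)

# Subtraction via the opposed vector, (1.5), and rule (1.6)
assert np.allclose(u - v, u + (-v))
assert np.allclose((u + v) - u, v)
print("vector addition rules verified")
```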
determined simply by the sign of the number α. Namely, a positive α gives the same direction as u , while
a negative α reverses the direction of u , as indicated in Fig. 1.9.
[Figure 1.9: Multiplication of a vector u by a number α: the product αu has the same direction as u for α > 0 and the opposite direction for α < 0.]
The above procedures tell us how to multiply a vector u by a number α. The result is a vector, denoted
αu and called the product of the number α and the vector u . The following rules may be obvious:
Motivated by the notion of work and its calculation, we now introduce the scalar product of two vectors.
Let |u | and |v | be the lengths (magnitudes) of the vectors u and v , respectively, and let θ be the angle between
u and v , 0 ≤ θ ≤ π, which is formed by allowing the starting ends of u and v to meet (see Fig. 1.8).
Then the scalar product of the vectors u and v , denoted u · v , is the scalar given below:
Usually, we refer to |u | cos θ (resp., |v | cos θ) as the projection component of vector u (resp., v ) in the
direction of vector v (resp., u ). Then, a geometric explanation of the scalar product (1.11) is quite clear and
simple: it is the product of the length (magnitude) of vector u (resp., v ) and the projection component of
vector v (resp., u ) in the direction of vector u (resp., v ).
Let u and v be non-zero vectors, i.e., |u | · |v | ≠ 0. Then we have
cos θ = (u · v ) / (|u | · |v |) . (1.12)
This indicates a geometric feature of the scalar product of two vectors: it also characterizes the angle
between two vectors. As a result, two vectors are said to be orthogonal whenever their scalar product
vanishes.
The following properties (rules) hold true for any number α and for any vectors u , v and w :
|u | = √(u · u ) ; (1.13)
u · u = 0 ⇐⇒ u = 0 ; (1.14)
u · v = v · u (commutative rule) ; (1.15)
(αu ) · v = u · (αv ) = α(u · v ) (associative rule) ; (1.16)
(u + v ) · w = u · w + v · w (distributive rule) . (1.17)
The above rules show that the scalar product of vectors is similar to the usual product of two scalars in
certain respects. However, there is an essential difference between them. There exist continued products
of three and even more scalars, whereas the same is not true for the scalar product of vectors. The main
reason is that the scalar product of two vectors is already a scalar by definition.
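As a small numerical illustration (with arbitrary sample vectors), the scalar product, the angle formula (1.12) and the rules (1.13)–(1.17) can be checked with Python/NumPy as follows.

```python
import numpy as np

u = np.array([3.0, 0.0, 4.0])
v = np.array([1.0, 2.0, 2.0])
w = np.array([0.5, -1.0, 1.5])
alpha = 2.5

# Angle between u and v from (1.12)
theta = np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Length from the scalar product, (1.13)
assert np.isclose(np.linalg.norm(u), np.sqrt(np.dot(u, u)))

# Commutative, associative and distributive rules (1.15)-(1.17)
assert np.isclose(np.dot(u, v), np.dot(v, u))
assert np.isclose(np.dot(alpha * u, v), alpha * np.dot(u, v))
assert np.isclose(np.dot(u + v, w), np.dot(u, w) + np.dot(v, w))
print("angle between u and v:", np.degrees(theta), "degrees")
```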
Physically, a common feature of vector spaces is that all the vectors in each vector space must have
the same physical dimension. In fact, the sum of two vectors with different physical dimensions, e.g., a
velocity vector and a force vector, can not be defined. However, two vector spaces with the same physical
dimension may be quite different in physical background and feature and, therefore, must be regarded as
different vector spaces.
In summary, we may say that a vector space is a collection of vectors in which the vectors are closely
connected and integrated by the basic operations into a self-contained structure. This will be clearly seen
by the structural formula (1.40) for vector spaces that will be given in §1.3.3. A perhaps vivid description
is as follows. Suppose that we have a collection of many assorted beads in different colours, each of which
is made of pearl or even precious stone. Although each individual bead looks very beautiful, a random
collection of them may look disordered and not so nice. Imagine we connect those of the same colour
together through golden threads, separately. In so doing we finally turn a disordered collection of assorted
coloured beads into bunches of necklaces each of which looks orderly, simple and elegant from an artistic
viewpoint. Now we may make a comparison of the foregoing process in constructing vector spaces with the
just-mentioned process. We may regard vectors from different backgrounds as assorted beads in different
colours, and the basic vector operations as golden threads. Then, a vector space (a bunch of necklace)
is a collection of vectors with the same background (beads with the same colour) connected by the basic
operations (golden threads). How the latter as “golden threads” connect all vectors into simple, elegant
structures, i.e., vector spaces, will be seen in §1.3.
It might be seen that the basic operations play a central role in defining vector spaces. A vector space
with the basic operations removed is no longer a vector space but only a loose collection of vectors without
the integrated connection and simple and elegant structure maintained by the basic operations, just as a
necklace with the golden thread removed is no longer a necklace but only a loose collection of beads
without the integrated connection and simple and elegant structure maintained by the golden thread.
In §1.3 we shall reveal the simple and elegant structure of Euclidean vector spaces resulting from the
basic algebraic operations.
It should be pointed out that a more general notion of vector space may be introduced from a math-
ematical standpoint. In this general case, vectors are regarded as abstract mathematical objects, and the
vector addition and multiplication by number are, in a formal and abstract manner, introduced as algebraic
operations obeying the rules (1.1)–(1.10) without any motivation. Then, a vector space is defined as a
self-contained collection of vectors in which the vector addition and multiplication by number obeying the
rules (1.1)–(1.10) are defined. Here, the scalar product may or may not be involved. For some abstract
vector spaces, a scalar product obeying (1.14)–(1.17) cannot be defined. If such a scalar product can
be defined, then there follows a vector space with additional geometrical features. Here we are not concerned
with this general viewpoint. Details in this general respect can be found in many monographs, e.g., Halmos
(1974).
[Figure: The vector product m = u × v of two vectors u and v drawn from a common point A.]
A geometrical meaning of the vector product u × v is that its magnitude |u × v | is just the area of the
parallelogram with the two vectors u and v as its two neighbouring sides. By choosing various pairs of
neighbouring vectors u and v in 3-dimensional space we may construct many different parallelograms,
which have different areas and lie on planes with different normals. A useful, significant fact is that each
such parallelogram spanned by a pair of two neighbouring vectors u and v may be exactly represented by
the vector product u × v , since the latter is just an entity combining the two main geometric features of the
former: its area and its orientation in space.
Besides, two vectors are said to be collinear or parallel whenever their vector product is the null vector.
For the vector product, the following rules hold.
u ×u = 0; (1.19)
(uu × v ) · w = |uu × v | · |w
w| cos φ , (1.23)
where φ is the angle between u × v and w . Note that |uu × v | is the area of the parallelogram as the base
spanned by u and v and that |w
w| cos φ is just the height of the parallelepiped shown in Fig. 1.12.
A significant property of the mixed product (u × v ) · w is the following: the vectors u , v and w are non-coplanar (resp.,
coplanar) if and only if their mixed product is non-zero (resp., zero).
Below are some interesting properties and results involving both scalar and vector products.
(u × v ) · (w × r ) = (u · w )(v · r ) − (u · r )(v · w ) . (1.28)
In the above, (1.24)–(1.26) may be evident from the definitions of the scalar and vector products. However, (1.27)
and (1.28) may not be so evident; their proofs are non-trivial. Eq. (1.28) is sometimes called the Lagrange
identity. The importance of (1.27) and (1.28) lies in the fact that they reduce continued products
involving vector products to scalar products. Usually, it may be much easier to perform the operation of
scalar product than vector product, as will be indicated in §1.2.6.
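The following illustrative Python (NumPy) sketch, again with arbitrary sample vectors, shows the geometric meaning of |u × v |, the mixed product as a signed volume, the coplanarity criterion and the Lagrange identity (1.28).

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])
w = np.array([0.0, 1.0, -1.0])
r = np.array([2.0, -0.5, 1.0])

# |u x v| is the area of the parallelogram spanned by u and v
area = np.linalg.norm(np.cross(u, v))

# The mixed product (u x v) . w: the (signed) volume of the parallelepiped
mixed = np.dot(np.cross(u, v), w)

# Lagrange identity (1.28): (u x v).(w x r) = (u.w)(v.r) - (u.r)(v.w)
lhs = np.dot(np.cross(u, v), np.cross(w, r))
rhs = np.dot(u, w) * np.dot(v, r) - np.dot(u, r) * np.dot(v, w)
assert np.isclose(lhs, rhs)

# Coplanar vectors have a vanishing mixed product
assert np.isclose(np.dot(np.cross(u, v), 2.0 * u - 3.0 * v), 0.0)
print("area:", area, " mixed product:", mixed)
```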
[Figure 1.13: Decomposition of a vector u in a plane into the projection components u1 and u2 along two orthonormal vectors e 1 and e 2 .]
Further, since u 1 and u 2 are orthogonal, we have u 1 · e 2 = u 2 · e 1 = 0. Hence, from these and the distributive
rule (1.17) as well as (1.30)2 and (1.32) and (1.33), we infer
u1 = u 1 · e 1 = (u 1 + u 2 ) · e 1 = u · e 1 ,    u2 = u 2 · e 2 = (u 1 + u 2 ) · e 2 = u · e 2 . (1.34)
These indicate that u1 and u2 are exactly the projection components of vector u in the directions of e 1 and
e 2 , respectively. Combining (1.32)–(1.34), finally we arrive at
u = u1 e 1 + u2 e 2 ,    u1 = u · e 1 ,  u2 = u · e 2 . (1.35)
When the scalars u1 and u2 take all possible values, the above expression in terms of two specified or-
thonormal vectors e 1 and e 2 produces all the vectors lying in the plane spanned by e 1 and e 2 , i.e., all the vectors normal to e 1 × e 2 .
Now we consider any given vector u in a three-dimensional space. As shown in Fig. 1.14, we first construct
a vector rectangle in the plane containing e 3 and u , with u as its diagonal and with two sides (vectors) u 3 and
u ′3 , the former being collinear with e 3 and the latter lying in the plane containing e 1 and e 2 ; then we construct
another vector rectangle in the plane containing e 1 and e 2 , with u ′3 as its diagonal and with two sides (vectors)
u 1 and u 2 collinear with e 1 and e 2 , respectively.
[Figure 1.14: The twin vector rectangles: the vector u decomposed into u 3 (collinear with e 3 ) and u ′3 , and u ′3 further decomposed into u 1 and u 2 (collinear with e 1 and e 2 ).]
For the twin vector rectangles indicated above, by using the result given in §1.2.2 we have
u = u 3 + u ′3 ,    u 3 = u3 e 3 ,  u3 = u · e 3 , (1.37)
u ′3 = u − u 3 ,    u 3 · e 1 = u 3 · e 2 = 0 ,
we derive
u1 = u ′3 · e 1 = (u − u 3 ) · e 1 = u · e 1 ,    u2 = u ′3 · e 2 = (u − u 3 ) · e 2 = u · e 2 .
Thus, combining the above results, finally we obtain
u = u1 e 1 + u2 e 2 + u3 e 3 ,    ui = u · e i ,  i = 1, 2, 3 . (1.39)
To explain what the expression (1.39) implies, we make the following observation and remarks.
(i) Choose and fix three orthonormal vectors e i meeting (1.36). Then, for any given vector u , we can
reduce u to its three projection components through the twin vector rectangles in Fig. 1.14. Hence, it
turns out that any given vector can be represented and expressed by its three projection components
through (1.39) in a simple, unified form of summation. These facts suggest a simple procedure
for expressing any given vector from an algebraic viewpoint: choose three orthonormal vectors e i
meeting (1.36), determine three ordered scalars ui by (1.39)2 , and then use (1.39)1 .
(ii) Fix the three e i . Now we give three scalars ui instead of a vector. Then the right-hand side of
expression (1.39)1 generates a vector u through the twin vector rectangles in Fig. 1.14. Hence,
taking all possible ui produces all the vectors. A consequence is that vector spaces can be generated
through (1.39)1 by choosing a common triplet (e 1 , e 2 , e 3 ) meeting (1.36) and taking all possible
values of each scalar ui , separately. Thus, we have
(iii) Is it possible to express all the vectors by a simpler expression like (1.35), by simply choosing a
common pair (e 1 , e 2 ) meeting (1.31) instead of a common triplet (e 1 , e 2 , e 3 ) meeting (1.36)? The
answer is definitely no. An evident fact is that a vector u normal to both e 1 and e 2 , such as u = e 3 ,
cannot be expressed by (1.35).
In view of the above facts, we say that each vector space given by (1.40) is three-dimensional, and we
refer to (e 1 , e 2 , e 3 ) meeting (1.36) as a standard basis of vector spaces. The three scalars ui in (1.39) are
called the standard components of vector u relative to the standard basis (e 1 , e 2 , e 3 ). Moreover, we say
that expression (1.39) is the algebraic expression of vector u relative to the standard basis (e 1 , e 2 , e 3 ) or,
simply, a standard expression of u for brevity.
Expressions (1.39) and (1.40) reveal the simple, elegant structure of vector spaces. This simplicity will
essentially facilitate the implementation of basic and sophisticated operations and analyses for vectors and
tensors, as will be seen in the next subsection and elsewhere.
have a common feature and are associated in a certain order with a vector basis (e 1 , e 2 , e 3 ) as given by (1.35).
Various kinds of summations involving mixed operations of addition and multiplication will be
carried out for them. In this case, it will prove very efficient and convenient to introduce symbols attached with one
or more indices, subscripts or superscripts, to represent these members, and then to express the various kinds of
relevant operations in terms of these indexed symbols.
As a first application, the above idea has been used in presenting a standard basis meeting
(1.36) and the related components: we denote the three orthonormal vectors and the related components by
the indexed symbols e i and ui . Of course, we can replace the index i with any other, e.g., j, k, etc., as we
wish. We assume that each index takes the values 1, 2, 3. In so doing, for instance, either e i or e j , etc.,
represents e 1 , e 2 and e 3 .
From now on, unless otherwise indicated, each index will take the values 1, 2, 3 and this fact will no longer
be mentioned. In many cases, we write down expressions with two or more juxtaposed indexed symbols.
For such expressions, we adopt Einstein's summation convention: each index appearing only once, called a
free index, can freely take the values 1, 2, 3, and each index appearing twice and only twice, called
a dummy index, implies summation as this index runs over 1, 2, 3. Except for a few particular cases,
we can avoid any index appearing more than twice by adjusting the symbols of dummy indices.
This is exemplified in (1.49) below.
With the above conventions, we present the following results.
e i · e j = δi j ; (1.41)
u = ui e i , v = vi e i , w = wi e i ; (1.42)
0 = 0 e 1 + 0 e 2 + 0 e 3 ; (1.43)
−u = (−ui ) e i ; (1.44)
u ± v = (ui ± vi ) e i ; (1.45)
αu = (αui ) e i ; (1.46)
|u | = √(ui ui ) ; (1.47)
u · v = ui vi ; (1.48)
cos θ = ui vi / (√(u j u j ) √(vk vk )) ; (1.49)
e 1 × e 2 = e 3 , e 2 × e 3 = e 1 , e 3 × e 1 = e 2 ; (1.50)
u × v = (u2 v3 − u3 v2 ) e 1 + (u3 v1 − u1 v3 ) e 2 + (u1 v2 − u2 v1 ) e 3 ; (1.51)
(u × v ) · w = det [ u1 u2 u3 ; v1 v2 v3 ; w1 w2 w3 ] ≡ u1 v2 w3 + u2 v3 w1 + u3 v1 w2 − u1 v3 w2 − u2 v1 w3 − u3 v2 w1 . (1.52)
The last two expressions suggest that operations involving the vector product are usually rather complicated.
That is why formulas (1.27) and (1.28), in particular the Lagrange formula (1.28), are useful and signifi-
cant.
In (1.41) and henceforward, we introduce the notation
δi j ≡ { 1 for i = j ;  0 for i ≠ j } . (1.53)
This useful notation is usually called Kronecker delta (symbol). Evidently, we have
δi j = δ ji .
An obvious yet very useful identity is as follows:
δi j H··· j··· = H···i··· , (1.54)
where “ · · ·” may represent several indices. For instance, we have
δi j y j = yi , δi j T jk = Tik , δi j Tr j = Tri ,
and many others.
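In computations, the summation convention maps directly onto index-based array operations. The following illustrative Python (NumPy) sketch mirrors (1.48), (1.49) and the substitution property (1.54); np.einsum is used here merely as a convenient stand-in for the index notation, and the sample arrays are arbitrary.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, -1.0, 2.0])
delta = np.eye(3)                         # Kronecker delta, delta_ij
T = np.arange(9.0).reshape(3, 3)          # some components T_ij

# u . v = u_i v_i, cf. (1.48): the dummy index i implies summation
assert np.isclose(np.einsum('i,i->', u, v), np.dot(u, v))

# Substitution property (1.54): delta_ij y_j = y_i, delta_ij T_jk = T_ik
assert np.allclose(np.einsum('ij,j->i', delta, v), v)
assert np.allclose(np.einsum('ij,jk->ik', delta, T), T)

# cos(theta) from (1.49), with distinct dummy indices j and k
cos_theta = np.einsum('i,i->', u, v) / (
    np.sqrt(np.einsum('j,j->', u, u)) * np.sqrt(np.einsum('k,k->', v, v)))
print("cos(theta) =", cos_theta)
```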
In passing, by (1.52)2 we have introduced the 3 × 3 determinant, which will be used again later on.
Changing the dummy index "i" in (1.56)1 to "j" and then substituting the resulting expression into (1.57)2 ,
we derive
ūi = (u j e j ) · ē i = (ē i · e j ) u j . (1.58)
Since each e i and each ē i are unit vectors, we infer that each e r · ē s (= ē s · e r ) is simply the cosine of the
angle between e r and ē s , called a direction cosine.
Now it may become clear that the two sets of standard components of any given vector u relative to
any two given standard bases are interrelated through (1.58). Fixing a standard basis e i and
allowing the basis ē i to run over all possible standard bases, from (1.58) we derive the following significant
fact.
Although the three standard components of any given vector u change with the standard basis, its three
components ūi relative to every standard basis ē i can be determined through (1.58) from its three
components ui relative to a given standard basis e i .
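A minimal numerical sketch of the transformation rule (1.58) is given below (Python/NumPy). The second standard basis is generated here by a rotation about the third axis; this particular choice, like the sample vector, is only an example.

```python
import numpy as np

e = np.eye(3)                                   # rows: e_1, e_2, e_3
phi = np.radians(30.0)
R = np.array([[np.cos(phi), -np.sin(phi), 0.0],
              [np.sin(phi),  np.cos(phi), 0.0],
              [0.0,          0.0,         1.0]])
ebar = (R @ e.T).T                              # rows: ebar_1, ebar_2, ebar_3

u = np.array([1.0, 2.0, 3.0])                   # components u_i = u . e_i

# Direction cosines ebar_i . e_j and the transformation rule (1.58)
A = np.einsum('ik,jk->ij', ebar, e)             # A[i, j] = ebar_i . e_j
ubar = A @ u                                    # ubar_i = (ebar_i . e_j) u_j

# Cross-check: the same components obtained directly as projections u . ebar_i
assert np.allclose(ubar, ebar @ u)
print("components relative to ebar_i:", ubar)
```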
(g 1 × g 2 ) · g 3 ≠ 0 . (1.59)
In Fig. 1.14, we replace each e i with the corresponding g i . Then, in this general case, the twin vector rectangles in Fig.
1.14 become twin vector parallelograms. By using the parallelogram rule for vector addition twice
we can then derive
u = u 1 + u 2 + u 3 = u^i g i , (1.60)
where u i is collinear with g i . Here, by using the superscript symbol u^i instead of the subscript symbol ui , we intend to
distinguish this case from the corresponding one that will be treated below.
The above expression is of the same form as (1.39)1 , but its three coefficients u^i are no longer the
projection components as given by (1.39)2 . Because the three g i need not be orthonormal, each u^i is
related to u and the three g i in a complicated manner. An idea to overcome this difficulty is to introduce
another three vectors g^i given by
g^1 = (g 2 × g 3 ) / [(g 1 × g 2 ) · g 3 ] ,   g^2 = (g 3 × g 1 ) / [(g 1 × g 2 ) · g 3 ] ,   g^3 = (g 1 × g 2 ) / [(g 1 × g 2 ) · g 3 ] . (1.61)
By using (1.25) and (1.26) it may be readily shown that the following so-called reciprocity conditions hold:
g^i · g j = δ^i j . (1.62)
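The construction (1.61) and the reciprocity conditions (1.62) are easy to check numerically. The following Python (NumPy) sketch does so for an arbitrarily chosen basis; it also assumes the usual relation u^i = u · g^i for the contravariant components introduced in (1.60).

```python
import numpy as np

# An arbitrary basis g_1, g_2, g_3 satisfying (1.59)
g = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.5, 0.3, 2.0]])            # rows: g_1, g_2, g_3
vol = np.dot(np.cross(g[0], g[1]), g[2])   # (g_1 x g_2) . g_3, nonzero

# Reciprocal basis g^1, g^2, g^3 according to (1.61)
g_rec = np.array([np.cross(g[1], g[2]),
                  np.cross(g[2], g[0]),
                  np.cross(g[0], g[1])]) / vol

# Reciprocity conditions (1.62): g^i . g_j = delta^i_j
assert np.allclose(g_rec @ g.T, np.eye(3))

# Contravariant components u^i = u . g^i reproduce u through u = u^i g_i
u = np.array([0.7, -1.2, 2.5])
u_contra = g_rec @ u
assert np.allclose(u_contra @ g, u)
print("contravariant components:", u_contra)
```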
ūi = (ḡ i · g j ) u^j ,    ū^i = (ḡ^i · g^j ) u j ; (1.71)
and
ū^i = (ḡ^i · g j ) u^j ,    ūi = (ḡ i · g^j ) u j . (1.72)
Comparing the results in this section and the last section, it may be seen that the use of a standard basis e i meeting
(1.41) is much simpler and more convenient than the use of a general basis g i meeting only (1.59).
Another perhaps essential fact is that a standard component is really a physical component, whereas that
is not the case for either a co- or a contravariant component. These contrasts will become sharper when
tensor quantities are treated.
Thus, if possible, for the sake of simplicity and convenience, a standard basis should be chosen and
used.
different from the components ū^i relative to the basis ḡ i . With the two bases g i and ḡ i and the corresponding
components u^i and ū^i , we form the expressions u^i g i and ū^i ḡ i , which yield two vectors. Now a question arises
as to whether these two expressions really yield the same vector. They should be the same according to
the algebraic expressions derived. But why and how?
Now we answer this question. Since u^i and ū^i are related to each other by the transformation rule
(1.72)1 , we have
ū^i ḡ i = (ḡ^i · g j ) u^j ḡ i = u^j (g j · ḡ^i ) ḡ i .
On the other hand, we can express each g j in terms of the basis ḡ i . Setting u = g j in (1.69)1 and (1.70)2 ,
we have
g j = (g j · ḡ^i ) ḡ i .
Hence, from the last two expressions we infer
ū^i ḡ i = u^j g j .
This confirms that each algebraic expression for vector in §1.3 and §1.4 always yields the same vector for
all possible bases. From this we conclude:
Although the components of a vector relative to different bases are different from each other, these
components are related to each other by a transformation rule and as such its algebraic expression always
expresses the same vector.
Sometimes, for every basis g i , we may give three scalars yi in a determined manner. For instance, let u
and v be two given vectors, and their components be ui and vi relative to each basis g i . Then, we give three
scalars yi = ui vi (no summation here) for each basis g i .
For each basis g i and three associated scalars yi , we form an expression yi g i . Evidently, this expression
produces a vector for each choice of basis g i . Only when the scalars yi given for all bases g i are related to
each other by the transformation rule (1.72)1 does the foregoing expression always yield the same vector for
all bases g i .
relative to all bases g i . Otherwise, the foregoing expression generates different vectors for different choices
of basis g i , and, hence, we could not say that the scalars yi associated with different choices of basis g i are
the components of the same vector relative to different bases.
When three ordered scalars yi associated with bases g i are treated, it would be helpful to keep the above
fact in mind.
Sometimes it is said that each triplet (y1 , y2 , y3 ) determines a vector and hence may be regarded as a
vector. From expression (1.69)1 , this is reasonable only when a basis is specified. Clearly, for the same
triplet yi , expression (1.69)1 generally supplies different vectors for different bases g i .
Finally, we like to remark that once a vector as a basis-free entity is defined, its components relative
to various bases, no matter whether standard or arbitrary, are simply derived consequences of its algebraic
expressions relative to these bases. Hence, it might not be reasonable to say that different kinds of com-
ponents, i.e., standard, co- and contravariant components, would generate different kinds of vectors. A
meaningful question is the following: if we introduce and define a triplet of ordered numbers (y1 , y2 , y3 ) associated
with every basis g i in a certain manner, we would like to know whether or not the expression yi g i produces the
same vector for all choices of basis g i , i.e., whether or not the introduced triplet supplies the components of the
same vector relative to all bases, as elaborated before.
Chapter 2
Second-Order Tensors
Starting from the notion of vector spaces, more sophisticated quantities called tensors may be introduced.
Among them, second-order tensors, including stress and strain tensors, play a prominent role in mechanics
and engineering and supply us with unified and powerful tools and means to understand and resolve
certain basic and difficult issues. This chapter is devoted to second-order tensors. Tensors of higher order
will be discussed elsewhere.
It seems helpful to always keep the notion of linear mapping or transformation in mind. The simplest
example is a linear function F which transforms each scalar in a scalar set S1 into a scalar in another scalar
set S2 . Symbolically, we may signify this by
F : x ∈ S1 −→ y ∈ S2 ,
y = F[x] ,
with a linear rule
F[αx] = αF[x] ,   F[x1 + x2 ] = F[x1 ] + F[x2 ] ,
for any number α and for any scalars x, x1 , x2 ∈ S1 .
Here, the important matter is to regard F as a transformation-like quantity whose function or role is to
transform each object in a set into an object in another set. To make this always evident, we introduce
the particular notation F[x], which means that F is a transformation-like quantity and that this quantity
transforms each object x into another object F[x] according to a linear rule.
In this and the next two chapters, this simple notion will be essential to understanding the various cases for
tensors, in which we interpret the scalars x ∈ S1 and y ∈ S2 here as the various objects involved.
vector with the arrowhead at the end opposite to point A. We call each such vector a linear element at point
A. Each linear element at point A will experience a rotation and a length change in the course of deformation of
the material body at issue. Such rotations and length changes for all linear elements at point A constitute
the deformation state at the material point A. It may be evident that the deformation state of a deforming body
is determined by the deformation states at all points inside it.
Then, how can we describe and determine the deformation state at a material point? Let point A in an
undeformed material body move to point Ā after the material body experiences a deformation. Then, a linear
element l at point A in the undeformed material body will become and correspond to a linear element l̄ at
point Ā in the deformed material body, as shown in Fig. 2.1.
[Figure 2.1: A linear element l at point A in the undeformed body and the corresponding linear element l̄ at point Ā in the deformed body.]
Now we come to the important idea that the deformation state at point A can be determined whenever
we can find a transformation-like quantity L which establishes a correspondence relationship between
all the linear elements l at point A, denoted VA , and all the linear elements l̄ at point Ā, denoted VĀ .
Symbolically, we signify this transformation-like quantity as follows:
L : l ∈ VA −→ l̄ ∈ VĀ ,    l̄ = L [l ] . (2.1)
How can we characterize and determine the stressed state at point Ā? A basic idea is that we also find
a transformation-like quantity Γ which establishes a correspondence relationship between all the area
elements at point Ā, denoted SĀ , and all possible area forces at point Ā, denoted FĀ . Symbolically, we
signify this below:
Γ : s ∈ SĀ −→ f s ∈ FĀ ,    f s = Γ [s ] . (2.4)
The following properties hold:
Γ [αs ] = α Γ [s ] (2.5)
for every area element s ∈ SĀ , and
Γ [s 1 + s 2 ] = Γ [s 1 ] + Γ [s 2 ] (2.6)
for any two area elements s 1 , s 2 ∈ SĀ .
The property (2.5) may be evident. It means that α times the area element s corresponds to α times the contact
force f s . In particular, from Newton's action-reaction principle (see, e.g., Bruhns and Lehmann 1993) we
have
f −s = − f s ,  i.e.,  Γ [−s ] = −Γ [s ] ,
as indicated in Fig. 2.3. To understand property (2.6), out of the body we cut a small prismatic body around
point Ā with its base a triangle, as shown in Fig. 2.4.
[Figure 2.4: A small prismatic body cut out around point Ā, with the contact forces acting on its surfaces.]
Of the three upright surfaces of this prism, the area elements of two are just s 1 and s 2 , while the area
element of the third is just −(s 1 + s 2 ). There is a force acting on each of the five surfaces. Observing that
the forces acting on the upper and lower surfaces cancel each other out, from the principle of equilibrium
for a force system (see, e.g., Bruhns and Lehmann 1993), we derive
f s 1 + f s 2 + f −(s 1 +s 2 ) = 0 .
2.2.1 Definition
A common feature of (2.1)–(2.3) and (2.4)–(2.6) may be summarized as follows: find a transformation-
like quantity T which establishes a correspondence relationship between two vector spaces V and V̄ , i.e.,
T : u ∈ V −→ ū ∈ V̄ ,    ū = T [u ] , (2.7)
and the linearity property (2.8) we shall derive a unified, simple algebraic expression for second-order ten-
sors and show how to perform relevant operations in a simple manner. Toward this goal, we first introduce
basic algebraic operations for second-order tensors below.
In the subsequent account of this section, for the sake of simplicity, by tensor(s) we mean 2nd-order
tensor(s), unless indicated otherwise.
Moreover, we denote
(−1)T ≡ −T . (2.11)
The following rules may be evident:
0T = O ,   1T = T ; (2.12)
(αβ)T = α(βT ) = β(αT ) (associative rule) . (2.13)
2.2.3 Addition
Let T 1 and T 2 be any two given tensors. Their addition is denoted T 1 + T 2 . Their subtraction is given by
T 1 − T 2 = T 1 + (−T 2 ) . (2.15)
(i) Choose and fix a vector basis e i . Then the nine dyadic products e i ⊗ e j are given. Hence, from
expression (2.41) it turns out that a tensor T can be characterized and represented simply by the
nine scalar coefficients Ti j , each of which is related to the tensor T and the chosen basis e i by a
simple formula (2.39).
(ii) Given a basis e i and nine scalars Ti j , (2.41) generates a tensor. For all possible Ti j , (2.41) generates
all possible tensors. Then we have (cf. footnote 1)
(iii) The expression (2.41) could not be reduced further. Namely, any given eight members of the nine
dyadic products e i ⊗ e j are no longer adequate to express all tensors given in (2.42).
Consequently, each tensor space given by (2.42) is said to be 9-dimensional. Accordingly, we say that the
9 dyadic products e i ⊗ e j form a standard basis of the tensor spaces T .
Now it may be clear that, with a chosen standard basis e i and the standard expression (2.41), any given
tensor T is reduced to and determined by its nine standard components Ti j . Now we say that the abstract
notion of tensor might become simple in the following sense.
(i) Choose a standard basis e i . Form the dyadic products e i ⊗ e j and keep the simple transformation
property (2.36) in mind.
(ii) Determine the nine components Ti j relative to the chosen basis e i . Then the standard expression
(2.41) gives tensor T .
(iii) Recall the main motivation and the main purpose of introducing the notion of tensor. As a transformation-
like quantity, either a stress tensor or a deformation tensor provides us with a unified and powerful
tool for determining the stressed state or the deformation state. In so doing, by means of the standard
expression (2.41) we are able to transform, in a simple and unified manner, each initial linear element
or each area element into a deformed linear element or a force vector acting on the area element
under consideration. With (2.36) and (2.41) as well as the rules for the basic operations, the relevant
calculations now become accessible, efficient and standard. Indeed, we have the following formulas:
u · (T [u ]) = Ti j ui u j ; (2.43)
u · (T [v ]) = Ti j ui v j ; (2.44)
for any two vectors u and v . In the above, the ui and the vi are the standard components of the vectors u
and v relative to the chosen standard basis e i .
In addition to (2.43)–(2.44), the standard expression (2.41) may facilitate the implementation of the other
operations introduced before. The results are summarized as follows.
I = δi j e i ⊗ e j = e i ⊗ e i ; (2.45)
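As an illustration of the component formula (2.59) and of (2.43)–(2.45), the following Python (NumPy) sketch works with a tensor through its 3 × 3 array of standard components relative to a chosen standard basis; the sample components are arbitrary.

```python
import numpy as np

e = np.eye(3)                          # a standard basis e_i (Cartesian here)

T = np.array([[2.0, 1.0, 0.0],         # standard components T_ij of a tensor T
              [0.5, 3.0, -1.0],
              [0.0, 1.5, 1.0]])

def apply(T, u):
    """The linear transformation u -> T[u], with T given by its components."""
    return T @ u

# Components recovered as T_ij = e_i . T[e_j], cf. (2.39)/(2.59)
T_ij = np.array([[e[i] @ apply(T, e[j]) for j in range(3)] for i in range(3)])
assert np.allclose(T_ij, T)

# Formulas (2.43) and (2.44): u . T[v] = T_ij u_i v_j
u = np.array([1.0, -2.0, 0.5])
v = np.array([0.3, 0.7, 2.0])
assert np.isclose(u @ apply(T, v), np.einsum('ij,i,j->', T, u, v))

# The identity tensor (2.45): I = e_i (dyadic) e_i
I = sum(np.outer(e[i], e[i]) for i in range(3))
assert np.allclose(I, np.eye(3))
print("standard components recovered:\n", T_ij)
```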
The permutation symbol εi jk and the Kronecker symbol δi j introduced earlier are very useful in vector- and
tensor-calculus. These two symbols are related to each other by the following useful identity
More general relations and detailed account may be found, e.g., in Ogden (1984) and Betten (1987). The
foregoing identity leads to the following useful formula:
with
Ti j = e i · (T [e j ]) ,   T̄kl = ē k · (T [ē l ]) . (2.59)
Substituting (2.58)1 into (2.59)2 and then using (2.36) we deduce
i.e.,
T̄kl = (ē k · e i ) (ē l · e j ) Ti j . (2.60)
From the rule given by (2.60) we conclude that the 9 components of a tensor relative to any standard
basis ē i may be determined from its 9 components relative to a chosen standard basis e i as well as the 9
direction cosines ē i · e j .
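The transformation rule (2.60) can be checked numerically in the same spirit. In the following illustrative sketch the second standard basis is generated by a rotation, and the components T_ij are arbitrary sample values.

```python
import numpy as np

e = np.eye(3)                          # standard basis e_i
c, s = np.cos(0.4), np.sin(0.4)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
ebar = (R @ e.T).T                     # another standard basis ebar_i

T = np.array([[2.0, 1.0, 0.0],         # components T_ij relative to e_i
              [1.0, 3.0, -1.0],
              [0.0, -1.0, 1.0]])

# Transformation rule (2.60): Tbar_kl = (ebar_k . e_i)(ebar_l . e_j) T_ij
A = ebar @ e.T                         # direction cosines A[k, i] = ebar_k . e_i
Tbar = np.einsum('ki,lj,ij->kl', A, A, T)

# Cross-check: Tbar_kl must coincide with ebar_k . T[ebar_l]
Tbar_direct = np.array([[ebar[k] @ (T @ ebar[l]) for l in range(3)]
                        for k in range(3)])
assert np.allclose(Tbar, Tbar_direct)
print("Tbar =\n", Tbar)
```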
In terms of an arbitrary basis g i and its reciprocal basis g^i , we have the co- and contravariant expressions
(1.60) and (1.64) for each vector u . Hence we have
T [g i ] = T^j_i g j = T_ji g^j ,    T [g^i ] = T^ji g j = T_j^i g^j ,
where
T^j_i = g^j · T [g i ] ,   T_ji = g j · T [g i ] ,   T^ji = g^j · T [g^i ] ,   T_j^i = g j · T [g^i ] . (2.61)
Substituting these four expressions into the foregoing two and then utilizing (1.63) and (1.65) as well as
(2.36), we deduce
T [u ] = u^i T^j_i g j = T^j_i (g^i · u ) g j = (T^j_i g j ⊗ g^i )[u ] ,
T [u ] = u_i T^ji g j = T^ji (g i · u ) g j = (T^ji g j ⊗ g i )[u ] ,
and similarly for the other two expressions, i.e.,
T = T^j_i g j ⊗ g^i ;   T = T_ji g^j ⊗ g^i ;   T = T^ji g j ⊗ g i ;   T = T_j^i g^j ⊗ g i . (2.62)
Expressions (2.62)2 and (2.62)3 will be called the co- and contravariant expressions for the tensor T , respectively,
and expressions (2.62)1 and (2.62)4 the mixed-variant expressions for T . Accordingly, T_ji and T^ji are
referred to as the co- and contravariant components of T , and T^j_i and T_j^i as the mixed-variant components
of T . These four kinds of components are not independent: any one kind determines the
other three kinds. This can be done by substituting the four expressions of (2.62) into each expression of
(2.61). The results are as follows:
Now we establish the relationships between these components and those given by (2.61). Since we have four
expressions for T (see (2.62)), for each kind of components we can derive four transformation rules. This
is done by substituting (2.62) into each of the four expressions given last above, respectively. The results
are as follows:
T̄^j_i = (ḡ^j · g s )(ḡ i · g^r ) T^s_r = (ḡ^j · g^s )(ḡ i · g^r ) T_sr = (ḡ^j · g s )(ḡ i · g r ) T^sr = (ḡ^j · g^s )(ḡ i · g r ) T_s^r ; (2.67)
T̄_ji = (ḡ j · g s )(ḡ i · g^r ) T^s_r = (ḡ j · g^s )(ḡ i · g^r ) T_sr = (ḡ j · g s )(ḡ i · g r ) T^sr = (ḡ j · g^s )(ḡ i · g r ) T_s^r ; (2.68)
T̄^ji = (ḡ^j · g s )(ḡ^i · g^r ) T^s_r = (ḡ^j · g^s )(ḡ^i · g^r ) T_sr = (ḡ^j · g s )(ḡ^i · g r ) T^sr = (ḡ^j · g^s )(ḡ^i · g r ) T_s^r ; (2.69)
T̄_j^i = (ḡ j · g s )(ḡ^i · g^r ) T^s_r = (ḡ j · g^s )(ḡ^i · g^r ) T_sr = (ḡ j · g s )(ḡ^i · g r ) T^sr = (ḡ j · g^s )(ḡ^i · g r ) T_s^r . (2.70)
The four transformation rules represent the feature of a tensor as a basis-free entity and ensure that the
four expressions given by (2.62) always yield the same tensor T for different sets of components relative
to different choices of the basis g i .
As in the case of vectors, expressions for tensors in terms of an arbitrary basis g i are much more
complicated than the standard expression for tensors in terms of a standard basis e i . As remarked earlier,
if possible, it is recommended that a standard basis should be chosen and used for the sake of simplicity.
We call such a scalar λ an eigenvalue of tensor S , and such a vector v an eigenvector of S subordinate to
the eigenvalue λ.
The physical meaning of the above question may be clear for a stress tensor S . In this case, (2.71) means
that we would like to find an area element v such that the force vector acting on this area element takes the simplest
form, i.e., is collinear with v . In this case, an eigenvalue λ of the stress tensor S is usually called a principal stress.
By means of the eigenvalues and eigenvectors we may gain insight into essential features of 2nd-order
tensors, as will be seen in the next sections.
Before deriving a simplified form of the above condition, we introduce a useful and important quantity.
We refer to
tr S ≡ Skk = e k · S [e k ] (2.74)
as the trace of the tensor S , with the property
tr(αS + βT ) = α tr S + β tr T (2.75)
for any two scalars α and β and for any two tensors S and T .
By means of the above definition, we may further generate many other similar quantities, such as
tr(S 1 S 2 · · · S r ) for any given r 2nd-order tensors S 1 , · · ·, S r . In particular, S 1 = · · · = S r = S results in
Ir = tr S^r . (2.76)
It may be noted that the above definition might appear to produce different results for different bases. However,
by using the transformation formula (2.60) we infer
S̄kk = (ē k · e i )(ē k · e j ) Si j
for any other basis ē k . From (1.48) and (1.39)2 we know that (ē k · e i )(ē k · e j ) just yields the scalar product
of the vectors e i and e j . Hence we have
(ē k · e i )(ē k · e j ) = e i · e j = δi j .
Thus we conclude
S̄kk = Skk
for any two bases e k and ē k .
The above fact means that each of the trace trSS defined by (2.74) and the derived traces mentioned
before is independent of the choice of basis. We call such quantities invariants of tensor(s). A detailed
account in this respect will be given in Chapter 5.
Now, expanding (2.73) according to (1.52)2 , we arrive at
λ^3 − J1 λ^2 + J2 λ − J3 = 0 , (2.77)
where
J1 = I1 ,   J2 = (I1^2 − I2 )/2 ,   J3 = det S = (I1^3 − 3 I1 I2 + 2 I3 )/6 . (2.78)
Since the three traces Ir are invariants of S , so are the three Jr given above. Usually, the former and the
latter are called the basic and principal invariants of S , respectively. Because these invariants supply the same
values for all bases, equation (2.77) with (2.78) is the same for all choices of bases and is called the
characteristic equation of the tensor S .
A real or complex number satisfying the characteristic equation of tensor S is called a characteristic
root of S . Generally, for a tensor S there are three characteristic roots. Of them, at least one is real. It
should be noted that only a real characteristic root of S can be an eigenvalue of S . Hence, a 2nd-order
tensor S has at least one (real) eigenvalue. However, generally the other two characteristic roots may be a
pair of conjugate complex numbers and, in this case, they are not (real) eigenvalues of S .
The three roots of the characteristic equation (2.77) can be calculated by a direct formula. The
eigenvalues of S can then accordingly be determined.
However, the determination of the eigenvectors is not as easy and direct as that of the characteristic roots. Ac-
cording to (2.72), the three components v j of an eigenvector v relative to a standard basis e j are determined by
the following system of three linear equations in the v j :
for each eigenvalue λ of S . Since S − λI is singular, at most two of the above three equations are indepen-
dent.
(S^3 − J1 S^2 + J2 S − J3 I )[v ] = 0 .
Now suppose we have three eigenvectors v = g 1 , g 2 , g 3 which meet the condition (1.59). Then each vector
u may be expressed in terms of these three eigenvectors, as given by (1.60). From this fact and the last
equality in the above, we derive
Thus, we obtain
S^3 − J1 S^2 + J2 S − J3 I = O . (2.81)
This implies that every tensor satisfies its own characteristic equation. This intriguing yet very important
fact is known as the Cayley-Hamilton theorem.
A direct consequence of the Cayley-Hamilton theorem is that each power S^r for r ≥ 3 is expressible as a
linear combination of I , S and S^2 with each coefficient an invariant of S . Besides, each trace tr S^r for r ≥ 4
is determined by the three basic invariants of S .
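The invariants (2.78), the characteristic equation (2.77) and the Cayley-Hamilton theorem (2.81) lend themselves to a direct numerical check. The following Python (NumPy) sketch does this for an arbitrary sample tensor.

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
I = np.eye(3)

# Basic invariants I_r = tr S^r and principal invariants (2.78)
I1, I2, I3 = np.trace(S), np.trace(S @ S), np.trace(S @ S @ S)
J1 = I1
J2 = (I1**2 - I2) / 2.0
J3 = (I1**3 - 3.0 * I1 * I2 + 2.0 * I3) / 6.0
assert np.isclose(J3, np.linalg.det(S))          # J3 = det S

# Every characteristic root satisfies (2.77)
for lam in np.linalg.eigvals(S):
    assert np.isclose(lam**3 - J1 * lam**2 + J2 * lam - J3, 0.0)

# Cayley-Hamilton theorem (2.81): S^3 - J1 S^2 + J2 S - J3 I = O
CH = S @ S @ S - J1 * (S @ S) + J2 * S - J3 * I
assert np.allclose(CH, np.zeros((3, 3)))
print("principal invariants:", J1, J2, J3)
```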
S = Si j e i ⊗ e j = (1/2) Si j (e i ⊗ e j + e j ⊗ e i ) (2.83)
for a symmetric tensor S .
Symmetric tensors have nice characteristic properties and hence simpler structures, as shown below.
χ = λ − J1 /3 . (2.84)
Then we may recast equation (2.77) in the form
χ^3 − p χ − q = 0 , (2.85)
where
p = (3 I2 − I1^2 )/6 ,   q = (2 I1^3 − 9 I1 I2 + 9 I3 )/27 . (2.86)
Utilizing the symmetry condition Si j = S ji , we have
p = (1/6)[(S11 − S22 )^2 + (S22 − S33 )^2 + (S33 − S11 )^2 ] + S12^2 + S23^2 + S31^2 ≥ 0 . (2.87)
v ′ · S [v ] = v · S [v ′] .
with
l r · l s = δrs , (2.93)
Hence, the three eigenvectors of S form a standard basis of the vector space, called the eigenbasis of S . Utilizing
(2.21), (2.45) and (2.92), as well as the identity
for any tensor S and for any two vectors a and b , we derive
Thus, we arrive at
S = ∑_{r=1}^{3} λr l r ⊗ l r . (2.95)
It turns out that a symmetric tensor is endowed with a noticeably simple structure resulting from the
symmetry condition (2.82): it assumes an appealingly simple and elegant form in terms of its eigenbasis.
We refer to (2.95) as the characteristic expression of the symmetric tensor S .
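A short numerical sketch of the characteristic expression (2.95) is given below (Python/NumPy, arbitrary symmetric sample components); it also anticipates the inverse formula (2.96) stated in property (iii) below.

```python
import numpy as np

S = np.array([[4.0, 1.0, 0.0],         # a symmetric tensor
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, L = np.linalg.eigh(S)             # eigenvalues lam_r, orthonormal
                                       # eigenvectors l_r (columns of L)

# The eigenbasis is orthonormal, cf. (2.93)
assert np.allclose(L.T @ L, np.eye(3))

# Characteristic expression (2.95): S = sum_r lam_r l_r (dyadic) l_r
S_rebuilt = sum(lam[r] * np.outer(L[:, r], L[:, r]) for r in range(3))
assert np.allclose(S_rebuilt, S)

# No eigenvalue vanishes here, so the inverse follows from (2.96)
S_inv = sum((1.0 / lam[r]) * np.outer(L[:, r], L[:, r]) for r in range(3))
assert np.allclose(S_inv @ S, np.eye(3))
print("eigenvalues:", lam)
```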
Some important properties of the characteristic expression (2.95) for symmetric tensor S are as follows:
(i) Each symmetric tensor S may be determined and represented by its three eigenvalues and its eigen-
basis. All symmetric tensors can be generated through the characteristic expression (2.95) by
allowing the eigenvalues λr and the eigenbasis l r to run over all possible cases.
(ii) When u runs over all possible unit vectors, the maximum magnitude and the minimum magnitude of
the transformed vector S [u ] are just the greatest and the smallest of the magnitudes of the eigenvalues of S , attained
by setting u to be the corresponding subordinate eigenvectors, respectively. When S represents a strain tensor or
a stress tensor, this fact has significant physical meaning.
(iii) S is nonsingular, whenever each eigenvalue λr of S is non-vanishing. For a nonsingular S , we have
S^(−1) = ∑_{r=1}^{3} λr^(−1) l r ⊗ l r . (2.96)
for every non-negative integer α. In particular, we have S^0 = I . For a positive definite S , i.e., λr > 0,
we may extend the α in the above to cover all real numbers. This may go even further: more generally,
we may introduce the following symmetric 2nd-order tensor
g(S ) ≡ ∑_{r=1}^{3} g(λr ) l r ⊗ l r , (2.98)
where the function g(λ) is known as the scale function for the strain measure. The scale function
g(λ) = (λ^α − 1)/(2α)
produces
S^(α) = (S^α − I )/(2α) ,
which is a well-known subclass of strain tensors, known as the Doyle-Ericksen or Seth-Hill strain class
(see, e.g., Doyle and Ericksen 1956, Seth 1964, Hill 1968, 1978, Ogden 1984, Morman 1986, Başar
and Weichert 2000). When α = −0.5, −1, 0.5, 1, the last expression generates the known strain
tensors
I − S^(−1/2) ,   (1/2)(I − S^(−1)) ,   S^(1/2) − I ,   (1/2)(S − I ) ,
which are often associated with the names Cauchy, St. Venant, Green, Finger, Hamel, Swainger, etc.
In particular, the natural logarithmic scale function g(λ) = (1/2) ln λ for α = 0 yields the well-known
Hencky logarithmic strain tensor h = (1/2) ln S .
Of course, the function g(λ) in (2.98) may be taken as any continuous function of λ. Below are some
familiar examples:
g(λ) = e^λ , e^(−λ) , sin λ, cos λ, tg λ, ctg λ, sin^(−1) λ, cos^(−1) λ, tg^(−1) λ, ctg^(−1) λ.
These familiar functions and (2.98) define the following symmetric tensors:
sin S = S − (1/3!) S^3 + (1/5!) S^5 + · · · ,
cos S = I − (1/2!) S^2 + (1/4!) S^4 + · · · ,
ln S = (S − I ) − (1/2)(S − I )^2 + (1/3)(S − I )^3 + · · · ,
and many others. Although these tensor series may be instructive in relation to the well-known power series
expansions of elementary real functions, it seems that the characteristic expression (2.98) through the
function g(λ) may be more appropriate and convenient from the standpoints of application and computation.
It should be noted that the expression (2.98) is valid over the whole range of λ, whereas an infinite
power series is usually well-defined only within a restricted range; that is the case for the last expression for ln S .
Moreover, it may be difficult to cope with an infinite tensor power series, whereas it may be much easier
to treat the expression (2.98) by utilizing the characteristic properties of S , as will be shown below and in
§5.5.
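The following illustrative Python (NumPy) sketch evaluates tensor functions through the characteristic expression (2.98) for an arbitrary positive definite sample tensor, recovering the Hencky strain and one Seth-Hill strain, and compares ln S with the power series quoted above (which converges here since every eigenvalue of S lies between 0 and 2).

```python
import numpy as np

S = np.array([[1.4, 0.2, 0.0],         # a symmetric, positive definite tensor
              [0.2, 1.1, 0.1],
              [0.0, 0.1, 0.9]])
lam, L = np.linalg.eigh(S)

def g_of_S(g):
    """Tensor function g(S) via the characteristic expression (2.98)."""
    return sum(g(lam[r]) * np.outer(L[:, r], L[:, r]) for r in range(3))

# Hencky's logarithmic strain h = (1/2) ln S through its scale function
h = g_of_S(lambda x: 0.5 * np.log(x))

# The Seth-Hill member alpha = 1, i.e. the Green-type strain (1/2)(S - I)
assert np.allclose(g_of_S(lambda x: 0.5 * (x - 1.0)), 0.5 * (S - np.eye(3)))

# Cross-check ln S against its truncated power series
A, term, lnS_series = S - np.eye(3), np.eye(3), np.zeros((3, 3))
for k in range(1, 60):
    term = term @ A
    lnS_series += (-1.0) ** (k + 1) / k * term
assert np.allclose(2.0 * h, lnS_series)
print("Hencky strain h:\n", h)
```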
In expression (2.95), we may set two or all three of the eigenvalues λr coalescent. The two
degenerate cases yield
S = λ1 l 1 ⊗ l 1 + λ2 (l 2 ⊗ l 2 + l 3 ⊗ l 3 ) ; (2.99)
S = λ l i ⊗ l i = λ I ; (2.100)
respectively.
From the characteristic expression (2.95), we can derive simple expressions for the basic and principal
invariants of S . The results are as follows:
where mσ is the algebraic multiplicity of the eigenvalue λσ , i.e., its multiplicity as a repeated root of the
characteristic equation (2.77). Hence, each mσ is either 1 (simple root), 2 (twice repeated) or 3 (triply
repeated); moreover, l ′1 , · · ·, l ′mσ are mσ orthonormal eigenvectors subordinate to λσ .
For a repeated eigenvalue λσ , the above set of mσ orthonormal eigenvectors is non-unique; actually,
there exist infinitely many such sets. Nevertheless, each eigenprojection S σ given by (2.104) is uniquely
determined by S , as will be shown slightly later.
The eigenprojections have simple manipulable properties, as given below:
g(S ) = ∑_{σ=1}^{m} g(λσ ) S σ . (2.109)
Now we aim to derive an explicit expression for each eigenprojection S θ . In what follows we choose
and fix a θ from 1, 2, · · ·, m. Removing the factor (S − λθ I ) from the continued product (S − λ1 I )(S −
λ2 I ) · · · (S − λm I ), we obtain a continued product of the (m − 1) tensors (S − λτ I ) with τ = 1, 2, · · ·, m but
τ ≠ θ. We designate this continued product by
∏_{τ=1, τ≠θ}^{m} (S − λτ I ) .
Thus, observing S 1 = I for m = 1 and assuming that the above continued product vanishes for m = 1, we
obtain the following Sylvester formula in a unified form:
S θ = δ1m I + ∏_{τ=1, τ≠θ}^{m} (S − λτ I ) / (λθ − λτ ) . (2.111)
Now it may be clear that the characteristic expressions (2.107)–(2.109) can be calculated and deter-
mined in a straightforward manner by means of Sylvester’s formula (2.111) without involving computation
of the eigenbasis l r of S . Further discussion will be given in §5.5 of chapter 5.
The above results may be valid for n-dimensional symmetric tensors. Here, n = 3 and hence only three
cases for m need to be treated, as shown below.
For m = 3:
S 1 = −[(λ2 − λ3 )/∆] (S − λ2 I )(S − λ3 I ) , (2.112)
S 2 = −[(λ3 − λ1 )/∆] (S − λ3 I )(S − λ1 I ) , (2.113)
S 3 = −[(λ1 − λ2 )/∆] (S − λ1 I )(S − λ2 I ) , (2.114)
where
∆ = (λ1 − λ2 )(λ2 − λ3 )(λ3 − λ1 ) . (2.115)
For m = 2:
S 1 = (S − λ2 I ) / (λ1 − λ2 ) , (2.116)
S 2 = (S − λ1 I ) / (λ2 − λ1 ) . (2.117)
For m = 1:
S 1 = I . (2.118)
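Sylvester's formula translates directly into a short routine. The following Python (NumPy) sketch groups numerically coincident eigenvalues into the m distinct ones and forms the eigenprojections; the sample tensor, chosen with a double eigenvalue, is arbitrary.

```python
import numpy as np

def eigenprojections(S, tol=1e-8):
    """Eigenprojections of a symmetric 3x3 tensor via Sylvester's formula (2.111)."""
    lam = np.linalg.eigvalsh(S)
    distinct = []                       # the m distinct eigenvalues
    for x in lam:
        if not distinct or abs(x - distinct[-1]) > tol:
            distinct.append(x)
    I = np.eye(3)
    projections = []
    for l_theta in distinct:
        P = I.copy()                    # empty product gives S_1 = I for m = 1
        for l_tau in distinct:
            if l_tau != l_theta:
                P = P @ (S - l_tau * I) / (l_theta - l_tau)
        projections.append(P)
    return distinct, projections

S = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])         # eigenvalues 2 (double) and 4

lams, Ps = eigenprojections(S)
assert np.allclose(sum(l * P for l, P in zip(lams, Ps)), S)   # S = sum lam_s S_s
assert np.allclose(Ps[0] @ Ps[1], np.zeros((3, 3)))           # orthogonality
assert np.allclose(sum(Ps), np.eye(3))                        # partition of I
print("distinct eigenvalues:", lams)
```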
Q^T = Q^(−1) . (2.119)
Namely, the transposition of an orthogonal tensor yields its inverse. Hence, we have
(Q [u ]) · (Q [v ]) = v · (Q^T (Q [u ])) = v · ((Q^T Q )[u ]) = v · u ,
i.e.,
(Q [u ]) · (Q [v ]) = u · v . (2.121)
In particular, we have
|Q [u ]|^2 = (Q [u ]) · (Q [u ]) = u · u = |u |^2 . (2.122)
The last two expressions indicate the geometric feature of the orthogonal tensor Q as a transformation
quantity: both the length (magnitude) of any vector and the angle between any two vectors are preserved
under the transformation performed by Q .
In the mechanics of rigid bodies, we know that a rigid body is regarded as an imaginary or ideal material
body in which all linear elements undergo no length changes and no angle changes in the course of motion.
Such ideal kinematic behaviour can be characterized and represented exactly by an orthogonal tensor, in
conjunction with a displacement vector for any chosen point, such as the mass centre, etc. In addition,
the importance of orthogonal tensors consists in the fact that they are essential to and necessary for the
characterization of the geometric symmetry properties of the internal structural elements of solid materials, such
as crystalline and quasi-crystalline materials, etc., as will be seen in Chapter 5.
In the above, the fact that λ^2 = 1 is derived from (2.122). Then we infer that each eigenvalue of Q is either 1 or −1.
The unit eigenvector l in (2.123) is called the axial vector of the orthogonal tensor Q . Its importance will
become clear in the subsequent development.
[Figure 2.5: The orthonormal vectors e 1 and e 2 , orthogonal to the axial vector l , and their images Q [e 1 ] and Q [e 2 ] obtained by a rotation through the rotation angle.]
Using (2.123) and (2.124), we can calculate the standard components of Q relative to the standard basis
(ll , e 1 , e 2 ) and then apply the standard expression (2.41) with (2.39) we obtain
Expression (2.125) is referred to as the canonical expression for orthogonal tensor Q , and the angle ϕ
therein as the rotation angle of Q . Using e 1 × e 2 = l and (3.41) given later and observing
e1 ⊗ e1 + e2 ⊗ e2 = I − l ⊗ l
for any two orthonormal vectors e 1 and e 2 that are orthogonal to l , we know that the canonical expression
(2.126) is an explicit expression in terms of the unit axial vector l and the rotation angle ϕ. Such explicit
expressions may be found, e.g., in Guo (1981) and Başar and Weichert (2002).
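For readers who wish to experiment, the sketch below (Python/NumPy; the helper rotation_about and the sample axis, angle and vectors are assumptions of the illustration, and the formula used is the standard axis–angle construction rather than a quotation of (2.125)–(2.126)) builds a proper orthogonal tensor from a unit axial vector l and a rotation angle ϕ and checks (2.119), (2.121), (2.122) and tr Q = 1 + 2 cos ϕ.

import numpy as np

def rotation_about(l, phi):
    """Proper orthogonal tensor with unit axial vector l and rotation angle phi."""
    l = np.asarray(l, dtype=float)
    l = l / np.linalg.norm(l)
    I = np.eye(3)
    W = np.array([[0.0, -l[2], l[1]],
                  [l[2], 0.0, -l[0]],
                  [-l[1], l[0], 0.0]])       # skew tensor: W @ u == np.cross(l, u)
    return np.outer(l, l) + np.cos(phi) * (I - np.outer(l, l)) + np.sin(phi) * W

Q = rotation_about([1.0, 1.0, 1.0], np.pi / 3)
u = np.array([0.2, -1.0, 0.7])
v = np.array([1.5, 0.4, -0.3])
assert np.allclose(Q.T @ Q, np.eye(3))                          # (2.119): Q^T = Q^{-1}
assert np.isclose(np.linalg.det(Q), 1.0)                        # proper orthogonal: det Q = 1
assert np.isclose((Q @ u) @ (Q @ v), u @ v)                     # (2.121): angles preserved
assert np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u))     # (2.122): lengths preserved
assert np.isclose(np.trace(Q), 1.0 + 2.0 * np.cos(np.pi / 3))   # trace = 1 + 2 cos(phi)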
The powers of Q are given by
Moreover, from the canonical expression of Q we can calculate the basic and principal invariants. The
results are as follows:
I_1 = λ + 2 cos ϕ ,   I_2 = 1 + 2 cos 2ϕ ,   I_3 = λ + 2 cos 3ϕ ,
J_1 = λ + 2 cos ϕ ,   J_2 = 1 + 2λ cos ϕ ,   J_3 = λ .   (2.127)
If J_3 = det Q = 1, we call such a Q a proper orthogonal tensor. Otherwise, we have J_3 = det Q = −1,
and we call such a Q an improper orthogonal tensor.
For any vector u , we have
In the above, u 0 is the projection of the vector u in a plane normal to l . From this it may be clear that the
transformation performed by a proper orthogonal tensor Q is a rotation through the angle ϕ about an axis
in the direction of its axial vector l and may be thus referred to as a rotation straightforwardly. Besides, it
may be evident that any improper orthogonal tensor can be written as −Q Q with Q a rotation. As a result,
the transformation performed by an improper orthogonal tensor is the composition of a rotation and an
inversion that will be explained slightly later. These facts are indicated in Fig. 2.6.
Figure 2.6: (a) Transformation by proper orthogonal tensor (b) π-rotation and reflection
A rotation through the angle ϕ about the axis l is
designated by R_l^ϕ. Hence, the composition of the just-mentioned rotation and the inversion will be written
as −R_l^ϕ.
There are four simple yet important orthogonal tensors. The first is the simplest, i.e., the identity tensor
I , which results in no transformation at all. The second is −II , called inversion, by which every vector u
and its opposed vector −uu are transformed into each other. The third is
R_l^π = 2 l ⊗ l − I ,   (2.129)
called π-rotation about axis l , by which each vector u and its symmetric counterpart ûu with respect to axis
l are transformed into each other, as shown in Fig. 2.6. Finally, the fourth is
−R_l^π = I − 2 l ⊗ l ,   (2.130)
called reflection with respect to a plane normal to l (ll -plane), by which each vector u and its symmetric
counterpart ūu with respect to an l -plane are transformed into each other, as shown also in Fig. 2.6.
The above four particular orthogonal tensors play basic roles in describing the geometric symmetry
properties of internal basic structural elements of solid materials, as will be seen in §4.2.
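A minimal numerical check of the π-rotation (2.129) and the reflection (2.130), written in Python/NumPy with an arbitrarily chosen axis:

import numpy as np

l = np.array([0.0, 0.0, 1.0])               # unit axis (arbitrary choice)
I = np.eye(3)
R_pi = 2.0 * np.outer(l, l) - I             # pi-rotation about l, eq. (2.129)
refl = I - 2.0 * np.outer(l, l)             # reflection in the plane normal to l, eq. (2.130)

u = np.array([1.0, 2.0, 3.0])
# pi-rotation: the component along l is kept, the component normal to l is reversed
assert np.allclose(R_pi @ u, np.array([-1.0, -2.0, 3.0]))
# reflection: the component along l is reversed, the in-plane component is kept
assert np.allclose(refl @ u, np.array([1.0, 2.0, -3.0]))
# all four special tensors are orthogonal; I and the pi-rotation are proper, -I and the reflection improper
for Q, det in ((I, 1.0), (-I, -1.0), (R_pi, 1.0), (refl, -1.0)):
    assert np.allclose(Q.T @ Q, np.eye(3))
    assert np.isclose(np.linalg.det(Q), det)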
Ω^T = −Ω .   (2.131)
For such a tensor the characteristic equation reduces to
λ^3 + J_2 λ = 0 .
Since J2 > 0 for non-zero Ω , the only real root is λ = 0. This means that each skew-symmetric tensor Ω
has only one eigenvalue, given by 0. Now we introduce a vector ω by
Then we have
Ωω = 0 . (2.136)
Thus, the vector ω given by (2.135) is an eigenvector subordinate to the only eigenvalue of Ω . We refer to
ω as the associated axial vector of skew-symmetric tensor Ω .
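The relation between Ω and ω can be checked numerically. The Python/NumPy sketch below assumes the sign convention under which Ω[u] = ω × u for every vector u (the opposite sign is equally common); the helper axial_vector and the sample tensor are choices of the illustration.

import numpy as np

def axial_vector(W):
    """Axial vector of a skew-symmetric 3x3 matrix, with the convention W @ u == cross(omega, u)."""
    return 0.5 * np.array([W[2, 1] - W[1, 2],
                           W[0, 2] - W[2, 0],
                           W[1, 0] - W[0, 1]])

W = np.array([[0.0, -3.0, 2.0],
              [3.0, 0.0, -1.0],
              [-2.0, 1.0, 0.0]])             # an arbitrary skew-symmetric tensor
omega = axial_vector(W)
assert np.allclose(W.T, -W)                  # (2.131)
assert np.allclose(W @ omega, 0.0)           # (2.136): omega spans the null space of W
u = np.array([0.4, -0.2, 1.1])
assert np.allclose(W @ u, np.cross(omega, u))   # W acts as the cross product with omega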
Although the decomposition (2.143) is quite simple and straightforward, it can lead to important results.
Indeed, in kinematics of finite deformations of continuous bodies, the additive decomposition (2.143) for
the velocity gradient tensor L generates the (symmetric) stretching tensor (also: Eulerian strain rate, defor-
mation rate, rate of deformation, etc.) and the (skew-symmetric) vorticity tensor. It is commonly accepted
that the stretching tensor is a natural, far-reaching and fundamental kinematic quantity in both non-linear
solid mechanics and fluid dynamics. In addition, the vorticity tensor is basic in understanding the rotatory
motion of vortex flows of fluids, etc.
From (2.143) we deduce that a general 2nd-order tensor may be represented equivalently by its sym-
metric and antisymmetric parts. Since the latter two may be treated relatively easily, the just-mentioned
fact may be useful in some cases. Application will be made in §5.7.2.
For a symmetric tensor A , a simple yet useful decomposition is as follows:

A = (1/3)(tr A) I + Ā ,   (2.144)
where the first term is known as the spherical or uniform part of A and the second as the deviatoric part of
A , given by
Ā = A − (1/3)(tr A) I .   (2.145)
Note that the deviatoric part Ā is traceless, i.e.,
tr Ā = 0 .   (2.146)
The decomposition (2.144)–(2.145) is also quite simple, but it too can lead to important results.
Indeed, if A represents the stress tensor, then Ā is the deviatoric stress, which is significant in describing
the yield behaviour of solids, in particular of metals.
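A minimal Python/NumPy sketch of the decomposition (2.144)–(2.146); the numerical tensor is an arbitrary example, and the von Mises quantity at the end is included only to indicate the link to metal yielding:

import numpy as np

A = np.array([[100.0, 30.0, 0.0],
              [30.0, -40.0, 10.0],
              [0.0, 10.0, 25.0]])            # e.g. a stress tensor (arbitrary values)

sph = np.trace(A) / 3.0 * np.eye(3)          # spherical (uniform) part, (2.144)
dev = A - sph                                # deviatoric part, (2.145)

assert np.allclose(sph + dev, A)             # the split is additive
assert np.isclose(np.trace(dev), 0.0)        # (2.146): the deviator is traceless
J2 = 0.5 * np.tensordot(dev, dev)            # second invariant of the deviator
sigma_eq = np.sqrt(3.0 * J2)                 # von Mises equivalent stress
print("hydrostatic part:", np.trace(A) / 3.0, " von Mises stress:", sigma_eq)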
On the other hand, if A represents a finite strain tensor g (SS ) as defined by (2.98), where S is the
symmetric, positive definite tensor F F T or F T F (left and right Cauchy-Green deformation tensors) that
will be discussed below, then a relevant, significant question in kinematics of large deformation is whether
the decomposition (2.144) can achieve or realize a fully uncoupled separation of the volumetric deformation
from the total deformation. This requires that the uniform and deviatoric parts of g (SS ) depend merely on
the volumetric deformation detSS and the isochoric deformation (detSS )−1/3 S , respectively. It is known (see,
e.g., Richter 1949 and Lehmann 1960) that this is possible for g(λ) = (1/2) ln λ, i.e., for Hencky's logarithmic
strain (1/2) ln S . In fact, of all the strain measures given by (2.98), the latter is the only one having the just-
mentioned favourable property. A proof is as follows.
The spherical part of the tensor g(S) is given by
(1/3) tr g(S) = (1/3)(g(λ_1) + g(λ_2) + g(λ_3)) .
Now suppose that this part relies only on the volumetric deformation J_3 = det S , i.e.,
(1/3)(g(λ_1) + g(λ_2) + g(λ_3)) = f (J_3) .
Differentiating this expression with respect to λi and using (2.102)3 , i.e.,
J3 = λ1 λ2 λ3
and noting
∂J_3/∂λ_i = J_3/λ_i ,
we deduce
(1/3) g′(λ_i) = f ′(J_3) J_3/λ_i ,
i.e.,
3 J_3 f ′(J_3) = λ_i g′(λ_i) .
Observing that each λi and the Jacobian J3 are independent of each other for general triaxial deformations,
we infer that
3 J_3 f ′(J_3) = λ_i g′(λ_i) = constant
for every J3 and for every λi . From these and the normalized condition following (2.98), by setting λi = 1
we infer
3 J_3 f ′(J_3) = λ_i g′(λ_i) = (λ_i g′(λ_i))|_{λ_i =1} = 1/2 .
This leads to
3 J_3 f ′(J_3) = 1/2 ,   λ_i g′(λ_i) = 1/2 .
Thus, we arrive at
g(λ) = (1/2) ln λ ,   f (J_3) = (1/6) ln J_3 .
The above decomposition may be easily derived by utilizing (2.98) with g(λ) = (1/2) ln λ and observing the fact
that S and its isochoric part S̃ have their eigenvectors in common and their eigenvalues given by λ_i and
J_3^{−1/3} λ_i , respectively.
Note that
det S̃ = (J_3^{−1/3})^3 det S = 1 ,   tr(ln S̃) = 0 .
It may be evident that the decomposition (2.147)–(2.148) indeed realizes a fully uncoupled additive splitting
of the total deformation into the volumetric and the isochoric parts.
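This favourable property of the logarithmic strain is easy to verify numerically. The Python/NumPy sketch below (an arbitrary deformation gradient; the matrix logarithm is taken through the eigendecomposition of S) checks that the spherical part of (1/2) ln S depends on S only through det S and that its deviatoric part coincides with (1/2) ln of the isochoric part of S.

import numpy as np

def half_log(S):
    """Hencky strain 1/2 ln S of a symmetric positive definite matrix S."""
    lam, Q = np.linalg.eigh(S)
    return Q @ np.diag(0.5 * np.log(lam)) @ Q.T

F = np.array([[1.3, 0.2, 0.0],
              [0.1, 0.9, 0.3],
              [0.0, 0.0, 1.1]])              # an arbitrary deformation gradient, det F > 0
S = F @ F.T                                   # left Cauchy-Green tensor
S_iso = np.linalg.det(S) ** (-1.0 / 3.0) * S  # isochoric part, det S_iso = 1

E = half_log(S)
E_iso = half_log(S_iso)
dev = lambda T: T - np.trace(T) / 3.0 * np.eye(3)

# spherical part of 1/2 ln S is (1/6) ln det S, i.e. a function of det S only
assert np.isclose(np.trace(E) / 3.0, np.log(np.linalg.det(S)) / 6.0)
# deviatoric part of 1/2 ln S equals 1/2 ln of the isochoric part
assert np.allclose(dev(E), E_iso)
assert np.isclose(np.trace(E_iso), 0.0)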
Since
λ_r = l_r · ((F F^T)[l_r]) = (F^T l_r) · (F^T l_r)
and F is non-singular, we infer that each eigenvalue λ_r of F F^T is positive. As a result, we can define
V = (F F^T)^{1/2} = ∑_{r=1}^{3} λ_r^{1/2} l_r ⊗ l_r .   (2.151)
Hence we have
V^2 = F F^T .   (2.152)
Moreover, let
R = V^{−1} F = (F F^T)^{−1/2} F .   (2.153)
Then, using (2.31), (2.24) and (2.149), we have
R R^T = V^{−1} F F^T V^{−1} = V^{−1} V^2 V^{−1} = I ,
i.e., R is an orthogonal tensor, and (2.153) yields
F = V R .   (2.154)
From (2.21), (2.120), (2.24) and (2.153) we derive
F = I F = (R R^T)(V R) = R (R^T V R) .
From this we deduce that there exist a symmetric tensor U = R^T V R and an orthogonal tensor R such that
F = R U .   (2.156)
The symmetric tensor U is also positive definite, its eigenvalues are still given by λ_r , and its eigenvectors
are given by l̄_r .
Besides, using det R = ±1 and det U = √(λ_1 λ_2 λ_3) > 0, from the condition (2.146) and (2.156) and (2.56)
we deduce that det R = 1, i.e., R is a proper orthogonal tensor. This may be understood by setting V = I in
(2.153).
Expressions (2.153) and (2.155) are known as the left and right polar decompositions of F . They
indicate that a 2nd-order tensor of the property (2.146) is the composite product of a proper orthogonal
tensor and a symmetric, positive definite tensor. If F represents the deformation (gradient) tensor, the
left polar decomposition (2.153) means that the deformation state at each material point may be carried
out by the composition of two relatively simple procedures: first a rigid-body rotation (R R) and then a
subsequent pure strain (V V ) without rigid-body rotation, whereas the right polar decomposition (2.155)
implies a converse composition: first a pure strain (U U ) without rigid-body rotation and then a subsequent
rigid-body rotation (RR).
From either (2.151) and (2.154) or (2.155)–(2.156) we derive the following useful results:
F = ∑_{r=1}^{3} λ_r^{1/2} l_r ⊗ l̄_r ,   (2.157)
R = ∑_{r=1}^{3} l_r ⊗ l̄_r ,   (2.158)
where λ_r are the three common eigenvalues of the symmetric, positive definite tensors F F^T and F^T F , called the left
and right Cauchy-Green deformation tensors, and l_r and l̄_r are their respective subordinate orthonormal
eigenvectors.
The left and right polar decompositions are basic in kinematic analysis and constitutive formulations in
non-linear continuum mechanics.
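The constructions (2.151)–(2.156) translate directly into a few lines of Python/NumPy. The sketch below (the helper polar_decomposition and the sample F, assumed to have det F > 0, are choices of the illustration) computes V, R and U and checks the stated properties.

import numpy as np

def polar_decomposition(F):
    """Left and right polar decompositions F = V R = R U via (2.151)-(2.156)."""
    lam, L = np.linalg.eigh(F @ F.T)          # eigenvalues/eigenvectors of F F^T
    V = L @ np.diag(np.sqrt(lam)) @ L.T       # V = (F F^T)^{1/2}
    R = np.linalg.inv(V) @ F                  # R = V^{-1} F
    U = R.T @ V @ R                           # right stretch tensor
    return V, R, U

F = np.array([[1.2, 0.3, 0.0],
              [0.0, 0.9, 0.4],
              [0.1, 0.0, 1.1]])               # arbitrary example with det F > 0
V, R, U = polar_decomposition(F)
assert np.allclose(R.T @ R, np.eye(3))                      # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)                    # and proper, since det F > 0
assert np.allclose(V @ R, F) and np.allclose(R @ U, F)      # F = V R = R U
assert np.allclose(U, U.T) and np.all(np.linalg.eigvalsh(U) > 0)
# F F^T and F^T F (left and right Cauchy-Green tensors) share the same eigenvalues
assert np.allclose(np.linalg.eigvalsh(F @ F.T), np.linalg.eigvalsh(F.T @ F))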
Chapter 3
Tensors of Higher Order
As a linear mapping transforming each vector into a vector, the notion of 2nd-order tensors supplies us
with a powerful, unified tool to resolve some basic, difficult problems, such as characterization and deter-
mination of the deformation state and the stressed state at each point in a deforming material body, etc. For
various purposes, however, we need more sophisticated quantities to resolve more involved problems. For
instance, we need quantities that can transform each vector into a 2nd-order tensor, each 2nd-order tensor
into a vector, and each 2nd-order tensor into a 2nd-order tensor, etc. In what follows, a general formulation
of this consideration will first be presented, and then follows a further account of the frequently used
3rd- and 4th-order tensors.
3.1 Definitions
First, a 2nd-order tensor T is a linear mapping transforming each vector into a vector; refer to (2.7)–(2.8).
Let V and T_2 be a vector space and a 2nd-order tensor space, respectively. Then, a 3rd-order tensor M is a linear
mapping transforming each vector into a 2nd-order tensor, i.e.,
M : u ∈ V −→ S ∈ T2 ,
(3.1)
S = M [uu] ,
with
M [α1 u 1 + α2 u 2 ] = α1 M [uu1 ] + α2 M [uu2 ] for any vectors u 1 , u 2 ∈ V . (3.2)
As in the case of 2nd-order tensors, we may define null 3rd-order tensor, addition, and multiplication by
number for 3rd-order tensors. Then, a 3rd-order tensor space is a collection of 3rd-order tensors in which
these two basic operations can be defined and performed in a self-contained manner.
Let T3 be a 3rd-order tensor space. Then, a 4th-order tensor L is a linear mapping transforming each
vector into a 3rd-order tensor, i.e.,
L : u ∈ V −→ M ∈ T3 ,
(3.3)
M = L[uu] ,
with
L[α1 u 1 + α2 u 2 ] = α1 L [uu1 ] + α2 L[uu2 ] for any vectors u 1 , u 2 ∈ V . (3.4)
Introducing null 4th-order tensor, addition, and multiplication by number in a similar manner as in the case
of 2nd-order tensors, we obtain 4th-order tensor spaces, denoted T4 .
Following and continuing the above recurrence procedure, we can define nth-order tensors, then null
nth-order tensor, addition, and multiplication by number for them and finally nth-order tensor spaces,
denoted by Tn . Here, n = 2, 3, 4, · · ·. In particular, for the sake of completeness, sometimes we call each
scalar and each vector zeroth- and first-order tensors, respectively.
Thus, we have defined nth-order tensors and nth-order tensor spaces Tn for each non-negative integer n.
However, it should be noted that vectors, i.e., 1st-order tensors, play a fundamental role in defining tensors
of different order. Before introducing and defining tensors, first of all, we have to know how to introduce
and define vectors and their basic operations. To stress this fact, we may call each Tn for each n ≥ 2 an
nth-order tensor space over a 3-dimensional base vector space. Each such tensor is called a 3-dimensional
tensor. We like to mention in passing that m-dimensional tensors in a general sense may be defined by
choosing an m-dimensional vector space as a base vector space. This respect will be touched on in §3.10
of this chapter.
Another fact is that a tensor of higher order possesses richer transformation functions as the order in-
creases. This will be explained later.
The tensor product of a 2nd-order tensor T and a vector e is the 3rd-order tensor T ⊗ e defined by
(T ⊗ e)[u] ≡ (u · e) T   for any vector u .   (3.5)
Let vectors e1 , e2 and e3 form a standard basis. Then, by repeatedly using the notion of tensor product
we obtain the simplest qth-order tensors:
e i1 ⊗ · · · ⊗ e iq , 1 ≤ i1 , · · · , iq ≤ 3 . (3.6)
Note here that each index i_t runs over 1, 2 and 3. Hence, the total number of such tensors is 3^q .
Let H be a qth-order tensor and u 1 , · · · , u q be q vectors. Then we may introduce q tensors H 1 , · · ·, H q
by the following recurrence procedure:
H0 = H ,
(3.7)
H t = H t−1 [uuq−t+1 ], t = 1, · · · , q ,
Note in the above that, for any given 1 ≤ t ≤ q, the tensor H_{t−1} is a (q−t+1)th-order tensor and transforms
the vector u_{q−t+1} into the (q − t)th-order tensor H_t . Finally, at t = q the above procedure yields a scalar,
designated by
H q ≡ H [uu1 , · · · , u q ] . (3.8)
Let H be the tensor given by (3.6) in the above procedure. Then we have
(e_{i_1} ⊗ · · · ⊗ e_{i_q})[u_1 , · · · , u_q ] = (u_1 · e_{i_1}) · · · (u_q · e_{i_q}) .   (3.9)
In particular, we have
e i1 ⊗ · · · ⊗ e iq [eek1 , · · · , e kq ] = δi1 k1 · · · δiq kq . (3.10)
It will be seen that the notion of tensor product enables us to construct tensors of arbitrary order by
using a vector basis of vector spaces as building blocks. An extension of tensor product will be given in
§3.4.
M[u] = (u · e_i) M[e_i] ,
M[e_{i_3}] = M_{i_1 i_2 i_3} e_{i_1} ⊗ e_{i_2} .
where
Hi1 ···iq = H [eei1 , · · · , e iq ] . (3.14)
We refer to (3.13)–(3.14) as the standard expression of qth-order tensor H relative to the standard basis
ei , and the 3q scalars Hi1 ···iq as the standard components of H relative to the standard basis ei .
Of course, we can use another standard basis ēei to express H and we have
H = H̄i1 ···iq ēei1 ⊗ · · · ⊗ ēeiq ,
(3.15)
H̄i1 ···iq = H [ēei1 , · · · , ēeiq ] .
In some cases, tensors of higher order with certain index symmetric or skew-symmetric properties are
taken into account. Symmetry or skew-symmetry of qth-order tensor H (here q = 3) with respect to any
pair of indices in its standard components may be defined by following the corresponding definitions for a
2nd-order tensor. Thus, H is said to be symmetric (+) or skew-symmetric (-) with respect to its rth and sth
indices whenever
Hi1 ···ir ···is ···iq = ±Hi1 ···is ···ir ···iq . (3.17)
These definitions may be extended to cover several pairs of indices. A tensor may be partially symmetric
and/or partially skew-symmetric. Two particular cases are completely symmetric and completely skew-
symmetric. In these cases, tensor H is symmetric (skew-symmetric) with respect to any pair of its indices.
where
Hdi1 c · · · diq c = H [ggbi1 e, · · · , gbiq e] . (3.19)
In the above, a particular treatment is introduced for indices. First, each bit e and each dit c represent a
conjugate pair of subscript and superscript. The correspondence relationship between each such pair bit e
and each dit c is specified as follows: Each dit c can be freely taken as either of a superscript and a subscript,
and, corresponding with these two choices, each bit e is taken as a superscript and a subscript, respectively,
when bit e is taken as a subscript and a superscript, respectively. These are summarized below:
superscript if bit e is taken as subscript ,
dit c = (3.20)
subscript if bit e is taken as superscript .
With the above treatment in mind, (3.18)–(3.19) actually represent 2q different expressions and accord-
ingly we have 2q different kinds of components. For instance, we have 22 (= 4) different expressions for
2nd-order tensor (see (2.62)) and four different kinds of components (see (2.61)). For 3rd- and 4th-order
tensors, we have 23 (= 8) and 24 (= 16) different expressions and 8 and 16 different kinds of components.
Any of the 2^q different kinds of components determines all the others, as exemplified by (2.63)–(2.66)
for 2nd-order tensors. Here we no longer present a large variety of such results.
In (3.18)–(3.19) we can replace basis g i and its reciprocal basis g i with any other basis ḡgi and its
reciprocal basis ḡgi . Then the components of tensor H become
We also have 2q different expressions relative to the new pair of bases, ḡgi and ḡgi , and 2q corresponding
different kinds of components. It may be clear that a transformation rule may be derived for the same
kind of components and for any two kinds of components. Such transformation rules are in abundance,
as exemplified in the case of 2nd-order tensors (see (2.67)–(2.70)). Here we no longer supply such results,
which increase rapidly in number as the order q becomes larger.
It may be seen that the standard expression of a tensor in terms of a standard basis of vector spaces
is much simpler than its expression in terms of a general non-standard basis. Hence, it may be expedient
to use a standard basis. We would like to point out that, for any given tensor, a standard expression is
never less general than a so-called general expression in terms of an arbitrary basis. Here the essence
is that if either a vector or a tensor is already well-defined or introduced in a basis-free manner, then its
expression in terms of any chosen basis determines the expressions in terms of all the other bases. Of
course, if a tensor would be defined or introduced in terms of bases, then it should be demonstrated that
this definition is applicable to all possible choices of bases. That is why we define and introduce algebraic
operations for vectors and tensors, which yield vectors and tensors, from an abstract standpoint without
involving bases. So long as suitable definitions are ready, algebraic expressions in terms of standard basis
or arbitrary basis may considerably facilicate their understanding and related operations, as has been seen
in the developments till now and in future.
For different choices of standard basis, the standard components of tensors G and H are different. Now we
examine how the tensor defined by (3.21) changes with the change of basis.
To this end, we choose any other standard basis ēei . The standard components of tensors G and H
relative to this basis are given by Ḡi1 ···i p and H̄k1 ···kq . According to (3.21), we construct the (p + q − 2t)th-
order tensor
T̄ ≡ Ḡ_{i_1···i_{p−t} j_1··· j_t} H̄_{j_1··· j_t k_1···k_{q−t}} ē_{i_1} ⊗ · · · ⊗ ē_{i_{p−t}} ⊗ ē_{k_1} ⊗ · · · ⊗ ē_{k_{q−t}} .   (3.22)
Are the above two (p + q − 2t)th-order tensors, constructed by the same rule but corresponding to two
different bases, the same or different?
To answer this question, applying the transformation rule (3.16) to (3.22) and using the equalities (see
(1.39) and (1.48))
(ēea · e b )(ēea · e c ) = e b · e c = δbc , (ēea · e b )ēea = e b ,
we deduce
T̄ = (ē_{i_1} · e_{l_1}) · · · (ē_{i_{p−t}} · e_{l_{p−t}}) (ē_{j_1} · e_{m_1}) · · · (ē_{j_t} · e_{m_t}) G_{l_1···l_{p−t} m_1···m_t} ×
  (ē_{j_1} · e_{r_1}) · · · (ē_{j_t} · e_{r_t}) (ē_{k_1} · e_{s_1}) · · · (ē_{k_{q−t}} · e_{s_{q−t}}) H_{r_1···r_t s_1···s_{q−t}} ×
  ē_{i_1} ⊗ · · · ⊗ ē_{i_{p−t}} ⊗ ē_{k_1} ⊗ · · · ⊗ ē_{k_{q−t}}
  = δ_{m_1 r_1} · · · δ_{m_t r_t} G_{l_1···l_{p−t} m_1···m_t} H_{r_1···r_t s_1···s_{q−t}} e_{l_1} ⊗ · · · ⊗ e_{l_{p−t}} ⊗ e_{s_1} ⊗ · · · ⊗ e_{s_{q−t}}
  = G_{i_1···i_{p−t} j_1··· j_t} H_{j_1··· j_t k_1···k_{q−t}} e_{i_1} ⊗ · · · ⊗ e_{i_{p−t}} ⊗ e_{k_1} ⊗ · · · ⊗ e_{k_{q−t}} = T .
It turns out that expression (3.21) always yields the same (p + q − 2t)th-order tensor for all choices
of the basis e_i . We designate this tensor by G ·ᵗ H , namely,
G ·ᵗ H ≡ G_{i_1···i_{p−t} j_1··· j_t} H_{j_1··· j_t k_1···k_{q−t}} e_{i_1} ⊗ · · · ⊗ e_{i_{p−t}} ⊗ e_{k_1} ⊗ · · · ⊗ e_{k_{q−t}} .   (3.23)
Generally, we say that (3.23) is the t-dot product of pth- and qth-order tensors G and H . It may be easily
understood that this dot product is bilinear, i.e.,
G ·ᵗ (α_1 H_1 + α_2 H_2) = α_1 G ·ᵗ H_1 + α_2 G ·ᵗ H_2 ,
(α_1 G_1 + α_2 G_2) ·ᵗ H = α_1 G_1 ·ᵗ H + α_2 G_2 ·ᵗ H ,   (3.24)
for any numbers α1 and α2 and for any pth- and qth-order tensors G 1 , G 2 and H 1 , H 2 .
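In components relative to a standard basis, the t-dot product (3.23) is a contraction of the last t indices of G with the first t indices of H, which is exactly what numpy.tensordot computes. A short Python sketch with arbitrary random tensors:

import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3, 3))            # a 3rd-order tensor (p = 3)
H = rng.standard_normal((3, 3))               # a 2nd-order tensor (q = 2)

# t-dot products of G and H for t = 0, 1, 2: orders p + q - 2t = 5, 3, 1
G_tensor_H = np.tensordot(G, H, axes=0)       # t = 0: tensor product, eq. (3.25)
G_dot_H = np.tensordot(G, H, axes=1)          # t = 1
G_ddot_H = np.tensordot(G, H, axes=2)         # t = 2: G[H] in the sense of (3.26)

assert G_tensor_H.shape == (3, 3, 3, 3, 3)
assert G_dot_H.shape == (3, 3, 3)
assert G_ddot_H.shape == (3,)
# the scalar product (3.27)-(3.28) of two tensors of the same order, and the norm
A = rng.standard_normal((3, 3))
assert np.isclose(np.tensordot(A, A), np.linalg.norm(A) ** 2)
# bilinearity (3.24), here in the second argument
H2 = rng.standard_normal((3, 3))
assert np.allclose(np.tensordot(G, 2.0 * H + 3.0 * H2, axes=1),
                   2.0 * np.tensordot(G, H, axes=1) + 3.0 * np.tensordot(G, H2, axes=1))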
A number of useful operations for tensors may be regarded as examples of the above definition. In
particular, we set
G ⊗ H ≡ G ·⁰ H ,   (3.25)
and we refer to this zero-dot product as the tensor product of tensors G and H . It may be clear that the
dyadic product of two vectors and the tensor product of a tensor and a vector, introduced by (2.36) and
(3.5), are its two particular cases.
For p ≥ q = t or q ≥ p = t we set
G[H] ≡ G ·ᵗ H   (3.26)
for any pth-order tensor G and qth-order tensor H . In the case p > q = t, we say that tensor G transforms
linearly each qth-order tensor into a (p − q)th-order tensor through (3.26). Thus, a tensor of higher order is
endowed with multiple transformation functions: For each positive integer p < q, a qth-order tensor may
be used to linearly transform each pth-order tensor into a (q − p)th-order tensor. Clearly, (2.8), (3.1) and
(3.3) etc. are particular cases of this general notion.
When G and H are pth-order tensors, we have
G · H ≡ G ·ᵖ H ,   (3.27)
which yields a scalar. We call this the scalar product of tensors G and H of the same order. Evidently, we
have
|G|^2 ≡ G · G > 0   for each non-zero tensor G .   (3.28)
The scalar |G| is called the magnitude or norm of the tensor G . In particular, we have
u · v = u ·¹ v
for vectors u and v .
We say that the above defines the orthogonal transformation of the tensor G performed by Q , and Q ⋆ G is the
Q-transformed tensor of G . The following properties of the above orthogonal transformation may be useful:
Q ⋆ (α_1 G_1 + α_2 G_2) = α_1 Q ⋆ G_1 + α_2 Q ⋆ G_2 ,   (3.32)
(Q_1 Q_2) ⋆ G = Q_1 ⋆ (Q_2 ⋆ G) ,   (3.33)
Q ⋆ (G ·ᵗ H) = (Q ⋆ G) ·ᵗ (Q ⋆ H) ,   (3.34)
|Q ⋆ G| = |G| ,   (3.35)
for any two numbers α_1 , α_2 , for any orthogonal tensors Q , Q_1 , Q_2 , for any pth-order tensors G , G_1 , G_2 , for
any qth-order tensor H , and for any integer t between 0 and the smaller of p and q. The last expression
above just indicates that the orthogonal transformation of a tensor preserves its magnitude.
In particular, the orthogonal transformation of vector u and 2nd-order tensor S performed by Q are
given by
Q ⋆ u = Q[u] ,   Q ⋆ S = Q S Q^T .   (3.36)
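More generally, the Q-transformed tensor of a qth-order tensor has standard components Q_{i_1 j_1} · · · Q_{i_q j_q} G_{j_1···j_q}. The Python/NumPy sketch below (the helper star and the random tensors are choices of the illustration) implements this rule and checks (3.33), (3.35) and (3.36).

import numpy as np

def star(Q, G):
    """Orthogonal transformation Q * G of a tensor of arbitrary order (rotates every index)."""
    G = np.asarray(G, dtype=float)
    for axis in range(G.ndim):
        G = np.tensordot(Q, G, axes=(1, axis))   # contract Q with the current index
        G = np.moveaxis(G, 0, axis)              # put the rotated index back in place
    return G

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # an orthogonal tensor
u = rng.standard_normal(3)
S = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3, 3, 3))

assert np.allclose(star(Q, u), Q @ u)             # (3.36)_1
assert np.allclose(star(Q, S), Q @ S @ Q.T)       # (3.36)_2
assert np.isclose(np.sqrt((star(Q, C) ** 2).sum()), np.sqrt((C ** 2).sum()))   # (3.35)
Q2, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(star(Q @ Q2, C), star(Q, star(Q2, C)))                      # (3.33)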
In the succeeding sections we shall study certain kinds of 3rd- and 4th-order tensors that are useful in
mechanics and engineering and other related fields.
ε ≡ ε_{ijk} e_i ⊗ e_j ⊗ e_k ,   (3.39)
where ε_{ijk} is the permutation symbol given by (2.53). The tensor ε given above is completely skew-symmetric
and is known as the 3rd-order permutation tensor.
Some applications of the permutation tensor ε are as follows:
u × v = (ε · v) · u = (1/2) ε[u ⊗ v − v ⊗ u] ,   (3.40)
ε[u × v] = u ⊗ v − v ⊗ u ;   (3.41)
ω = (1/2) ε[Ω] = (1/2) ε : Ω ,   (3.42)
Ω = ε[ω] = ε · ω ;   (3.43)
for any vectors u and v and for any skew-symmetric 2nd-order tensor Ω and its associated axial vector ω
(see (2.137)–(2.138)).
(L[A]) · S = A · (L[S])   (3.49)
I = e_i ⊗ e_j ⊗ e_i ⊗ e_j ,   (3.50)
Isym = (1/2) (e_i ⊗ e_j ⊗ e_i ⊗ e_j + e_i ⊗ e_j ⊗ e_j ⊗ e_i) ,   (3.51)
Iskw = (1/2) (e_i ⊗ e_j ⊗ e_i ⊗ e_j − e_i ⊗ e_j ⊗ e_j ⊗ e_i) ,   (3.52)
Īsym = Isym − (1/3) I ⊗ I .   (3.53)
3
Evidently, we have the decomposition
I = Isym + Iskw , (3.54)
1
Isym = I ⊗ I + Īsym , (3.55)
3
which form a correspondence with the additive decompositions (2.142)–(2.143) and (2.144), respectively.
The standard components of the foregoing four tensors are given by
I_{ijkl} = δ_{ik} δ_{jl} ,   (3.56)
Isym_{ijkl} = (1/2)(δ_{ik} δ_{jl} + δ_{il} δ_{jk}) ,   (3.57)
Iskw_{ijkl} = (1/2)(δ_{ik} δ_{jl} − δ_{il} δ_{jk}) ,   (3.58)
Īsym_{ijkl} = (1/2)(δ_{ik} δ_{jl} + δ_{il} δ_{jk}) − (1/3) δ_{ij} δ_{kl} ,   (3.59)
relative to any standard basis ei . A noticeable fact is that these components assume the same forms for all
the bases.
Moreover, the following properties hold:
I[S] = S ;   (3.60)
Isym[S] = (1/2)(S + S^T) = sym S ;   (3.61)
Iskw[S] = (1/2)(S − S^T) = skw S ;   (3.62)
for any 2nd-order tensor S . In particular, we have
Isym[A] = A ,   Isym[Ω] = O ;   (3.63)
Iskw[A] = O ,   Iskw[Ω] = Ω ;   (3.64)
Īsym[A] = Ā ;   (3.65)
for any symmetric and antisymmetric tensors A and Ω .
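The identities (3.54)–(3.65) can be verified directly from the component forms (3.56)–(3.59). A Python/NumPy sketch (einsum is used for the double contraction L[S] = L_{ijkl} S_{kl}; the random tensor S is arbitrary):

import numpy as np

d = np.eye(3)                                          # Kronecker delta
II = np.einsum('ik,jl->ijkl', d, d)                    # I, eqs. (3.50)/(3.56)
I_sym = 0.5 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d))   # (3.57)
I_skw = 0.5 * (np.einsum('ik,jl->ijkl', d, d) - np.einsum('il,jk->ijkl', d, d))   # (3.58)
I_dev = I_sym - np.einsum('ij,kl->ijkl', d, d) / 3.0   # deviatoric projector, (3.53)/(3.59)

apply = lambda L, S: np.einsum('ijkl,kl->ij', L, S)    # the action L[S]

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))
A = S + S.T                                            # a symmetric tensor
W = S - S.T                                            # a skew-symmetric tensor

assert np.allclose(II, I_sym + I_skw)                  # (3.54)
assert np.allclose(apply(II, S), S)                    # (3.60)
assert np.allclose(apply(I_sym, S), 0.5 * (S + S.T))   # (3.61)
assert np.allclose(apply(I_skw, S), 0.5 * (S - S.T))   # (3.62)
assert np.allclose(apply(I_sym, W), 0.0)               # (3.63)_2
assert np.allclose(apply(I_skw, A), 0.0)               # (3.64)_1
dev = apply(I_dev, A)
assert np.isclose(np.trace(dev), 0.0) and np.allclose(dev, A - np.trace(A) / 3.0 * d)  # (3.65)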
A useful operation for two 4th-order tensors T and L is the composite product, which
yields a 4th-order tensor, is denoted T ∘ L, and is defined by
(T ∘ L)[S] ≡ T[L[S]]   for every 2nd-order tensor S .   (3.66)
We have
T ∘ L = T : L = T_{ijrs} L_{rskl} e_i ⊗ e_j ⊗ e_k ⊗ e_l .   (3.67)
The following rules hold true:
L◦I = I◦L = L, (3.68)
L ◦ L̂ = L̂ ◦ L = I . (3.71)
L ◦ L̂ = L̂ ◦ L = Isym . (3.72)
Such a L̂ is called the inverse tensor of 4th-order tensor L with minor symmetry, also denoted L−1 .
With the above definitions of inverse tensors and (3.66), we may invert (3.45) for a nonsingular L.
Indeed, we have
L^{−1}[S] = L^{−1}[L[A]] = (L^{−1} ∘ L)[A] .
From this and either (3.68) (non-symmetric S and E ) or (3.69) (symmetric S and E ), we derive
A further account of 4th-order tensors will be presented in §3.10 by extending the notions of vector and
2nd-order tensors.
Multiplication by number:
Composite products:
S T −→ Sir Tr j ; L ◦ C −→ Li jrs Crskl ; (3.78)
Tensor products:
Scalar products:
for any v 1 , v 2 , v ∈ V and for any α1 , α2 , α ∈ R . In V ∗ , we define addition of any two linear mappings
u ∗1 , u ∗2 ∈ V ∗ and multiplication of each linear mapping u ∗ by number α ∈ R by
T[x_1 , · · · , a x_α + a′ x′_α , · · · , x_r ] = a T[x_1 , · · · , x_α , · · · , x_r ] + a′ T[x_1 , · · · , x′_α , · · · , x_r ] ,
as an rth-order tensor.
There are so many possibilities for the rth-order tensors defined above. Indeed, since there are two
choices for each V̄α , i.e., either V or V ∗ , totally we have 2r different Cartesian products V̄1 × · · · × V̄r of
spaces, and, accordingly, the foregoing definition actually defines 2r different types of rth-order tensors. In
particular, we have four different types of 2nd-order tensors; they are linear mappings from V × V to R ,
from V ∗ × V to R , from V × V ∗ to R , and from V ∗ × V ∗ to R , separately. They can also be regarded as
linear transformations from V to V ∗ , from V to V , from V ∗ to V ∗ , and from V ∗ to V , respectively.
From an abstract and general viewpoint, a vector space and its dual space may be essentially different,
and hence different types of vectors and tensors should be distinguished, as shown above. However, a
Euclidean vector space, in which a scalar product is defined, coincides with its dual space, and the above
difference thus disappears.
Here we do not pursue a detailed account of general tensors along the above abstract line. For details,
refer to, e.g., Bowen and Wang (1976). This general notion may be used to treat tensor algebra and analysis
on manifolds related to deformable continua. Systematic and fruitful applications in nonlinear elasticity
and in nonlinear continuum mechanics are presented, e.g., by Marsden and Hughes (1983) and Bertram
(1989), et al. Some recent results are surveyed by Stumpf and Hoppe (1997).
The above general notion may lead to new understanding of the tensors introduced before. In the next
subsection we touch only on 4th-order tensors which are viewed as 2nd-order tensors over usual 2nd-order
tensor spaces. Their physical backgrounds are classical and micropolar elasticity tensors either in Cauchy’s
or in Green’s sense. Besides, photoelasticity is also characterized by 4th-order tensor. In this respect, refer
to Nye (1985) for details.
Following the same procedure in §2.6.2, we may derive the characteristic expression
C = λ1 A 1 ⊗ A 1 + · · · + λ6 A 6 ⊗ A 6 (3.87)
for each 4th-order tensor C with both minor and major symmetry properties, which is regarded to be a
symmetric 2nd-order tensor over a 6-dimensional vector space formed by symmetric 2nd-order tensors
over a 3-dimensional base vector space. The characteristic equation of the 4th-order tensor C as a 6-
dimensional 2nd-order tensor determines its six eigenvalues λ1 , · · ·, λ6 (see, e.g., Rychlewski 1984; Betten
1987, 1993). Now, each “eigenvector” A σ in the above, i.e., eigentensor, is a symmetric 2nd-order tensor
over a 3-dimensional vector space and the following orthonormalization condition holds:
A_σ · A_τ = δ_{στ} ,   σ, τ = 1, · · · , 6 ,   (3.88)
i.e., the six eigentensors A σ form a standard basis of usual symmetric 2nd-order tensor spaces as 6-
dimensional vector spaces. We have
A 1 ⊗ A 1 + · · · + A 6 ⊗ A 6 = Isym . (3.89)
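One concrete way to obtain (3.87)–(3.89) numerically is to represent C as a symmetric 6×6 matrix in an orthonormal (Mandel-type) basis of symmetric 2nd-order tensors, so that the eigenvalue problem becomes an ordinary one. The Python/NumPy sketch below is an illustration under that assumption; the isotropic elasticity tensor with arbitrarily chosen Lamé constants serves as the example, for which the spectrum 3λ + 2µ, 2µ (five-fold) is well known.

import numpy as np

def mandel_basis():
    """Orthonormal basis of the 6-dimensional space of symmetric 2nd-order tensors."""
    e = np.eye(3)
    B = [np.outer(e[i], e[i]) for i in range(3)]
    for i, j in [(1, 2), (0, 2), (0, 1)]:
        B.append((np.outer(e[i], e[j]) + np.outer(e[j], e[i])) / np.sqrt(2.0))
    return B

lam, mu = 2.0, 1.5                                     # arbitrary Lame constants
d = np.eye(3)
Isym = 0.5 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d))
C = lam * np.einsum('ij,kl->ijkl', d, d) + 2.0 * mu * Isym   # isotropic elasticity tensor

B = mandel_basis()
C6 = np.array([[np.einsum('ijkl,ij,kl->', C, Ba, Bb) for Bb in B] for Ba in B])
vals, vecs = np.linalg.eigh(C6)                        # the six eigenvalues, cf. (3.87)
eigentensors = [sum(vecs[a, s] * B[a] for a in range(6)) for s in range(6)]

# known spectrum of the isotropic tensor: 2 mu (five times) and 3 lam + 2 mu (once)
assert np.allclose(np.sort(vals), np.sort([2 * mu] * 5 + [3 * lam + 2 * mu]))
# orthonormality (3.88) of the eigentensors under the scalar product
for s in range(6):
    for t in range(6):
        assert np.isclose(np.tensordot(eigentensors[s], eigentensors[t]), float(s == t))
# the eigentensor dyads resolve Isym, cf. (3.89)
recon = sum(np.einsum('ij,kl->ijkl', A, A) for A in eigentensors)
assert np.allclose(recon, Isym)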
For each 4th-order tensor L with major symmetry property, similarly we have the characteristic expres-
sion
L = λ1 E 1 ⊗ E 1 + · · · + λ9 E 9 ⊗ E 9 . (3.90)
Now each “eigenvector” E r in the above, i.e., eigentensor, is a 2nd-order tensor over a 3-dimensional vector
space, which need not be symmetric. The following properties hold:
E_r · E_t = δ_{rt} ,   r, t = 1, · · · , 9 ,   (3.91)
C^{−1} = λ_1^{−1} A_1 ⊗ A_1 + · · · + λ_6^{−1} A_6 ⊗ A_6 ,   (3.93)
T = λ_1 P_1 + · · · + λ_r P_r .   (3.97)
In the above, Pθ is the eigenprojection of T subordinate to λθ . The following simple properties hold:
Note that the multiplication here is the composite product signified by “◦”. Accordingly, the expressions
(3.93)-(3.96) become
T^{−1} = λ_1^{−1} P_1 + · · · + λ_r^{−1} P_r ,   (3.102)
W = λ_1 |P_1[A]|^2 + · · · + λ_r |P_r[A]|^2 ,   (3.103)
Chapter 4
Scalar, Vector and Tensor Fields
In mechanics and engineering, we treat various shapes of material bodies. In the course of the motions and
deformations of these bodies, we introduce scalar, vector and tensor quantities at each material particle to
characterize and represent the kinematic and physical state at this particle, such as the deformation state
and the stressed state, etc. Accordingly, we are concerned with scalar, vector and tensor quantities each of
which is distributed and changes over a spatial domain. A basic question in applications is to study how these
quantities change with the spatial positions, which leads to various relevant differentiation and integration
operations. These will be discussed in this chapter.
The three standard components xi are referred to as the (rectangular) Cartesian coordinates of point x
relative to the origin. The (o; x1 , x2 , x3 ) with the origin o and the Cartesian coordinates xi is said to form a
(rectangular) Cartesian coordinate system.
Certain geometrical background and features of Cartesian coordinate systems will be elaborated on in
§4.5. At this stage, we remark that there are infinite possibilities of choosing standard or arbitrary bases,
since the choice may be different at different spatial points. Here we make the simplest choice: the same
standard basis at all points. Other useful choices will be discussed later on.
Relative to a Cartesian coordinate system, the foregoing scalar, vector and tensor fields as scalar-,
vector- and tensor-valued functions of the position vector x have the following standard representations
ξ = ξ(x1 , x2 , x3 ) ; (4.4)
We say that the field quantity ϕ(xx) is a continuous (differentiable) field , if it is continuous (differen-
tiable) at all points in the related spatial domain. A differentiable field quantity is continuous, whereas the
converse need not be true.
Some useful rules for gradients are as follows:
div v ≡ tr(∂v/∂x) = (∂v/∂x) · I .   (4.13)
On the other hand, the curly or rotatory derivative of vector field v = v (xx) is a vector field, designated by
curlvv and defined by
curl v ≡ ε[∂v/∂x] .   (4.14)
∂xx
In the above, I and ε are the 2nd-order identity tensor and the 3rd-order permutation tensor given by (3.39).
Sometimes, curlvv is also denoted rotvv.
In particular, if the vector field v = v(x) in (4.13) is replaced by the gradient of a scalar field ξ = ξ(x),
then we treat the divergent derivative of the gradient of the scalar field ξ = ξ(x), which is referred to as the
Laplacian derivative of this scalar field, or, simply, the Laplacian of ξ, and is usually denoted ∇²ξ. We have
∇²ξ ≡ div(∂ξ/∂x) .   (4.15)
div T ≡ (∂T^T/∂x)[I] .   (4.16)
∂xx
In particular, if the tensor field T = T (xx) in (4.16) is given by the gradient of a vector field v = v (xx), then
we treat the divergent derivative of the gradient of the vector field v = v (xx), which is called the Laplacian
derivative of the vector field v = v (xx), or, simply, Laplacian of v = v (xx) and denoted
∇²v ≡ div(∂v/∂x) .   (4.17)
It may be evident that each of the above derivatives is linear. Some useful rules for the derivatives introduced
above are recorded as follows.
div(ξ v) = ξ div v + v · (∂ξ/∂x) ;   (4.18)
div(ξ T) = ξ div T + (∂ξ/∂x) · T ;   (4.19)
div(T · v) = (div T) · v + T · (∂v/∂x) ;   (4.20)
curl(ξ v) = ξ curl v + (∂ξ/∂x) × v ;   (4.21)
curl(curl v) = ∂(div v)/∂x − ∇²v ;   (4.22)
curl(∂ξ/∂x) = 0 ,   div(curl v) = 0 ;   (4.23)
for any differentiable scalar, vector, 2nd-order tensor fields as shown by (4.1).
It may be clear that the limit in the last equality above is just the partial derivative of the standard component
ϕi1 ···iq (x1 , x2 , x3 ) in the ith Cartesian coordinate xi . Thus, relative to a common Cartesian coordinate system,
we obtain the simple expression
∂ϕ/∂x = (∂ϕ_{i_1···i_q}/∂x_i) e_{i_1} ⊗ · · · ⊗ e_{i_q} ⊗ e_i   (4.24)
for the gradient of a qth-order tensor field as shown by (4.2) and (4.7). In particular, this expression produces
∂ξ/∂x = (∂ξ/∂x_i) e_i ;   (4.25)
∂v/∂x = (∂v_j/∂x_i) e_j ⊗ e_i ;   (4.26)
∂T/∂x = (∂T_{jk}/∂x_i) e_j ⊗ e_k ⊗ e_i ;   (4.27)
for scalar, vector, 2nd-order tensor field as shown by (4.1) and (4.4)–(4.6), respectively.
Moreover, from the last three expressions and the definitions (4.13)–(4.16) we derive
∇²ξ = ∂²ξ/(∂x_i ∂x_i) = ∂²ξ/∂x_1² + ∂²ξ/∂x_2² + ∂²ξ/∂x_3² ;   (4.28)
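These component formulas are easy to exercise symbolically. The Python/SymPy sketch below (the scalar and vector fields are arbitrary test functions, and the curl is written out in components with the usual sign convention) computes the gradient, divergence, curl and Laplacian and verifies the rules (4.22)–(4.23).

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

def grad(f):                 # gradient (4.25): components df/dx_i
    return sp.Matrix([sp.diff(f, xi) for xi in X])

def div(v):                  # divergence of a vector field
    return sum(sp.diff(v[i], X[i]) for i in range(3))

def curl(v):                 # curl, written out in Cartesian components
    return sp.Matrix([sp.diff(v[2], x2) - sp.diff(v[1], x3),
                      sp.diff(v[0], x3) - sp.diff(v[2], x1),
                      sp.diff(v[1], x1) - sp.diff(v[0], x2)])

def laplacian(f):            # Laplacian (4.28)
    return sum(sp.diff(f, xi, 2) for xi in X)

xi_field = x1**2 * x3 + sp.sin(x2 * x3)                  # an arbitrary scalar field
v = sp.Matrix([x2 * x3, x1**2, sp.exp(x1) * x3])         # an arbitrary vector field

# (4.23): the curl of a gradient and the divergence of a curl vanish
assert curl(grad(xi_field)).applyfunc(sp.simplify) == sp.zeros(3, 1)
assert sp.simplify(div(curl(v))) == 0
# (4.22): curl(curl v) = grad(div v) - Laplacian(v)
lhs = curl(curl(v))
rhs = grad(div(v)) - sp.Matrix([laplacian(v[i]) for i in range(3)])
assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(3, 1)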
∂ξ/∂x = ∇ξ ,   ∇²ξ = (∇ · ∇)ξ ,
∂v/∂x = v ⊗ ∇ ,   div v = v · ∇ ,   curl v = v × ∇ ,   ∇²v = (∇ · ∇)v ,
∂T/∂x = T ⊗ ∇ ,   div T = ∇ · T ,
and the like. In the above, we formally regard the ∇ as a vector and each partial differential symbol ∂/∂xi
as its standard component. With this understanding and the relations
(∂/∂x_i) ξ = ∂ξ/∂x_i ,   (∂²/∂x_i²) ξ = ∂²ξ/∂x_i² ,   (∂/∂x_i) v_j = ∂v_j/∂x_i ,   (∂²/∂x_i²) v_j = ∂²v_j/∂x_i² ,   (∂/∂x_i) T_{jk} = ∂T_{jk}/∂x_i ,
in mind, it may be clear that the foregoing expressions in terms of a single formal vector ∇ may facilitate
the memory of the related differential operations. However, without additional treatment, this advantage
will no longer be maintained in the case of using curvilinear coordinate systems.
4.5.1 Motivation
Various kinds of structural elements are designed and used for different purposes in engineering. Some of
them are shaped to assume regular and smooth curved boundary surfaces. For instance, that is the case
for shell-like structures of varied shapes and, in particular, for solid and hollow cylindrical and spherical
bodies, etc. Regular and smooth curved boundary shapes and surfaces of such bodies and structures leave
three problems outstanding. First, it might be awkward and inconvenient to use a Cartesian coordinate
system for treatment of regular, smooth curved surfaces and relevant integrations, but treatment in terms
of a “curvilinear coordinate system” defined directly through certain specific families of regular surfaces
would become quite simple and straightforward. Second, the boundary conditions in stress and strain
analyses would be most naturally and suitably formulated by means of the just-mentioned “curvilinear
coordinate system”, whereas otherwise a formulation in terms of a Cartesian coordinate system would be
rather complicated and unnatural. Third, stress and strain distributions on the foregoing specific families of
regular surfaces may manifest themselves in a most simple and symmetric manner. Accordingly, it might
be most pertinent, efficient and convenient to treat the components of stress and strain relative to a suitable
“curvilinear coordinate system”. On the contrary, stress and strain components relative to any Cartesian
coordinate system would lose these features of simplicity and symmetry, and it might not be so pertinent,
efficient and convenient to deal with them.
The above problems and features may be clearly seen from a cylindrical body subjected to axially
symmetric loadings (see Fig. 5.1 given later). In this case, the aforementioned specific families of regular
surfaces are the cylindrical surfaces about the symmetry axis and the planes normal to or through this axis. The stress
distributions are axially symmetric on the just-mentioned three families of surfaces, whereas
that is not the case relative to a Cartesian coordinate system. In view of this, it may be natural and better
to treat the stress components on the foregoing surfaces and formulate them as functions of the so-called
cylindrical polar coordinates (r, θ, z), instead of the usual Cartesian stress components relative to any Carte-
sian coordinate system. The governing equations for the former and the related boundary conditions will
assume simple, natural forms, which would be most suitable for analyzing and solving the unknown stress
and strain in a cylindrical body.
From the above it may be clear that, for material bodies with regular curved boundary surfaces, such
as cylindrical and spherical bodies etc., we should consider three suitable specific families of surfaces
and introduce “curvilinear coordinate systems” generated by these families of surfaces, instead of the
conventional Cartesian coordinate systems. The main notion and idea will be detailed below.
From the foregoing explanation, a basic idea of using the Cartesian coordinates to determine and repre-
sent the relative positions of spatial points is to introduce three suitable, simple ordered families of planes
and then determine the relative spatial positions of these families of planes by using three ordered numbers.
Accordingly, each spatial point is specified by the intersection point of three intersecting planes pertaining
to the chosen three families of planes and hence by the three ordered numbers specifying the latter. Now an
idea for extension is clear: in the foregoing account we merely replace each simple family of planes with a
simple family of surfaces, as shown in Fig. 4.1. This is elaborated below.
[Figure 4.1: each plane x_i = const. of a Cartesian family is replaced by a surface γ_i = const. of a chosen family of surfaces.]
(i) We replace three chosen families of planes in constructing a Cartesian coordinate system with three
chosen simple families of surfaces, (S_1 , S_2 , S_3 ). Here by a simple family of surfaces, S_i , we mean
that all the surfaces pertaining to S_i are of the same geometrical kind, spread over the whole space in
a specified, simple manner, and satisfy certain analytical properties. These are specified by (4.33)–
(4.36) given slightly later;
(ii) we choose and fix a surface Si0 in each family Si , called a base surface. Then, with a specified
simple manner for the spatial distribution of each of the three chosen family of surfaces, as will be
given by (4.33), the spatial position of each surface pertaining to each family Si may be determined
and represented by assigning to it a single number, say, γi , which need not have so straightforward a
geometrical meaning as a Cartesian coordinate, but should be related to certain geometric features for
the chosen family of surfaces and specified by the form of the function defining the family of surfaces,
as will be explained soon, e.g., the radius of a cylindrical or spherical surface, the intersecting angle
of two planes, etc. Accordingly, the spatial positions of any three surfaces pertaining to the chosen
three families of surfaces, separately, may be determined and represented by a triplet of ordered
numbers, (γ1 , γ2 , γ3 ), and the latter, at the same time, also determines and represents the position
of the point x . Note that each spatial point is the intersection point of three intersecting surfaces at
this point. Algebraically, this means that from the system (4.34) we can solve one and only one triplet
(x_1 , x_2 , x_3 ), i.e., the Cartesian coordinates. We refer to such a triplet of ordered numbers, (γ_1 , γ_2 , γ_3 ),
as the curvilinear coordinates of the point x , and the (o; γ1 , γ2 , γ3 ) with an origin o and the curvilinear
coordinates γi as a curvilinear coordinate system. Note here that we use γi to denote curvilinear
coordinates, in order to avoid possible confusion with Cartesian coordinates;
(iii) There are three curves intersected by the triplet of intersecting surfaces at each point x . Hence, in a
certain manner we may choose, as a basis of vector spaces, three vectors g i (xx) in the directions of
the three tangents of these three intersecting curves at the point x . Such a basis g_i(x) is called a local
basis at the point x . Note that generally the local basis g_i(x) will change from point to point. As a
consequence, a common basis is no longer generated, except for the particular case of three families
of parallel planes.
Now we present an exact mathematical formulation of curvilinear coordinates. We choose three smooth
functions of the three Cartesian coordinate variables x i as follows:
For any given number γi , each such function γi (x1 , x2 , x3 ) determines a surface in 3-dimensional space
through the following equation in the three Cartesian coordinate variables xi :
γi (x1 , x2 , x3 ) = γi . (4.33)
For all numbers γi , the above equation for each i generates a family of surfaces. Then we obtain three
families of surfaces through equation (4.33) for i = 1, 2, 3. Since they are generated by the same function
γ_i(x_1 , x_2 , x_3) through equation (4.33), all the surfaces in each such family are of the same geometric kind,
and, hence, the only difference in geometrical feature between any two such surfaces is their relative spatial
positions, possibly except for their sizes.
For any given triplet of numbers, (γ1 , γ2 , γ3 ), we have
γ_1 = γ_1(x_1 , x_2 , x_3 ) ,
γ_2 = γ_2(x_1 , x_2 , x_3 ) ,   (4.34)
γ_3 = γ_3(x_1 , x_2 , x_3 ) .
We assume that from the above three equations we can work out a unique solution (x1 , x2 , x3 ) for any given
(γ1 , γ2 , γ3 ). Geometrically, this means that any three surfaces given by (4.33) will intersect at one and only
one spatial point and thus determines this point. Namely, any given triplet (γ1 , γ2 , γ3 ) will determine a
unique triplet (x1 , x2 , x3 ) through the above three equations. Specifically, we express this fact by
x_1 = γ̄_1(γ_1 , γ_2 , γ_3 ) ,
x_2 = γ̄_2(γ_1 , γ_2 , γ_3 ) ,   (4.35)
x_3 = γ̄_3(γ_1 , γ_2 , γ_3 ) .
Note that the two pairs of triplet of three functions in (4.34) and (4.35) can be derived from each other.
In fact, they are the inverses of each other. This fact is called a curvilinear coordinate transformation and
may be signified as follows:
In what follows we further assume that each function in the above pairs is continuously differentiable. In
addition, without loss of generality we may assume that the solution of (4.34) is xi = 0 for γi = 0, i.e.,
Each number γi determines a surface through equation (4.33) and, of course, the spatial position of
this surface. A triplet (γ1 , γ2 , γ3 ) determines a point (x1 , x2 , x3 ) and the spatial positions of the triplet
of intersecting surfaces at this point. Hence, they are just the curvilinear coordinates mentioned before.
Besides, (o; γ1 , γ2 , γ3 ) is a curvilinear coordinate system with the origin o coincident with the origin of the
Cartesian coordinate system at issue.
It may be essential to bear the following fact in mind: the curvilinear coordinate γ_i is derived from
the Cartesian coordinates x_1 , x_2 , x_3 through the chosen function γ_i(x_1 , x_2 , x_3), and the geometric meaning
of the curvilinear coordinate γ_i is fully specified by the chosen functional form of γ_i(x_1 , x_2 , x_3).
For example, the three functional forms given later by (4.65) in §4.5.4 specify the following geometrical
meanings: γ1 is the radius of the cylindrical surface S1 ; γ2 is the angle between the plane S2 and the
base plane S20 ; and γ3 is just a Cartesian coordinate, as indicated in Fig. 4.2 in §4.5.4, whereas the three
functional forms given later by (4.65) in §4.5.5 specify the following different geometrical meanings: γ1 is
the radius of the spherical surface S1 ; γ2 is the angle between the semicircle S2 and the base semicircle S20
meeting at the common axis S30 ; and γ3 is the angle of the cone S3 with S30 its symmetry axis; as indicated
in Fig. 4.3 in §4.5.5. Note here that the common axis S30 itself is a base surface for the family of conic
surfaces, S3 .
The three functions γi (x1 , x2 , x3 ) generating the three simple families of surfaces determine the local
basis g i (xx) at point x . Indeed, the gradient of the function γi (x1 , x2 , x3 ), i.e., ∂γi /∂xx, is just the outer normal
to the surface (4.33) at point x . The tangent of the intersecting curve of any two surfaces at x is normal to
their outward normals. Observing the equalities
g_i(x) = ∂x/∂γ_i ,   (4.37)
In summary, we may say that the essence of curvilinear coordinate system is to specify three contin-
uously differentiable functions γi (x1 , x2 , x3 ) which render the system (4.34) to have a unique solution for
any given triplet (γ1 , γ2 , γ3 ) and meet (4.36). Of them, the condition (4.36) is not necessary but only a
convenient requirement. We mention in passing that a Cartesian coordinate system is the trivial case when
the three functions γi (x1 , x2 , x3 ) are chosen as the simplest, i.e.
γi (x1 , x2 , x3 ) = xi .
To conclude this subsection, we may ask a seemingly naive question: why exactly three families of
surfaces are needed, instead of less or more than three? The answer is perhaps as follows. Three suitably
chosen surfaces meet at one and only one spatial point and hence specify this point, so more than three
surfaces are unnecessary. On the other hand, two surfaces usually meet in a curve and thus determine
not just one but infinitely many spatial points.
ξ̃ = ξ̃(γ_1 , γ_2 , γ_3 ) ;   ṽ_i = ṽ_i(γ_1 , γ_2 , γ_3 ) ;   T̃_{ij} = T̃_{ij}(γ_1 , γ_2 , γ_3 ) .
Following the foregoing objective, we first express a qth-order tensor field ϕ = ϕ(xx) relative to the local
standard basis ẽei = ẽei (xx) at each point x . Hence, we have
Now we aim to derive explicit expressions for the gradient and the curly, divergent and Laplacian
derivatives of the tensor field ϕ = ϕ(xx). Toward this goal, the main idea is to use the simple expression
(4.24) relative to a common Cartesian coordinate system, and the main procedure is to establish the re-
lationship between the local standard components given in (4.42) and the Cartesian components in (4.7).
Utilizing the transformation rule (3.16) we have
at each point x . According to (4.42) and (4.35), each local standard component ϕ̃k1 ·kq is first a function
of the curvilinear coordinates γ1 , γ2 , γ3 and then each γi is in turn a function of the Cartesian coordinates
x1 , x2 , x3 . This fact just means a composition of two functions and the consequence is that each local
standard component ϕ̃i1 ·iq may be regarded as a function of the Cartesian coordinates x1 , x2 , x3 . With this
understanding and Leibniz’s chain rule for composite functions, by using (4.43) we infer
∂ϕ_{i_1···i_q}/∂x_i = (∂/∂x_i)[(e_{i_1} · ẽ_{k_1}) · · · (e_{i_q} · ẽ_{k_q}) ϕ̃_{k_1···k_q}]
 = (e_{i_1} · ẽ_{k_1}) · · · (e_{i_q} · ẽ_{k_q}) ∂ϕ̃_{k_1···k_q}/∂x_i
   + [(e_{i_1} · ∂ẽ_{k_1}/∂x_i) · · · (e_{i_q} · ẽ_{k_q}) + · · · + (e_{i_1} · ẽ_{k_1}) · · · (e_{i_q} · ∂ẽ_{k_q}/∂x_i)] ϕ̃_{k_1···k_q}   (4.45)
 = (e_{i_1} · ẽ_{k_1}) · · · (e_{i_q} · ẽ_{k_q}) (∂ϕ̃_{k_1···k_q}/∂γ_j)(∂γ_j/∂x_i)
   + [(e_{i_1} · ∂ẽ_{k_1}/∂γ_j) · · · (e_{i_q} · ẽ_{k_q}) + · · · + (e_{i_1} · ẽ_{k_1}) · · · (e_{i_q} · ∂ẽ_{k_q}/∂γ_j)] (∂γ_j/∂x_i) ϕ̃_{k_1···k_q} .
From the above we deduce that if the partial derivatives of the local standard basis ẽer with respect
to each curvilinear coordinate γs are known, then the gradient of the qth-order tensor field ϕ(xx) may be
determined. We set
∂ẽ_a/∂γ_b = Γ̃_{abc} ẽ_c ,   (4.46)
where the coefficients
Γ̃abc = Γ̃abc (γ1 , γ2 , γ3 ) (4.47)
are known as the Christoffel symbols associated with the orthogonal curvilinear coordinate system (o; γi )
and calculated by
Γ̃_{abc} = (∂ẽ_a/∂γ_b) · ẽ_c .   (4.48)
On the other hand, (4.25) yields
∂γ_j/∂x = (∂γ_j/∂x_i) e_i .   (4.49)
This and (4.40)1 and the relation
|∂γ_j/∂x| = |∂x/∂γ_j|^{−1}
result in
(∂γ_j/∂x_i) e_i = ∂γ_j/∂x = |∂γ_j/∂x| ẽ_j = |∂x/∂γ_j|^{−1} ẽ_j   (no summation for j) .   (4.50)
Using this and the expression
ẽek = (eea · ẽek )eea ,
from (4.44)_3 and (4.47) we derive an expression for the gradient of the qth-order tensor field ϕ = ϕ(x) as
follows:
∂ϕ/∂x = |∂x/∂γ_j|^{−1} ( ∂ϕ̃_{k_1···k_q}/∂γ_j + ∑_{α=1}^{q} Γ̃_{k_α j t} ϕ̃_{k_1···t···k_q} ) ẽ_{k_1} ⊗ · · · ⊗ ẽ_{k_q} ⊗ ẽ_j .   (4.51)
In the above, the summation convention for each twice repeated subscript k_α is taken for granted, and here
summation over the triply repeated subscript j is also meant. The latter is an exception.
In particular, for q =0, 1, 2, i.e., for scalar, vector, 2nd-order tensor fields as shown by (4.1), we have
∂ξ/∂x = |∂x/∂γ_j|^{−1} (∂ξ̃/∂γ_j) ẽ_j ,   (4.52)
∂v/∂x = |∂x/∂γ_j|^{−1} ( ∂ṽ_i/∂γ_j + Γ̃_{ijk} ṽ_k ) ẽ_i ⊗ ẽ_j ,   (4.53)
∂T/∂x = |∂x/∂γ_j|^{−1} ( ∂T̃_{rs}/∂γ_j + Γ̃_{rjt} T̃_{ts} + Γ̃_{sjt} T̃_{rt} ) ẽ_r ⊗ ẽ_s ⊗ ẽ_j ,   (4.54)
for the gradients, and
∇²ξ = |∂x/∂γ_j|^{−1} [ ∂/∂γ_j ( |∂x/∂γ_j|^{−1} ∂ξ̃/∂γ_j ) + Γ̃_{jjk} |∂x/∂γ_k|^{−1} ∂ξ̃/∂γ_k ] ,   (4.55)
div v = |∂x/∂γ_j|^{−1} ( ∂ṽ_j/∂γ_j + Γ̃_{jjk} ṽ_k ) ,   (4.56)
curl v = |∂x/∂γ_j|^{−1} ( ∂ṽ_i/∂γ_j + Γ̃_{ijt} ṽ_t ) ε_{ijk} ẽ_k ,   (4.57)
∇²v = |∂x/∂γ_j|^{−1} ( ∂²ṽ_r/∂γ_j² + Γ̃_{rjt} ∂ṽ_t/∂γ_j + Γ̃_{jjt} ∂ṽ_r/∂γ_t ) ẽ_r ,   (4.58)
div T = |∂x/∂γ_j|^{−1} ( ∂T̃_{rj}/∂γ_j + Γ̃_{rjt} T̃_{tj} + Γ̃_{jjt} T̃_{rt} ) ẽ_r ,   (4.59)
for the Laplacian, divergent and curly derivatives of the scalar field ξ = ξ(x), the vector field v = v(x), and the tensor field
T = T(x). In the above, summation is meant for each repeated (twice or triply occurring) subscript.
Now it may be clear that expressions for the gradient of a qth-order tensor field ϕ = ϕ(x) and the other
associated differential operations are available whenever we can work out the gradients ∂x/∂γ_j and the
Christoffel symbols Γ̃_{abc} determined by (4.48). Now we explain how to calculate these quantities. The
main procedures are as follows:
(i) For a chosen orthogonal curvilinear coordinate system (o; γ1 , γ2 , γ3 ), we establish the curvilinear
coordinate transformation relationship (4.34) and (4.35), which will be the starting point of the sub-
sequent analyses and computations;
(ii) At this step, we work out the nine partial derivatives ∂x_i/∂γ_j ; then we work out the local standard
basis ẽ_j by (cf. (4.60)_1 and (4.49))
ẽ_j = |∂x/∂γ_j|^{−1} (∂x_i/∂γ_j) e_i   (no summation for j) ;   (4.62)
(iii) Using (4.40)1 and (4.48), we obtain an explicit expression for the Christoffel symbols as follows:
Γ̃_{abc} = [ ∂/∂γ_b ( |∂x/∂γ_a|^{−1} ∂x_i/∂γ_a ) ] |∂x/∂γ_c|^{−1} ∂x_i/∂γ_c .   (4.63)
In the above, no summation is meant for either of the twice repeated subscripts a and c. The Christof-
fel symbols may be calculated either by using the above formula or by differentiating the expression
(4.62) for the local standard basis.
Generally, the Christoffel symbols consist of 27 ordered numbers and we need to compute them. Their
computations are a bit tedious. For an orthogonal curvilinear coordinate system, however, the conditions
(4.39) and hence (4.40)3 hold. From them we derive
Γ̃abc = −Γ̃cba .
Thus, it suffices to calculate only nine of the 18 non-vanishing Christoffel symbols, e.g.,
In the next two subsections, the above procedures will be used to treat two widely used orthogonal
curvilinear coordinate systems.
which is known as a cylindrical polar coordinate system. The γi given above are called cylindrical polar
coordinates. As indicated in Fig. 4.2, the geometrical meanings of these coordinates are as follows: the γ1
is the radius of a cylindrical surface, the γ2 is the angle between the base plane x2 = 0 and a plane through
the e 3 -axis, and the γ3 is just the Cartesian coordinate x3 .
[Figure 4.2: Cylindrical polar coordinates (γ_1 , γ_2 , γ_3 ) with local basis ẽ_1 = e_r , ẽ_2 = e_θ , ẽ_3 = e_z , the three families of surfaces S_1 , S_2 , S_3 and their base surfaces S_10 , S_20 , S_30 .]
The curvilinear coordinate transformation of (4.65) can be easily established and given below:
x_1 = γ_1 cos γ_2 ,
x_2 = γ_1 sin γ_2 ,   (4.66)
x_3 = γ_3 .
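Steps (i)–(iii) can be carried out symbolically for the transformation (4.66). The Python/SymPy sketch below (an illustration; variable names are arbitrary) computes the scale factors |∂x/∂γ_j|, the local orthonormal basis (4.62) and the Christoffel symbols (4.48), for which only Γ̃_122 = 1 and Γ̃_221 = −1 turn out to be non-zero.

import sympy as sp

g1, g2, g3 = sp.symbols('gamma1 gamma2 gamma3', positive=True)
gamma = (g1, g2, g3)
x = sp.Matrix([g1 * sp.cos(g2), g1 * sp.sin(g2), g3])    # cylindrical transformation (4.66)

# step (ii): scale factors |dx/dgamma_j| and the local standard basis (4.62)
dx = [x.diff(gj) for gj in gamma]
h = [sp.sqrt(sp.trigsimp(d.dot(d))) for d in dx]         # gives 1, gamma1, 1
e_tilde = [(dx[j] / h[j]).applyfunc(sp.simplify) for j in range(3)]

for i in range(3):                                       # the local basis is orthonormal
    for j in range(3):
        assert sp.simplify(e_tilde[i].dot(e_tilde[j])) == int(i == j)

# step (iii): Christoffel symbols (4.48); indices run over 0, 1, 2 here
Gamma = [[[sp.simplify(e_tilde[a].diff(gamma[b]).dot(e_tilde[c]))
           for c in range(3)] for b in range(3)] for a in range(3)]

for a in range(3):
    for b in range(3):
        for c in range(3):
            expected = 1 if (a, b, c) == (0, 1, 1) else (-1 if (a, b, c) == (1, 1, 0) else 0)
            assert sp.simplify(Gamma[a][b][c] - expected) == 0                   # only two non-zero symbols
            assert sp.simplify(Gamma[a][b][c] + Gamma[c][b][a]) == 0             # antisymmetry in a and c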
which is known as a spherical polar coordinate system. The γi given above are called spherical polar
coordinates. As indicated in Fig. 4.3, the geometrical meanings of these coordinates are as follows: the γ1
is the radius of a spherical surface, the γ2 is the angle between the base plane x2 = 0 and a plane through
the e 3 -axis, and the γ3 is the angle between x and the e 3 -axis.
[Figure 4.3: Spherical polar coordinates (γ_1 , γ_2 , γ_3 ) = (r, φ, θ) with local basis ẽ_1 = e_r , ẽ_2 = e_φ , ẽ_3 = e_θ and the base surfaces S_20 , S_30 .]
The nine partial derivatives ∂x_i/∂γ_j for the spherical polar coordinates are given by
∂x_1/∂γ_1 = cos γ_2 sin γ_3 ,   ∂x_1/∂γ_2 = −γ_1 sin γ_2 sin γ_3 ,   ∂x_1/∂γ_3 = γ_1 cos γ_2 cos γ_3 ,
∂x_2/∂γ_1 = sin γ_2 sin γ_3 ,   ∂x_2/∂γ_2 = γ_1 cos γ_2 sin γ_3 ,   ∂x_2/∂γ_3 = γ_1 sin γ_2 cos γ_3 ,   (4.80)
∂x_3/∂γ_1 = cos γ_3 ,   ∂x_3/∂γ_2 = 0 ,   ∂x_3/∂γ_3 = −γ_1 sin γ_3 .
Hence, (4.61) yields
|∂x/∂γ_1| = 1 ,   |∂x/∂γ_2| = γ_1 sin γ_3 ,   |∂x/∂γ_3| = γ_1 .   (4.81)
Then (4.62) produces the local standard basis
ẽ_1 = e_1 cos γ_2 sin γ_3 + e_2 sin γ_2 sin γ_3 + e_3 cos γ_3 ,
ẽ_2 = −e_1 sin γ_2 + e_2 cos γ_2 ,   (4.82)
ẽ_3 = e_1 cos γ_2 cos γ_3 + e_2 sin γ_2 cos γ_3 − e_3 sin γ_3 .
Moreover, either using (4.63) and (4.80)-(4.81) or differentiating the local standard basis given by
(4.82), we obtain the non-zero Christoffel symbols as follows:
Thus, relative to a spherical coordinate system, the expressions (4.52)–(4.59) for the gradients and for
the curly, divergent and Laplacian derivatives reduce to (see, e.g., Chou and Pagano 1992)
γ1 = r , γ2 = φ , γ3 = θ ,
In the above definitions, a Cartesian coordinate system (o; x1 , x2 , x3 ) is chosen. Note that the respective
right-hand sides of (4.91)–(4.93) are just the conventional integrations over D or on S or on s.
In the above, n is the outward unit normal to the oriented surface S , as indicated before. Besides, t is the
unit tangent vector along the closed bounding edge curve s, and dl is the linear element.
Relative to a Cartesian coordinate system, Stokes’s theorem (4.97) is of the form
∫_S ε_{ijk} (∂v_i/∂x_j) n_k dA = ∮_s v_i t_i dl .   (4.98)
Stokes’s theorem may be extended to arrive at a general formulation, which is not discussed here.
Chapter 5
Tensor Functions
One of the central topics in continuum mechanics, in theories of materials and in other related fields is to
establish rational, consistent mathematical models characterizing the varied, complicated behaviour of materials.
These models, known as constitutive relations of materials, tell us how certain basic physical quantities
(scalars, vectors, tensors) rely on certain other basic physical quantities (scalars, vectors and tensors).
Mathematical forms of these models are called constitutive tensor functions, each of which specifies how a
basic scalar or a basic vector or a basic tensor quantity depends on or is determined by certain basic scalar,
vector and tensor quantities. In Chapters 2–3 we were already concerned with particular forms of tensor
functions, namely, linear tensor functions, each of which specifies the linear dependence of a tensor quantity
on another tensor quantity, e.g., (2.7)–(2.8), (3.37)–(3.38), and (3.44)–(3.46), etc. In this chapter, we shall
discuss some aspects of tensor functions from a general viewpoint.
issue in constitutive theories of various material behaviour and may be at deeper and more sophisticated level from both theoretical
and experimental viewpoints. Here, we assume that these basic quantities have been well defined and well chosen, and our task is to
study some aspects of functional relations among them.
The above idea and related questions lead to constitutive relations and constitutive functions. Their
mathematical forms formulate functional relations between certain physical quantities, such as elastic
strain-energy, heat flux vector, stress tensor, stress increment (rate), etc., and a set of basic state quanti-
ties, such as electric field vector, strain tensor, strain increment (rate), etc. Then we have various kinds
of constitutive tensor functions for various material behaviour, e.g., strain-energy function; yield function;
stress-strain relation, in particular, generalized Hooke’s law; direct and converse piezoelectric relations;
stress-deformation rate relations for Newtonian and non-Newtonian fluids; stress rate-strain rate relations
for elastoplastic solids; and so many others. These constitutive functions, which represent “individualities”
of materials in response to deformation and other actions, must obey certain universal principles and their
final forms should be determined by adequate and suitable experimental data. A comprehensive account in
this respect may be found in, e.g., Truesdell and Noll (1965) and Haupt (2002).
In the subsequent sections, we shall focus attention on scalar- and symmetric tensor-valued functions
depending on one symmetric tensor, which are adequate in many cases of application. An introduction to
tensor functions in a general sense will be presented only in a brief manner.
Each basic structural element itself has certain rotation and/or rotation-reflection symmetry properties about a center. In addition, the repeated periodicity of the basic structural elements has translational symmetry properties. The combination of these two kinds of symmetry leads to space symmetry in a general sense. For our purpose, we direct attention to the former, i.e., the symmetry of the basic structural elements.
For a basic structural element, we can find a center in it. Then we may perform various kinds of orthogonal transformations: the inversion about the center, reflections about planes through the center, rotations about axes through the center, and their combinations. If the structural element looks the same as before after an orthogonal transformation, we say that this orthogonal transformation is a symmetry transformation of the structural element. For instance, it may be evident that the symmetry transformations of the cube shown in Fig. 5.1 include the inversion about the center O, the reflections about the mirror planes through the center O, the rotations through the angles k × 90° about each of the three axes through the centers of each pair of opposite faces, and other not so obvious orthogonal transformations.
It may be clear that orthogonal tensors, introduced in §3.9, provide a natural tool to describe and represent the symmetry transformations mentioned above. For the basic structural element of a material, we collect all its symmetry transformations, represented by certain orthogonal tensors, into a single set, known as the symmetry group of this material. Each such group has the following properties in common:
(i) The identity transformation I is included.
(ii) If a symmetry transformation Q is included, then its inverse Q⁻¹ = Qᵀ is also included.
(iii) If two symmetry transformations Q and R are included, then their composition Q R is also included.
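As a small illustration of properties (i)–(iii), the following sketch (assuming numpy; the choice of the rotation group C_4 about e_3, cf. (5.27), as the test case is mine) checks them numerically for a concrete set of orthogonal tensors.

    import numpy as np

    def rot_z(angle):
        c, s = np.cos(angle), np.sin(angle)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    # The four rotations through k*90 degrees about e3 (the group C4 in the notation used later).
    group = [rot_z(k * np.pi / 2) for k in range(4)]

    def contains(group, Q):
        return any(np.allclose(Q, R, atol=1e-12) for R in group)

    assert contains(group, np.eye(3))                                 # (i) identity
    assert all(contains(group, Q.T) for Q in group)                   # (ii) inverses, Q^{-1} = Q^T
    assert all(contains(group, Q @ R) for Q in group for R in group)  # (iii) closure
    print("C4 satisfies the group properties (i)-(iii)")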
All the symmetry groups in the above sense have been determined and classified for all solid materials and are ready for various purposes of application. Here it is not our intention to show how to determine and classify them. What matters for our purpose is to keep two points in mind: one is that the internal
structure of each solid material in a certain scale may be formed in a regular and periodic manner by
certain basic structural elements and the latter are endowed with certain symmetry properties. The other
is that these symmetry properties may be represented by a collection of orthogonal tensors known as the
(point) symmetry group of this material. Another perhaps essential point is that these symmetry properties
resulting from pure geometric symmetry of basic structural elements of materials in a certain scale will lead
to considerable simplifications of complicated tensor function forms of constitutive functions and relations
of materials, as will be seen in the subsequent development.
In the succeeding subsections, we record all classes of material symmetry groups for future reference. In the literature there have been several ways of understanding and describing these classes; however, they may be rather detailed and technical for our purpose. Here, we shall present them in a suitable and perhaps simple manner.
In the following, n and l 0 are two chosen orthonormal vectors, and n 1 , n 2 and n 3 are three given
orthonormal vectors.
Orth⁺ = {Q | det Q = 1, Q ∈ Orth} . (5.2)
The symmetry groups of another kind of frequently encountered materials consist of rotations and/or rotation-reflections about a fixed axis. They are given by the following five classes:
D_∞h = {±R_n^φ, ±R_l^π | l = R_n^φ l_0 ; −∞ < φ < +∞} , (5.3)
Some fiber-reinforced composite materials may be characterized by the above rhombic groups and
are usually known as orthotropic materials.
D_3 = {R_n^{2kπ/3}, R_{l_k}^π | l_k = R_n^{2kπ/3} l_0 , k = 0, 1, 2} , (5.18)
S_6 = {±R_n^{2kπ/3} | k = 0, 1, 2} , (5.19)
C_3 = {R_n^{2kπ/3} | k = 0, 1, 2} . (5.20)
C_4v = {R_n^{2kπ/4}, −R_{l_k}^π | l_k = R_n^{2kπ/4} l_0 , k = 0, 1, 2, 3} , (5.22)
D_2d = {(−1)^k R_n^{2kπ/4}, (−1)^k R_{l_k}^π | l_k = R_n^{2kπ/4} l_0 , k = 0, 1, 2, 3} , (5.23)
D_4 = {R_n^{2kπ/4}, R_{l_k}^π | l_k = R_n^{2kπ/4} l_0 , k = 0, 1, 2, 3} , (5.24)
C_4h = {±R_n^{2kπ/4} | k = 0, 1, 2, 3} , (5.25)
S_4 = {(−1)^k R_n^{2kπ/4} | k = 0, 1, 2, 3} , (5.26)
C_4 = {R_n^{2kπ/4} | k = 0, 1, 2, 3} . (5.27)
C_6v = {R_n^{2kπ/6}, −R_{l_k}^π | l_k = R_n^{2kπ/6} l_0 , k = 0, 1, · · · , 5} , (5.29)
D_3h = {(−1)^k R_n^{2kπ/6}, (−1)^k R_{l_k}^π | l_k = R_n^{2kπ/6} l_0 , k = 0, 1, · · · , 5} , (5.30)
D_6 = {R_n^{2kπ/6}, R_{l_k}^π | l_k = R_n^{2kπ/6} l_0 , k = 0, 1, · · · , 5} , (5.31)
C_6h = {±R_n^{2kπ/6} | k = 0, 1, · · · , 5} , (5.32)
C_3h = {(−1)^k R_n^{2kπ/6} | k = 0, 1, · · · , 5} , (5.33)
C_6 = {R_n^{2kπ/6} | k = 0, 1, · · · , 5} . (5.34)
T_d = {(−1)^k R_{n_i}^{2kπ/4}, −R_{n_i ± n_j}^π ; R_{r_k}^{2tπ/3} | k = 0, 1, 2, 3; j > i = 1, 2, 3} , (5.36)
O = {R_{n_i}^{2kπ/4}, R_{n_i ± n_j}^π ; R_{r_k}^{2tπ/3} | k = 0, 1, 2, 3; j > i = 1, 2, 3} , (5.37)
T_h = {±R_{n_i ± n_j}^π ; ±R_{r_k}^{2tπ/3} | k = 0, 1, 2, 3; j > i = 1, 2, 3} , (5.38)
T = {R_{n_i ± n_j}^π ; R_{r_k}^{2tπ/3} | k = 0, 1, 2, 3; j > i = 1, 2, 3} , (5.39)
In the above, (n_1, n_2, n_3) are three given orthonormal vectors, as indicated before, and
r 0 = n1 + n2 + n3 , r 1 = n1 − n2 − n3 , r 2 = n2 − n3 − n1 , r 3 = n3 − n1 − n2 . (5.40)
[Figure: the n-fold axis n and the two-fold axes l_0, l_1, l_2, l_3, l_4, related by rotations through 2kπ/n about n.]
With these in mind, the above presentations of the crystal and quasi-crystal classes not only specify, in a clear and simple manner, the distributions of all relevant material symmetry axes, but also determine at the same time which symmetry transformations are associated with these axes. Note that the introduction of the factor (−1)^k in (5.47)–(5.48) and (5.51)–(5.52), and in particular in (5.23), (5.26), (5.30),
(5.33), and (5.36), naturally indicates whether the two-fold axis l_k is associated with a π-rotation or a reflection, without introducing any additional particular notation.
With the given presentations and the above explanations of the geometric and group-theoretic features of the two kinds of axial vectors, the structures of the crystal and quasi-crystal classes should become simple and clear.
Icosahedral groups
There are two classes of icosahedral groups. These groups, characterizing the symmetry of icosahedra, are among the most complicated yet fascinating material symmetry groups. Although Klein (1884) presented a comprehensive account of icosahedra and icosahedral groups as early as the nineteenth century, it was believed for a long time that there would be no solid materials with icosahedral symmetry. Icosahedral quasicrystals, which have been discovered only in recent years, broaden the scope of crystallography and possess intriguing internal geometric structures and crystallographic properties which had not been known before. For details, refer to, e.g., Vainshtein (1994) and Senechal (1995). Here we confine ourselves to some relevant aspects of the icosahedral groups.
Let n 5 and n 6 be two given unit vectors meeting at a point and with their angle determined by
n_5 · n_6 = 1/√5 . (5.55)
Then we may generate five unit vectors n k by rotating the vector n 5 through the angle 2kπ/5 about an axis
in the direction of n 6 , i.e.,
n_k = R_{n_6}^{2kπ/5} n_5 , k = 1, 2, 3, 4, 5 . (5.56)
It may be clear that the five n k are located on a cone with its angle determined by (5.55) and with its axis
in the direction of n 6 . Hence we have
n_1 · n_2 = n_2 · n_3 = n_3 · n_4 = n_4 · n_5 = n_5 · n_1 = 1/√5 ,
n_1 · n_3 = n_3 · n_5 = n_5 · n_2 = n_2 · n_4 = n_4 · n_1 = −1/√5 . (5.57)
These produce 20 vectors, half of which are the opposites of the other half. Retaining one of these two halves, we obtain ten such vectors, which will be denoted r_σ with σ = 1, · · · , 10.
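The construction (5.55)–(5.57) is easy to reproduce numerically; the following sketch (assuming numpy, with n_6 placed along e_3 purely for convenience) generates the vectors n_k and checks the stated dot products.

    import numpy as np

    def rot(axis, angle):
        # Rodrigues' formula for the rotation tensor R^angle about the given axis
        a = axis / np.linalg.norm(axis)
        K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    n6 = np.array([0.0, 0.0, 1.0])
    c = 1.0 / np.sqrt(5.0)                       # cosine fixed by (5.55)
    n5 = np.array([np.sqrt(1 - c**2), 0.0, c])   # unit vector with n5·n6 = 1/sqrt(5)

    n = [rot(n6, 2*np.pi*k/5) @ n5 for k in range(1, 6)]   # n_1, ..., n_5 on the cone

    # Neighbouring vectors: +1/sqrt(5); next-nearest: -1/sqrt(5), as in (5.57)
    print(round(n[0] @ n[1], 6), round(n[0] @ n[2], 6))     # ≈ 0.447214, -0.447214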
The two classes of icosahedral groups are given as follows:
I_h = {±R_{n_k}^{2kπ/5}, ±R_{r_σ}^{2kπ/3}, ±R_{n_j ± n_k}^π | j, k = 1, · · · , 6; σ = 1, · · · , 10} , (5.59)
I = {R_{n_k}^{2kπ/5}, R_{r_σ}^{2kπ/3}, R_{n_j ± n_k}^π | j, k = 1, · · · , 6; σ = 1, · · · , 10} . (5.60)
In the above, each n_k, each r_σ and each n_j ± n_k represent a 5-fold, a 3-fold and a 2-fold axis, respectively, of the groups I_h and I. It should be noted that all the vectors n_j ± n_k produce only 15 two-fold axes.
The symmetry groups of solid materials are exhausted by the above crystal and quasi-crystal classes,
the full and proper orthogonal groups, and the cylindrical groups.
Detailed geometric features of each class above may be given, some of which have been indicated
above. However, for our purpose here it may be adequate to know the fact that there are many classes of
material symmetry groups and they distinguish themselves by their respective orthogonal transformations.
The latter are listed in the above subsections, separately.
ξ = f(u_1, · · · , u_a ; A_1, · · · , A_b) ,
v = ρ(u_1, · · · , u_a ; A_1, · · · , A_b) ,
S = Φ(u_1, · · · , u_a ; A_1, · · · , A_b) ,
which specify a scalar quantity ξ, a vector quantity v and a tensor quantity S relying on or determined by the vector and tensor variables u_1, · · · , u_a and A_1, · · · , A_b. As constitutive tensor functions modeling material
behaviour, each of them should obey the objectivity principle of material behaviour. This principle is
formulated in monographs on continuum mechanics, e.g., Truesdell and Noll (1992), Bertram (1989),
Haupt (2002), and many others. A state-of-the-art study and formulation may be found in, e.g., Svendsen
and Bertram (1999) and Bertram and Svendsen (2001).
From the objectivity principle of material behaviour, reduced forms of constitutive tensor functions may
be derived. Here we assume that the above forms are already such reduced forms obeying the objectivity
principle. Now the material symmetry principle further requires that
Φ(Q u_1, · · · , Q u_a ; Q A_1 Qᵀ, · · · , Q A_b Qᵀ) = Q Φ(u_1, · · · , u_a ; A_1, · · · , A_b) Qᵀ for every Q ∈ G , (5.63)
where G is the material symmetry group of the material under consideration. These conditions simply mean that if each variable (vector or tensor) is rotated by a symmetry transformation Q of the material in question, then the corresponding functional value should be invariant (scalar-valued) or form-invariant (vector- and tensor-valued). When G = Orth or Orth⁺, the tensor functions obeying the material symmetry restriction are known as isotropic or hemitropic functions, respectively. Otherwise, they are referred to as anisotropic functions.
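As a rough numerical illustration of the distinction just drawn (assuming numpy; the functions tr A³ and the single component A_11 are chosen only as examples), the following sketch compares an isotropic invariant with a quantity that is not invariant under a generic rotation.

    import numpy as np
    rng = np.random.default_rng(0)

    def random_rotation(rng):
        Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        return Q if np.linalg.det(Q) > 0 else -Q

    B = rng.standard_normal((3, 3)); A = 0.5 * (B + B.T)   # a symmetric tensor variable
    Q = random_rotation(rng)
    A_rot = Q @ A @ Q.T

    print(np.trace(A @ A @ A), np.trace(A_rot @ A_rot @ A_rot))  # equal: isotropic invariant
    print(A[0, 0], A_rot[0, 0])                                  # generally different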
The above material symmetry principle places invariance restrictions on the form of constitutive tensor
functions. For all materials characterized by the same class of material symmetry groups, the invariance
restrictions resulting from the material symmetry supply universal conditions for studying and reducing
the form of various kinds of constitutive tensor functions. It is then possible to derive from them a general reduced form of complicated constitutive tensor functions by a purely mathematical procedure, which would thus substantially facilitate both further theoretical study and the experimental determination of constitutive tensor functions.
According to the material symmetry principle, the invariance conditions for constitutive tensor func-
tions will differ from one another for materials characterized by different classes of material symmetry
groups. It may be important to derive general reduced forms of various kinds of constitutive tensor func-
tions for various types of material symmetry groups, which automatically satisfy the invariance condition
resulting from the material symmetry. This has been the main topic in the theory of representations for
tensor functions. Here it is impossible to present a comprehensive account of this theory in a general sense.
In the subsequent development, we shall study some particular yet commonly-considered cases. Some
general remarks will be given at the end of this chapter.
f(Q A Qᵀ) = f(A) for each Q ∈ G . (5.65)
For each material symmetry group G, the above invariance restriction supplies conditions by means of which the form of the tensor function f(A) may be reduced. Such conditions are different for materials characterized by different classes of material symmetry groups. In what follows, we shall present a general reduced form of the scalar function f(A) for each class of material symmetry groups listed before, which automatically obeys the invariance restriction (5.65).
Note first that A is determined by its basic invariants I_r(A) and its eigenprojections A_r, so that every scalar function of A may be written as
f(A) = g(I_1(A), I_2(A), I_3(A); A_1, A_2, A_3) .
Since the basic invariants are isotropic invariants,
I_r(Q A Qᵀ) = I_r(A) ,
and, besides, it may be evident that the eigenprojections of Q A Qᵀ are given by Q A_r Qᵀ. Hence we have
f(Q A Qᵀ) = g(I_1(A), I_2(A), I_3(A); Q A_1 Qᵀ, Q A_2 Qᵀ, Q A_3 Qᵀ) .
The invariance restriction (5.66) (i.e., (5.65) with G = Orth) then requires
g(I_1(A), I_2(A), I_3(A); Q A_1 Qᵀ, Q A_2 Qᵀ, Q A_3 Qᵀ) = g(I_1(A), I_2(A), I_3(A); A_1, A_2, A_3)
for every Q ∈ Orth. Since Q is arbitrary, the function
g(I_1(A), I_2(A), I_3(A); A_1, A_2, A_3)
always yields the same value for all possible choices of its last three variables A r , and, therefore, it should
be independent of A r . Thus, we arrive at
f(A) = h(I_1(A), I_2(A), I_3(A)) = h(I_1, I_2, I_3) (5.67)
for each isotropic invariant f(A) meeting (5.66).
From (5.67) it follows that every isotropic invariant of A is expressible as a single-valued function of the three basic invariants of A. Since the basic invariants of A can be determined either by the three principal invariants J_r(A) or by the three eigenvalues λ_r, we have the following two alternative reduced forms for isotropic invariants:
f(A) = h̄(J_1(A), J_2(A), J_3(A)) = h̄(J_1, J_2, J_3) , (5.68)
f(A) = ĥ(λ_1(A), λ_2(A), λ_3(A)) = ĥ(λ_1, λ_2, λ_3) . (5.69)
In the last expression, the function ĥ(λ_1, λ_2, λ_3) should be symmetric with respect to any two of its variables.
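A brief numerical check of the statements above may look as follows (assuming numpy, and assuming the basic invariants are the traces of the first three powers of A, as is common; the precise definition is fixed by (2.76)): the invariants and the eigenvalues of a rotated tensor coincide with those of the original, so either triple may serve as the arguments in (5.67) or (5.69).

    import numpy as np
    rng = np.random.default_rng(1)

    B = rng.standard_normal((3, 3)); A = 0.5 * (B + B.T)
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    A_rot = Q @ A @ Q.T

    # assumed basic invariants: traces of the first three powers of A
    invariants = lambda T: (np.trace(T), np.trace(T @ T), np.trace(T @ T @ T))
    print(np.allclose(invariants(A), invariants(A_rot)))                   # True
    print(np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(A_rot)))   # True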
f(A) = h(K_1(A), · · · , K_t(A)) . (5.71)
We call a set of specified invariants K_1, · · · , K_t that attains the above goal a functional basis of the invariants relative to the material symmetry group G. Further, to have a sharp, compact general reduced form, it is required that a functional basis be irreducible in the following sense: a subset resulting from the removal of any invariant K_α can no longer serve as a functional basis. Thus, our task is to find an irreducible functional basis K_1, · · · , K_t, in order to obtain an irreducible representation of the invariants relative to each material symmetry group G.
Expressions (5.67)–(5.69) supply three irreducible representations for isotropic invariants. Accordingly, (I_1, I_2, I_3), (J_1, J_2, J_3) and (λ_1, λ_2, λ_3) supply three irreducible functional bases for isotropic invariants. These examples show that even an irreducible functional basis need not be unique.
Except for some simple cases, the determination of irreducible representations of anisotropic invariants relative to various types of material symmetry groups does not appear to be easy, and the derivations are usually rather technical. After some remarks on the literature, we shall simply cite the final results.
The classical results for anisotropic invariants were restricted to the case where f(A) is a polynomial function of the components of A, and attention was directed toward the 32 crystal classes and the transverse isotropy groups. An earlier attempt was made by von Mises (1928), which later turned out to be unsuccessful. Rivlin and Smith (1958) and Smith (1962) derived well-known results for polynomial invariants relative to all the 32 crystal classes. These results are recorded in Green and Adkins (1960), Truesdell and Noll (1992), Spencer (1971), Smith (1994), Haupt (2002), et al. On the other hand, an extensive, modern study of anisotropic invariants as elastic strain-energy functions has been presented by Hackl (1999) based upon symmetric irreducible tensors and group representation theory.
Complete and irreducible representations in a general sense, covering both polynomial and non-polynomial invariants, have been obtained in recent years. In Xiao (1996d), general results are presented for the 32 crystal classes. Results in unified forms are derived in Xiao (1998a) and Bruhns, Xiao and Meyers (1999a) for all crystal classes and quasi-crystal classes. It turns out that these new results in a general sense may be even sharper and more compact than the classical results in a restrictive sense in many cases.
Below we record these new results in a general sense. The classical results may be found in the refer-
ences mentioned before.
Cylindrical groups
Crystal classes
G = C1 , S2 (triclinic)
f(A) = h(A_11, A_22, A_33, A_12, A_23, A_31) . (5.74)
In the above, A_ij are the components of A relative to a standard basis e_i. It should be noted that the above form depends on the choice of the basis e_i.
G = C2h ,C1h ,C2 (monoclinic)
f(A) = h(A_11, A_22, A_33, A_12, A²_23, A²_31, A_23 A_31) . (5.75)
In the above, A_ij are the components of A relative to a standard basis (e_1, e_2, e_3), where e_3 = n and e_1 and e_2 are two orthonormal vectors in a plane normal to n.
G = D2h ,C2v , D2 (rhombic; orthotropic)
f(A) = h(A_11, A_22, A_33, A²_12, A²_13, A²_23, A_12 A_23 A_31) . (5.76)
In the above, A_ij are the components of A relative to the standard basis (n_1, n_2, n_3).
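The invariance of the list in (5.76) is easy to check for one of the rhombic symmetry transformations; the following sketch (assuming numpy, and placing n_1, n_2, n_3 along the coordinate axes) uses the π-rotation about n_1.

    import numpy as np
    rng = np.random.default_rng(2)

    B = rng.standard_normal((3, 3)); A = 0.5 * (B + B.T)
    Q = np.diag([1.0, -1.0, -1.0])          # the pi-rotation about n1
    Ar = Q @ A @ Q.T

    # the seven invariants listed in (5.76)
    basis = lambda A: (A[0,0], A[1,1], A[2,2], A[0,1]**2, A[0,2]**2, A[1,2]**2,
                       A[0,1]*A[1,2]*A[2,0])
    print(np.allclose(basis(A), basis(Ar)))   # True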
G = D3d ,C3v , D3 (trigonal)
f(A) = h(A_33, A_11 + A_22, (A_11 − A_22)² + 4A²_12, A²_23 + A²_31, A_23(3A²_31 − A²_23),
(A_11 − A_22)A_23 + 2A_31 A_12, (A_11 − A_22)(12A²_12 − (A_11 − A_22)²), (5.77)
(A_11 − A_22)(A²_31 − A²_23) + 4A_12 A_23 A_31) .
In the above and in (5.79)–(5.83) given later, A_ij are the components of the tensor A relative to the standard basis
e_1 = l_0 , e_2 = n × l_0 , e_3 = n . (5.78)
G = S6 ,C3 (trigonal)
f(A) = h(A_33, A_32(3A²_31 − A²_23), A_31(3A²_23 − A²_31), A_11 + A_22, …) .
The above result is derived by simplifying that given in Xiao (1998; p. 1236).
G = C4h , S4 ,C4 (tetragonal)
f(A) = h(A_33, A_11 + A_22, (A_11 − A_22)² − 4A²_12, A_12(A_11 − A_22),
A⁴_13 + A⁴_23 − 6A²_13 A²_23, A_13 A_23(A²_13 − A²_23), (5.81)
(A_11 − A_22)(A²_13 − A²_23) + 4A_12 A_23 A_31,
A_12(A²_13 − A²_23) − (A_11 − A_22)A_13 A_23) .
G = Oh , Td , O (cubic)
f(A) = h(A_11 + A_22 + A_33, A²_11 + A²_22 + A²_33, A_11 A_22 A_33,
A²_12 + A²_23 + A²_31, A⁴_12 + A⁴_23 + A⁴_31, A_12 A_23 A_31, (5.84)
A_11 A²_23 + A_22 A²_31 + A_33 A²_12, A²_11 A²_23 + A²_22 A²_31 + A²_33 A²_12) .
G = Th , T (cubic)
f(A) = h(A_11 + A_22 + A_33, A²_11 + A²_22 + A²_33, A_11 A_22 A_33,
A²_12 + A²_23 + A²_31, A⁴_12 + A⁴_23 + A⁴_31, A_12 A_23 A_31,
A_11 A²_23 + A_22 A²_31 + A_33 A²_12, A²_11 A²_23 + A²_22 A²_31 + A²_33 A²_12, (5.85)
(A_11 − A_22)(A_22 − A_33)(A_33 − A_11),
(A²_23 − A²_31)(A²_31 − A²_12)(A²_12 − A²_23)) .
Quasi-crystal classes
In the following results for the quasicrystal classes, we take a standard basis e_i as given by (5.78) and denote
p = A_31 e_1 + A_32 e_2 , q = ½(A_11 − A_22) e_1 + A_12 e_2 , (5.86)
φ = cos⁻¹(p · e_1) , ψ = cos⁻¹(q · e_1) . (5.87)
Hence we have
|p|² = A²_13 + A²_23 , |q|² = ¼(A_11 − A_22)² + A²_12 . (5.88)
G = D2mh ,C2mv , D2m , D2m̄−1h , D2m̂d for m ≥ 2
f(A) = h(K_1, · · · , K_8) , (5.89)
where
K_1 = n · A[n] , K_2 = trA − n · A[n] , K_3 = |p|² , K_4 = |q|² ,
K_5 = |p|^{2m} cos 2mφ , K_6 = |q|^m cos mψ , (5.90)
K_7 = |p|²|q| cos(2φ − ψ) , K_8 = |p|²|q|^{m−1} cos(2φ + (m − 1)ψ) .
In the above and below, m=1, 2, 3, · · ·, and m̄ = (m − 1)/2 for an odd m, and m̂ = m/2 for an even m.
G = C2mh ,C2m ,C2m̄−1h , S4m̂ for m ≥ 2
An irreducible representation is given by (5.89) with
K_1 = n · A[n] , K_2 = trA − n · A[n] ;
K_3 = |p|^{2m} cos 2mφ , K_4 = |p|^{2m} sin 2mφ ,
K_5 = |q|^m cos mψ , K_6 = |q|^m sin mψ , (5.91)
K_7 = |p|²|q| cos(2φ − ψ) , K_8 = |p|²|q| sin(2φ − ψ) .
The above results exhaust all the quasicrystal classes as subgroups of cylindrical groups. A result for
icosahedral classes is given in Xiao (1998a). However, this result is a bit complicated, and it is not yet
known whether it is irreducible or not. Hence it is not listed here.
|f(A′) − f(A)| < δ (5.95)
whenever |A′ − A| < ε . (5.96)
∂f/∂A = (∂f/∂A_ij) e_i ⊗ e_j . (5.100)
Some simple rules for the derivative are as follows:
∂(f_1 + f_2)/∂A = ∂f_1/∂A + ∂f_2/∂A , (5.101)
∂(f_1 f_2)/∂A = f_1 ∂f_2/∂A + f_2 ∂f_1/∂A (product rule) , (5.102)
for any two differentiable scalar functions f_1 = f_1(A) and f_2 = f_2(A); and
∂h/∂A = (∂h/∂K_1)(∂K_1/∂A) + · · · + (∂h/∂K_s)(∂K_s/∂A) (Leibniz's chain rule) , (5.103)
where h = h(K_1, · · · , K_s) is a differentiable function of the scalars K_1, · · · , K_s and each of the latter is a differentiable scalar function of the tensor variable A.
It may be useful to have expressions for derivatives of some frequently used invariants. An example is
the stress-strain relation of a hyperelastic body, which should be derived from the strain-energy function.
In the following, we take isotropic invariants into consideration.
For the basic invariants of A, i.e., I_r = I_r(A) (cf. (2.76)), the derivatives ∂I_r/∂A may be written down directly; for the eigenvalues of A we have
∂λ_σ/∂A = A_σ , (5.107)
where A_σ is the eigenprojection of A subordinate to the eigenvalue λ_σ of A. This means that the derivative of each eigenvalue is just the eigenprojection subordinate to this eigenvalue.
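A finite-difference sketch of (5.107) (assuming numpy; the random tensors, the step size and the choice of the smallest eigenvalue are mine) may look as follows.

    import numpy as np
    rng = np.random.default_rng(3)

    B = rng.standard_normal((3, 3)); A = 0.5 * (B + B.T)
    C = rng.standard_normal((3, 3)); X = 0.5 * (C + C.T)

    lam, vecs = np.linalg.eigh(A)
    sigma = 0                                      # pick one (simple) eigenvalue
    P = np.outer(vecs[:, sigma], vecs[:, sigma])   # its eigenprojection a⊗a

    eps = 1e-6
    dlam = (np.linalg.eigvalsh(A + eps * X)[sigma] - lam[sigma]) / eps
    print(dlam, np.tensordot(P, X))                # the two numbers agree to about 1e-5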
Second and higher derivatives may be defined as well; their discussion is postponed to §5.4.3.
S = Φ(A) . (5.108)
For the sake of simplicity, (5.108) will be abbreviated to a tensor-valued function. An obvious fact is that
Φ(Q A Qᵀ) = Q Φ(A) Qᵀ = Φ(A) , Q = −I , (5.109)
for every tensor-valued function Φ(A). This fact implies that the conditions (5.110) are the same for any two groups G_1 and G_2 related by
G_2 = {Q, −Q | Q ∈ G_1} .
If (5.108) represents a constitutive relation of a material with the material symmetry group G , then the
material symmetry principle formulated in §5.2.5 leads to the form-invariance restriction
Φ(Q A Qᵀ) = Q Φ(A) Qᵀ for each Q ∈ G . (5.110)
In what follows, we shall present a general reduced form of the tensor-valued function Φ(A) satisfying the form-invariance restriction (5.110) for each class of material symmetry group G.
In particular, if Φ(A) is linear in A, then we have (3.45), i.e.,
S = L[A] ,
where L is a constant 4th-order tensor with the minor symmetry (3.47). Then the restriction (5.110) reduces to
Q_ip Q_jq Q_kr Q_ls L_pqrs = L_ijkl for each Q ∈ G .
Usually, tensors L obeying the above restriction are known as isotropic 4th-order tensors for G = Orth, Orth⁺, and as anisotropic 4th-order tensors for any other G. The general forms of isotropic and anisotropic 4th-order tensors may be found in, e.g., Nye (1985) and Gurtin (1972). The isotropic 4th-order tensor will be discussed in the next subsection. Further study of isotropic and anisotropic 4th-order tensors in a modern sense may be found, e.g., in the relevant references mentioned in §3.10.2 and in Huo and Del Piero (1991), Forte and Vianello (1996, 1997), Xiao (1998c), Chadwick, Vianello and Cowin (2001), as well as the references therein.
Let a_r be the three orthonormal eigenvectors of A. Then
Q (a_r ⊗ a_r) Qᵀ = (Q a_r) ⊗ (Q a_r) = a_r ⊗ a_r , Q = R_{a_k}^π , k = 1, 2, 3 ,
Q A Qᵀ = A , Q = R_{a_k}^π , k = 1, 2, 3 ,
and hence, by (5.110) with G = Orth,
Q Φ(A) Qᵀ = Φ(A) , Q = R_{a_k}^π , k = 1, 2, 3 .
On the other hand, we may write the spectral decomposition S = Φ(A) = Σ_{r=1}^3 s_r s_r ⊗ s_r,
where the scalars s_r are the three eigenvalues of S and the vectors s_r are three subordinate orthonormal eigenvectors of S.
Hence,
Q S Qᵀ = Q Φ(A) Qᵀ = Σ_{r=1}^3 s_r (Q s_r) ⊗ (Q s_r) .
Without losing generality, we assume that the three eigenvalues s_r of S are distinct. Then the last expression means that both s_r ⊗ s_r (no summation) and (Q s_r) ⊗ (Q s_r) (no summation) are the same eigenprojection of S subordinate to s_r. This fact results in
Q s_r = ±s_r , Q = R_{a_k}^π , k = 1, 2, 3 .
This is possible only when s_r = ±a_r, namely, S and A have three orthonormal eigenvectors in common. Thus, we have
S = Φ(A) = Σ_{r=1}^3 s_r A_r ,
s_r = A_r · Φ(A) = s_r(λ_1, λ_2, λ_3) , (5.111)
s_{τ(r)}(λ_{τ(1)}, λ_{τ(2)}, λ_{τ(3)}) = s_r(λ_1, λ_2, λ_3) ,
for every isotropic function S = Φ(A). In the above, A_r = a_r ⊗ a_r (no summation) is the eigenprojection subordinate to λ_r, and τ is any permutation transforming the three numbers 1, 2, 3 into themselves.
Applying Sylvester’s formula (cf. (2.112)–(2.115)) for eigenprojections A r to (5.111), we arrive at the
well-known Rivlin-Ericksen theorem:
S = Φ(A) = f_0 I + f_1 A + f_2 A² , (5.112)
where each scalar coefficient ft is an isotropic invariant of A and hence may be expressed as one of the
three forms as given by (5.67)–(5.69).
It turns out that each isotropic tensor-valued function Φ(A) is simply a linear combination of the first three powers of the tensor variable A, i.e., A^α with α = 0, 1, 2, where each scalar coefficient is an isotropic invariant. Note here that each A^α is itself a specified isotropic tensor-valued function of A. On account of the general reduced form (5.118), we say that the three specified isotropic tensor-valued functions I, A, A² form a generating set for all the isotropic tensor-valued functions Φ(A), and that (5.118) is the representation of the isotropic tensor-valued functions Φ(A) in terms of the generating set {I, A, A²}.
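For a particular isotropic function, the coefficients f_0, f_1, f_2 in (5.112) may be computed from the eigenvalues; the following sketch (assuming numpy, and choosing Φ(A) = exp(A), defined spectrally, merely as an example) solves the associated Vandermonde system and checks the representation.

    import numpy as np
    rng = np.random.default_rng(4)

    B = rng.standard_normal((3, 3)); A = 0.5 * (B + B.T)
    lam, V = np.linalg.eigh(A)
    PhiA = V @ np.diag(np.exp(lam)) @ V.T       # the isotropic tensor function exp(A)

    # exp(lam_k) = f0 + f1 lam_k + f2 lam_k^2, k = 1, 2, 3  (a Vandermonde system)
    f = np.linalg.solve(np.vander(lam, 3, increasing=True), np.exp(lam))
    recon = f[0] * np.eye(3) + f[1] * A + f[2] * (A @ A)
    print(np.allclose(recon, PhiA))             # True (eigenvalues distinct almost surely)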
The simplest yet most widely considered case is the linear case of the tensor function (5.108), i.e.,
S = C[A] ,
where C is a constant 4th-order tensor with the minor symmetry (cf. (3.47)). In this particular case, the form-invariance condition (5.110) with G = Orth reduces to the requirement
Q_ip Q_jq Q_kr Q_ls C_pqrs = C_ijkl for each Q ∈ Orth . (5.113)
Such a 4th-order tensor is known as an isotropic tensor, as mentioned before. Its general form will be derived below.
In general, each coefficient f_l in (5.118) is a function of the three basic invariants I_r of the tensor A, as shown by (5.67). For a linear tensor-valued function Φ(A), we may set
f_0 = Λ I_1 = Λ(trA) , f_1 = 2G , f_2 = 0
in (5.118). In the above, Λ and G are two scalar constants. Thus, we derive a general reduced form of the isotropic linear tensor-valued function Φ(A) as follows:
S = C[A] = Λ(trA) I + 2G A . (5.114)
From this we deduce that a general form of the isotropic 4th-order tensor is given by
C = Λ I ⊗ I + 2G I^sym . (5.115)
This and the decomposition (3.55) yield the alternative form
C = (3Λ + 2G) (1/3) I ⊗ I + 2G Ī^sym . (5.116)
Since
(1/3 I ⊗ I) ◦ (1/3 I ⊗ I) = 1/3 I ⊗ I , (1/3 I ⊗ I) ◦ Ī^sym = O , Ī^sym ◦ Ī^sym = Ī^sym , (5.117)
we infer that (5.116) is just the characteristic expression of the isotropic 4th-order tensor C. Then we have (cf. (3.101))
C⁻¹ = (3Λ + 2G)⁻¹ (1/3) I ⊗ I + (2G)⁻¹ Ī^sym . (5.118)
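The expressions (5.115) and (5.118) are easy to verify numerically; the following sketch (assuming numpy, with made-up moduli Λ and G) builds C as a 4th-order array, applies it to a symmetric A and then inverts the result.

    import numpy as np
    rng = np.random.default_rng(5)
    Lam, G = 2.0, 1.5                                           # illustrative moduli

    I = np.eye(3)
    IxI = np.einsum("ij,kl->ijkl", I, I)                        # I ⊗ I
    Isym = 0.5 * (np.einsum("ik,jl->ijkl", I, I) + np.einsum("il,jk->ijkl", I, I))
    C = Lam * IxI + 2.0 * G * Isym                              # eq. (5.115)
    # eq. (5.118), rewritten with Isym and I⊗I instead of the deviatoric projector
    Cinv = (1.0/(3*Lam + 2*G) - 1.0/(2*G)) * IxI/3.0 + Isym/(2*G)

    B = rng.standard_normal((3, 3)); A = 0.5 * (B + B.T)
    S = np.einsum("ijkl,kl->ij", C, A)                          # S = C[A]
    print(np.allclose(S, Lam*np.trace(A)*I + 2*G*A))            # True, cf. (5.114)
    print(np.allclose(np.einsum("ijkl,kl->ij", Cinv, S), A))    # True, C^{-1} undoes C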
Cylindrical groups
Crystal classes
G = S2 ,C1 (triclinic)
Φ(A) = S_ij(A) e_i ⊗ e_j . (5.125)
Φ(A) = f_1 G_1(A) + · · · + f_9 G_9(A) , (5.128)
D_1 = e_1 ⊗ e_1 − e_2 ⊗ e_2 , D_2 = e_1 ∨ e_2 , D_3 = n ∨ e_1 , D_4 = n ∨ e_2 . (5.130)
Φ(A) = f_1 G_1(A) + · · · + f_8 G_8(A) , (5.131)
G_1 = I , G_2 = A , G_3 = A² , G_4 = r ⊗ r , G_5 = S ,
G_6 = A(ε[r]) − (ε[r])A , G_7 = A²(ε[r]) − (ε[r])A² , G_8 = S(ε[r]) − (ε[r])S , (5.132)
where
r = r(A) = A_23 n_1 + A_31 n_2 + A_12 n_3 , (5.133)
S = S(A) = (A_22 − A_33) n_1 ⊗ n_1 + (A_33 − A_11) n_2 ⊗ n_2 + (A_11 − A_22) n_3 ⊗ n_3 . (5.134)
G = Oh , Td , O (cubic)
An irreducible representation is given by (5.128) with
G_1 = I = n_1 ⊗ n_1 + n_2 ⊗ n_2 + n_3 ⊗ n_3 ,
G_2 = A_11 n_1 ⊗ n_1 + A_22 n_2 ⊗ n_2 + A_33 n_3 ⊗ n_3 ,
G_3 = A_12 n_1 ∨ n_2 + A_23 n_2 ∨ n_3 + A_31 n_3 ∨ n_1 ,
G_4 = A²_23 n_1 ⊗ n_1 + A²_31 n_2 ⊗ n_2 + A²_12 n_3 ⊗ n_3 ,
G_5 = A⁴_23 n_1 ⊗ n_1 + A⁴_31 n_2 ⊗ n_2 + A⁴_12 n_3 ⊗ n_3 , (5.135)
G_6 = A²_11 n_1 ⊗ n_1 + A²_22 n_2 ⊗ n_2 + A²_33 n_3 ⊗ n_3 + A_23 A_31 n_1 ∨ n_2 + A_31 A_12 n_2 ∨ n_3 + A_12 A_23 n_3 ∨ n_1 ,
G_7 = A³_12 n_1 ∨ n_2 + A³_23 n_2 ∨ n_3 + A³_31 n_3 ∨ n_1 ,
G_8 = A_33 A_12 n_1 ∨ n_2 + A_11 A_23 n_2 ∨ n_3 + A_22 A_31 n_3 ∨ n_1 ,
G_9 = A²_33 A_12 n_1 ∨ n_2 + A²_11 A_23 n_2 ∨ n_3 + A²_22 A_31 n_3 ∨ n_1 .
Quasi-crystal classes
In the following, the vectors p = p(A) and q = q(A) are given by (5.86) and the angles φ = φ(A) and ψ = ψ(A) by (5.87), where the standard basis e_i is given by (5.78). Moreover, the four tensors D_α are given by (5.130).
G = D2mh ,C2mv , D2m , D2m̄−1h , D2m̂d for m ≥ 2
An irreducible representation is given by (5.131) with
G_1 = I , G_2 = n ⊗ n , G_3 = A , (5.136)
G_4 = |p|² (D_1 cos 2φ + D_2 sin 2φ) , (5.137)
G_5 = |p|^{2m} (D_1 cos 2mφ − D_2 sin 2mφ) , (5.138)
G_6 = |p|^{2m+1} (D_3 cos(2m + 1)φ − D_4 sin(2m + 1)φ) , (5.139)
G_7 = |q|^m (D_1 cos mψ − D_2 sin mψ) , (5.140)
G_8 = |p|^{2m−1}|q| (D_3 cos((2m − 1)φ + ψ) − D_4 sin((2m − 1)φ + ψ)) . (5.141)
G_5 = |p|^{2m−1} (D_1 sin(2m − 1)φ + D_2 cos(2m − 1)φ) , (5.142)
G_6 = |p|^{2m} (D_3 sin 2mφ + D_4 cos 2mφ) , (5.143)
G_7 = |q|^{2m} (D_1 cos 2mψ − D_2 sin 2mψ) , (5.144)
G_8 = |q|^m (D_3 sin mψ + D_4 cos mψ) , (5.145)
G_9 = |q|^{m+1} (D_3 sin(m + 1)ψ − D_4 cos(m + 1)ψ) . (5.146)
G_5 = A(ε[n]) − (ε[n])A , (5.147)
G_6 = |p|² (D_1 sin 2φ − D_2 cos 2φ) , (5.148)
G_7 = |q|^m (D_3 sin mψ + D_4 cos mψ) , (5.149)
G_8 = |q|^m (D_3 cos mψ − D_4 sin mψ) . (5.150)
G = Ih , I (icosahedral)
A representation is given by (5.128) with
G_k = Σ_{α=1}^6 (n_α · A[n_α])^{k−1} n_α ⊗ n_α , k = 1, 2, · · · , 6 , (5.151)
If Φ(A) is differentiable at A, then Φ(A) is continuous at A. The converse need not be true, as in the case of real functions.
Assume that Φ(A) is Fréchet-differentiable at A. Using (5.154) and replacing A′ in (5.155) with A + εX, where the number ε may tend to zero while the symmetric 2nd-order tensor X need not do so, we derive
lim_{ε→0} [Φ(A + εX) − Φ(A)]/ε = (∂Φ/∂A)[X] (5.156)
for any symmetric 2nd-order tensor X .
Expression (5.156) in its own right leads to the notion of Gâteaux-differentiability property and Gâteaux-
derivative of the tensor-valued function Φ (A A). Generally, this notion need not imply the notion introduced
before. Note that (5.156) is derived as a consequence of Fréchet-differentiability property.
Given a fixed standard basis e_i, a tensor-valued function Φ(A) may be regarded as six component functions of the six standard components A_ij relative to this basis. In this case, we have the useful expression
∂Φ/∂A = (∂Φ_ij/∂A_kl) e_i ⊗ e_j ⊗ e_k ⊗ e_l . (5.157)
Some useful rules for the derivative are given as follows. First, for any two differentiable Φ_1(A) and Φ_2(A), we have
∂(Φ_1 + Φ_2)/∂A = ∂Φ_1/∂A + ∂Φ_2/∂A . (5.158)
In addition, for any differentiable scalar-valued function f = f(A) and for any differentiable tensor-valued function Φ(A), we have
∂(f Φ)/∂A = f ∂Φ/∂A + Φ ⊗ ∂f/∂A (product rule) . (5.159)
Let Φ(S) be a differentiable tensor-valued function of the tensor variable S and let the latter in turn be a differentiable tensor-valued function of the tensor variable A, i.e., S = S(A). Then we have
∂Φ(S)/∂A = (∂Φ/∂S) ◦ (∂S/∂A) (Leibniz's chain rule) . (5.160)
On the other hand, let Φ_1(A) and Φ_2(A) be two differentiable tensor-valued functions of A. Then the derivatives of their scalar product and of their composite product may be calculated by the rules
(∂(Φ_1 · Φ_2)/∂A)[X] = Φ_1 · (∂Φ_2/∂A)[X] + Φ_2 · (∂Φ_1/∂A)[X] , (5.161)
(∂(Φ_1 Φ_2)/∂A)[X] = Φ_1 ((∂Φ_2/∂A)[X]) + ((∂Φ_1/∂A)[X]) Φ_2 . (5.162)
The derivative ∂f/∂A of a scalar-valued function f(A) is itself a tensor-valued function of A. We refer to the derivative of the tensor-valued function ∂f/∂A as the second derivative of the scalar-valued function f(A) and designate it by ∂²f/∂A∂A, i.e.,
∂²f/∂A∂A ≡ ∂/∂A (∂f/∂A) . (5.163)
Below we record some results for derivatives of frequently used isotropic tensor-valued functions.
(∂I/∂A)[X] = O , (∂A/∂A)[X] = X , (∂A²/∂A)[X] = A X + X A . (5.164)
In general, for each positive integer t we have
(∂A^t/∂A)[X] = Σ_{α=1}^t A^{t−α} X A^{α−1} . (5.165)
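A quick finite-difference check of the last formula in (5.164) (assuming numpy; the step size and the random tensors are mine) may look as follows.

    import numpy as np
    rng = np.random.default_rng(6)

    B = rng.standard_normal((3, 3)); A = 0.5 * (B + B.T)
    C = rng.standard_normal((3, 3)); X = 0.5 * (C + C.T)

    eps = 1e-6
    fd = ((A + eps*X) @ (A + eps*X) - A @ A) / eps   # difference quotient of A^2
    print(np.allclose(fd, A @ X + X @ A, atol=1e-5)) # True: (∂A²/∂A)[X] = AX + XA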
If A is non-singular, we have
(∂A⁻¹/∂A)[X] = −A⁻¹ X A⁻¹ . (5.166)
This may be derived by differentiating A A⁻¹ = I and using (5.162). Further, utilizing Leibniz's chain rule (cf. (5.160)), for any positive integer t we have
(∂A^{−t}/∂A)[X] = (∂(A⁻¹)^t/∂A⁻¹)[(∂A⁻¹/∂A)[X]] .
From this and (5.165)–(5.166) we derive
(∂A^{−t}/∂A)[X] = −Σ_{α=1}^t A^{α−t−1} X A^{−α} . (5.167)
In the following, let a_i be orthonormal eigenvectors of A and consider the perturbed tensors A + ε a_i ⊗ a_i and A + ε(a_i ⊗ a_j + a_j ⊗ a_i). First,
(∂g/∂A)[a_i ⊗ a_i] = lim_{ε→0} [g(A + ε a_i ⊗ a_i) − g(A)]/ε
  = lim_{ε→0} [g(λ_i + ε) − g(λ_i)]/ε a_i ⊗ a_i (5.168)
  = (∂g/∂λ_i) a_i ⊗ a_i .
On the other hand, let λ_i and λ_j be any two eigenvalues of A, let λ_k be the remaining eigenvalue, and let a_i, a_j, a_k be the subordinate orthonormal eigenvectors of A. Then we have
A = λ_i a_i ⊗ a_i + λ_j a_j ⊗ a_j + λ_k a_k ⊗ a_k , (5.169)
A + ε(a_i ⊗ a_j + a_j ⊗ a_i) = A_ij + λ_k a_k ⊗ a_k , A_ij = λ_i a_i ⊗ a_i + λ_j a_j ⊗ a_j + ε(a_i ⊗ a_j + a_j ⊗ a_i) . (5.170)
Then the three eigenvalues of the perturbed tensor variable are given by λ_k and
λ̄_i = (λ_i + λ_j + p)/2 , λ̄_j = (λ_i + λ_j − p)/2 , p = √((λ_i − λ_j)² + 4ε²) . (5.171)
Let ā_i and ā_j be any given two orthonormal eigenvectors of the perturbed tensor variable subordinate to λ̄_i and λ̄_j. Then
ā_i ⊗ ā_i + ā_j ⊗ ā_j = a_i ⊗ a_i + a_j ⊗ a_j ,
λ̄_i ā_i ⊗ ā_i + λ̄_j ā_j ⊗ ā_j = A_ij . (5.172)
From (5.171) we know that λ̄_i ≠ λ̄_j for any ε ≠ 0, and then from (5.172) we derive
ā_i ⊗ ā_i = (1/(λ̄_i − λ̄_j)) (A_ij − λ̄_j (a_i ⊗ a_i + a_j ⊗ a_j)) ,
ā_j ⊗ ā_j = (1/(λ̄_j − λ̄_i)) (A_ij − λ̄_i (a_i ⊗ a_i + a_j ⊗ a_j)) . (5.173)
It may be readily understood that the three orthonormal vectors ā_i, ā_j, ā_k = a_k supply an eigenbasis of the foregoing perturbed tensor. From the above facts and (5.116) we infer
g(A + ε(a_i ⊗ a_j + a_j ⊗ a_i)) = Σ_{r=1}^3 g(λ̄_r) ā_r ⊗ ā_r (5.174)
  = g(λ̄_i) ā_i ⊗ ā_i + g(λ̄_j) ā_j ⊗ ā_j + g(λ_k) a_k ⊗ a_k .
Hence,
Δg(A, ε) = g(A + ε(a_i ⊗ a_j + a_j ⊗ a_i)) − g(A)
  = g(λ̄_i) ā_i ⊗ ā_i + g(λ̄_j) ā_j ⊗ ā_j − g(λ_i) a_i ⊗ a_i − g(λ_j) a_j ⊗ a_j (5.175)
  = α(a_i ⊗ a_i + a_j ⊗ a_j) + β(a_i ⊗ a_i − a_j ⊗ a_j) + γ a_i ∨ a_j ,
where
α = g(λ̄_i) − g(λ_i) + g(λ̄_j) − g(λ_j) , (5.176)
β = ((λ_i − λ_j)/(2p)) (g(λ̄_i) − g(λ̄_j)) − ½ (g(λ_i) − g(λ_j)) , (5.177)
γ = (ε/p) (g(λ̄_i) − g(λ̄_j)) . (5.178)
Since
(∂g/∂A)[a_i ⊗ a_j + a_j ⊗ a_i] = lim_{ε→0} Δg(A, ε)/ε ,
it suffices to calculate the three limits
lim_{ε→0} α/ε , lim_{ε→0} β/ε , lim_{ε→0} γ/ε .
The particular case when a i and a j coincide has been treated before. For orthonormal a i and a j , two cases
will be discussed below.
First, suppose λ_i = λ_j. Then
p = 2|ε| , λ̄_i = λ_i + |ε| , λ̄_j = λ_i − |ε| .
Hence, we have
lim_{ε→0} α/ε = lim_{ε→0} ( [g(λ_i + |ε|) − g(λ_i)]/ε − [g(λ_i) − g(λ_i − |ε|)]/ε ) = 0 ;
lim_{ε→0} β/ε = 0 , since β vanishes identically for λ_i = λ_j ;
lim_{ε→0} γ/ε = lim_{ε→0} [g(λ_i + |ε|) − g(λ_i − |ε|)]/(2|ε|) = g′(λ_i) ,
and therefore
(∂g/∂A)[a_i ⊗ a_j + a_j ⊗ a_i] = g′(λ_i)(a_i ⊗ a_j + a_j ⊗ a_i) , λ_i = λ_j , i ≠ j . (5.179)
Next, let λ_i ≠ λ_j. There is no harm in assuming λ_i > λ_j. Then
p = λ_i − λ_j + 2η + o_1(ε²) , λ̄_i = λ_i + η + o_2(ε²) , λ̄_j = λ_j − η + o_3(ε²) ,
where
η = ε²/(λ_i − λ_j) , lim_{ε→0} o_k(ε²)/ε² = 0 , k = 1, 2, 3 .
Proceeding as before and combining the resulting limits with (5.168) and (5.179), we derive
(∂g/∂A)[X] = Σ_{i,j=1}^3 ([g(λ_i) − g(λ_j)]/(λ_i − λ_j)) (a_i ⊗ a_i) X (a_j ⊗ a_j)
  = Σ_{i=1}^3 ( Σ_{j=1}^3 ([g(λ_i) − g(λ_j)]/(λ_i − λ_j)) (a_i ⊗ a_i) X (a_j ⊗ a_j) )
  = Σ_{i=1}^3 ( Σ_{τ=1}^m ([g(λ_i) − g(λ̃_τ)]/(λ_i − λ̃_τ)) (a_i ⊗ a_i) X A_τ )
  = Σ_{τ=1}^m ( Σ_{i=1}^3 ([g(λ_i) − g(λ̃_τ)]/(λ_i − λ̃_τ)) (a_i ⊗ a_i) X A_τ ) = Σ_{σ,τ=1}^m ([g(λ̃_σ) − g(λ̃_τ)]/(λ̃_σ − λ̃_τ)) A_σ X A_τ ,
where each quotient is understood as g′(λ_i) whenever the two eigenvalues involved coincide (cf. (5.179)).
In the above and below, we no longer use λ̃_σ but use λ_σ to designate the distinct eigenvalues of A, and we use A_1, · · · , A_m to denote the subordinate eigenprojections of A.
Thus, substituting Sylvester’s formula (2.111) into (5.183), we may obtain an explicit expression of the
form
m−1
∂gg
X ] = ∑ ρrs A r X A s .
[X (5.183)
A
∂A r,s=0
In the above, each coefficient ρrs = ρsr is a symmetric function of the distinct eigenvalues of A . Explicit
expressions for these coefficients may be found in Xiao (1995a).
The above procedure may be used to treat isotropic scalar- and tensor-valued functions f(A) and Φ(A). In fact, for an isotropic scalar-valued function f(A), we have (2.71) and its derivative
∂f/∂A = Σ_{σ=1}^m (∂ĥ/∂λ_σ) A_σ , (5.184)
Results for the derivative of an isotropic tensor-valued function may be derived by the same procedure, although they are not as simple as the above.
The eigenprojection method used above may be summarized as follows:
(i) The eigenbasis of a basic symmetric tensor variable A , such as the Cauchy-Green deformation tensor
etc., is chosen as a standard basis of vector spaces;
(ii) the results for various relevant problems associated with A, such as the derivatives of isotropic scalar- and tensor-valued functions of A, etc., are derived and presented by means of the eigenbasis and the simple characteristic properties of A, in which the characteristic expressions of A and A + ε a_i ∨ a_j, i.e., (5.169)–(5.173), may play a basic role;
(iii) the final results are expressed in terms of the eigenprojections of A, and then Sylvester's formula (2.111) is invoked;
(iv) In some cases, e.g., in studying issues concerning finite strains and large rotations, such as work-conjugacy notions either in a classical sense or in an extended sense, it is required to work out the inverse tensors of certain relevant 4th-order tensors, such as the derivative ∂g/∂A of the finite strain tensor g(A), etc. Here, the eigenprojection method proves simple and powerful. Indeed, the expression (5.183) in terms of eigenprojections is just the characteristic expression of the derivative ∂g/∂A. We have
(∂g/∂A)[a_i ⊗ a_j + a_j ⊗ a_i] = ρ_ij (a_i ⊗ a_j + a_j ⊗ a_i) (no summation) (5.186)
for any eigenvectors a_i and a_j of A. Thus, its inverse tensor may be obtained directly by applying (3.94) in §3.10.2. Details in this respect may be found in Xiao (1995a), Xiao, Bruhns and Meyers (1997, 1998d), and Bruhns, Meyers and Xiao (2002).
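The eigenvalue–eigenvector form of the derivative obtained above is straightforward to implement; the following sketch (assuming numpy, taking g = log purely as an illustration, and assuming distinct eigenvalues so that the difference quotients are well defined; for coalescent eigenvalues g′(λ) replaces the quotient, as in (5.179)) compares it with a finite difference.

    import numpy as np
    rng = np.random.default_rng(7)

    B = rng.standard_normal((3, 3))
    A = 0.5*(B + B.T) + 5.0*np.eye(3)        # shifted to keep A positive definite for log
    C = rng.standard_normal((3, 3)); X = 0.5*(C + C.T)

    lam, V = np.linalg.eigh(A)
    g, dg = np.log, lambda t: 1.0/t

    # difference-quotient coefficients (g(λi)-g(λj))/(λi-λj), g'(λi) on the diagonal
    L = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            L[i, j] = dg(lam[i]) if i == j else (g(lam[i]) - g(lam[j]))/(lam[i] - lam[j])

    Xe = V.T @ X @ V                          # X expressed in the eigenbasis of A
    dG = V @ (L * Xe) @ V.T                   # (∂g/∂A)[X] via the formula above

    eps = 1e-6
    gmat = lambda M: (lambda l, W: W @ np.diag(g(l)) @ W.T)(*np.linalg.eigh(M))
    fd = (gmat(A + eps*X) - gmat(A)) / eps    # finite-difference reference
    print(np.allclose(dG, fd, atol=1e-4))     # True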
As indicated before, Hill (1968, 1978) (see also, e.g., Ogden 1984) introduced and systematically and successfully used the well-known principal axis method. Later, Carlson and Hoger (1986), Simo and Taylor (1991), Miehe (1993, 1994, 1997), and Šilhavý (1997), et al., used the characteristic expression of the tensor variable A to treat particular or general isotropic tensor-valued functions Φ(A). On the other hand, eigenprojections play a basic role in the study of the characteristic structure of the elasticity tensor by Rychlewski (1984a, b, 1995, 2000).
Most recently, the foregoing ideas and procedures have been developed and systematically used to
treat various problems in finite strains and large rotations of continuous deformable bodies and in rational,
consistent modelling of inelastic behaviour at finite deformations, including the derivative of Hill’s strain
tensors and their work-conjugate stresses; a natural extension of Hill’s work-conjugate notion; the finding
and the proof of the unique, intrinsic relationship between Hencky’s logarithmic strain tensor and the
stretching tensor via corotational frames and the introduction of the logarithmic rate; a general study of
corotational rates and their defining spin tensors; a general study of non-corotational rates of Oldroyd’s
type; the finding and the uniqueness proof of the exactly integrable Eulerian elastic rate equations in linear
form and in general form , etc. For details, refer to Xiao (1995a), Xiao, Bruhns and Meyers (1997, 1998a,
b, d, 1999a, 2000d, 2002), Bruhns, Xiao and Meyers (1999b, 2002), Meyers (1999a, b), Meyers, Schiesse
and Bruhns (2000), Bruhns, Xiao and Meyers (1999), and Bruhns, Meyers and Xiao (2002).
The eigenprojection method described above may be extended to cover non-symmetric 2nd-order tensors and related isotropic functions. A systematic development and detailed account in a general sense are presented by Meyers (1999a). Some recent results for tensor power series of a non-symmetric tensor are derived by Itskov (2002) and Itskov and Aksel (2002).
t_k = α_0 + α_1 a_k + α_2 a_k² , k = 1, 2, 3 ,
where the t_k and a_k are the eigenvalues of the value tensor T and of the argument tensor A, respectively. The above three equations provide explicit expressions for the three α_i in terms of the eigenvalues t_k and a_k. Such expressions and other equivalent forms were derived and discussed by many researchers in different contexts, e.g., Fitzgerald (1981), Betten (1984, 1987, 1993, 1998), Ting (1985), and Man (1994, 1995),
et al. Another approach based upon the spectral decomposition of the argument tensor A was proposed and used by Simo and Taylor (1991), Miehe (1993, 1994), and Šilhavý (1997), et al., as mentioned before. These expressions, however, require special treatment for the singly and twice coalescent cases of the eigenvalues (a_1, a_2, a_3) of the argument tensor A. In these cases, one has to either resort to a limiting process resulting from the assumed continuous differentiability of these expressions (see the relevant account below) or use the alternative expressions
S = Φ(A) = β_0 I + β_1 A
Let G_1(A), G_2(A), · · · , G_s(A) form a generating set for the tensor functions Φ(A) that are form-invariant under a given material symmetry group G (see (5.110)). Then we have the representation (5.119), where each f_i(A) is an invariant relative to the group G (see (5.65)). We introduce the following new notion of representation for isotropic and anisotropic tensor-valued functions.
A representation (5.119) for the tensor functions Φ(A) that are form-invariant under a given material symmetry group G is said to be a continuous representation if, for every form-invariant, continuous tensor function Φ(A), each term f_i(A) G_i(A) in the representation (5.119) is continuous.
Further, a representation (5.119) for the tensor functions Φ(A) that are form-invariant under a given material symmetry group G is said to be a continuously differentiable representation if it is a continuous representation and, moreover, for every form-invariant, continuously differentiable tensor function Φ(A), each term f_i(A) G_i(A) in the representation (5.119) is also continuously differentiable.
It should be pointed out that a representation with a generating set composed of C^∞-smooth generators, e.g., polynomial generators, need not be a continuous representation, let alone a continuously differentiable representation. In fact, the standard representation formula (5.118) for isotropic tensor functions is not a continuous representation, as indicated before. In classical invariant theory, “polynomial” representations derived from integrity bases were exclusively taken into account, see, e.g., Spencer (1971), Smith (1994), et al. Each such representation is generated by a set of polynomial tensor generators and expresses each form-invariant polynomial tensor function as a linear combination of the polynomial tensor generators of this set, with each representative coefficient a polynomial invariant. However, such a polynomial representation generally need not be a continuous representation. The main reason is that, in contrast with the foregoing definitions, the definition of a “polynomial representation” is concerned with polynomial tensor functions only, the latter being the simplest class of analytic (C^∞) tensor functions. Indeed, the representation (5.118) may be regarded as a “polynomial” representation that can be derived from the integrity basis {trA, trA², trA³}, but even it is not a continuous representation for isotropic tensor functions, as mentioned before.
Thus, if a classical result for representations of isotropic and anisotropic functions fails the new require-
ments, it seems that we need to derive an alternative representation in the sense of the new representation
notion introduced above. Here, our attention is directed to the isotropic tensor-valued functions.
S = Φ(A) = c_0 I + c_1 Ā + c_2 G(A) , (5.188)
where, with Ā the deviatoric part of A,
G(A) = |Ā|² Ā² − (Ā · Ā²) Ā − (1/3)|Ā|⁴ I (5.189)
and each c_i is an isotropic invariant of A.
The three specified isotropic tensor-valued functions I, Ā, G(A) given above form an orthogonalized generating set for the isotropic tensor-valued functions Φ(A), which may be derived from the usual generating set formed by I, A and A² by means of Schmidt's orthogonalization procedure. We have
I · Ā = I · G(A) = Ā · G(A) = 0 ,
and hence
c_0 = (S · I)/3 , (5.190)
c_1 = (S · Ā)/|Ā|² , (5.191)
c_2 = (S · G(A))/|G(A)|² . (5.192)
Accordingly, from these and Eq. (5.188) we obtain
S = Φ(A) = ((S · I)/3) I + ((S · Ā)/|Ā|²) Ā + ((S · G(A))/|G(A)|²) G(A) . (5.193)
G(A
As a result, if (5.188) describe a stress-deformation relation, then its representative coefficients ci may be
determined by means of experimental data for the deformation (deformation rate) tensor A and the stress
tensor S through expressions (5.190)–(5.192). We would like to point out that, unlike the expressions of
the form (5.187), expressions (5.190)–(5.192) do not involve the eigenvalues of both A and S . Given data
for both A and S in any chosen coordinate system, expressions (5.190)–(5.192) involve only multiplicative
calculations concerning the tensors ĀA, A 2 and S .
The representation formula (5.193) assumes a unified form for the different cases of coalescence of the eigenvalues of A. Indeed, we have (5.193) for the case when the three eigenvalues of A are distinct from each other,
S = Φ(A) = ((S · I)/3) I + ((S · Ā)/|Ā|²) Ā (5.194)
for the case when the three eigenvalues of A are singly coalescent, say a_1 ≠ a_2 = a_3, and
S = Φ(A) = ((S · I)/3) I (5.195)
for the case when the three eigenvalues of A are twice coalescent, i.e., a1 = a2 = a3 . Note that the coeffi-
cients in the last two expressions have the same forms as those of their counterparts in Eq. (5.193). Eqs.
(5.194) and (5.195) may be obtained by setting the last term and the last two terms in Eq. (5.193) to be
zero, respectively. This is exactly in accordance with the respective continuity properties of the last two
terms. It is demonstrated in Xiao, Bruhns and Meyers (2002) that the representation (5.188) is a continuous representation and that the last term and the last two terms in Eq. (5.188) or (5.193) are indeed zero, respectively, whenever the three eigenvalues of A are singly and twice coalescent, respectively. Evidently, the tensor Ā vanishes whenever the eigenvalues of A become twice coalescent. Moreover, we have
|G(A)|² = (1/3)|Ā|² (a_1 − a_2)² (a_2 − a_3)² (a_3 − a_1)² .
Hence, the tensor G(A) vanishes whenever the eigenvalues a_i of A become singly or twice coalescent.
In addition to the above remarkable features, it is further demonstrated that the representation (5.188) or (5.193) is a continuously differentiable representation for isotropic tensor-valued functions Φ(A). For details, refer to Xiao, Bruhns and Meyers (2002). In this reference, the above results are used to derive a direct, unified formula for calculating any given Hill's strain tensor g(A).
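The following sketch (assuming numpy, and using my reading of (5.189) with the trace-correcting term written out explicitly) illustrates how (5.190)–(5.192) recover the representative coefficients from "data" for A and S: the coefficients used to manufacture S are read back exactly.

    import numpy as np
    rng = np.random.default_rng(8)

    B = rng.standard_normal((3, 3)); A = 0.5*(B + B.T)
    I = np.eye(3)
    Ad = A - np.trace(A)/3.0 * I                  # deviatoric part, written Ā in the text
    dot = lambda X, Y: np.tensordot(X, Y)         # scalar product X·Y for symmetric tensors

    # third, orthogonalized generator; the -(1/3)|Ā|^4 I term is my reading of (5.189)
    GA = dot(Ad, Ad)*(Ad @ Ad) - dot(Ad, Ad @ Ad)*Ad - dot(Ad, Ad)**2/3.0 * I

    c = (0.7, -1.2, 0.4)                          # "unknown" coefficients
    S = c[0]*I + c[1]*Ad + c[2]*GA                # manufactured stress data

    # recovery via (5.190)-(5.192), using only products of A and S
    print(dot(S, I)/3.0, dot(S, Ad)/dot(Ad, Ad), dot(S, GA)/dot(GA, GA))  # ≈ 0.7, -1.2, 0.4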
(i) For the general scalar-valued function ξ(u_1, · · · , u_a; A_1, · · · , A_b) of vectors and 2nd-order tensors that is invariant under the invariance condition (5.61), find a finite number of specified scalar functions, say,
I_1 = I_1(u_1, · · · , u_a; A_1, · · · , A_b), · · · , I_p = I_p(u_1, · · · , u_a; A_1, · · · , A_b),
each of which meets (5.61), such that every such invariant ξ is expressible as a single-valued function of I_1, · · · , I_p. We say that such p specified scalar functions meeting (5.61) and the above requirement form a functional basis for the general scalar-valued functions ξ(u_1, · · · , u_a; A_1, · · · , A_b) of vectors and 2nd-order tensors that are invariant under the invariance condition (5.61).
(ii) For the general vector- or tensor-valued function φ(u_1, · · · , u_a; A_1, · · · , A_b) of vectors and 2nd-order tensors that is form-invariant under the invariance condition (5.62) or (5.63), find a finite number of specified vector- or tensor-valued functions, say,
G_1 = G_1(u_1, · · · , u_a; A_1, · · · , A_b), · · · , G_q = G_q(u_1, · · · , u_a; A_1, · · · , A_b),
each of which meets the form-invariance condition (5.62) or (5.63), such that
φ(u_1, · · · , u_a; A_1, · · · , A_b) = f_1 G_1 + · · · + f_q G_q , (5.197)
where each f_i is an invariant meeting (5.61).
The above general aspects are concerned with a large variety of topics for isotropic and anisotropic functions of vectors and 2nd-order tensors relative to the various kinds of material symmetry groups listed in §§5.2.2-5.2.4. These topics have been extensively investigated in the past decades and many results have been obtained. A brief summary is given below.
Earlier, attention was focused on irreducible polynomial representations mainly for scalar-valued func-
tions and for vector-valued and tensor-valued functions in some cases; see, e.g., the monographs by Trues-
dell and Noll (1992), Spencer (1971), Kiral and Eringen (1990), Betten (1993) and Smith (1994) for many
related results. These results were closely related to the classical theory of algebraic invariants intensively
studied by nineteenth century mathematicians and derived in a restrictive sense merely for polynomial
functions of components of vector variables and tensor variables.
Irreducible non-polynomial representations in a general sense were considered later for isotropic func-
tions of vectors and 2nd-order tensors by Wang (1970) and Smith (1971). These results were sharpened by
Boehler (1977), and their irreducibility was proved by Pennisi and Trovato (1987).
Isotropic functions are related to a particular yet widely considered class of solid materials of the highest symmetry. They have played a basic role in the subsequent development of the study of anisotropic functions, as will be explained below.
Unlike isotropic functions, anisotropic functions are distinguished by many different kinds of material symmetry groups, such as the crystal and quasicrystal classes, etc. Usually, they have to be treated one by one by means of different procedures. In so doing, the mathematical problems to be treated might be formidably enormous. Indeed, there are 5 classes of transversely isotropic solids, 32 classes of crystalline solids, and infinitely many classes of quasicrystalline solids, as shown in §§5.2.2-5.2.4.
Q ∈ G ⟺ (Q ⋆ M_1 = M_1, · · · , Q ⋆ M_t = M_t , Q ∈ Orth) , (5.198)
namely,
{Q ∈ Orth | Q ⋆ M_1 = M_1, · · · , Q ⋆ M_t = M_t} = G . (5.199)
We say that the subset of the full orthogonal group Orth given by the left-hand side of (5.199) is the symmetry group of the tensors M_1, · · ·, M_t. In particular, when only one tensor M is involved, this gives the symmetry group of the tensor M. Either of the above two expressions means that the tensors M_1, · · ·, M_t are structural tensors of a given material symmetry group G whenever the intersection of the symmetry groups of these tensors exactly coincides with G.
Hence, if M_1, · · ·, M_t are structural tensors of the material symmetry group G, then an anisotropic function ϕ(u_1, · · · , u_a; A_1, · · · , A_b) of vectors and 2nd-order tensors relative to the group G, as defined in §5.2.5, may be given by
ϕ(u_1, · · · , u_a; A_1, · · · , A_b) = ϕ̄(u_1, · · · , u_a; A_1, · · · , A_b; M_1, · · · , M_t) . (5.200)
In the above, the latter tensor function ϕ̄ is an extended isotropic function with the structural tensors M_1, · · ·, M_t as additional variables. Thus, once the structural tensors are determined, the relationship (5.200) reduces an anisotropic function to an extended isotropic function with more vector and tensor variables.
It is expected that representations for the extended isotropic function may be derived by using the well-known results for isotropic functions by Wang (1970) and Smith (1971). Here, a perhaps essential point is brought to attention: up to now, results have been available only for conventional isotropic functions relying merely on vector variables and 2nd-order tensor variables. Little is known about unconventional isotropic functions involving at least one tensor variable of order higher than two. Even if the latter turn out to be tractable in the future, it may be expected that complete functional bases and generating sets for them would be extremely large and unduly complicated. Indeed, that appears to be the case even for one of the simplest examples, i.e., the isotropic invariants of the 4th-order elasticity tensor. In this respect, refer to, e.g., Betten and Helisch (1992), Boehler and Onat (1994), and Xiao (1997c) for details.
Nevertheless, for certain simple types of material symmetry groups G, including the cylindrical groups and the triclinic, monoclinic and rhombic crystal classes, each structural tensor M_α may be chosen to be either a vector or a 2nd-order tensor. Then the foregoing extended isotropic function with additional tensor variables, as shown by (5.200), is indeed an isotropic function of vectors and 2nd-order tensors, which has been treated by Wang (1970) and Smith (1971). For any other material symmetry group G, however, it is known that at least one structural tensor has to be of order higher than two. The main reason is as follows.
For a vector v ≠ 0, we have
{Q ∈ Orth | Q ⋆ v = v} = C_∞v .
In the above, C_∞v is the cylindrical group given by (5.5), and here the unit vector n is in the direction of v.
On the other hand, for a 2nd-order symmetric tensor S, we have
{Q ∈ Orth | Q ⋆ S = S} = D_2h , if S = Σ_{i=1}^3 λ_i n_i ⊗ n_i , λ_1 ≠ λ_2 ≠ λ_3 ≠ λ_1 ;
{Q ∈ Orth | Q ⋆ S = S} = D_∞h , if S = λ_1 I + (λ_2 − λ_1) n ⊗ n , λ_1 ≠ λ_2 ;
{Q ∈ Orth | Q ⋆ S = S} = Orth , if S = λ I .
Note that the above three cases correspond to the cases when the three eigenvalues of S are distinct, singly coalescent, and doubly coalescent, respectively. In the above, D_2h is the maximal rhombic (orthotropy) group given by (5.13), and D_∞h is the maximal cylindrical group given by (5.3). Moreover, we have
{Q ∈ Orth | Q ⋆ W = W} = C_∞h
for a 2nd-order skewsymmetric tensor W ≠ O. In the above, C_∞h is the cylindrical group given by (5.6), and here the unit vector n is in the direction of the associated axial vector ω of the skewsymmetric tensor W.
The above results provide the symmetry groups of a vector v, a symmetric 2nd-order tensor S, and a skewsymmetric 2nd-order tensor W, respectively.
Now let (M_1, · · ·, M_t) be any finite set of vectors and 2nd-order tensors given by
(M_1, · · · , M_t) = (v_1, · · · , v_n, S_1, · · · , S_m, W_1, · · · , W_l) ,
where each v_α is a non-zero vector, each S_β is a non-trivial symmetric 2nd-order tensor, and each W_θ is a non-zero skewsymmetric 2nd-order tensor, i.e.,
v_α ≠ 0 , S_β ≠ λ_β I , W_θ ≠ O , α = 1, · · · , n, β = 1, · · · , m, θ = 1, · · · , l .
Then, from the foregoing results for the symmetry groups of vectors and of symmetric and skewsymmetric 2nd-order tensors, we infer that the symmetry group of any finite number of vectors and 2nd-order tensors is nothing else but one of the cylindrical groups or the triclinic, monoclinic and rhombic groups. Note that the former is simply the intersection of some rhombic (orthotropy) groups D_2h and some cylindrical groups C_∞v, C_∞h, D_∞h with different 2-fold symmetry axes n_i and different ∞-fold symmetry axes n. Thus, any material symmetry group G other than the cylindrical groups and the triclinic, monoclinic and rhombic groups cannot be characterized and represented by a set of vectors and 2nd-order tensors, no matter how many they may be. Namely, for each such not so simple material symmetry group G, we cannot find a set of vectors and 2nd-order tensors (M_1, · · · , M_t) such that the condition (5.198) or (5.199) is fulfilled. As a result, at least one structural tensor M_α of order higher than two has to be introduced.
Consequently, the foregoing extended isotropic function with additional tensor variables, as shown by (5.200), is no longer an isotropic function of vectors and 2nd-order tensors, but relies on at least one tensor variable of order higher than two. It seems that little is known about isotropic functions with tensor variables of order higher than two, as remarked in a review article by Rychlewski and Zhang (1991).
Utilizing the idea of the structural tensors described above, results for certain simple types of anisotropic
functions of vectors and 2nd-order tensors have been derived in certain cases, including those relative to
the cylindrical groups and the triclinic, monoclinic, rhombic crystal classes; see, e.g., Boehler (1978, 1979,
1987), Liu (1982), Betten (1987, 1988, 1993), Zheng (1993), et al.
The above classical or traditional methods lead to resolutions of a number of important problems. Here it is not our intention to reproduce the huge body of literature in this and other respects. For details, refer to the monographs mentioned before and the review articles by Rychlewski and Zhang (1991), Zheng (1993), and Betten (1998), as well as the relevant references therein. However, a large variety of topics for anisotropic functions in a general sense remained uninvestigated until very recently, including those relative to the trigonal, tetragonal, hexagonal and cubic crystal classes (in total 24 crystal classes) and all the quasicrystal classes.
and, therefore, the well-known results by Wang (1970) and Smith (1971) can be directly used to derive the desired results for various kinds of anisotropic functions. This idea originated from a summary in Xiao and Guo (1993) and was later shown to be applicable to all kinds of material symmetry groups.
Specifically, let

X = (u1 , · · · , ua ; A1 , · · · , Ab )

for the sake of simplicity. Then, the foregoing general anisotropic tensor function will be designated by ϕ(X). According to the new isotropic extension method, by introducing a set S(X) of well-chosen vector-valued and 2nd-order tensor-valued polynomial functions that are form-invariant under the material symmetry group G, we may establish a new isotropic extension relationship for the anisotropic function ϕ(X) relative to G as follows:
ϕ(X) = ϕ̂(X, S(X)). (5.201)
In the above, the latter tensor function ϕ̂ is indeed an extended isotropic function of the augmented set of vector and 2nd-order tensor variables, i.e., (X, S(X)). The set of additional vector and 2nd-order tensor variables, i.e., S(X), has been determined and presented for each crystal class and for each quasicrystal class. Details may be found in Xiao (1996b, 1997b).
For instance, according to the new isotropic extension method, each anisotropic function ϕ(A) of a symmetric tensor variable A that is invariant or form-invariant under the cubic crystal group Oh (see (5.35) with (5.40)) may be converted to

ϕ(A) = ϕ̂(A, Oh [A], Oh [A²]) ,     (5.202)

with

Oh = ∑_{i=1}^{3} ni ⊗ ni ⊗ ni ⊗ ni .     (5.203)
The latter tensor function in (5.202) is an isotropic function of three symmetric 2nd-order tensors. Thus, from this extended isotropic function it is a straightforward matter to derive a complete representation for the anisotropic function ϕ(A) relative to the cubic group Oh by applying the results of Wang (1970) and Smith (1971). For any other crystal or quasicrystal class the same procedure may be used; the only difference is that the additional 2nd-order tensor or vector variables will differ for different kinds of material symmetry groups.
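To make (5.202) and (5.203) concrete, the following sketch is an illustration only: it assumes that the cube axes ni are the standard basis vectors and that Oh[A] denotes the double contraction (Oh)ijkl Akl, and it checks numerically that the map A ↦ Oh[A] is form-invariant under a cubic rotation, Oh[QAQ^T] = Q Oh[A] Q^T, but not under a generic rotation; the helper apply_Oh is ours.

import numpy as np

# Structural tensor Oh = sum_i ni (x) ni (x) ni (x) ni, with the cube axes ni
# taken as the standard basis vectors (an assumption of this illustration).
n = np.eye(3)
Oh = sum(np.einsum('i,j,k,l->ijkl', n[a], n[a], n[a], n[a]) for a in range(3))

def apply_Oh(A):
    """Double contraction Oh[A], i.e. (Oh)_{ijkl} A_{kl}."""
    return np.einsum('ijkl,kl->ij', Oh, A)

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = B + B.T                                                      # generic symmetric tensor

Q_cubic = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90-degree turn about n3
t = 0.4
Q_generic = np.array([[np.cos(t), -np.sin(t), 0.],
                      [np.sin(t),  np.cos(t), 0.],
                      [0., 0., 1.]])                             # generic rotation about n3

# Form-invariance under the cubic rotation ...
print(np.allclose(apply_Oh(Q_cubic @ A @ Q_cubic.T), Q_cubic @ apply_Oh(A) @ Q_cubic.T))          # True
# ... but not under a generic rotation:
print(np.allclose(apply_Oh(Q_generic @ A @ Q_generic.T), Q_generic @ apply_Oh(A) @ Q_generic.T))  # False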
The results presented in §§5.3-5.4 are derived precisely by applying the above idea and procedure. Details may be found in Bruhns, Xiao and Meyers (1999a).
Next, it is demonstrated in Xiao (1996c, 2000) that representations for arbitrary-order tensor-valued anisotropic or isotropic functions of any finite number of vector and 2nd-order tensor variables are derivable simply from combinations of those for the same type of anisotropic or isotropic functions of not more than four variables. Thus, the decomposition theorems given in the just-mentioned references reduce representation problems involving any finite number of vector and 2nd-order tensor variables to those involving not more than four such variables.
Furthermore, a unified procedure for constructing irreducible non-polynomial representations for scalar-, vector- and 2nd-order tensor-valued isotropic and anisotropic functions is designed in Xiao (1998b) and Xiao, Bruhns and Meyers (1999c, 2000a, b). Usually, these three kinds of tensor functions have to be treated separately by means of different procedures.
With the above new ideas, systematic results for scalar-, vector-, and 2nd-order tensor-valued anisotropic
functions of vectors and 2nd-order tensors are derived in a general sense by these authors. Details may be
found in Xiao (1998b) and Xiao, Bruhns and Meyers (1999c, 2000a, b).
In recent years, 4th-order tensor-valued isotropic functions of a symmetric or a non-symmetric 2nd-order tensor have been taken into consideration; these are useful in formulating hypoelastic constitutive relations in general form (see Truesdell and Noll 1965) and in simulating texture-induced elastic anisotropy (see Böhlke 2001, Böhlke and Bertram 2001a). Results in this respect are given by Böhlke (2001) and Böhlke and Bertram (2001a, b).
A most recent development is to consider minimal representations for vector- and tensor-valued isotropic and anisotropic functions in a suitable sense and, on the other hand, to study further requirements resulting from continuity and differentiability properties. These notions are proposed in recent works by these authors; see Xiao, Bruhns and Meyers (2000c, 2003). It appears that studies in these respects are merely at an initial stage. Results in some particular yet perhaps significant cases are presented in the just-mentioned references and in Xiao, Bruhns and Meyers (1998c). These results are concerned with minimal representations for symmetric tensor-valued anisotropic constitutive functions of one symmetric tensor relative to certain material symmetry groups, as presented in §5.4, for isotropic symmetric tensor-valued functions of one skewsymmetric tensor and one symmetric tensor, and for symmetric tensor-valued transversely isotropic functions of any finite number of vectors and 2nd-order tensors relative to the five cylindrical groups, as well as with the continuously differentiable representation for symmetric tensor-valued isotropic constitutive functions of one symmetric tensor, as given in §5.6.
References
Ball, J.M., 1984. Differentiability properties of symmetric and isotropic functions. Duke Math. J. 51, 699-728.
Başar, Y., Weichert, D., 2000. Nonlinear Continuum Mechanics of Solids. Springer, Berlin etc.
Bertram, A., 1989. Axiomatische Einführung in die Kontinuumsmechanik. Wissenschaftsverlag, Mannheim, Wien, Zürich.
Bertram, A., Olschewski, J., 1991. Formulation of anisotropic linear viscoelastic constitutive laws by a projection method. In: Freed, A., Walker, K. (eds.), High Temperature Constitutive Modeling: Theory and Application, ASME, MD-Vol. 26, AMD-Vol. 121, pp. 129-137.
Bertram, A., Olschewski, J., 1993. Zur Formulierung anisotroper linearer anelastischer Stoffgleichungen mit Hilfe einer Projektionsmethode. Z. Angew. Math. Mech. 73, T401-T403.
Bertram, A., Svendsen, B., 2001. On material objectivity and reduced constitutive equations. Arch. Mechanics 54.
Betten, J., 1984. Interpolation methods for tensor functions. In: X.J.R. Avula, et al. (eds.), Mathematical Modelling in Science and Technology, pp. 52-57. Pergamon Press, New York.
Betten, J., 1988. Applications of tensor functions to the formulation of yield criteria for anisotropic materials. Int. J.
Plasticity 4, 29-46.
Betten, J., 1998. Anwendungen von Tensorfunktionen in der Kontinuumsmechanik anisotroper Materialien. Z.
Angew. Math. Mech. 78, 507-521.
Betten, J., Helisch, W., 1992. Irreduzible Invarianten eines Tensors vierter Stufe. Z. Angew. Math. Mech. 72, 45-57.
Bischhoff-Beiermann, B., Bruhns, O.T., 1994. A physically motivated set of invariants and tensor generators in the
case of transverse isotropy, Int. J. Eng. Sci. 32, 1531-1552.
Boehler, J.P., 1977. On irreducible representations for isotropic scalar functions, Z. Angew. Math. Mech. 57,
323-327.
Boehler, J.P., 1978. Lois de comportement anisotrope des milieux continus, J. de Mechanique 17, 153-190.
Boehler, J.P., 1979. A simple derivation of representations for non-polynomial constitutive equations in some cases of anisotropy. Z. Angew. Math. Mech. 59, 157-167.
Boehler, J.P. (ed.), 1987. Applications of Tensor Functions in Solid Mechanics. CISM Courses and Lectures no. 292, Springer, Wien, etc.
Boehler, J.P., Kirillov, A.A., Onat, E.T., 1994. On the polynomial invariants of the elasticity tensor. J. Elasticity 34,
97-110.
Boer, R. de, 1982. Vektor- und Tensorrechnung für Ingenieure. Springer, Berlin.
Böhlke, T., 2001. Crystallographic Texture Evolution and Elastic Anisotropy: Simulation, Modeling, and Applications. Dissertation, Otto-von-Guericke-Universität Magdeburg, Shaker Verlag, Aachen.
Böhlke, T., Bertram, A., 2001a. The evolution of Hooke’s law due to texture development in FCC polycrystals. Int. J. Solids Structures 38, 9437-9459.
Böhlke, T., Bertram, A., 2001b. The 4th-order isotropic tensor-valued function of a symmetric 2nd-order tensor with application to anisotropic elasto-plasticity. Z. Angew. Math. Mech. 81, 125-128.
Bowen, R.M., Wang, C.C., 1976. Introduction to vectors and tensors. Vol. 1-2. Plenum Press, New York.
Bruhns, O.T., Lehmann, Th., 1993. Elemente der Mechanik I. Vieweg, Braunschweig, Wiesbaden.
Bruhns, O.T., Meyers, A., Xiao, H., 2002. On non-corotational rates of Oldroyd’s type and relevant issues in rate
constitutive formulations. Preprint (to be published).
Bruhns, O.T., Xiao, H., Meyers, A., 1999a. On representations of yield functions for crystals, quasicrystals and transversely isotropic solids. Eur. J. Mech. A/Solids 18, 47-67.
Bruhns, O.T., Xiao, H., Meyers, A., 1999b. Self-consistent Eulerian rate type elastoplasticity models based upon the
logarithmic stress rate. Int. J. Plasticity 15, 479-520.
Bruhns, O.T., Xiao, H., Meyers, A., 2002. New results for the spin of Eulerian triad and the logarithmic spin and rate. Acta Mechanica 155, 95-109.
Carlson, D.E., Hoger, A., 1986. The derivative of a tensor-valued function of a tensor. Q. Appl. Math. 44, 409-423.
Chadwick, P., Vianello, M., Cowin, S.C., 2001. A new proof that the number of linear elastic symmetries is eight. J. Mech. Phys. Solids 49, 2471-2492.
Chou, P.-C., Pagano, N.J., 1992. Elasticity: Tensor, Dyadic, and Engineering Approaches. Dover Publ., Inc., New
York.
Cowin, S.C., Mehrabadi, M.M., 1992. On the structure of the linear anisotropic elastic symmetries. J. Mech. Phys. Solids 40, 1459-1472.
Cowin, S.C., Mehrabadi, M.M., 1995. Anisotropic symmetries of linear elasticity. Appl. Mech. Rev. 48, 247-285.
Doyle, T.C., Ericksen, J.L., 1956. Nonlinear elasticity. Adv. Appl. Mech. 4, 53-115.
Fitzgerald, J.E., 1980. A tensorial Hencky measure of strain and strain rate for finite deformations. J. Appl. Phys. 51, 5111-5115.
Forte, S., Vianello, M., 1996. Symmetry classes for elasticity tensors. J. Elasticity 43, 81-108.
Forte, S., Vianello, M., 1997. Symmetry and harmonic decomposition for photoelasticity tensors. Int. J. Engrg. Sci.
35, 1317-1326.
Green, A.E., Adkins, J.E., 1960. Large Elastic Deformations and Nonlinear Continuum Mechanics. Clarendon,
Oxford.
Gurtin, M.E., 1972. The linear theory of elasticity. In: Flügge, S. (ed.), Handbuch der Physik, Vol. VIa/2. Springer,
Berlin etc.
Hackl, K., 1999. On the representation of anisotropic elastic materials by symmetric irreducible tensors. Contin.
Mech. Thermodyn. 11, 353-369.
Hahn, T., 1987. Space-Group Symmetry. International Tables for Crystallography. vol. A, 2nd edition, Reidel,
Dordrecht.
Haupt, P., 2002. Continuum Mechanics and Theories of Materials. Springer, Berlin, etc.
Hill, R., 1968. Constitutive inequalities for simple materials. J. Mech. Phys. Solids 16, 229-242.
Hill, R., 1978. Aspects of invariance in solid mechanics. Adv. Appl. Mech. 18, 1-75.
Huo, Y.Z., Del Piero, G., 1991. On the completeness of the crystallographic symmetries in the description of the
symmetries of the elastic tensor. J. Elasticity 25, 203-246.
Itskov, M., 2002. The derivative with respect to a tensor: some theoretical aspects and applications. Z. Angew. Math. Mech. 82, 535-544.
Itskov, M., Aksel, N., 2002. A closed-form representation for the derivative of non-symmetric tensor power series.
Int. J. Solids Structures 39, 5963-5978.
Kiral, E., Eringen, A.C., 1990. Constitutive Equations of Electromagnetic-Elastic Crystals, Springer, Berlin, etc.
Klein, F., 1884. Lectures on the Icosahedron. English version, reprinted in 1957, Dover, New York.
Lehmann, Th., 1960. Einige Betrachtungen zu den Grundlagen der Umformtechnik. Ing. Archiv 29, 316-330.
Liu, I.S., 1982. On representations of anisotropic invariants, Int. J. Eng. Sci. 20, 1099-1109.
Lokhin, V.V., Sedov, L.I., 1963. Nonlinear tensor functions of several tensor arguments, Prikl. Mat. Mekh., 29, 393-417.
Man, C.-S., 1994. Remarks on the continuity of the scalar coefficients in the representation H(A) = αI + βA + γA² for isotropic tensor functions. J. Elasticity 34, 229-238.
Man, C.-S., 1995. Smoothness of the scalar coefficients in the representation H(A) = αI + βA + γA² for isotropic tensor functions of class C^r. J. Elasticity 40, 165-182.
Man, C.-S., Guo, Z.-H., 1993. A basis-free formula for time rate of Hill’s strain tensors. Int. J. Solids Struct. 30,
2819-2842.
Marsden, J.E., Hughes, T.J.R., 1983. Mathematical Foundations of Elasticity. Prentice Hall, Inc., Englewood Cliffs,
New Jersey.
Mehrabadi, M.M., Cowin, S.C., 1990. Eigentensors of linear anisotropic materials. Q. J. Mech. Appl. Math. 43,
15-41.
Menzel, A., Steinmann, P., 2003. A view on anisotropic finite hyper-elasticity. Eur. J. Mech. A/Solids 22, 71-87.
Meyers, A., 1999b. On the consistency of some Eulerian strain rates. Z. Angew. Math. Mech. 79, 171-177.
Meyers, A., Schieße, P., Bruhns, O.T., 2000. Some comments on objective rates of symmetric Eulerian tensors with application to Eulerian strain rate. Acta Mechanica 139, 91-103.
Miehe, C., 1993. Computations of isotropic tensor functions. Commun. Numer. Methods Eng. 9, 889-896.
Miehe, C., 1994. Aspects of the formulation and FE implementation of large strain isotropic elasticity. Int. J. Numer.
Methods Eng. 37, 1981-2004.
Miehe, C., 1997. Comparison of two algorithms for the computation of fourth-order isotropic tensor functions. Computers and Structures 66, 37-43.
Mises, R. von, 1928. Mechanik der plastischen Formänderung von Kristallen. Z. Angew. Math. Mech. 8, 161-185.
Morman, K.N., 1986. The generalized strain measure with application to non-homogeneous deformations in rubber-
like solids. J. Appl. Mech. 53, 726-728.
Ogden, R.W., 1984. Nonlinear Elastic Deformations. Ellis Horwood Ltd., Chichester.
Pennisi, S., Trovato, M., 1987. On irreducibility of Professor G.F. Smith’s representations for isotropic functions, Int.
Richter, H., 1949. Verzerrungstensor, Verzerrungsdeviator und Spannungstensor bei endlichen Formänderungen. Z. Angew. Math. Mech. 29, 65-75.
Rychlewski, J., 1983. CEIIINOSSSTTUV: Mathematical structure of elastic materials (in Russian). Preprint No.
217, Inst. Mech. Probl. USSR Acad. Sci., Moscow.
Rychlewski, J., 1984a. On Hooke’s law. Prikl. Matem. Mekh. 48, 303-314.
Rychlewski, J., 1984b. Elastic energy decompositions and limit criteria. Advances in Mechanics 7 (3), 51-80.
Rychlewski, J., 1991. Symmetry of Causes and Effects. Wydawnictwo Naukowe PWN, Warsaw.
Rychlewski, J., 1995. Unconventional approach to linear elasticity. Archives of Mechanics 47, 149-171.
Rychlewski, J., 2000. A qualitative approach to Hooke’s tensor. Part I. Archives of Mechanics 52, 737-759.
Rychlewski, J., Zhang, J.M., 1991. On representations of tensor functions: A review. Advances in Mechanics 14 (1),
75-94.
Scheidler, M., 1991. Time rates of generalized strain tensors. Part I: Component formulas. Mech. Mater. 11,
199-210.
Scheidler, M., 1991. Time rates of generalized strain tensors. Part II: Approximate basis-free formulas. Mech. Mater.
11, 211-219.
Senechal, M., 1995. Quasicrystals and Geometry. Cambridge University Press, Cambridge.
Serrin, J.B., 1959. The derivation of stress-deformation relations for a Stokesian fluid. J. Math. Mech. 8, 459-468.
Seth, B.R., 1964. Generalized strain measures with applications to physical problems. In: Reiner, M. and Abir,
D. (eds.), Second-Order Effects in Elasticity, Plasticity and Fluid Dynamics, pp. 162-172. Pergamon Press,
Oxford.
Šilhavý, M., 1997. The Mechanics and Thermodynamics of Continuous Media. Springer, Berlin.
Simo, J.C., Taylor, R.L., 1991. Quasi-incompressible finite elasticity. Continuum basis and numerical algorithms. Comput. Meth. Appl. Mech. Eng. 85, 273-310.
Smith, G.F., 1962. On the yield function for anisotropic materials. Q. Appl. Math. 20, 241-247.
Smith, G.F., Rivlin, R.S., 1958. The strain-energy function for anisotropic elastic materials. Trans. Am. Math. Soc. 88, 175-193.
Smith, G.F., 1971. On isotropic functions of symmetric tensors, skewsymmetric tensors and vectors, Int. J. Eng. Sci.
9, 899-916.
Smith, G.F., 1994. Constitutive Equations for Anisotropic and Isotropic Materials. Elsevier, New York.
Spencer, A.J.M., 1971. Theory of invariants. In: Eringen, A.C. (ed.), Continuum Physics, vol. I. Academic Press, New York.
Stumpf, H., Hoppe, U., 1997. The application of tensor algebra on manifolds to nonlinear continuum mechanics. Z.
Angew. Math. Mech. 77, 327-339.
Sutcliffe, S., 1992. Spectral decomposition of the elasticity tensor. J. Appl. Mech. 59, 762-773.
Svendsen, B., 1994. On the representation of constitutive relations using structural tensors. Int. J. Engng Sci. 32,
1889-1892.
Svendsen, B., Bertram, A., 1999. On frame-indifference and form-invariance in constitutive theory. Acta Mechanica
132, 195-207.
Svendsen, B., Reese, S., 2003. On the modelling of internal variables as structural tensors in anisotropic inelasticity.
Int. J. Plasticity 15.
Thomson, W. (Lord Kelvin), 1856. On six principal strains of an elastic solid. Phil. Trans. Roy. Soc. 166, 495-498.
Thomson, W. (Lord Kelvin), 1878. Elasticity. In: Encyclopaedia Britannica, Adam and Charles Black, Edinburgh.
Ting, T.C.T., 1985. Determination of C 1/2 and C −1/2 and more general isotropic tensor functions. J. Elasticity 15,
319-323.
Ting, T.C.T., 1987. Invariants of anisotropic elastic constants. Q. J. Mech. Appl. Math. 40, 431-448.
Truesdell, C.A., Noll, W., 1992. The nonlinear field theories of mechanics. In: Flügge, S. (ed.), Handbuch der Physik,
vol. III/3. 2nd edition, Springer, Berlin.
Vainshtein, B.K., 1994. Modern Crystallography 1: Fundamentals of Crystals. Springer, Berlin, etc.
Wang, C.-C., 1970. A new representation theorem for isotropic functions, Parts I and II. Arch. Rat. Mech. Anal. 36, 166-223; ibid. 43, 392-395.
Wang, W.B., Duan, Z.P., 1991. On the invariant representation of spin tensors with applications. Int. J. Solids Struct.
27, 329-341.
Xiao, H., 1995a. Unified explicit basis-free expressions for time rate and conjugate stress of an arbitrary Hill’s strain.
Int. J. Solids Structures 32, 3327-3340.
Xiao, H., 1995b. General irreducible representations for constitutive equations of anisotropic elastic crystals and
transversely isotropic elastic solids. J. Elasticity 39, 47-73.
Xiao, H., 1995c. Invariant characteristic representations for classical and micropolar anisotropic elasticity tensors. J.
Elasticity 40, 233-265.
Xiao, H., 1996a. On minimal representations for constitutive equations of anisotropic elastic materials. J. Elasticity
45, 13-32.
Xiao, H., 1996b. On isotropic extension of anisotropic tensor functions, Z. Angew. Math. Mech. 76, 205-214.
Xiao, H., 1996c. Two general representation theorems for arbitrary-order-tensor-valued isotropic and anisotropic
tensor functions of vectors and second order tensors, Z. Angew. Math. Mech. 76, 151-162.
Xiao, H., 1996d. On anisotropic scalar functions of a single symmetric tensor. Proc. Roy. Soc. London A452, 1545-1561.
Xiao, H., 1997a. On constitutive equations of Cauchy’s elastic solids: All kinds of crystals and quasicrystals. J.
Elasticity 48, 241-283.
Xiao, H., 1997b. A unified theory of representations for scalar-, vector- and 2nd order tensor-valued anisotropic
functions of vectors and second order tensors, Archives of Mechanics 50, 275-313.
Xiao, H., 1997c. On isotropic invariants of the elasticity tensor. J. Elasticity 46, 115-149.
Xiao, H., 1998a. On anisotropic invariants of a single symmetric tensor: crystal classes, quasicrystal classes and
others. Proc. Roy. Soc. London A454, 1217-1240.
Xiao, H., 1998b. On scalar-, vector- and second-order tensor-valued anisotropic functions of vectors and second-order
tensors relative to all kinds of subgroups of the transverse isotropy group C∞h . Phil. Trans. Roy. Soc. London
A 356, 3087-3122; see also: Archives of Mechanics 50, 281-319.
Xiao, H., 1998c. On symmetries and anisotropies of classical micropolar linear elasticities. A new method based
upon a complex vector basis and some systematic results. J. Elasticity 49, 129-162.
Xiao, H., 1999. A new representation theorem for elastic constitutive equations of cubic crystals. J. Elasticity 53,
37-45.
Xiao, H., 2000. Further results on general representation theorems for arbitrary-order-tensor-valued isotropic and
anisotropic tensor functions of vectors and second order tensors, Z. Angew. Math. Mech. 80, 497-503.
Xiao, H., Bruhns, O.T., Meyers, A., 1997. Logarithmic strain, logarithmic spin and logarithmic rate. Acta Mechanica
124, 89-105.
Xiao, H., Bruhns, O.T., Meyers, A., 1998a. Strain rates and material spins. J. Elasticity 52, 1-41.
Xiao, H., Bruhns, O.T., Meyers, A., 1998b. On objective corotational rates and their defining spin tensors. Int. J.
Solids Structures 35, 4001-4014.
Xiao, H., Bruhns, O.T., Meyers, A., 1998c. On symmetric tensor-valued isotropic functions of a symmetric tensor
and a skewsymmetric tensor and related transversely isotropic functions. Archives of Mechanics 50, 731-741.
Xiao, H., Bruhns, O.T., Meyers, A., 1998d. Objective corotational rates and unified work-conjugacy relation between Lagrangian and Eulerian stress and strain measures. Archives of Mechanics 50, 1015-1045.
Xiao, H., Bruhns, O.T., Meyers, A., 1999a. Existence and uniqueness of the integrable-exactly hypoelastic equation τ̊∗ = λ(tr D)I + 2µD and its significance to finite inelasticity. Acta Mechanica 138, 31-50.
Xiao, H., Bruhns, O.T., Meyers, A., 1999b. On anisotropic invariants of N symmetric second-order tensors: crystal
classes and quasi-crystal classes as subgroups of the cylindrical group D∞h . Proc. Roy. Soc. London A 455,
1993-2020.
Xiao, H., Bruhns, O.T., Meyers, A., 1999c. Irreducible representations for constitutive equations of anisotropic solids
I: crystal and quasicrystal classes D2mh , D2m and C2mv . Archives of Mechanics 51, 559-603.
Xiao, H., Bruhns, O.T., Meyers, A., 2000a. Irreducible representations for constitutive equations of anisotropic solids
II: crystal and quasicrystal classes D2m+1d , D2m+1 and C2m+1v . Archives of Mechanics 52, 55-88.
Xiao, H., Bruhns, O.T., Meyers, A., 2000b. Irreducible representations for constitutive equations of anisotropic solids
III: crystal and quasicrystal classes D2m+1h and D2md . Archives of Mechanics 52, 347-395.
Xiao, H., Bruhns, O.T., Meyers, A., 2000c. Minimal generating sets for symmetric 2nd-order tensor-valued trans-
versely isotropic functions of vectors and 2nd-order tensors. Z. Angew. Math. Mech. 80, 565-569.
Xiao, H., Bruhns, O.T., Meyers, A., 2000d. A consistent finite elastoplasticity theory combining additive and multiplicative decomposition of the stretching and the deformation gradient. Int. J. Plasticity 16, 143-177.
Xiao, H., Bruhns, O.T., Meyers, A., 2002. Basic issues concerning finite strain measures and isotropic stress-
deformation relations. J. Elasticity 67, 1-23.
Xiao, H., Guo, Z.-H., 1993. A general representation theorem for isotropic tensor functions. In: Chien, W.Z., Guo,
Z.-H. and Guo, Y.Z. (eds.), Proceedings of 2nd Conference on Nonlinear Mechanics, pp. 206-210. Peking
University Press, Beijing.
Zhang, J.M., 1991. Material anisotropy and plasticity formulations. Eur. J. Mech. A/Solids 10, 157-174.
Zhang, J.M., Rychlewski, J., 1990. Structural tensors for anisotropic solids. Arch. Mechanics 42, 267-277.
Zheng, Q.S., 1993. On transversely isotropic, orthotropic and relatively isotropic functions of symmetric tensors, skewsymmetric tensors and vectors. Part I-V. Int. J. Engng Sci. 31, 1399-1453.
Zheng, Q.S., 1994. Theory of representations for tensor functions: A unified invariant approach to constitutive
equations, Appl. Mech. Rev. 47, 545-587.
Zheng, Q.S., Spencer, A.J.M., 1993. Tensors which characterize anisotropies. Int. J. Engng Sci. 31, 679-693.