
LECTURE 21: TENSORS AND DIFFERENTIAL FORMS

1. Tensors as multi-linear maps


¶ Multi-linear maps.
Let V1 , · · · , Vk be finite dimensional vector spaces.
Definition 1.1. A function T : V1 × · · · × Vk → R is called multi-linear if it is linear in each entry,
i.e. for each i and any fixed vectors v1 ∈ V1 , · · · , vi−1 ∈ Vi−1 , vi+1 ∈ Vi+1 , · · · , vk ∈ Vk , the map
T_i : V_i → R, v_i ↦ T(v_1, · · · , v_i, · · · , v_k)
is linear.

Note that if T_1, T_2 are two multi-linear maps on V_1 × · · · × V_k, then so is any linear combination of them.
Thus the set of all multi-linear maps on V_1 × · · · × V_k is a vector space.
Example. For any f 1 ∈ V1∗ , · · · , f k ∈ Vk∗ (the dual spaces), we define
f^1 ⊗ · · · ⊗ f^k : V_1 × · · · × V_k → R, (v_1, · · · , v_k) ↦ f^1(v_1) · · · f^k(v_k).
Obviously f 1 ⊗ · · · ⊗ f k is a multi-linear map. Note that by definition, for each 1 ≤ i ≤ k and
λ ∈ R, we have
f 1 ⊗ · · · ⊗ f i−1 ⊗ λf i ⊗ f i+1 ⊗ · · · ⊗ f k = λf 1 ⊗ · · · ⊗ f i−1 ⊗ f i ⊗ f i+1 ⊗ · · · ⊗ f k .
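As a quick illustration (a minimal NumPy sketch, not part of the notes; the covectors and vectors are made-up data), one can represent covectors by their coefficient arrays and check both the multi-linearity of f^1 ⊗ f^2 and the scaling identity above.

```python
import numpy as np

# Covectors f1 on R^3 and f2 on R^2, represented by coefficient arrays,
# so that f1(v) = f1 @ v and f2(w) = f2 @ w.
f1 = np.array([1.0, -2.0, 3.0])
f2 = np.array([0.5, 4.0])

def tensor_product(f, g):
    """The 2-tensor (f ⊗ g)(v, w) = f(v) g(w), returned as a Python function."""
    return lambda v, w: (f @ v) * (g @ w)

T = tensor_product(f1, f2)

v  = np.array([1.0, 0.0, 2.0])
v2 = np.array([-1.0, 1.0, 0.0])
w  = np.array([2.0, -3.0])
lam = 7.0

# Linearity in the first entry: T(v + lam*v2, w) = T(v, w) + lam*T(v2, w).
assert np.isclose(T(v + lam * v2, w), T(v, w) + lam * T(v2, w))

# Scaling identity: f1 ⊗ (lam*f2) = lam * (f1 ⊗ f2).
S = tensor_product(f1, lam * f2)
assert np.isclose(S(v, w), lam * T(v, w))
print("tensor product checks passed")
```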

Not surprisingly, any multi-linear map is a linear combination of these special maps:
Theorem 1.2. Let {f_i^1, · · · , f_i^{n(i)}} be a basis of V_i^∗. Then the set of multi-linear maps
{f_1^{i_1} ⊗ f_2^{i_2} ⊗ · · · ⊗ f_k^{i_k} | 1 ≤ i_j ≤ n(j)}
form a basis of the vector space of multi-linear maps on V_1 × · · · × V_k. In particular, if V_1 = · · · = V_k = V and dim V = n, then dim ⊗^k V^∗ = n^k.
Proof. We will denote by {e^i_1, · · · , e^i_{n(i)}} the basis of V_i that is dual to the basis {f_i^1, · · · , f_i^{n(i)}} of
V_i^∗. For any multi-index I = (i_1, · · · , i_k), we will denote F^I = f_1^{i_1} ⊗ f_2^{i_2} ⊗ · · · ⊗ f_k^{i_k}. Then the fact
F^I(e^1_{j_1}, · · · , e^k_{j_k}) = δ^{i_1,··· ,i_k}_{j_1,··· ,j_k}
implies that the multi-linear maps F^I's are linearly independent.


Moreover, for any multi-linear map T on V_1 × · · · × V_k, if we let T_I = T(e^1_{i_1}, · · · , e^k_{i_k}) and
consider the multi-linear map
S = T − Σ_I T_I F^I,
then S(e^1_{j_1}, · · · , e^k_{j_k}) = 0 for any multi-index J = (j_1, · · · , j_k). It follows from multi-linearity that
S ≡ 0. In other words, T = Σ_I T_I F^I is a linear combination of these F^I's.
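As a sanity check of the expansion T = Σ_I T_I F^I in the bilinear case k = 2 (an illustrative sketch using the standard bases of R^2 and R^3; not part of the lecture), the coefficients of a bilinear map given by a matrix A are exactly the entries A_{ij}, and summing them against the F^I's recovers T:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))      # T(v, w) = v^T A w is a bilinear map on R^2 x R^3

def T(v, w):
    return v @ A @ w

# Standard bases: e1_i of R^2 and e2_j of R^3, dual to f_1^i(v) = v[i], f_2^j(w) = w[j].
e1 = np.eye(2)
e2 = np.eye(3)

# Coefficients T_I = T(e1_i, e2_j); they coincide with the matrix entries A_ij.
coeffs = np.array([[T(e1[i], e2[j]) for j in range(3)] for i in range(2)])
assert np.allclose(coeffs, A)

# Reconstruct T as sum_I T_I F^I, where F^I(v, w) = v[i] * w[j], and compare.
v, w = rng.normal(size=2), rng.normal(size=3)
reconstructed = sum(coeffs[i, j] * v[i] * w[j] for i in range(2) for j in range(3))
assert np.isclose(reconstructed, T(v, w))
print("T = sum_I T_I F^I verified for a random bilinear map")
```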



Notation: We denote the vector space of multi-linear maps on V1 × · · · × Vk by V1∗ ⊗ · · · ⊗ Vk∗ .


Any element in this space is called a k-tensor.

Note that if T ∈ V_1^∗ ⊗ · · · ⊗ V_k^∗ and S ∈ V_{k+1}^∗ ⊗ · · · ⊗ V_{k+l}^∗, then we can define the tensor
product T ⊗ S ∈ V_1^∗ ⊗ · · · ⊗ V_{k+l}^∗ to be the tensor
(T ⊗ S)(v_1, · · · , v_{k+l}) = T(v_1, · · · , v_k) S(v_{k+1}, · · · , v_{k+l}).
By definition it is easy to see that ⊗ is a bilinear map and is associative:
(T ⊗ S) ⊗ R = T ⊗ (S ⊗ R).
In fact, we may simply regard elements in Vi∗ as 1-tensors on Vi . Then the k-tensor f 1 ⊗ · · · ⊗ f k
that we used above is the tensor product of the 1-tensors f 1 , · · · , f k . [One may need to check the
consistency of the definition. What do I mean by consistency?]
Similarly we can define V1 ⊗ · · · ⊗ Vk as the vector space of multi-linear maps on V1∗ × · · · × Vk∗ .
By definition it has a basis {e^1_{i_1} ⊗ e^2_{i_2} ⊗ · · · ⊗ e^k_{i_k} | 1 ≤ i_j ≤ n(j)}.
Remark. For abstract vector spaces (which could be infinite dimensional), one can't regard V as
(V^∗)^∗, but one can still define the tensor product algebraically. More precisely, we can define V ⊗ W
to be the quotient space V ⊗ W = F(V × W)/∼, where F(V × W) is the (infinite dimensional)
free vector space over V × W, and ∼ is the equivalence relation generated by
(c_1 v_1 + c_2 v_2, w) ∼ c_1(v_1, w) + c_2(v_2, w) and (v, c_1 w_1 + c_2 w_2) ∼ c_1(v, w_1) + c_2(v, w_2).
For any finite dimensional vector spaces V and W, one has a natural linear isomorphism V ⊗ W^∗ ≅ L(W, V),
where L(W, V) is the set of all linear maps from W to V. (Details will be left as an exercise.)

¶ Tensor powers of a vector space.


Now let V be an n-dimensional vector space, and V ∗ its dual space. We will call
⊗l,k V := (⊗l V ) ⊗ (⊗k V ∗ )
the space of (l, k)-tensors on V . In other words, T ∈ ⊗l,k V if and only if
T = T (β 1 , · · · , β l , v1 , · · · , vk )
is multi-linear with respect to each β i ∈ V ∗ and each vj ∈ V .
Remark. Note that ⊗1,0 V = V and ⊗0,1 V = V ∗ . We will also abbreviate ⊗k,0 V = ⊗k V . For the
case k = 0, we denote ⊗0 V = R.

Next we will define a very useful operation on (l, k)-tensors.


Definition 1.3. For any 1 ≤ r ≤ l and 1 ≤ s ≤ k, we define the (r, s)-contraction to be the map
C^r_s : ⊗^{l,k} V → ⊗^{l−1,k−1} V given by
C^r_s(T)(β^1, · · · , β^{l−1}, v_1, · · · , v_{k−1}) = Σ_i T(β^1, · · · , β^{r−1}, f^i, β^r, · · · , β^{l−1}, v_1, · · · , v_{s−1}, e_i, v_s, · · · , v_{k−1}),
where {e_1, · · · , e_n} is a basis of V, and {f^1, · · · , f^n} the dual basis.



One should check that this definition is independent of the choice of the basis {e_i} of V.
Moreover, C^r_s(T) is the (l − 1, k − 1)-tensor obtained from T by pairing the rth vector in T with
the sth co-vector in T:
Lemma 1.4. Let T be an (l, k)-tensor. For 1 ≤ r ≤ l, 1 ≤ s ≤ k, we have
(1) The definition of C^r_s is independent of the choice of the basis {e_i} of V.
(2) For any v_1, · · · , v_l ∈ V and β^1, · · · , β^k ∈ V^∗,
C^r_s(v_1 ⊗ · · · ⊗ v_l ⊗ β^1 ⊗ · · · ⊗ β^k) = β^s(v_r) v_1 ⊗ · · · ⊗ v̂_r ⊗ · · · ⊗ v_l ⊗ β^1 ⊗ · · · ⊗ β̂^s ⊗ · · · ⊗ β^k,
where ˆ· means "remove the corresponding entry".

Proof. Left as an exercise. 


Example. For example, if v, w ∈ V and α, β, γ ∈ V^∗, one has
C^1_2(v ⊗ w ⊗ α ⊗ β ⊗ γ) = β(v) w ⊗ α ⊗ γ.
To see this, we compute by definition:
C^1_2(v ⊗ w ⊗ α ⊗ β ⊗ γ)(β^1, v_1, v_2) = Σ_i (v ⊗ w ⊗ α ⊗ β ⊗ γ)(f^i, β^1, v_1, e_i, v_2)
= Σ_i f^i(v) β^1(w) α(v_1) β(e_i) γ(v_2)
= (Σ_i f^i(v) β(e_i)) β^1(w) α(v_1) γ(v_2)
= β(v) β^1(w) α(v_1) γ(v_2)
= (β(v) w ⊗ α ⊗ γ)(β^1, v_1, v_2).
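The same contraction can be checked numerically (an illustrative NumPy sketch, not part of the notes): view v ⊗ w ⊗ α ⊗ β ⊗ γ as a 5-index array and sum the first contravariant index against the second covariant index.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
v, w = rng.normal(size=n), rng.normal(size=n)                 # vectors in V = R^n
alpha, beta, gamma = (rng.normal(size=n) for _ in range(3))   # covectors in V*

# The (2,3)-tensor v ⊗ w ⊗ α ⊗ β ⊗ γ as an array T[a,b,i,j,k].
T = np.einsum('a,b,i,j,k->abijk', v, w, alpha, beta, gamma)

# C^1_2: pair the 1st contravariant index (a) with the 2nd covariant index (j).
C12 = np.einsum('abiak->bik', T)

# Expected answer from Lemma 1.4: beta(v) * (w ⊗ α ⊗ γ).
expected = (beta @ v) * np.einsum('b,i,k->bik', w, alpha, gamma)
assert np.allclose(C12, expected)
print("C^1_2(v⊗w⊗α⊗β⊗γ) = β(v) w⊗α⊗γ verified")
```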

2. Linear p-forms
¶ Symmetric and anti-symmetric tensors.
Now let’s fix a vector space V , and consider a k-tensor T on V , i.e. T ∈ ⊗k V ∗ = ⊗0,k V .
Definition 2.1. Let T ∈ ⊗k V ∗ be a k-tensor on V .
(1) We say T is symmetric if for any permutation σ of (1, 2, · · · , k),
T (v1 , · · · , vk ) = T (vσ(1) , · · · , vσ(k) ).
(2) We say T is alternating (or a linear k-form) if it is skew-symmetric, i.e.
T(v_1, · · · , v_i, · · · , v_j, · · · , v_k) = −T(v_1, · · · , v_j, · · · , v_i, · · · , v_k)
for all v_1, · · · , v_k ∈ V and any 1 ≤ i ≠ j ≤ k.
Example. • An inner product on V is a positive definite symmetric 2-tensor.
• det is a linear n-form on R^n.

We will denote the vector space of k-forms by Λk V ∗ . Note that Λk V ∗ is not a brand new
space: it is a linear subspace of ⊗k V ∗ . We will set Λ1 V ∗ = ⊗1 V ∗ = V ∗ and Λ0 V ∗ = R.
Recall that a permutation σ ∈ Sk is called even or odd, depending on whether it is expressible
as a product of an even or odd number of simple transpositions. For any k-tensor T and any
σ ∈ Sk , we define another k-tensor T σ by
T σ (v1 , · · · , vk ) = T (vσ(1) , · · · , vσ(k) ).
Clearly
• For any k-tensor T, (T^σ)^π = T^{π◦σ} for all σ, π ∈ S_k.
• A k-tensor T is symmetric if and only if T σ = T for all σ ∈ Sk .
• A k-tensor T is a k-form if and only if T σ = (−1)σ T for all σ ∈ Sk , where (−1)σ = 1 if σ
is even, and (−1)σ = −1 if σ is odd.

¶ Anti-symmetrization.
For any k-tensor T on V , we consider the anti-symmetrization map
Alt(T) = (1/k!) Σ_{π∈S_k} (−1)^π T^π.

Lemma 2.2. The map Alt is a projection from ⊗k V ∗ to Λk V ∗ , i.e. it is a linear map satisfying
(1) For any T ∈ ⊗k V ∗ , Alt(T ) ∈ Λk V ∗ .
(2) For any T ∈ Λk V ∗ , Alt(T ) = T .
Proof. (1) For any T ∈ ⊗k V ∗ and any σ ∈ Sk ,
[Alt(T)]^σ = (1/k!) Σ_{π∈S_k} (−1)^π (T^π)^σ = (−1)^σ (1/k!) Σ_{π∈S_k} (−1)^{σ◦π} T^{σ◦π} = (−1)^σ Alt(T).

(2) If T ∈ Λ^k V^∗, then each summand (−1)^π T^π equals T. So Alt(T) = T since |S_k| = k!.
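Representing a k-tensor on R^n by the array of its values on basis vectors, Alt can be implemented literally from the formula; the sketch below (illustrative, not from the notes) checks both parts of Lemma 2.2 numerically.

```python
import itertools
import math
import numpy as np

def alt(T):
    """Anti-symmetrization Alt(T) = (1/k!) sum_pi sign(pi) T^pi of a k-index array T."""
    k = T.ndim
    out = np.zeros_like(T)
    for perm in itertools.permutations(range(k)):
        # Sign of the permutation, computed by counting inversions.
        inv = sum(1 for a, b in itertools.combinations(range(k), 2) if perm[a] > perm[b])
        out = out + (-1) ** inv * np.transpose(T, perm)
    return out / math.factorial(k)

rng = np.random.default_rng(2)
T = rng.normal(size=(3, 3, 3))   # an arbitrary 3-tensor on R^3 (values on basis vectors)
A = alt(T)

# (1) Alt(T) is alternating: swapping two arguments flips the sign.
assert np.allclose(np.swapaxes(A, 0, 1), -A)
# (2) Alt fixes alternating tensors, so Alt is a projection: Alt(Alt(T)) = Alt(T).
assert np.allclose(alt(A), A)
print("Alt is a projection onto alternating tensors")
```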
We will need
Lemma 2.3. Let T, S, R be k-, l-, and m-forms respectively. Then
(1) Alt(T ⊗ S) = (−1)kl Alt(S ⊗ T ).
(2) Alt(Alt(T ⊗ S) ⊗ R) = Alt(T ⊗ S ⊗ R) = Alt(T ⊗ Alt(S ⊗ R)).
Proof. Exercise. 

¶ The wedge product.


Now we can define a “product operation” for linear forms:
Definition 2.4. The wedge product of T ∈ Λk V ∗ and S ∈ Λl V ∗ is the (k + l)-form
T ∧ S = ((k + l)! / (k! l!)) Alt(T ⊗ S).
The wedge product operation satisfies

Proposition 2.5. The wedge product operation ∧ : (Λk V ∗ ) × (Λl V ∗ ) → Λk+l V ∗ is


(1) Bi-linear: (T, S) 7→ T ∧ S is linear in T and in S.
(2) Anti-commutative: T ∧ S = (−1)kl S ∧ T .
(3) Associative: (T ∧ S) ∧ R = T ∧ (S ∧ R).
Proof. (1) follows from Definition 2.4. (2) follows from Lemma 2.3(1). (3) follows from Definition
2.4 and Lemma 2.3(2). 
So it makes sense to talk about wedge products of three or more linear forms. For example,
if T ∈ Λk V ∗ , S ∈ Λl V ∗ and R ∈ Λm V ∗ , then we have
T ∧ S ∧ R = ((k + l + m)! / (k! l! m!)) Alt(T ⊗ S ⊗ R).
One can easily extend this to wedge products of more than three linear forms. In particular, by
definition we have: if f 1 , · · · , f k ∈ V ∗ , then
f 1 ∧ · · · ∧ f k = k!Alt(f 1 ⊗ · · · ⊗ f k ).
As a consequence,
Proposition 2.6. For any f 1 , · · · , f k ∈ V ∗ and v1 , · · · , vk ∈ V ,
(f 1 ∧ · · · ∧ f k )(v1 , · · · , vk ) = det(f i (vj )).
Proof. We have
(f^1 ∧ · · · ∧ f^k)(v_1, · · · , v_k) = k! Alt(f^1 ⊗ · · · ⊗ f^k)(v_1, · · · , v_k)
= Σ_{σ∈S_k} (−1)^σ f^1(v_{σ(1)}) · · · f^k(v_{σ(k)})
= det((f^i(v_j))).
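Proposition 2.6 is easy to confirm numerically (an illustrative sketch with random data, not part of the notes): evaluate the permutation sum for random covectors and vectors in R^4 and compare it with det(f^i(v_j)).

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
k, n = 3, 4
F = rng.normal(size=(k, n))   # rows are the covectors f^1, ..., f^k on R^n
V = rng.normal(size=(k, n))   # rows are the vectors v_1, ..., v_k in R^n

def sign(perm):
    """Sign of a permutation, via the number of inversions."""
    inv = sum(1 for a, b in itertools.combinations(range(len(perm)), 2) if perm[a] > perm[b])
    return -1.0 if inv % 2 else 1.0

# (f^1 ∧ ... ∧ f^k)(v_1, ..., v_k) = sum_sigma (-1)^sigma f^1(v_sigma(1)) ... f^k(v_sigma(k))
wedge_value = sum(
    sign(sigma) * np.prod([F[i] @ V[sigma[i]] for i in range(k)])
    for sigma in itertools.permutations(range(k))
)

M = F @ V.T                   # the matrix with entries M[i, j] = f^i(v_j)
assert np.isclose(wedge_value, np.linalg.det(M))
print("(f^1∧...∧f^k)(v_1,...,v_k) = det(f^i(v_j)) verified")
```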



¶ The vector space of linear k-forms.
Now we are ready to prove
Theorem 2.7. Let {f^1, · · · , f^n} be a basis of V^∗. Then the set of k-forms
{f^{i_1} ∧ f^{i_2} ∧ · · · ∧ f^{i_k} | 1 ≤ i_1 < i_2 < · · · < i_k ≤ n}
form a basis of Λ^k V^∗. In particular, dim Λ^k V^∗ = n!/(k!(n − k)!).


Proof. Again we denote by {e_1, · · · , e_n} the dual basis in V. For any multi-index I = (i_1, · · · , i_k)
with i_1 < · · · < i_k, we let Ω^I = f^{i_1} ∧ · · · ∧ f^{i_k}. Then for any multi-index J = (j_1, · · · , j_k) with
j_1 < · · · < j_k,
Ω^I(e_{j_1}, · · · , e_{j_k}) = det((f^{i_r}(e_{j_s}))_{1≤r,s≤k}) = δ^{i_1,··· ,i_k}_{j_1,··· ,j_k}.
It follows that these Ω^I's are linearly independent.
Moreover, since any T ∈ Λ^k V^∗ is a k-tensor, we can write T = Σ_I T_I F^I, where I = (i_1, · · · , i_k)
runs over all 1 ≤ i_1, · · · , i_k ≤ n, and F^I is as in the proof of Theorem 1.2. Note that Ω^I = k! Alt(F^I).
Here the indices in I need not be increasing, but we have: Ω^I = 0 if two indices in I are equal, and
Ω^I = ±Ω^{I′} if I contains no equal indices, where I′ is the re-arrangement of I in increasing order. So
T = Alt(T) = Σ_{all I} T_I Alt(F^I) = Σ_{I increasing} ( (1/k!) Σ_{I′ = I as sets} (±T_{I′}) ) Ω^I
is a linear combination of the Ω^I with I's being only increasing indices.


Remark. As an immediate consequence, we see
• dim Λn (V ∗ ) = 1.
– So any n-form on an n-dimensional vector space V is a multiple of the non-trivial
n-form “det”.
• Moreover, for k > n, Λk (V ∗ ) = 0.

¶ The interior product and the pull-back.


Finally we define two more operators on linear k-forms.
Definition 2.8. The interior product of a vector v ∈ V with a linear k-form α ∈ Λ^k(V^∗) is the (k − 1)-covector
(ι_v α)(X_1, · · · , X_{k−1}) := α(v, X_1, · · · , X_{k−1}).
Definition 2.9. Let L : W → V be linear. The pullback L^∗ : Λ^k(V^∗) → Λ^k(W^∗) is defined to be
(L^∗α)(X_1, · · · , X_k) := α(L(X_1), · · · , L(X_k)).

The following proposition will be important in the rest of this semester. The proof is left as
an exercise.
Proposition 2.10. Let α be a linear k-form on V and β a linear l-form on V . Then
(1) For any v ∈ V , ιv ιv α = 0.
(2) For any v ∈ V , ιv (α ∧ β) = (ιv α) ∧ β + (−1)k α ∧ ιv β.
(3) For any linear map L : W → V , L∗ (α ∧ β) = L∗ α ∧ L∗ β.
Proof. Left as an exercise. 
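For instance, property (3) can be checked numerically for two 1-forms on V = R^3 and a linear map L : R^2 → R^3, using the 2-form formula (α ∧ β)(v_1, v_2) = α(v_1)β(v_2) − α(v_2)β(v_1); a minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta = rng.normal(size=3), rng.normal(size=3)   # 1-forms on V = R^3
L = rng.normal(size=(3, 2))                            # linear map L : W = R^2 -> V = R^3

def wedge2(a, b):
    """The 2-form a ∧ b of two 1-forms, as a function of two vectors."""
    return lambda v1, v2: (a @ v1) * (b @ v2) - (a @ v2) * (b @ v1)

# Pullback of a 1-form: (L*a)(w) = a(Lw), i.e. L*a = L^T a in coordinates.
La, Lb = L.T @ alpha, L.T @ beta

w1, w2 = rng.normal(size=2), rng.normal(size=2)
lhs = wedge2(alpha, beta)(L @ w1, L @ w2)   # (L*(α ∧ β))(w1, w2)
rhs = wedge2(La, Lb)(w1, w2)                # (L*α ∧ L*β)(w1, w2)
assert np.isclose(lhs, rhs)
print("L*(α∧β) = L*α ∧ L*β verified on random data")
```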

3. Reading: Tensor fields and differential forms on smooth manifolds


¶ Cotangent spaces.
Let M be a smooth manifold. We have associated to each p ∈ M a vector space Tp M . If we
take any local chart (ϕ, U, V ) around p, then we can write down an explicit basis for Tp M :
∂_i|_p : C^∞(U) → R, ∂_i|_p(f) = ∂(f ◦ ϕ^{−1})/∂x^i (ϕ(p)), (1 ≤ i ≤ n).
Note that not only the ∂i |p ’s form a basis for the tangent space Tp M , but in fact ∂i ’s are smooth
vector fields on U , and for any q ∈ U , the ∂i |q ’s form a basis for the tangent space Tq M .

Now let’s study the dual space Tp∗ M of Tp M . We introduced Tp∗ M in PSet 3-1-6. It is called
the cotangent space of M at p, and elements in Tp∗ M are called cotangent vectors at p. It is also
quite easy to write down an explicit basis of Tp∗ M (and in fact a basis of Tq∗ M , for each q ∈ U ,
varying smoothly in q), in any given local chart (ϕ, U, V ): We first note that for each 1 ≤ i ≤ n,
xi ◦ ϕ : U → R
is a smooth function on U . The differential of this function, which we will denote by dxi for
simplicity, is a linear map (when restricted to any q ∈ U )
dxi |q : Tq M = Tq U → Txi ◦ϕ(q) R = R.
In other words, each dxi |q is an element in Tq∗ M . Moreover, by definition,
dxi |q (∂j |q ) = ∂j |q (xi ◦ ϕ) = δji .
So we conclude
Proposition 3.1. In any local chart (ϕ, U, V ), {dxi |q : 1 ≤ i ≤ n} is a basis of Tq∗ M . Moreover,
this basis is the dual basis to the basis {∂i |q : 1 ≤ i ≤ n} of Tq M .

In fact, for any f ∈ C ∞ (U ), in the same way we get a linear map dfq : Tq M → R. In other
words, we get a cotangent vector dfq ∈ Tq∗ M . By definition, dfp (∂i |p ) = ∂i |p (f ). It follows that
dfp = (∂1 |p f )dx1 |p + · · · + (∂n |p f )dxn |p ,
and moreover, for any X ∈ Γ∞ (T U ),
df (X) = Xf,
where both sides are regarded as functions on U . We will call df a 1-form on U .
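As a concrete illustration of df = (∂_1|_p f) dx^1|_p + · · · + (∂_n|_p f) dx^n|_p and of df(X) = Xf (a SymPy sketch; the chart coordinates, the function f and the vector field X below are made up):

```python
import sympy as sp

# Coordinates x^1, x^2, x^3 of a chart, and a smooth function f on U.
x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 * sp.sin(x2) + x1 * sp.exp(x3)

# The coefficients of df = (∂_1 f) dx^1 + (∂_2 f) dx^2 + (∂_3 f) dx^3:
coeffs = [sp.diff(f, xi) for xi in (x1, x2, x3)]
print("df =", " + ".join(f"({c}) dx^{i+1}" for i, c in enumerate(coeffs)))

# Check df(X) = Xf for the vector field X = ∂_1 + x3 ∂_2 (a made-up example).
X = [sp.Integer(1), x3, sp.Integer(0)]
df_of_X = sum(c * Xi for c, Xi in zip(coeffs, X))
Xf = sp.diff(f, x1) + x3 * sp.diff(f, x2)
assert sp.simplify(df_of_X - Xf) == 0
```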

¶ Tensor fields on smooth manifolds.


Now we are ready to define (Compare: the definition of vector fields on manifolds)
Definition 3.2. An (l, k)-tensor field T on M is an assignment that assigns to each point p ∈ M
an (l, k)-tensor Tp ∈ ⊗l,k Tp M .
Remark. By definition, T is a tensor at each point if and only if it is point-wise linear in each entry. It follows
that T is a tensor field on M if and only if it is "function-linear" in each entry. So it is more than a
multi-linear map, i.e., we also have (where the ω's are 1-forms on U, and the X's are vector fields on U)
T(f_1 ω^1, · · · , f_l ω^l, g^1 X_1, · · · , g^k X_k) = f_1 · · · f_l g^1 · · · g^k T(ω^1, · · · , ω^l, X_1, · · · , X_k).

If we fix any local chart (ϕ, U, V ) near p, then we can write


T_p = Σ T^{i_1···i_l}_{j_1···j_k} ∂_{i_1}|_p ⊗ · · · ⊗ ∂_{i_l}|_p ⊗ dx^{j_1}|_p ⊗ · · · ⊗ dx^{j_k}|_p,
where the T^{i_1···i_l}_{j_1···j_k}'s are constants (which depend on p). In other words, in any coordinate chart U one
can write
T = Σ T^{i_1···i_l}_{j_1···j_k} ∂_{i_1} ⊗ · · · ⊗ ∂_{i_l} ⊗ dx^{j_1} ⊗ · · · ⊗ dx^{j_k},
where the T^{i_1···i_l}_{j_1···j_k}'s are functions on U.

Definition 3.3. We say an (l, k)-tensor field T on M is smooth if in any coordinate chart U, the
functions T^{i_1···i_l}_{j_1···j_k} are smooth.

Note that when (l, k) = (1, 0), we will get a smooth vector field on M . The set of all smooth
(l, k)-tensors is denoted by Γ∞ (⊗l,k T M ). Again this is an infinite dimensional vector space.
Remark. The coefficient functions T^{i_1···i_l}_{j_1···j_k} are only defined in local charts. If one uses another
chart U′, one gets another set of coefficient functions (even at the same point).
Example. A positive definite symmetric smooth (0, 2)-tensor field g on M is called a Riemannian metric
on M. Locally a Riemannian metric is of the form
g = Σ g_{ij}(x) dx^i ⊗ dx^j,
where (g_{ij}(x)) is a positive definite symmetric matrix depending smoothly on x. We have seen
the existence of a Riemannian metric on any smooth manifold in PSet 3-2-5.
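For example, pulling the Euclidean metric on R^2 back along the polar-coordinate chart gives g = dr ⊗ dr + r^2 dθ ⊗ dθ; the SymPy sketch below (illustrative, not part of the notes) computes the matrix (g_{ij}) = J^T J from the Jacobian J of (x, y) = (r cos θ, r sin θ).

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# The chart map (r, θ) -> (x, y) = (r cos θ, r sin θ) and its Jacobian J.
phi = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])
J = phi.jacobian([r, theta])

# Pullback of dx⊗dx + dy⊗dy: g_ij = sum_a (∂x^a/∂u^i)(∂x^a/∂u^j), i.e. (g_ij) = J^T J.
g = (J.T * J).applyfunc(sp.simplify)
print(g)   # Matrix([[1, 0], [0, r**2]]), i.e. g = dr⊗dr + r^2 dθ⊗dθ
```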

¶ Differential forms on smooth manifolds.


Similarly one can define smooth k-forms on a smooth manifold M :
Definition 3.4. A k-form ω on a smooth manifold M is an assignment that assigns to each point
p ∈ M a linear k-form ω_p ∈ Λ^k T_p^∗ M. A k-form ω is smooth if locally one can write
ω = Σ_I ω_I dx^I = Σ_I ω_{i_1,··· ,i_k} dx^{i_1} ∧ · · · ∧ dx^{i_k},
where the summation is over increasing k-tuples I = {1 ≤ i_1 < · · · < i_k ≤ n}, and each ω_I ∈ C^∞(U).

Since k-forms will be frequently used in the rest of this course, we will denote the set of all
smooth k-forms by Ωk (M ) (instead of the lengthy expression Γ∞ (Λk T ∗ M )). Note that any smooth
function on M can be viewed as a smooth 0-form. So
Ω0 (M ) = C ∞ (M ).
Since there is no linear k-form on Tp M for k > n = dim M , we get
Ωk (M ) = 0, ∀k > n.
Note that if ω ∈ Ωk (M ), and X1 , · · · , Xk ∈ Γ∞ (T M ), then ω(X1 , · · · , Xk ) ∈ C ∞ (M ).

¶ Operations on differential forms on smooth manifolds.


Of course the pointwise operations for linear k-forms that we learned last time still make sense
for differential forms on manifolds. So on differential forms we have the following operations:
• The wedge product ∧ : Ω^k(U) × Ω^l(U) → Ω^{k+l}(U).
– For example (see the numerical check after this list),
(dx^1 + 2dx^2) ∧ (dx^1 ∧ dx^2 − dx^2 ∧ dx^3 + 3dx^1 ∧ dx^3) = −7 dx^1 ∧ dx^2 ∧ dx^3.
• For any X ∈ Γ^∞(TU), one has the interior product ι_X : Ω^k(U) → Ω^{k−1}(U).
– For example,
ι_X(dx^{i_1} ∧ · · · ∧ dx^{i_k}) = Σ_r (−1)^{r−1} dx^{i_r}(X) dx^{i_1} ∧ · · · ∧ dx̂^{i_r} ∧ · · · ∧ dx^{i_k}.
• For any smooth map ϕ : U′ → U, one has the pull-back ϕ^∗ : Ω^k(U) → Ω^k(U′).
– This is defined pointwise via the linear map dϕ_p : T_p U′ → T_{ϕ(p)} U. So if ω ∈ Ω^k(U), then
(ϕ^∗ω)_p(X_1, · · · , X_k) = ω_{ϕ(p)}(dϕ_p(X_1), · · · , dϕ_p(X_k)).
Note: if k = 0, then ϕ^∗ is exactly the pull-back ϕ^∗ : C^∞(U) → C^∞(U′) on functions.
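Here is the numerical check of the wedge-product example promised above (an illustrative sketch, not part of the notes): each constant-coefficient form on R^3 is stored as the antisymmetric array of its values on basis vectors, ∧ is implemented via Definition 2.4, and the coefficient of dx^1 ∧ dx^2 ∧ dx^3 is read off as the value on (e_1, e_2, e_3).

```python
import itertools
import math
import numpy as np

n = 3

def alt(T):
    """Anti-symmetrization of a k-index array (cf. Lemma 2.2)."""
    k = T.ndim
    out = np.zeros_like(T)
    for perm in itertools.permutations(range(k)):
        inv = sum(1 for a, b in itertools.combinations(range(k), 2) if perm[a] > perm[b])
        out = out + (-1) ** inv * np.transpose(T, perm)
    return out / math.factorial(k)

def wedge(S, T):
    """S ∧ T = ((k+l)!/(k! l!)) Alt(S ⊗ T) for forms given as value arrays."""
    k, l = S.ndim, T.ndim
    factor = math.factorial(k + l) / (math.factorial(k) * math.factorial(l))
    return factor * alt(np.multiply.outer(S, T))

def dx(i):
    """The coordinate 1-form dx^(i+1) on R^3, as its array of values on basis vectors."""
    e = np.zeros(n)
    e[i] = 1.0
    return e

# ω = dx^1 + 2 dx^2,  η = dx^1∧dx^2 − dx^2∧dx^3 + 3 dx^1∧dx^3
omega = dx(0) + 2 * dx(1)
eta = wedge(dx(0), dx(1)) - wedge(dx(1), dx(2)) + 3 * wedge(dx(0), dx(2))

result = wedge(omega, eta)
# The coefficient of dx^1∧dx^2∧dx^3 is the value of ω∧η on (e_1, e_2, e_3):
assert np.isclose(result[0, 1, 2], -7.0)
print(result[0, 1, 2])   # -7.0 (up to rounding)
```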
These operations are all linear (where ∧ is bilinear). Here we list some important properties of
these operations.
Proposition 3.5. Suppose ω ∈ Ω^k(U), η ∈ Ω^l(U), X ∈ Γ^∞(TU), ϕ ∈ C^∞(U′, U) and ψ ∈ C^∞(U, Ũ). Then
(1) ω ∧ η = (−1)kl η ∧ ω.
(2) ϕ∗ (ω ∧ η) = ϕ∗ ω ∧ ϕ∗ η.
(3) ιX (ω ∧ η) = (ιX ω) ∧ η + (−1)k ω ∧ ιX η.
(4) ιX ◦ ιX = 0.
(5) (ψ ◦ ϕ)^∗ = ϕ^∗ ◦ ψ^∗.
Proof. All of these follow from the definitions and the corresponding results for linear k-forms.
