
Neoclassical Growth Model

Macroeconomía II
Maestría en Economía
UTDT

Francisco J. Ciocchini

fciocchini@[Link]

2023

1/70
Plan

- In this lecture:
  - Study a deterministic version of the Neoclassical Growth Model.
  - Then solve a stochastic version of the model.
  - In both cases, use dynamic programming as the solution method.
  - Discuss some concepts, such as the steady state of a stochastic model and impulse-response functions.
2/70
Deterministic Neoclassical Growth Model

- Discrete time, infinite horizon: $t \in \{0, 1, 2, \dots\}$.
- One-sector model.
- Closed economy.
- No government.
- Aggregate production function:

      $Y_t = F(K_t, A_t L_t)$

- The function $F(\cdot)$ is differentiable and displays standard neoclassical properties: constant returns to scale in $(K, L)$, positive marginal products, diminishing marginal products, Inada conditions.
- Labor-augmenting technological progress:

      $A_t = (1+g)^t A_0, \qquad A_0 > 0, \; g \geq 0$

- Constant population (= labor) growth:

      $L_t = (1+n)^t L_0, \qquad L_0 > 0$

3/70
Deterministic Neoclassical Growth Model

- Capital accumulation:

      $K_{t+1} = I_t + (1-\delta)K_t$

  where $I_t$ is gross investment at $t$ and $\delta$ is the depreciation rate.
- Resource constraint:

      $Y_t = C_t + I_t$

- Utility of the representative household:

      $\widehat{U} = \sum_{t=0}^{\infty} \hat{b}^{\,t}\, u(\hat{c}_t)\, L_t$

  where $\hat{c}_t \equiv C_t/L_t$ = consumption per capita, and $\hat{b} \equiv \frac{1}{1+\rho}$ = discount factor, with $\rho > 0$ ($\rho$ is the discount rate).
- The function $u(\cdot)$ is differentiable, with $u' > 0$ and $u'' < 0$, and satisfies standard Inada conditions.

4/70
Particular Case

- The following assumptions will allow us to solve the model in closed form:
  - Cobb-Douglas production function:

        $Y_t = K_t^{\alpha} (A_t L_t)^{1-\alpha}, \qquad \alpha \in (0,1)$

    This function satisfies all the neoclassical properties listed before.
  - Logarithmic utility:

        $u(\hat{c}_t) = \ln \hat{c}_t$

    This function satisfies all the properties stated above.
  - Full depreciation:

        $\delta = 1 \;\Rightarrow\; K_{t+1} = I_t$

5/70
Normalization
- For any variable $X$, define:

      $x_t \equiv \frac{X_t}{A_t L_t}$

- Take the aggregate production function: $Y_t = F(K_t, A_t L_t)$.
- By constant returns to scale in $(K, L)$:

      $\lambda Y_t = F(\lambda K_t, \lambda A_t L_t) \qquad \forall\, \lambda > 0$

- Choosing $\lambda = \frac{1}{A_t L_t}$:

      $\frac{Y_t}{A_t L_t} = F\!\left(\frac{K_t}{A_t L_t},\, 1\right)$

- Then:

      $y_t = F(k_t, 1) = f(k_t)$

  where $f(k_t) \equiv F(k_t, 1)$.
- When $F(K_t, A_t L_t) = K_t^{\alpha}(A_t L_t)^{1-\alpha}$ we get $f(k_t) = k_t^{\alpha}$. Then:

      $y_t = k_t^{\alpha}$

6/70
Normalization
- Substituting $I_t = K_{t+1} - (1-\delta)K_t$ into the resource constraint we get:

      $C_t + K_{t+1} - (1-\delta)K_t = Y_t$

- Dividing by $A_t L_t$:

      $\frac{C_t}{A_t L_t} + \frac{K_{t+1}}{A_t L_t} - (1-\delta)\frac{K_t}{A_t L_t} = \frac{Y_t}{A_t L_t}$

      $\frac{C_t}{A_t L_t} + \frac{K_{t+1}}{A_{t+1} L_{t+1}}\,\frac{A_{t+1} L_{t+1}}{A_t L_t} - (1-\delta)\frac{K_t}{A_t L_t} = \frac{Y_t}{A_t L_t}$

- Then:

      $c_t + k_{t+1}(1+g)(1+n) - (1-\delta)k_t = y_t$

- Using $y_t = f(k_t)$:

      $c_t + k_{t+1}(1+g)(1+n) - (1-\delta)k_t = f(k_t)$

- With Cobb-Douglas technology and $\delta = 1$ we get:

      $c_t + k_{t+1}(1+g)(1+n) = k_t^{\alpha}$

7/70
Normalization
- With log utility:

      $\widehat{U} = \sum_{t=0}^{\infty} \hat{b}^{\,t} \ln(\hat{c}_t)\, L_t$

- Then:

      $\widehat{U} = \sum_{t=0}^{\infty} \left(\frac{1}{1+\rho}\right)^{t} \ln\!\left(\frac{C_t}{L_t}\right)(1+n)^t L_0$

      $\quad\;\; = L_0 \sum_{t=0}^{\infty} \left(\frac{1+n}{1+\rho}\right)^{t} \ln\!\left(\frac{C_t}{L_t}\right)$

      $\quad\;\; = L_0 \sum_{t=0}^{\infty} \beta^t \ln\!\left(A_t \frac{C_t}{A_t L_t}\right)$

      $\quad\;\; = L_0 \sum_{t=0}^{\infty} \beta^t \ln\left(A_t c_t\right)$

      $\quad\;\; = L_0 \sum_{t=0}^{\infty} \beta^t \left(\ln A_t + \ln c_t\right)$

  where we have defined $\beta \equiv \frac{1+n}{1+\rho}$, and we'll assume $\beta < 1$ ($\rho > n$).
8/70
Normalization
- Log utility (cont.)
  - Then:

        $\widehat{U} = L_0 \sum_{t=0}^{\infty} \beta^t \ln A_t + L_0 \sum_{t=0}^{\infty} \beta^t \ln c_t$

        $\quad\;\; = L_0 \sum_{t=0}^{\infty} \beta^t \left[\ln A_0 + t\ln(1+g)\right] + L_0 \sum_{t=0}^{\infty} \beta^t \ln c_t$

        $\quad\;\; = L_0 \ln A_0 \sum_{t=0}^{\infty} \beta^t + L_0 \ln(1+g) \sum_{t=0}^{\infty} t\beta^t + L_0 \sum_{t=0}^{\infty} \beta^t \ln c_t$

        $\quad\;\; = \frac{L_0 \ln A_0}{1-\beta} + \frac{\beta L_0 \ln(1+g)}{(1-\beta)^2} + L_0 \sum_{t=0}^{\infty} \beta^t \ln c_t$

  - Hence:

        $\widehat{U} = \text{constant}_1 + \text{constant}_2 \sum_{t=0}^{\infty} \beta^t \ln c_t$

  - Since $\text{constant}_2 = L_0 > 0$, we can work with:

        $U \equiv \sum_{t=0}^{\infty} \beta^t \ln c_t$

9/70
Planner’s Problem

- The planner solves:

      $\max_{\{c_t, k_{t+1}\}_{t=0}^{\infty}} \;\; \sum_{t=0}^{\infty} \beta^t \ln c_t$

      s.t. $\; c_t + (1+n)(1+g)k_{t+1} = k_t^{\alpha}$

      $\quad\;\; k_0 > 0$ given

- We didn't include non-negativity constraints because our assumptions ensure that consumption and capital will be strictly positive at the optimum.
- Notice that the constraint can be rewritten as follows:

      $k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}$

- The expression above is of the form $k_{t+1} = G(k_t, c_t)$.

10/70
Bellman Equation

- The structure of the planner's problem fits exactly into the setup we studied last class.
- Consequently, we can solve it using Dynamic Programming.
- For any period $t$, the Bellman Equation is:

      $V(k_t) = \max_{c_t, k_{t+1}} \; \ln c_t + \beta V(k_{t+1})$

      s.t. $\; k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}$

      $\quad\;\; k_t$ given

11/70
Value Function Iteration

- Computation of $V_1$
  - Setting $V_0 \equiv 0$ (i.e., $V_0(k) = 0 \;\forall k$) we get:

        $V_1(k_t) = \max_{c_t, k_{t+1}} \; \ln c_t$

        s.t. $\; k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}$

        $\quad\;\; k_{t+1} \geq 0$

        $\quad\;\; k_t$ given

  - With $V_0 \equiv 0$ we need to impose $k_{t+1} \geq 0$ explicitly; otherwise the problem has no solution ($k_{t+1} \to -\infty$ and $c_t \to \infty$).
  - Solution:

        $k_{t+1} = 0$

        $c_t = k_t^{\alpha}$

        $V_1(k_t) = \alpha \ln k_t$

12/70
Value Function Iteration

- Computation of $V_2$
  - Bellman Equation:

        $V_2(k_t) = \max_{c_t, k_{t+1}} \; \ln c_t + \beta V_1(k_{t+1})$

        s.t. $\; k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}$

        $\quad\;\; k_t$ given

  - We know $V_1(k_{t+1}) = \alpha \ln k_{t+1}$. Then, using the constraint to eliminate $k_{t+1}$:

        $V_2(k_t) = \max_{c_t} \; \ln c_t + \beta\alpha \ln\!\left(\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}\right)$

  - FOC:

        $\frac{1}{c_t} + \beta\alpha\, \frac{-\frac{1}{(1+n)(1+g)}}{\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}} = 0$

  - Then:

        $\frac{1}{c_t} - \frac{\alpha\beta}{k_t^{\alpha} - c_t} = 0$

13/70
Value Function Iteration
- Computation of $V_2$ (cont.)
  - Then:

        $c_t = \frac{1}{1+\alpha\beta}\, k_t^{\alpha}$

  - Substituting the expression above into the constraint:

        $k_{t+1} = \frac{k_t^{\alpha} - \frac{1}{1+\alpha\beta} k_t^{\alpha}}{(1+n)(1+g)}$

  - Then:

        $k_{t+1} = \frac{1}{(1+n)(1+g)}\, \frac{\alpha\beta}{1+\alpha\beta}\, k_t^{\alpha}$

  - Then:

        $V_2(k_t) = \ln\!\left(\frac{1}{1+\alpha\beta}\, k_t^{\alpha}\right) + \beta\alpha \ln\!\left(\frac{1}{(1+n)(1+g)}\, \frac{\alpha\beta}{1+\alpha\beta}\, k_t^{\alpha}\right)$

        $\quad\;\; = \ln\!\left(\frac{1}{1+\alpha\beta}\right) + \alpha\ln k_t + \alpha\beta \ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)(1+\alpha\beta)}\right) + \alpha^2\beta \ln k_t$

        $\quad\;\; = \left[\ln\!\left(\frac{1}{1+\alpha\beta}\right) + \alpha\beta \ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)(1+\alpha\beta)}\right)\right] + \alpha(1+\alpha\beta)\ln k_t$

        $\quad\;\; = C + \alpha(1+\alpha\beta)\ln k_t$

14/70
Value Function Iteration
- Computation of $V_3$
  - Bellman Equation:

        $V_3(k_t) = \max_{c_t, k_{t+1}} \; \ln c_t + \beta V_2(k_{t+1})$

        s.t. $\; k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}$

        $\quad\;\; k_t$ given

  - We know $V_2(k_{t+1}) = C + \alpha(1+\alpha\beta)\ln k_{t+1}$. Then, using the constraint to eliminate $k_{t+1}$:

        $V_3(k_t) = \max_{c_t} \; \ln c_t + \beta C + \beta\alpha(1+\alpha\beta) \ln\!\left(\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}\right)$

  - FOC:

        $\frac{1}{c_t} + \beta\alpha(1+\alpha\beta)\, \frac{-\frac{1}{(1+n)(1+g)}}{\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}} = 0$

  - Then:

        $\frac{1}{c_t} - \frac{\alpha\beta(1+\alpha\beta)}{k_t^{\alpha} - c_t} = 0$

15/70
Value Function Iteration

- Computation of $V_3$ (cont.)
  - Then:

        $c_t = \frac{1}{1+\alpha\beta+(\alpha\beta)^2}\, k_t^{\alpha}$

  - Substituting the expression above into the constraint:

        $k_{t+1} = \frac{k_t^{\alpha} - \frac{1}{1+\alpha\beta+(\alpha\beta)^2} k_t^{\alpha}}{(1+n)(1+g)}$

  - Then:

        $k_{t+1} = \frac{1}{(1+n)(1+g)}\, \frac{\alpha\beta(1+\alpha\beta)}{1+\alpha\beta+(\alpha\beta)^2}\, k_t^{\alpha}$

  - Then:

        $V_3(k_t) = \ln\!\left(\frac{1}{1+\alpha\beta+(\alpha\beta)^2}\, k_t^{\alpha}\right) + \beta\left[C + \alpha(1+\alpha\beta)\ln\!\left(\frac{1}{(1+n)(1+g)}\, \frac{\alpha\beta(1+\alpha\beta)}{1+\alpha\beta+(\alpha\beta)^2}\, k_t^{\alpha}\right)\right]$

        $\quad\;\; = \ln\!\left(\frac{1}{1+\alpha\beta+(\alpha\beta)^2}\right) + \beta C + \alpha\beta(1+\alpha\beta)\ln\!\left(\frac{\alpha\beta(1+\alpha\beta)}{(1+n)(1+g)\left[1+\alpha\beta+(\alpha\beta)^2\right]}\right) + \alpha\ln k_t + \alpha\beta(1+\alpha\beta)\,\alpha\ln k_t$

        $\quad\;\; = \left[\ln\!\left(\frac{1}{1+\alpha\beta+(\alpha\beta)^2}\right) + \beta C + \alpha\beta(1+\alpha\beta)\ln\!\left(\frac{\alpha\beta(1+\alpha\beta)}{(1+n)(1+g)\left[1+\alpha\beta+(\alpha\beta)^2\right]}\right)\right] + \alpha\left[1+\alpha\beta+(\alpha\beta)^2\right]\ln k_t$

16/70
Value Function Iteration

- Computation of $V_3$ (cont.)
  - Replacing $C$:

        $V_3(k_t) = \left[\ln\!\left(\frac{1}{1+\alpha\beta+(\alpha\beta)^2}\right) + \beta\ln\!\left(\frac{1}{1+\alpha\beta}\right) + \alpha\beta^2\ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)(1+\alpha\beta)}\right) + \alpha\beta(1+\alpha\beta)\ln\!\left(\frac{\alpha\beta(1+\alpha\beta)}{(1+n)(1+g)\left[1+\alpha\beta+(\alpha\beta)^2\right]}\right)\right] + \alpha\left[1+\alpha\beta+(\alpha\beta)^2\right]\ln k_t$

  - Then:

        $V_3(k_t) = D + \alpha\left[1+\alpha\beta+(\alpha\beta)^2\right]\ln k_t$

17/70
Value Function Iteration
- Computation of $V_4$
  - Bellman Equation:

        $V_4(k_t) = \max_{c_t, k_{t+1}} \; \ln c_t + \beta V_3(k_{t+1})$

        s.t. $\; k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}$

        $\quad\;\; k_t$ given

  - We know $V_3(k_{t+1}) = D + \alpha\left[1+\alpha\beta+(\alpha\beta)^2\right]\ln k_{t+1}$. Then, using the constraint to eliminate $k_{t+1}$:

        $V_4(k_t) = \max_{c_t} \; \ln c_t + \beta D + \beta\alpha\left[1+\alpha\beta+(\alpha\beta)^2\right]\ln\!\left(\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}\right)$

  - FOC:

        $\frac{1}{c_t} + \beta\alpha\left[1+\alpha\beta+(\alpha\beta)^2\right]\frac{-\frac{1}{(1+n)(1+g)}}{\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}} = 0$

  - Then:

        $\frac{1}{c_t} - \frac{\alpha\beta\left[1+\alpha\beta+(\alpha\beta)^2\right]}{k_t^{\alpha} - c_t} = 0$

18/70
Value Function Iteration
- Computation of $V_4$ (cont.)
  - Then:

        $c_t = \frac{1}{1+\alpha\beta+(\alpha\beta)^2+(\alpha\beta)^3}\, k_t^{\alpha}$

  - Substituting the expression above into the constraint we get:

        $k_{t+1} = \frac{1}{(1+n)(1+g)}\, \frac{\alpha\beta\left[1+\alpha\beta+(\alpha\beta)^2\right]}{1+\alpha\beta+(\alpha\beta)^2+(\alpha\beta)^3}\, k_t^{\alpha}$

  - Substituting the expressions above into the objective we find:

        $V_4(k_t) = \ln\!\left(\frac{1}{1+\alpha\beta+(\alpha\beta)^2+(\alpha\beta)^3}\right) + \beta\ln\!\left(\frac{1}{1+\alpha\beta+(\alpha\beta)^2}\right) + \beta^2\ln\!\left(\frac{1}{1+\alpha\beta}\right) + \alpha\beta^3\ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)(1+\alpha\beta)}\right)$

        $\qquad + \alpha\beta^2(1+\alpha\beta)\ln\!\left(\frac{\alpha\beta(1+\alpha\beta)}{(1+n)(1+g)\left[1+\alpha\beta+(\alpha\beta)^2\right]}\right) + \alpha\beta\left[1+\alpha\beta+(\alpha\beta)^2\right]\ln\!\left(\frac{\alpha\beta\left[1+\alpha\beta+(\alpha\beta)^2\right]}{(1+n)(1+g)\left[1+\alpha\beta+(\alpha\beta)^2+(\alpha\beta)^3\right]}\right)$

        $\qquad + \alpha\left[1+\alpha\beta+(\alpha\beta)^2+(\alpha\beta)^3\right]\ln k_t$

  - Then:

        $V_4(k_t) = E + \alpha\left[1+\alpha\beta+(\alpha\beta)^2+(\alpha\beta)^3\right]\ln k_t$

19/70
Value Function Iteration
- Computation of $V_j$
  - From the previous steps we recognize the following pattern:

        $c_t = \frac{1}{1+\alpha\beta+(\alpha\beta)^2+\dots+(\alpha\beta)^{j-1}}\, k_t^{\alpha}$

        $k_{t+1} = \frac{1}{(1+n)(1+g)}\, \frac{\alpha\beta\left[1+\alpha\beta+(\alpha\beta)^2+\dots+(\alpha\beta)^{j-2}\right]}{1+\alpha\beta+(\alpha\beta)^2+\dots+(\alpha\beta)^{j-1}}\, k_t^{\alpha}$

  - And

        $V_j(k_t) = e_j + d_j \ln k_t$

    where:

        $d_j = \alpha\left[1+\alpha\beta+(\alpha\beta)^2+\dots+(\alpha\beta)^{j-1}\right]$

        $e_j = a_j + b_j$

        $a_j = \sum_{s=0}^{j-2} \beta^s \ln\!\left(\frac{1}{1+\alpha\beta+(\alpha\beta)^2+\dots+(\alpha\beta)^{j-1-s}}\right)$

        $b_j = \alpha\beta \sum_{s=0}^{j-2} \beta^s \left[1+\alpha\beta+(\alpha\beta)^2+\dots+(\alpha\beta)^{j-2-s}\right] \ln\!\left(\frac{\alpha\beta\left[1+\alpha\beta+(\alpha\beta)^2+\dots+(\alpha\beta)^{j-2-s}\right]}{(1+n)(1+g)\left[1+\alpha\beta+(\alpha\beta)^2+\dots+(\alpha\beta)^{j-1-s}\right]}\right)$

20/70
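The pattern above can be checked numerically. Below is a small Python sketch (my own, not part of the slides): it iterates the partial sum $S_j = 1+\alpha\beta+\dots+(\alpha\beta)^{j-1}$ and shows that the consumption coefficient $1/S_j$ converges to $1-\alpha\beta$, the limit used on the next slide.

```python
# Sketch (not from the slides): iterate the consumption coefficient
# 1 / (1 + ab + ... + (ab)^(j-1)) and check that it converges to 1 - ab.
alpha, rho, n, g = 0.3, 0.05, 0.03, 0.02   # parameter values used later in the slides
beta = (1 + n) / (1 + rho)                 # effective discount factor

S = 1.0                                    # S_j = 1 + ab + ... + (ab)^(j-1), starting at j = 1
for j in range(1, 11):
    print(f"j = {j:2d}   c_t / k_t^alpha = {1 / S:.6f}")
    S = 1.0 + alpha * beta * S             # S_{j+1} = 1 + (ab) * S_j

print("limit 1 - alpha*beta =", 1 - alpha * beta)
```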
Value Function Iteration

- Convergence
  - Letting $j \to \infty$ we obtain:

        $c_t = (1-\alpha\beta)\, k_t^{\alpha}$

        $k_{t+1} = \frac{\alpha\beta}{(1+n)(1+g)}\, k_t^{\alpha}$

  - Notice that:

        $c_t = (1-\alpha\beta)\, y_t$

        $C_t = (1-\alpha\beta)\, Y_t$

  - The saving rate is constant:

        $s = \alpha\beta = \frac{\alpha(1+n)}{1+\rho}$

  - This looks like the Solow Model, but here the constant saving rate is endogenously determined.

21/70
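As a cross-check on the closed form, here is a minimal grid-based value-function iteration in Python (illustrative code, not from the lecture; the grid bounds, grid size, and tolerance are arbitrary choices). It iterates the Bellman operator numerically and compares the resulting consumption policy with $c_t = (1-\alpha\beta)k_t^{\alpha}$; the remaining gap is grid-approximation error.

```python
import numpy as np

alpha, rho, n, g = 0.3, 0.05, 0.03, 0.02
beta = (1 + n) / (1 + rho)
growth = (1 + n) * (1 + g)

k_grid = np.linspace(0.01, 0.5, 400)           # grid for normalized capital
V = np.zeros_like(k_grid)                      # start from V_0 = 0

for _ in range(500):                           # iterate the Bellman operator
    # consumption implied by every (k, k') pair; infeasible choices get -inf utility
    c = k_grid[:, None] ** alpha - growth * k_grid[None, :]
    util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    V_new = np.max(util + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy_idx = np.argmax(util + beta * V[None, :], axis=1)
c_numeric = k_grid ** alpha - growth * k_grid[policy_idx]
c_closed_form = (1 - alpha * beta) * k_grid ** alpha
print("max policy error:", np.max(np.abs(c_numeric - c_closed_form)))
```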
Value Function Iteration

- Convergence (cont.)
  - $\lim_{j\to\infty} e_j = \frac{1}{1-\beta}\left[\ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)}\right)\right]$
  - $\lim_{j\to\infty} d_j = \frac{\alpha}{1-\alpha\beta}$
  - Then:

        $V(k_t) = \frac{1}{1-\beta}\left[\ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)}\right)\right] + \frac{\alpha}{1-\alpha\beta}\ln k_t$

22/70
Guess and Verify

- Guess:

      $V^g(k_t) = \Omega + \Theta \ln k_t$

  where $\Omega$ and $\Theta$ are parameters to be determined.
- The superscript $g$ indicates that we are guessing the form of the value function; we'll need to verify whether this guess is appropriate.
- Now solve:

      $\widetilde{V}(k_t) = \max_{c_t, k_{t+1}} \; \ln c_t + \beta V^g(k_{t+1})$

      s.t. $\; k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}$

      $\quad\;\; k_t$ given

- The idea is to find $\widetilde{V}(k_t)$ and then look for the values of $\Omega$ and $\Theta$ that make $\widetilde{V}(k_t) = V^g(k_t) \;\forall k_t$.
- If the guess works, we'll use $V$ to denote the resulting value function.

23/70
Guess and Verify
- We can rewrite the problem above as follows:

      $\widetilde{V}(k_t) = \max_{c_t} \; \ln c_t + \beta\left[\Omega + \Theta \ln\!\left(\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}\right)\right]$

- FOC:

      $\frac{1}{c_t} + \beta\Theta\, \frac{-\frac{1}{(1+n)(1+g)}}{\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}} = 0$

- Then:

      $\frac{1}{c_t} - \frac{\beta\Theta}{k_t^{\alpha} - c_t} = 0$

- Then:

      $c_t = \frac{1}{1+\beta\Theta}\, k_t^{\alpha}$

- Substituting the expression above into the constraint:

      $k_{t+1} = \frac{k_t^{\alpha} - \frac{1}{1+\beta\Theta} k_t^{\alpha}}{(1+n)(1+g)}$

- Then:

      $k_{t+1} = \frac{1}{(1+n)(1+g)}\, \frac{\beta\Theta}{1+\beta\Theta}\, k_t^{\alpha}$

24/70
Guess and Verify

- Evaluating the objective:

      $\widetilde{V}(k_t) = \ln\!\left(\frac{1}{1+\beta\Theta}\, k_t^{\alpha}\right) + \beta\left[\Omega + \Theta \ln\!\left(\frac{1}{(1+n)(1+g)}\, \frac{\beta\Theta}{1+\beta\Theta}\, k_t^{\alpha}\right)\right]$

- Then:

      $\widetilde{V}(k_t) = \left[\ln\!\left(\frac{1}{1+\beta\Theta}\right) + \beta\Omega + \beta\Theta \ln\!\left(\frac{1}{(1+n)(1+g)}\, \frac{\beta\Theta}{1+\beta\Theta}\right)\right] + \alpha(1+\beta\Theta)\ln k_t$

- For the expression above to coincide with $V^g(k_t) = \Omega + \Theta \ln k_t \;\forall k_t$, we need:

      $\Omega = \ln\!\left(\frac{1}{1+\beta\Theta}\right) + \beta\Omega + \beta\Theta \ln\!\left(\frac{1}{(1+n)(1+g)}\, \frac{\beta\Theta}{1+\beta\Theta}\right)$

      $\Theta = \alpha(1+\beta\Theta)$

  which is a system of two equations in two unknowns, $\Omega$ and $\Theta$.

25/70
Guess and Verify

- From the second equation:

      $\Theta = \frac{\alpha}{1-\alpha\beta}$

- Substituting into the first:

      $\Omega = \frac{1}{1-\beta}\left[\ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)}\right)\right]$

- Then:

      $V(k_t) = \frac{1}{1-\beta}\left[\ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)}\right)\right] + \frac{\alpha}{1-\alpha\beta}\ln k_t$

- As expected, the expression above coincides with the one we found through value-function iteration.

26/70
Guess and Verify

- Finally, substituting $\Theta = \frac{\alpha}{1-\alpha\beta}$ into $c_t = \frac{1}{1+\beta\Theta}\, k_t^{\alpha}$ and $k_{t+1} = \frac{1}{(1+n)(1+g)}\, \frac{\beta\Theta}{1+\beta\Theta}\, k_t^{\alpha}$ we get:

      $c_t = (1-\alpha\beta)\, k_t^{\alpha}$

      $k_{t+1} = \frac{\alpha\beta}{(1+n)(1+g)}\, k_t^{\alpha}$

  which coincide with the expressions we found through value-function iteration.

27/70
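A quick numerical sanity check of the matching conditions (my addition, using the parameter values introduced later in the slides): it confirms that $\Theta = \alpha/(1-\alpha\beta)$ and the stated $\Omega$ satisfy the two equations of the guess-and-verify step.

```python
import math

# Check the two matching conditions from the guess-and-verify step (not in the slides).
alpha, rho, n, g = 0.3, 0.05, 0.03, 0.02
beta = (1 + n) / (1 + rho)

Theta = alpha / (1 - alpha * beta)
Omega = (math.log(1 - alpha * beta)
         + alpha * beta / (1 - alpha * beta)
         * math.log(alpha * beta / ((1 + n) * (1 + g)))) / (1 - beta)

# Theta = alpha * (1 + beta * Theta)
print(Theta - alpha * (1 + beta * Theta))        # ~ 0
# Omega = ln(1/(1+beta*Theta)) + beta*Omega + beta*Theta*ln(beta*Theta/((1+n)(1+g)(1+beta*Theta)))
rhs = (math.log(1 / (1 + beta * Theta)) + beta * Omega
       + beta * Theta * math.log(beta * Theta / ((1 + n) * (1 + g) * (1 + beta * Theta))))
print(Omega - rhs)                               # ~ 0
```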
Euler Equation
- Consider the Bellman Equation:

      $V(k_t) = \max_{c_t, k_{t+1}} \; u(c_t) + \beta V(k_{t+1})$

      s.t. $\; k_{t+1} = \frac{f(k_t) - c_t}{(1+n)(1+g)}$

      $\quad\;\; k_t$ given

  where we have used $u(c_t)$ instead of $\ln c_t$, and $f(k_t)$ instead of $k_t^{\alpha}$.
- From the constraint:

      $c_t = f(k_t) - (1+n)(1+g)\, k_{t+1}$

- Substituting the expression above into the objective:

      $V(k_t) = \max_{k_{t+1}} \; u\big(f(k_t) - (1+n)(1+g)k_{t+1}\big) + \beta V(k_{t+1})$

- By using the constraint to eliminate $c_t$ we have a problem where the state variable in period $t$ is $k_t$ and the control variable is $k_{t+1}$. The transition equation is simply $k_{t+1} = k_{t+1}$ (that is, the control variable in period $t$ fully determines the value of the state in $t+1$). This is simpler than the alternative of using the constraint to replace $k_{t+1}$ in $V(k_{t+1})$, because in the former case the transition equation for period $t$ does not depend on the state at $t$, which makes the application of the envelope condition simpler (we discussed this during the first lecture).

28/70
Euler Equation
- FOC:

      $-u'\big(f(k_t) - (1+n)(1+g)k_{t+1}\big)(1+n)(1+g) + \beta V'(k_{t+1}) = 0$

- The expression above implicitly defines the policy rule

      $k_{t+1} = h(k_t)$

- Substituting into the objective:

      $V(k_t) = u\big(f(k_t) - (1+n)(1+g)h(k_t)\big) + \beta V\big(h(k_t)\big)$

- Differentiating:

      $V'(k_t) = u'\big(f(k_t) - (1+n)(1+g)h(k_t)\big)\big[f'(k_t) - (1+n)(1+g)h'(k_t)\big] + \beta V'\big(h(k_t)\big)\, h'(k_t)$

29/70
Euler Equation

I Then:
✓ ◆
V 0 (kt ) = u0 f (kt ) (1 + n)(1 + g)h(kt ) f 0 (kt )

 ✓ ◆ ✓ ◆
+ u0 f (kt ) (1 + n)(1 + g)h(kt ) (1 + n)(1 + g) + V 0 h(kt ) h0 (kt )

I The FOC implies that the term in brackets is zero. Then:

✓ ◆
V 0 (kt ) = u0 f (kt ) (1 + n)(1 + g)h(kt ) f 0 (kt )

I We could have obtained this expression by a direct application of the Envelope


Theorem.

30/70
Euler Equation

I Shifting the expression above one period ahead we get:

✓ ◆
0 0
V (kt+1 ) = u f (kt+1 ) (1 + n)(1 + g)h(kt+1 ) f 0 (kt+1 )

I Substituting into the FOC, and using h(kt+1 ) = kt+2 :

✓ ◆
u0 f (kt ) (1 + n)(1 + g)kt+1 (1 + n)(1 + g)

✓ ◆
+ u0 f (kt+1 ) (1 + n)(1 + g)kt+2 f 0 (kt+1 ) = 0

I This is the Euler Equation, a second-order di↵erence equation in k.

31/70
Euler Equation

I The two boundary conditions are:

k0 given
✓ ◆
t 0
lim u f (kt ) (1 + n)(1 + g)kt+1 kt+1 = 0
t!1

I The first one is an initial condition (the initial value of the state variable is
predetermined).
I The second one is the transversality condition. This is an optimality condition
that has to be formally derived. For more on this, read the note on the
transversality condition provided together with these slides.
I Notice that the Euler Equation and the transversality condition can be
rewritten as follows:
f 0 (k )
u0 (ct ) = u0 (ct+1 ) (1+n)(1+g)
t+1

t 0
lim u (ct )kt+1 = 0
t!1

32/70
Euler Equation
- For our particular case with log preferences and Cobb-Douglas technology, the Euler Equation becomes:

      $c_t = \frac{(1+n)(1+g)}{\alpha\beta}\, k_{t+1}^{1-\alpha}\, c_{t+1}$

- We can check that our solution satisfies the expression above.
- Using $c_{t+1} = (1-\alpha\beta)\, k_{t+1}^{\alpha}$:

      $c_t = \frac{(1+n)(1+g)}{\alpha\beta}\, k_{t+1}^{1-\alpha}\, (1-\alpha\beta)\, k_{t+1}^{\alpha} = \frac{(1+n)(1+g)}{\alpha\beta}\, (1-\alpha\beta)\, k_{t+1}$

- Using $k_{t+1} = \frac{\alpha\beta}{(1+n)(1+g)}\, k_t^{\alpha}$:

      $c_t = \frac{(1+n)(1+g)}{\alpha\beta}\, (1-\alpha\beta)\, \frac{\alpha\beta}{(1+n)(1+g)}\, k_t^{\alpha}$

- Then:

      $c_t = (1-\alpha\beta)\, k_t^{\alpha}$

  which is what we wanted to show.

33/70
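The algebraic verification above can also be done numerically. The sketch below (illustrative, with an arbitrary horizon and initial condition) simulates the closed-form policies and evaluates the Euler-equation residual period by period; it is zero up to floating-point error.

```python
import numpy as np

alpha, rho, n, g = 0.3, 0.05, 0.03, 0.02
beta = (1 + n) / (1 + rho)
growth = (1 + n) * (1 + g)

T, k0 = 50, 0.01
k = np.empty(T + 1)
k[0] = k0
for t in range(T):
    k[t + 1] = alpha * beta * k[t] ** alpha / growth   # policy for capital
c = (1 - alpha * beta) * k ** alpha                    # policy for consumption

# Euler equation: c_t = (1+n)(1+g)/(alpha*beta) * k_{t+1}^(1-alpha) * c_{t+1}
euler_residual = c[:-1] - growth / (alpha * beta) * k[1:] ** (1 - alpha) * c[1:]
print("max |Euler residual| =", np.max(np.abs(euler_residual)))   # ~ 1e-16
```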
Euler Equation

I We can also show that the solution satisfies the transversality condition.

I Combining log preferences with our solution we get:

lim t u0 (c t 1 k
t )kt+1 = lim ct t+1
t!1 t!1

t 1 ↵
= lim k↵
(1 ↵ )kt↵ (1+n)(1+g) t
t!1

t 1 ↵
= lim 1 ↵ (1+n)(1+g)
t!1

1 ↵ t
= (1+n)(1+g) 1 ↵
lim
t!1

1 ↵
= (1+n)(1+g) 1 ↵
⇥0 since 2 (0, 1)

=0

which is what we wanted to show.

34/70
Steady State

I We look for a solution that satisfies kt+1 = kt and ct+1 = ct 8t .

I If a solution with these characteristics exists, we denote the constant values


of k and c by kss and css (ss = steady state).
I Capital

I Imposing kt+1 = kt = kss in kt+1 = ↵


k↵ we get:
(1+n)(1+g) t

kss = k↵
(1+n)(1+g) ss

I Solving for kss :


h i 1
↵ 1 ↵
kss = (1+n)(1+g)

h i 1
↵ 1 ↵
kss = (1+⇢)(1+g)

1+n
where we’ve used ⌘ 1+⇢
in the last step.

35/70
Steady State
- Substituting $k_t = k_{ss}$ into $c_t = (1-\alpha\beta)\, k_t^{\alpha}$:

      $c_{ss} = (1-\alpha\beta)\, k_{ss}^{\alpha}$

- Substituting $k_t = k_{ss}$ into $y_t = k_t^{\alpha}$:

      $y_{ss} = k_{ss}^{\alpha}$

- From $y_{ss} = \left(\frac{Y_t}{A_t L_t}\right)_{ss}$ we get

      $Y_{ss,t} = y_{ss}\, A_t L_t$

- Then $Y_{ss,t} = y_{ss}\, A_0(1+g)^t\, L_0(1+n)^t \;\Rightarrow$

      $Y_{ss,t} = A_0 L_0\, y_{ss}\, \left[(1+g)(1+n)\right]^t$

- Taking natural logs:

      $\ln Y_{ss,t} = \ln(A_0 L_0\, y_{ss}) + \ln\big((1+g)(1+n)\big) \times t$

- Notice that the gross growth rate of $Y_{ss,t}$ is $(1+g)(1+n)$.
- Similar results apply to the other aggregate variables, like $K_t$ and $C_t$.
36/70
Steady State

I Output per capita

I Recall ybt ⌘ Yt
. Then: ybt = Yt
A = yt At . Then: ybss,t = yss At )
Lt At L t t

ybss,t = yss A0 (1 + g)t

I Taking natural logs:

ln ybss,t = ln(A0 yss ) + ln(1 + g) ⇥ t

I Notice that the gross growth rate of ybss,t is 1 + g.

I Similar results apply to the other variables in per capita terms, like b
kt and b
ct
(there’s balanced growth).

37/70
Simulation

I Suppose:
↵ = 0.3 A0 = 10

⇢ = 0.05 L0 = 10

n = 0.03 k0 = 0.01

g = 0.02

I Then:
b = 0.952 kss = 0.162

= 0.981 yss = 0.580

s = 0.294 css = 0.409

38/70
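These numbers follow directly from the formulas derived above; here is a short Python snippet that reproduces them (my own code):

```python
alpha, rho, n, g = 0.3, 0.05, 0.03, 0.02

b_hat = 1 / (1 + rho)                     # household discount factor
beta = (1 + n) / (1 + rho)                # effective discount factor
s = alpha * beta                          # saving rate
k_ss = (alpha * beta / ((1 + n) * (1 + g))) ** (1 / (1 - alpha))
y_ss = k_ss ** alpha
c_ss = (1 - alpha * beta) * y_ss

print(f"b_hat = {b_hat:.3f}, beta = {beta:.3f}, s = {s:.3f}")
print(f"k_ss = {k_ss:.3f}, y_ss = {y_ss:.3f}, c_ss = {c_ss:.3f}")
# b_hat = 0.952, beta = 0.981, s = 0.294
# k_ss = 0.162, y_ss = 0.580, c_ss = 0.409
```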
Simulation
- Normalized output, $y_t$
  - When $k_0 < k_{ss}$, as in our parametrization, there is a transition period. Eventually, $y_t$ converges to its steady-state value $y_{ss} = 0.58$. With this parametrization, convergence is fast.

39/70
Simulation
- Log of aggregate output, $\ln Y_t$
- Log of output per capita, $\ln \hat{y}_t$

40/70
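A sketch of the simulation behind these figures (my code; the horizon is an arbitrary choice and plotting is omitted):

```python
import numpy as np

alpha, rho, n, g, A0, L0, k0 = 0.3, 0.05, 0.03, 0.02, 10.0, 10.0, 0.01
beta = (1 + n) / (1 + rho)
growth = (1 + n) * (1 + g)

T = 60
k = np.empty(T + 1); k[0] = k0
for t in range(T):
    k[t + 1] = alpha * beta * k[t] ** alpha / growth    # policy for capital

y = k ** alpha                                          # normalized output
t = np.arange(T + 1)
A = A0 * (1 + g) ** t
L = L0 * (1 + n) ** t
ln_Y = np.log(y * A * L)                                # log of aggregate output
ln_y_hat = np.log(y * A)                                # log of output per capita

print("y_t approaches", y[-1], "(y_ss = 0.580)")
```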
Decentralized Version

I Now we decentralize the planner’s problem using competitive markets.

I Representative Producer
I Profit maximization:
max Kt↵ (At Lt )1 ↵
wt Lt R t Kt
Kt ,Lt

I FOC:
↵Kt↵ 1
(At Lt )1 ↵ = Rt

(1 ↵)Kt↵ (At Lt ) ↵A
t = wt

I Substituting these expressions into the objective function we see that profits
are zero:
⇧t = 0

I We can rewrite the FOC as follows:

Rt = ↵kt↵ 1

wt = (1 ↵)At kt↵

41/70
Decentralized Version
- Representative household
  - Utility maximization:

        $\max_{\{C_t, I_t, K_{t+1}, B_t\}_{t=0}^{\infty}} \; \sum_{t=0}^{\infty} \hat{b}^{\,t} \ln\!\left(\frac{C_t}{L_t}\right) L_t$

        s.t. $\; C_t + I_t + B_t = w_t L_t + R_t K_t + (1+r_{t-1})B_{t-1} + \Pi_t$

        $\quad\;\; I_t = K_{t+1}$ (since $\delta = 1$)

        $\quad\;\; K_0 > 0, \; B_{-1} = 0$ given

    with $B_t$ = stock of bonds at the end of $t$, and $r_t$ = real interest rate.
  - Since $L_t = L_0(1+n)^t$, we can solve:

        $\max_{\{C_t, K_{t+1}, B_t\}_{t=0}^{\infty}} \; \sum_{t=0}^{\infty} \beta^t \ln\!\left(\frac{C_t}{L_t}\right)$

        s.t. $\; C_t + K_{t+1} + B_t = w_t L_t + R_t K_t + (1+r_{t-1})B_{t-1}$

        $\quad\;\; K_0 > 0, \; B_{-1} = 0$ given

    where $\beta \equiv \frac{1+n}{1+\rho} \in (0,1)$, and we have already imposed $\Pi_t = 0$.

42/70
Decentralized Version

I Representative household (cont.)

I Lagrangian:

P
1 ⇣ ⌘ P
1
t Ct
L= ln Lt + t [wt Lt + Rt Kt + (1 + rt 1 )Bt 1 Ct Kt+1 Bt ]
t=0 t=0

I FOC:

t
@L t 1 1
@Ct
=0) Ct Lt
= t ) t = Ct
Lt

@L
@Bt
=0) t = t+1 (1 + rt )

@L
@Kt+1
=0) t = t+1 Rt+1

@L
@ t
=0) Ct + Kt+1 + Bt = wt Lt + Rt Kt + (1 + rt 1 )Bt 1

43/70
Decentralized Version

I Representative household (cont.)

t t+1
I From the first and the third FOC we get = Rt+1 . Then:
Ct Ct+1

Ct+1 = Rt+1 Ct

I From the second and third FOC we get:

Rt+1 = 1 + rt

I Dividing both sides of Ct+1 = Rt+1 Ct by At+1 Lt+1 we get:


Ct+1
A L
= Rt+1 (1+g)AC(1+n)L
t
. Then:
t+1 t+1 t t

ct
ct+1 = R
(1+g)(1+n) t+1

44/70
Decentralized Version

I Equilibrium

I From the FOC of the firm we know Rt+1 = ↵k↵ 1 . Substituting into
t+1
ct ↵
ct+1 = R
(1+g)(1+n) t+1
we get: ct+1 = k↵ 1 c
(1+g)(1+n) t+1 t
)

(1+n)(1+g) 1 ↵
ct = ↵
kt+1 ct+1

which coincides with the Euler Equation of the Planner.


I Market clearing in the credit market requires Bt = 0 8t (closed economy).
Imposing Bt = 0 into the budget constraint of the representative household,
we get: Ct + Kt+1 = wt Lt + Rt Kt . Since profits are zero, this becomes:
Ct + Kt+1 = Yt . Dividing by At Lt and rearranging appropriately:

ct + (1 + g)(1 + n)kt+1 = kt↵

which coincides with the constraint faced by the Planner.


I Hence, the equilibrium allocation of the decentralized competitive economy
coincides with the one chosen by the planner.

45/70
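As an illustration of this equivalence (my own sketch, not from the slides): compute the rental rate along the planner's path and check the household's Euler equation in levels, $C_{t+1} = \beta R_{t+1} C_t$.

```python
import numpy as np

alpha, rho, n, g, A0, L0, k0 = 0.3, 0.05, 0.03, 0.02, 10.0, 10.0, 0.01
beta = (1 + n) / (1 + rho)
growth = (1 + n) * (1 + g)

T = 40
k = np.empty(T + 1); k[0] = k0
for t in range(T):
    k[t + 1] = alpha * beta * k[t] ** alpha / growth   # planner's capital policy

t = np.arange(T + 1)
A, L = A0 * (1 + g) ** t, L0 * (1 + n) ** t
C = (1 - alpha * beta) * k ** alpha * A * L            # aggregate consumption
R = alpha * k ** (alpha - 1)                           # rental rate, R_t = alpha k_t^(alpha-1)

resid = C[1:] - beta * R[1:] * C[:-1]                  # household Euler equation in levels
print("max |C_{t+1} - beta R_{t+1} C_t| =", np.max(np.abs(resid)))   # ~ 0 up to rounding
```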
Stochastic Neoclassical Growth Model

I This is sometimes called the Brock-Mirman model.


I Brock, W. & Mirman, L. (1972). “Optimal Economic Growth and Uncertainty: The
Discounted Case,” Journal of Economic Theory , Vol. 4, No. 3 (June), pp. 479-513.

I In order to obtain a closed-form solution, we’ll make the same simplifying


assumptions as before,
I We modify the aggregate production function as follows:

Yt = ✓t F (Kt , At Lt )

where {ln ✓t }1
t=0 is a sequence of zero-mean i.i.d. random variables.

I F (•), At and Lt satisfy the same properties as before.


I By constant returns to scale:

yt = ✓t f (kt )

I With Cobb-Douglas technology:

yt = ✓t kt↵

46/70
Planner’s Problem and Bellman Equation

I The planner chooses contingent plans for consumption and capital in order to
solve:
1
P
max E0 t ln ct
t=0

s. t. ct + (1 + n)(1 + g)kt+1 = ✓t kt↵

k0 > 0, ✓0 > 0 given

where E0 is the conditional expectation as of time t = 0.


I At time t, the planner knows the history of realizations of the TFP shock (✓0 , ✓1 , ..., ✓t )
but not the future realizations.
I Since the TFP shock is i.i.d., we define the state of the system as (kt , ✓t ). There-
fore, the Bellman Equation is:

V (kt , ✓t ) = max ln ct + Et V (kt+1 , ✓t+1 )


ct ,kt+1

✓t kt↵ ct
s. t. kt+1 = (1+n)(1+g)

kt , ✓t given

47/70
Guess and Verify
- Guess:

      $V^g(k_t, \theta_t) = \Omega + \Theta \ln k_t + \Lambda \ln\theta_t$

  where $\Omega$, $\Theta$ and $\Lambda$ are parameters to be determined.
- Then:

      $\widetilde{V}(k_t, \theta_t) = \max_{c_t, k_{t+1}} \; \ln c_t + \beta\, E_t\{\Omega + \Theta \ln k_{t+1} + \Lambda \ln\theta_{t+1}\}$

      s.t. $\; k_{t+1} = \frac{\theta_t k_t^{\alpha} - c_t}{(1+n)(1+g)}$

      $\quad\;\; k_t, \theta_t$ given

- Using the constraint to eliminate $k_{t+1}$:

      $\widetilde{V}(k_t, \theta_t) = \max_{c_t} \; \ln c_t + \beta\, E_t\left\{\Omega + \Theta \ln\!\left(\frac{\theta_t k_t^{\alpha} - c_t}{(1+n)(1+g)}\right) + \Lambda \ln\theta_{t+1}\right\}$

- Then:

      $\widetilde{V}(k_t, \theta_t) = \max_{c_t} \; \ln c_t + \beta\Omega + \beta\Theta \ln\!\left(\frac{\theta_t k_t^{\alpha} - c_t}{(1+n)(1+g)}\right) + \beta\Lambda\, E_t \ln\theta_{t+1}$
48/70
Guess and Verify

I Since {ln ✓t+1 } is i.i.d. and has zero mean, Et ln ✓t+1 = E ln ✓t+1 = 0.
Then:
⇣ ⌘
✓t kt↵ ct
Ve (kt , ✓t ) = max ln ct + ⌦ + ⇥ ln (1+n)(1+g)
ct

I FOC:
1 1 1
ct
+ ⇥ ↵ c
✓t k t t (1+n)(1+g)
=0
(1+n)(1+g)

I Then:
1 ⇥
ct ✓t kt↵ ct
=0

I Then:
ct = 1
✓ k↵
1+ ⇥ t t

✓t kt↵ 1 ✓ k↵
I Substituting into the constraint: kt+1 = 1+ ⇥ t t
(1+n)(1+g)
)


kt+1 = 1
✓ k↵
(1+n)(1+g) 1+ ⇥ t t

49/70
Guess and Verify

- Evaluating the objective:

      $\widetilde{V}(k_t, \theta_t) = \ln\!\left(\frac{1}{1+\beta\Theta}\, \theta_t k_t^{\alpha}\right) + E_t\left\{\beta\Omega + \beta\Theta \ln\!\left(\frac{1}{(1+n)(1+g)}\, \frac{\beta\Theta}{1+\beta\Theta}\, \theta_t k_t^{\alpha}\right) + \beta\Lambda \ln\theta_{t+1}\right\}$

- Then:

      $\widetilde{V}(k_t, \theta_t) = \left[\ln\!\left(\frac{1}{1+\beta\Theta}\right) + \beta\Omega + \beta\Theta \ln\!\left(\frac{1}{(1+n)(1+g)}\, \frac{\beta\Theta}{1+\beta\Theta}\right)\right] + \alpha(1+\beta\Theta)\ln k_t + (1+\beta\Theta)\ln\theta_t$

- Therefore, $\widetilde{V}(k_t, \theta_t) = V^g(k_t, \theta_t) \;\forall (k_t, \theta_t)$ requires:

      $\Omega = \ln\!\left(\frac{1}{1+\beta\Theta}\right) + \beta\Omega + \beta\Theta \ln\!\left(\frac{1}{(1+n)(1+g)}\, \frac{\beta\Theta}{1+\beta\Theta}\right)$

      $\Theta = \alpha(1+\beta\Theta)$

      $\Lambda = 1+\beta\Theta$

50/70
Guess and Verify

- From the second equation:

      $\Theta = \frac{\alpha}{1-\alpha\beta}$

- Substituting the expression above into the third equation:

      $\Lambda = \frac{1}{1-\alpha\beta}$

- Finally, substituting $\Theta = \frac{\alpha}{1-\alpha\beta}$ into the first equation:

      $\Omega = \frac{1}{1-\beta}\left[\ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)}\right)\right]$

- Then:

      $V(k_t, \theta_t) = \frac{1}{1-\beta}\left[\ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)}\right)\right] + \frac{\alpha}{1-\alpha\beta}\ln k_t + \frac{1}{1-\alpha\beta}\ln\theta_t$

51/70
Guess and Verify

- Substituting $\Theta = \frac{\alpha}{1-\alpha\beta}$ into $c_t = \frac{1}{1+\beta\Theta}\, \theta_t k_t^{\alpha}$:

      $c_t = (1-\alpha\beta)\, \theta_t k_t^{\alpha}$

- Notice that

      $c_t = (1-\alpha\beta)\, y_t$

- Substituting $\Theta = \frac{\alpha}{1-\alpha\beta}$ into $k_{t+1} = \frac{1}{(1+n)(1+g)}\, \frac{\beta\Theta}{1+\beta\Theta}\, \theta_t k_t^{\alpha}$:

      $k_{t+1} = \frac{\alpha\beta}{(1+n)(1+g)}\, \theta_t k_t^{\alpha}$

- Notice that

      $k_{t+1} = \frac{\alpha\beta}{(1+n)(1+g)}\, y_t$

52/70
Nonstochastic Steady State

- Suppose that, in every period, $\ln\theta_t$ equals its mean:

      $\ln\theta_t = 0 \quad \forall t$

- Hence:

      $\theta_t = 1 \quad \forall t$

- Then:

      $k_{t+1} = \frac{\alpha\beta}{(1+n)(1+g)}\, k_t^{\alpha}$

- Setting $k_{t+1} = k_t = k_{nss}$ we get:

      $k_{nss} = \left[\frac{\alpha\beta}{(1+n)(1+g)}\right]^{\frac{1}{1-\alpha}}$

  where the subscript $nss$ stands for nonstochastic steady state.
- Then:

      $c_{nss} = (1-\alpha\beta)\, k_{nss}^{\alpha}$

      $y_{nss} = k_{nss}^{\alpha}$

53/70
Limiting Distribution

- Taking logs in $k_{t+1} = \frac{\alpha\beta}{(1+n)(1+g)}\, \theta_t k_t^{\alpha}$:

      $\ln k_{t+1} = \ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)}\right) + \alpha\ln k_t + \ln\theta_t$

- Then:

      $\ln k_{t+1} = \mu + \alpha\ln k_t + \ln\theta_t \qquad \forall t \geq 0$

  where $\mu \equiv \ln\!\left(\frac{\alpha\beta}{(1+n)(1+g)}\right)$.
- Now we'll iterate forward on the expression above.
- At $t = 1$:

      $\ln k_1 = \mu + \alpha\ln k_0 + \ln\theta_0$

54/70
Limiting Distribution

- At $t = 2$:

      $\ln k_2 = \mu + \alpha\ln k_1 + \ln\theta_1$
      $\quad\;\;\, = \mu + \alpha\left(\mu + \alpha\ln k_0 + \ln\theta_0\right) + \ln\theta_1$
      $\quad\;\;\, = (1+\alpha)\mu + \alpha^2\ln k_0 + \alpha\ln\theta_0 + \ln\theta_1$

- At $t = 3$:

      $\ln k_3 = \mu + \alpha\ln k_2 + \ln\theta_2$
      $\quad\;\;\, = \mu + \alpha\left[(1+\alpha)\mu + \alpha^2\ln k_0 + \alpha\ln\theta_0 + \ln\theta_1\right] + \ln\theta_2$
      $\quad\;\;\, = (1+\alpha+\alpha^2)\mu + \alpha^3\ln k_0 + \alpha^2\ln\theta_0 + \alpha\ln\theta_1 + \ln\theta_2$

- ...
- Hence, $\forall t \geq 1$:

      $\ln k_t = (1+\alpha+\alpha^2+\dots+\alpha^{t-1})\mu + \alpha^t\ln k_0 + \alpha^{t-1}\ln\theta_0 + \alpha^{t-2}\ln\theta_1 + \dots + \alpha\ln\theta_{t-2} + \ln\theta_{t-1}$

55/70
Limiting Distribution

- Then:

      $\ln k_t = \mu\sum_{s=0}^{t-1}\alpha^s + \alpha^t\ln k_0 + \sum_{s=0}^{t-1}\alpha^s\ln\theta_{t-1-s}$

- Or:

      $\ln k_t = \mu\,\frac{1-\alpha^t}{1-\alpha} + \alpha^t\ln k_0 + \alpha^{t-1}\ln\theta_0 + \sum_{s=0}^{t-2}\alpha^s\ln\theta_{t-1-s}$

- From now on assume:

      $\ln\theta_t \sim N(0, \sigma^2) \quad \forall t$

- Notice that, given $k_0$ and $\ln\theta_0$, $\ln k_t$ is a linear combination of independent normally-distributed random variables.
- Therefore, given $k_0$ and $\ln\theta_0$, $\ln k_t$ is normally distributed.
- We can calculate its mean and variance.

56/70
Limiting Distribution

- The mean is:

      $E_0\ln k_t = E_0\left\{\mu\,\frac{1-\alpha^t}{1-\alpha} + \alpha^t\ln k_0 + \alpha^{t-1}\ln\theta_0 + \sum_{s=0}^{t-2}\alpha^s\ln\theta_{t-1-s}\right\}$

      $E_0\ln k_t = \mu\,\frac{1-\alpha^t}{1-\alpha} + \alpha^t\ln k_0 + \alpha^{t-1}\ln\theta_0 + \sum_{s=0}^{t-2}\alpha^s\, E_0\ln\theta_{t-1-s}$

      $E_0\ln k_t = \mu\,\frac{1-\alpha^t}{1-\alpha} + \alpha^t\ln k_0 + \alpha^{t-1}\ln\theta_0 + \sum_{s=0}^{t-2}\alpha^s \times 0$

- Then:

      $E_0\ln k_t = \mu\,\frac{1-\alpha^t}{1-\alpha} + \alpha^t\ln k_0 + \alpha^{t-1}\ln\theta_0$

57/70
Limiting Distribution

- The variance is:

      $\mathrm{Var}_0\{\ln k_t\} = \mathrm{Var}_0\left\{\mu\,\frac{1-\alpha^t}{1-\alpha} + \alpha^t\ln k_0 + \alpha^{t-1}\ln\theta_0 + \sum_{s=0}^{t-2}\alpha^s\ln\theta_{t-1-s}\right\}$

      $\mathrm{Var}_0\{\ln k_t\} = \mathrm{Var}_0\left\{\sum_{s=0}^{t-2}\alpha^s\ln\theta_{t-1-s}\right\}$

      $\mathrm{Var}_0\{\ln k_t\} = \sum_{s=0}^{t-2}\left(\alpha^s\right)^2\, \mathrm{Var}\{\ln\theta_{t-1-s}\} \qquad \text{(independence)}$

      $\mathrm{Var}_0\{\ln k_t\} = \sum_{s=0}^{t-2}\alpha^{2s}\, \sigma^2 \qquad \text{(identically distributed)}$

      $\mathrm{Var}_0\{\ln k_t\} = \sigma^2 \sum_{s=0}^{t-2}\left(\alpha^2\right)^s$

- Then:

      $\mathrm{Var}_0\{\ln k_t\} = \sigma^2\, \frac{1-\alpha^{2(t-1)}}{1-\alpha^2}$

58/70
Limiting Distribution
- Now we take limits for $t \to \infty$.
- Mean

      $\lim_{t\to\infty} E_0\ln k_t = \lim_{t\to\infty}\left[\mu\,\frac{1-\alpha^t}{1-\alpha} + \alpha^t\ln k_0 + \alpha^{t-1}\ln\theta_0\right]$

      $\lim_{t\to\infty} E_0\ln k_t = \mu\lim_{t\to\infty}\frac{1-\alpha^t}{1-\alpha} + \ln k_0\lim_{t\to\infty}\alpha^t + \ln\theta_0\lim_{t\to\infty}\alpha^{t-1}$

      $\lim_{t\to\infty} E_0\ln k_t = \frac{\mu}{1-\alpha}$

- Variance

      $\lim_{t\to\infty}\mathrm{Var}_0\{\ln k_t\} = \lim_{t\to\infty}\left[\sigma^2\,\frac{1-\left(\alpha^2\right)^{t-1}}{1-\alpha^2}\right]$

      $\lim_{t\to\infty}\mathrm{Var}_0\{\ln k_t\} = \frac{\sigma^2}{1-\alpha^2}\lim_{t\to\infty}\left[1-\left(\alpha^2\right)^{t-1}\right]$

      $\lim_{t\to\infty}\mathrm{Var}_0\{\ln k_t\} = \frac{\sigma^2}{1-\alpha^2}$

59/70
Limiting Distribution
- Then:

      $\ln k_{\infty} \sim N\!\left(\frac{\mu}{1-\alpha},\; \frac{\sigma^2}{1-\alpha^2}\right)$

  where $\ln k_{\infty}$ is just notation for the limiting distribution.
- From $y_t = \theta_t k_t^{\alpha}$:

      $\ln y_t = \ln\theta_t + \alpha\ln k_t$

- Hence:

      $\ln y_{\infty} \sim N\!\left(\frac{\alpha\mu}{1-\alpha},\; \frac{\sigma^2}{1-\alpha^2}\right)$

- The same can be done for consumption, $c_t = (1-\alpha\beta)\, y_t$.
- Important: the steady state of a stochastic model is not a set of numbers; it is characterized by an invariant distribution function for the endogenous variables.

60/70
Simulations

- Same parametrization as before:

      $\alpha = 0.3, \quad \rho = 0.05, \quad n = 0.03, \quad g = 0.02, \quad A_0 = 10, \quad L_0 = 10$

- Then:

      $\hat{b} = 0.952, \quad \beta = 0.981, \quad s = 0.294$

      $k_{nss} = 0.162, \quad y_{nss} = 0.580, \quad c_{nss} = 0.409$

- Now we add $\ln\theta_t \sim N(0, \sigma^2)$ with

      $\sigma = 0.05$

- Assume

      $k_0 = k_{nss}$

61/70
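A minimal sketch of the stochastic simulation (my code; the seed and sample length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, rho, n, g, A0, L0, sigma = 0.3, 0.05, 0.03, 0.02, 10.0, 10.0, 0.05
beta = (1 + n) / (1 + rho)
growth = (1 + n) * (1 + g)

T = 200
k_nss = (alpha * beta / growth) ** (1 / (1 - alpha))
theta = np.exp(rng.normal(0.0, sigma, T))       # ln(theta_t) ~ N(0, sigma^2), i.i.d.

k = np.empty(T + 1); k[0] = k_nss               # start at the nonstochastic steady state
for t in range(T):
    k[t + 1] = alpha * beta * theta[t] * k[t] ** alpha / growth

y = theta * k[:-1] ** alpha                     # normalized output
t = np.arange(T)
ln_Y = np.log(y * A0 * (1 + g) ** t * L0 * (1 + n) ** t)   # log of aggregate output
ln_y_hat = np.log(y * A0 * (1 + g) ** t)                    # log of output per capita

print("mean of y over the sample:", y.mean())   # fluctuates around y_nss = 0.580
```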
Simulations
- Normalized output ($y$)
- Log of aggregate output ($\ln Y$) and log of output per capita ($\ln\hat{y}$)

62/70
Simulations

- Now assume:

      $k_0 = 0.01 < k_{nss}$

- Log of aggregate output ($\ln Y$) and log of output per capita ($\ln\hat{y}$)

63/70
Simulations

- Limiting distribution of $\ln y$
  - The histogram has been normalized to have area = 1, like a density.
  - The red line is the theoretical limiting distribution: $\ln y_{\infty} \sim N\!\left(-0.5454,\; 0.0524^2\right)$.

64/70
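A rough check of the limiting distribution (my addition): compare the theoretical mean and standard deviation of $\ln y$ with sample moments from a long simulated path of the log-linear law of motion.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, rho, n, g, sigma = 0.3, 0.05, 0.03, 0.02, 0.05
beta = (1 + n) / (1 + rho)
mu = np.log(alpha * beta / ((1 + n) * (1 + g)))

print("theoretical mean of ln y :", alpha * mu / (1 - alpha))          # -0.5454
print("theoretical s.d. of ln y :", sigma / np.sqrt(1 - alpha ** 2))   #  0.0524

T, burn = 200_000, 1_000
ln_theta = rng.normal(0.0, sigma, T)
ln_k = np.empty(T + 1); ln_k[0] = mu / (1 - alpha)
for t in range(T):
    ln_k[t + 1] = mu + alpha * ln_k[t] + ln_theta[t]    # ln k_{t+1} = mu + alpha ln k_t + ln theta_t
ln_y = ln_theta + alpha * ln_k[:-1]

print("simulated mean of ln y   :", ln_y[burn:].mean())
print("simulated s.d. of ln y   :", ln_y[burn:].std())
```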
Impulse-Response Functions
- Evolution of $\theta_t$
- Response of $y_t$

65/70
Impulse-Response Functions
- Response of $\ln Y_t$
- Response of $\ln\hat{y}_t$

66/70
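A sketch of how such impulse responses can be computed (my code; the shock size, timing, and horizon are arbitrary choices): feed a single one-standard-deviation innovation to $\ln\theta$ into the log-linear law of motion, starting from the nonstochastic steady state.

```python
import numpy as np

alpha, rho, n, g, sigma = 0.3, 0.05, 0.03, 0.02, 0.05
beta = (1 + n) / (1 + rho)
mu = np.log(alpha * beta / ((1 + n) * (1 + g)))

T = 40
ln_theta = np.zeros(T); ln_theta[1] = sigma        # one-standard-deviation shock at t = 1
ln_k = np.full(T + 1, mu / (1 - alpha))            # start (and stay, absent shocks) at ln k_nss
for t in range(T):
    ln_k[t + 1] = mu + alpha * ln_k[t] + ln_theta[t]
ln_y = ln_theta + alpha * ln_k[:-1]

irf_y = ln_y - alpha * mu / (1 - alpha)            # deviation of ln y from its nonstochastic level
print(np.round(irf_y[:8], 4))                      # jump on impact, then geometric decay at rate alpha
```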
Appendix: Phase Diagram for the Deterministic Model

- Recall the Euler Equation:

      $c_{t+1} = \frac{\alpha\beta}{(1+n)(1+g)}\, k_{t+1}^{-(1-\alpha)}\, c_t$

- Then:

      $c_{t+1} - c_t = \left[\frac{\alpha\beta}{(1+n)(1+g)}\, k_{t+1}^{-(1-\alpha)} - 1\right] c_t$

- Using $k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}$ to eliminate $k_{t+1}$ we get:

      $c_{t+1} - c_t = \left[\frac{\alpha\beta}{(1+n)(1+g)}\left(\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}\right)^{-(1-\alpha)} - 1\right] c_t$

- Also, from $k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}$ we get:

      $k_{t+1} - k_t = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)} - k_t$

67/70
Appendix: Phase Diagram for the Deterministic Model

- Hence, we have a system of two difference equations in $k$ and $c$:

      $\Delta c_{t+1} = \left[\frac{\alpha\beta}{(1+n)(1+g)}\left(\frac{k_t^{\alpha} - c_t}{(1+n)(1+g)}\right)^{-(1-\alpha)} - 1\right] c_t$

      $\Delta k_{t+1} = \frac{k_t^{\alpha} - c_t}{(1+n)(1+g)} - k_t$

  where $\Delta x_{t+1} \equiv x_{t+1} - x_t$.
- The $\Delta c_{t+1} = 0$ and $\Delta k_{t+1} = 0$ schedules are:

      $\Delta c_{t+1} = 0: \quad c_t = k_t^{\alpha} - (1+n)(1+g)\, k_{ss}$

      $\Delta k_{t+1} = 0: \quad c_t = k_t^{\alpha} - (1+n)(1+g)\, k_t$

  where $k_{ss} = \left[\frac{\alpha\beta}{(1+n)(1+g)}\right]^{\frac{1}{1-\alpha}}$.

68/70
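The two schedules and the policy function can be plotted directly (an illustrative matplotlib sketch, not the original figure code, using the $\alpha = 0.7$ parametrization mentioned in the remarks on the last slide):

```python
import numpy as np
import matplotlib.pyplot as plt

alpha, rho, n, g = 0.7, 0.05, 0.03, 0.02
beta = (1 + n) / (1 + rho)
growth = (1 + n) * (1 + g)
k_ss = (alpha * beta / growth) ** (1 / (1 - alpha))

k = np.linspace(0.01, 0.6, 400)
dc0 = k ** alpha - growth * k_ss           # Delta c = 0 schedule
dk0 = k ** alpha - growth * k              # Delta k = 0 schedule
policy = (1 - alpha * beta) * k ** alpha   # policy function c = (1 - alpha*beta) k^alpha

plt.plot(k, dc0, label=r"$\Delta c = 0$")
plt.plot(k, dk0, label=r"$\Delta k = 0$")
plt.plot(k, policy, label="policy function")
plt.xlabel("k"); plt.ylabel("c"); plt.legend(); plt.show()
```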
Appendix: Phase Diagram for the Deterministic Model

- Now we can plot the two schedules given above, together with the policy function

      $c_t = (1-\alpha\beta)\, k_t^{\alpha}$

- Phase diagram and policy function

69/70
Appendix: Phase Diagram for the Deterministic Model

- Remarks
  - For the plot we have set $\alpha = 0.7$, $\rho = 0.05$, $n = 0.03$, and $g = 0.02$. These values give $k_{ss} = 0.242$, $y_{ss} = 0.371$, and $c_{ss} = 0.116$.
  - The three lines cross at $k = k_{ss}$.
  - The arrows indicate the qualitative dynamics of the system.
  - Notice that the policy function is the stable arm of the system.
  - In the continuous-time version of this model, the $\dot{c} = 0$ schedule would be a vertical line at $k = k_{ss}$.

70/70
