Chapter 3
Numerical Methods for Solution of Ordinary Differential Equations

1 Introduction
As described before, many mathematical models of systems of interest in engineering and the
sciences are formulated in terms of differential equations. In these equations, the function
of interest is given implicitly in terms of one or more independent variables by means of
a relationship involving the function and its derivatives. Sometimes, the given differential
equation is simple enough that an exact solution can be determined using standard analytical
methods. However, in many cases this is not possible and one must resort to one of the various
available numerical methods in order to determine a numerical approximation to the desired
solution. Numerical methods exist to solve both ordinary and partial differential equations.
This chapter contains a summary of useful numerical techniques applicable to the solution
of initial and boundary value problems for ordinary differential equations.
Ordinary differential equation problems where conditions are specified only at one end of the domain of the independent variable are called Initial Value Problems (IVPs). IVPs are typically associated with the time evolution of a dynamical system from a specified initial condition of the system.
Boundary Value Problems (BVPs) for ordinary differential equations involve the determination of a function of a single independent variable satisfying a given differential equation, subject to specified values at the boundaries of the solution domain. Second and fourth order BVPs are most common in engineering applications.
2 Initial Value Problems for Ordinary Differential Equations
An initial value problem in first order ordinary differential equations consists of an ordinary differential equation for the variable y(t), t ∈ [a, b],

dy/dt = f(t, y)

together with an initial condition

y(a) = α

where f(t, y) and α are given.
Initial value problems are also encountered that involve systems of equations where for vectors y(t) = (y1, y2, ..., yn) and f(t, y) = (f1(t, y), f2(t, y), ..., fn(t, y)) one has the relationship

dy/dt = f(t, y)

subject to

y(a) = (α1, α2, ..., αn)
Higher order IVPs appear when the given differential equation is of order greater than
one (while conditions are specified only at one end of the solution domain). These problems
are solved by first transforming them into an equivalent system of first order IVPs. For
instance, the problem of determining the function y(t) satisfying

d²y/dt² = −ω²y

subject to

y(0) = y0

and

dy/dt(0) = 0

can be transformed into the following system of two first-order IVPs

dy1/dt = y2

and

dy2/dt = −ω²y1

subject respectively to

y1(0) = y0

and

y2(0) = 0
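As an illustration of the reduction (a sketch, not part of the original notes; the values ω = 2, y0 = 1 and the step count are arbitrary choices), the first-order system can be advanced with any IVP method, for example forward Euler:

```python
import math

def oscillator_rhs(t, y, omega=2.0):
    """First-order system equivalent to y'' = -omega^2 y:
    y[0] holds y, y[1] holds dy/dt."""
    return [y[1], -omega**2 * y[0]]

def euler_system(f, a, b, y0, n):
    """Forward Euler for a system dy/dt = f(t, y), y(a) = y0."""
    h = (b - a) / n
    t, y = a, list(y0)
    for _ in range(n):
        dy = f(t, y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
    return y

# y(0) = 1, y'(0) = 0 gives the exact solution y(t) = cos(omega * t)
approx = euler_system(oscillator_rhs, 0.0, 1.0, [1.0, 0.0], 100000)
exact = math.cos(2.0 * 1.0)
```

With this many crude Euler steps the system form already tracks cos(ωt) closely; the higher order methods developed below reach the same accuracy with far fewer steps.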
2.1 Some Useful Definitions and Theorems
A function f(t, y) is Lipschitz in y if there is an L > 0 such that for any pair (t, yi), (t, yj ) ∈
D ⊂ ℜ2
|f(t, yi) − f(t, yj )| ≤ L|yi − yj |
where L is the Lipschitz constant for f.
D ⊂ ℜ2 is convex if whenever (t1 , y1), (t2, y2) ∈ D ⊂ ℜ2, ((1 − λ)t1 + λt2 , (1 − λ)y1 + λy2 )
also belongs to D for each λ ∈ [0, 1].
If f(t, y) is defined on a convex set D and a constant L > 0 exists such that |∂f/∂y| ≤ L on D, then f is Lipschitz in y on D with Lipschitz constant L.
If f is continuous and Lipschitz in y on D, the solution y(t), a ≤ t ≤ b, to the IVP

y′(t) = f(t, y), y(a) = α

is unique.
The IVP is said to be well posed if its solution is unique and if for any ε > 0 there is a k(ε) > 0 such that whenever |ε0| < ε and |δ(t)| < ε, the perturbed problem

z′(t) = f(t, z) + δ(t), z(a) = α + ε0

also has a unique solution z(t) with

|z(t) − y(t)| < k(ε)ε for all a ≤ t ≤ b

If D = {(t, y) | a ≤ t ≤ b, −∞ < y < ∞} and f(t, y) is continuous and Lipschitz in y on D, the IVP y′(t) = f(t, y), y(a) = α is well posed.
2.2 Euler's Method
Consider the well posed IVP
y ′(t) = f(t, y),
a ≤ t ≤ b,
y(a) = α
Create now a set of uniformly spaced (step size h = (b − a)/N) mesh points in [a, b] such
that ti = a + ih, i = 0, 1, 2, ...N. Assuming now that y(t) ∈ C 2[a, b]
y(ti+1) = y(ti) + (ti+1 − ti) y′(ti) + ((ti+1 − ti)²/2) y′′(ξi)
        = y(ti) + h y′(ti) + (h²/2) y′′(ξi)
        = y(ti) + h f(ti, y(ti)) + (h²/2) y′′(ξi)

for some numbers ξi ∈ [ti, ti+1] and with h = ti+1 − ti. Euler's method is the approximation wi ≈ y(ti) obtained when the remainder term is deleted. I.e.

w0 = α
wi+1 = wi + h f(ti, wi)

This is called the difference equation of Euler's method.
Euler’s Method Algorithm
• Give function f(t, y), initial condition, endpoints a and b, and the number of subintervals N.
• Set h = (b − a)/N; t = a, w = α.
• For i = 1, 2, ..., N compute w = w + hf(t, w) and t = t + h
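A minimal Python sketch of the algorithm above (the test problem y′ = y − t² + 1, y(0) = 0.5 is the one used in Exercise 11 later in this chapter; its exact solution is y(t) = (t + 1)² − 0.5eᵗ). Halving h should roughly halve the error, consistent with a first order method:

```python
import math

def euler(f, a, b, alpha, N):
    """Euler's method: w_{i+1} = w_i + h f(t_i, w_i)."""
    h = (b - a) / N
    t, w = a, alpha
    for _ in range(N):
        w = w + h * f(t, w)
        t = t + h
    return w

f = lambda t, y: y - t**2 + 1
exact = lambda t: (t + 1)**2 - 0.5 * math.exp(t)

err_coarse = abs(euler(f, 0.0, 2.0, 0.5, 100) - exact(2.0))
err_fine = abs(euler(f, 0.0, 2.0, 0.5, 200) - exact(2.0))
ratio = err_coarse / err_fine   # expected to approach 2 as h -> 0
```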
The following theorem affirms the existence of a bound for the truncation error in Euler's method. If f(t, y) is continuous and Lipschitz with constant L in y on D and |y′′(t)| ≤ M < ∞, the error in using Euler's method is bounded and the bound is given by

|y(ti) − wi| ≤ (hM/2L)[exp(L(ti − a)) − 1]

If roundoff error (bounded by δ per step, with δ0 the error in the initial condition) is included, there is a corresponding bound on the total error and this is

|y(ti) − wi| ≤ (1/L)(hM/2 + δ/h)[exp(L(ti − a)) − 1] + |δ0| exp(L(ti − a))

Therefore, there is an optimum step size h* = √(2δ/M) giving the lowest total error.
2.3 Higher Order Taylor Methods
From the above, it is clear that higher order accuracy is required. The difference method
w0 = α
wi+1 = wi + hφ(ti, wi )
for each i = 0, 1, ..., N − 1 has a local truncation error given by

τi+1(h) = [y(ti+1) − (y(ti) + hφ(ti, y(ti)))]/h = (yi+1 − yi)/h − φ(ti, yi)

for each i = 0, 1, ..., N − 1. For example, for Euler's method, the local error is

τi+1(h) = (yi+1 − yi)/h − f(ti, yi) = (h/2) y′′(ξi)
so that τi+1 (h) = O(h). One would like numerical methods such that τi+1 (h) = O(hp ) with
p >> 1 but with a manageable amount of computation. One such method is the Taylor
Method of Order n
w0 = α
wi+1 = wi + hT (n)(ti , wi )
for each i = 0, 1, 2, ..., N − 1, where

T(n)(ti, wi) = f(ti, wi) + (h/2) f′(ti, wi) + ... + (h^(n−1)/n!) f^(n−1)(ti, wi)
If Taylor's method of order n is used to approximate the solution to the problem

y′(t) = f(t, y(t))

in a ≤ t ≤ b with y(a) = α and y ∈ C^(n+1)[a, b], with step size h, then the local truncation error is O(hⁿ).
2.4 Runge-Kutta Methods
Higher order methods which do not require calculation of the derivatives of f have been developed. The Runge-Kutta family of methods is in this class. To motivate these methods one needs the n-th Taylor polynomial in two variables:
If f(t, y) and all its partial derivatives of order ≤ n + 1 are continuous on D then
f(t, y) = Pn (t, y) + Rn (t, y). Here Pn (t, y) is the n-th Taylor polynomial in two variables and
Rn (t, y) is the remainder.
Consider now the Taylor polynomial of order two,

T(2)(t, y) = f(t, y) + (h/2) ∂f/∂t + (h/2) f(t, y) ∂f/∂y = f(t + h/2, y + (h/2) f(t, y)) − R1

where R1 = O(h²). If R1 is discarded and the resulting expression for T(2) is used in Taylor's method formula of order two, one obtains the midpoint method:

w0 = α
wi+1 = wi + h f(ti + h/2, wi + (h/2) f(ti, wi))

for each i = 0, 1, ..., N − 1. Other RK methods of order two are also available.
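A sketch of the midpoint method on an assumed test problem (y′ = y − t² + 1, y(0) = 0.5, exact solution (t + 1)² − 0.5eᵗ); halving h should reduce the error by roughly a factor of four, consistent with a second order method:

```python
import math

def midpoint(f, a, b, alpha, N):
    """Midpoint (RK2): w_{i+1} = w_i + h f(t_i + h/2, w_i + (h/2) f(t_i, w_i))."""
    h = (b - a) / N
    t, w = a, alpha
    for _ in range(N):
        w = w + h * f(t + h / 2, w + (h / 2) * f(t, w))
        t = t + h
    return w

f = lambda t, y: y - t**2 + 1
exact = (2.0 + 1)**2 - 0.5 * math.exp(2.0)   # exact y(2) for y(0) = 0.5

e1 = abs(midpoint(f, 0.0, 2.0, 0.5, 50) - exact)
e2 = abs(midpoint(f, 0.0, 2.0, 0.5, 100) - exact)
ratio = e1 / e2   # expected to approach 4 as h -> 0
```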
The Runge-Kutta method of order four is the most common RK method. This is as follows

w0 = α
k1 = h f(ti, wi)
k2 = h f(ti + h/2, wi + k1/2)
k3 = h f(ti + h/2, wi + k2/2)
k4 = h f(ti+1, wi + k3)
wi+1 = wi + (1/6)(k1 + 2k2 + 2k3 + k4)
This method has local truncation error O(h4 ). The method is implemented in Maple with
the rungekutta command.
Runge-Kutta Order Four Algorithm.
• Give function f(t, y), initial condition, endpoints a and b, and the number of subintervals N.
• Set h = (b − a)/N; t = a, w = α.
• For i = 1, 2, ..., N compute K1 = hf(t, w), K2 = hf(t + h/2, w + K1/2), K3 = hf(t + h/2, w + K2/2), K4 = hf(t + h, w + K3), w = w + (K1 + 2K2 + 2K3 + K4)/6 and t = t + h
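The algorithm can be sketched as follows (the test problem y′ = y − t² + 1, y(0) = 0.5, with exact solution (t + 1)² − 0.5eᵗ, is an assumed choice):

```python
import math

def rk4(f, a, b, alpha, N):
    """Classical fourth order Runge-Kutta for y' = f(t, y), y(a) = alpha."""
    h = (b - a) / N
    t, w = a, alpha
    for _ in range(N):
        K1 = h * f(t, w)
        K2 = h * f(t + h / 2, w + K1 / 2)
        K3 = h * f(t + h / 2, w + K2 / 2)
        K4 = h * f(t + h, w + K3)
        w = w + (K1 + 2 * K2 + 2 * K3 + K4) / 6
        t = t + h
    return w

f = lambda t, y: y - t**2 + 1
exact = (2.0 + 1)**2 - 0.5 * math.exp(2.0)   # exact y(2) for y(0) = 0.5
err = abs(rk4(f, 0.0, 2.0, 0.5, 20) - exact)
```

Even with only 20 steps the fourth order method is far more accurate than Euler's method with hundreds of steps.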
2.5 Runge-Kutta-Fehlberg Method
Often, the use of non-uniform step sizes could be advantageous. If the step size is selected
so that the approximation stays within a pre-selected value of the (global) truncation error
one obtains an adaptive method. Consider two approximations to an IVP differing only in
order and obtained using a fixed step size h.
w0 = α
wi+1 = wi + h φ(ti, wi, h)

with τi+1(h) = O(hⁿ), and

w*0 = α
w*i+1 = w*i + h φ*(ti, w*i, h)

with τ*i+1(h) = O(h^(n+1)). Now if wi ≈ w*i ≈ y(ti), then

τi+1(h) ≈ (w*i+1 − wi+1)/h ≈ Khⁿ
Since we want to vary the step size, pick a q such that the new spacing is qh. Therefore

τi+1(qh) ≈ K(qh)ⁿ ≈ qⁿ (w*i+1 − wi+1)/h

So, to bound τi+1(qh) by ε, pick q so that

q ≤ (εh/|w*i+1 − wi+1|)^(1/n)
In practice

q ≤ 0.84 (εh/|w*i+1 − wi+1|)^(1/4)
In the Runge-Kutta-Fehlberg method one uses order five for w*i+1 and order four for wi+1. In practice, an initial value of h is selected and q is then calculated from w*i+1 and wi+1. The initial choice of h is adjusted or not depending on whether the required tolerance has been achieved.
Runge-Kutta-Fehlberg Algorithm.
• Give function f(t, y), initial condition, endpoints a and b, the desired tolerance level,
and the maximum and minimum step sizes hmax and hmin .
• Compute K1 through K6 and R
• If R ≤ TOL then update w and advance t. In either case set δ = 0.84(TOL/R)^(1/4) and set h = 0.1h (if δ ≤ 0.1), h = 4h (if δ ≥ 4) or h = δh (if 0.1 < δ < 4).
• Continue as long as t < b and hmin < h < hmax .
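The accept/reject and step-rescaling logic can be sketched as follows. Note this is not the actual Fehlberg tableau (whose coefficients are lengthy); as an assumed simplification, the two approximations of different accuracy are obtained by comparing one RK4 step of size h against two steps of size h/2, while keeping the δ = 0.84(TOL/R)^(1/4) update and the 0.1h to 4h limits of the algorithm above:

```python
import math

def rk4_step(f, t, w, h):
    K1 = h * f(t, w)
    K2 = h * f(t + h / 2, w + K1 / 2)
    K3 = h * f(t + h / 2, w + K2 / 2)
    K4 = h * f(t + h, w + K3)
    return w + (K1 + 2 * K2 + 2 * K3 + K4) / 6

def adaptive_rk4(f, a, b, alpha, tol, hmin=1e-6, hmax=0.5):
    """Adaptive stepping in the spirit of RKF: compare two approximations of
    different accuracy, accept when the error estimate R is below tol, and
    rescale h by delta = 0.84 (tol/R)^(1/4), limited to the range [0.1, 4]."""
    t, w, h = a, alpha, hmax
    while b - t > 1e-12:
        h = min(h, b - t)
        w_big = rk4_step(f, t, w, h)
        w_half = rk4_step(f, t + h / 2, rk4_step(f, t, w, h / 2), h / 2)
        R = abs(w_half - w_big) / h
        if R <= tol or h <= hmin:   # accept (hmin guards against underflow)
            t += h
            w = w_half              # keep the more accurate approximation
        delta = 0.84 * (tol / max(R, 1e-16)) ** 0.25
        h = max(hmin, h * min(4.0, max(0.1, delta)))
    return w

f = lambda t, y: y - t**2 + 1
exact = (2.0 + 1)**2 - 0.5 * math.exp(2.0)   # exact y(2) for y(0) = 0.5
err = abs(adaptive_rk4(f, 0.0, 2.0, 0.5, tol=1e-6) - exact)
```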
2.6 Multistep Methods
In multi-step methods information at more than two mesh points is used to construct the
approximation.
For the problem
y ′(t) = f(t, y),
a ≤ t ≤ b,
y(a) = α
an m-step approximation method has the form
wi+1 = am−1 wi + am−2 wi−1 + ... + a0wi−m+1 +
h[bmf(ti+1 , wi+1 ) + bm−1 f(ti , wi ) + ... + b0f(ti−m+1 , wi−m+1 )]
Here, the ai ’s and bi’s are constants and the starting values
w0 = α, w1 = α1, w2 = α2 , ..., wm−1 = αm−1
must be specified.
If bm = 0 the method is called explicit. An example is the fourth order, four-step Adams-Bashforth method (AB4)

w0 = α, w1 = α1, w2 = α2, w3 = α3

wi+1 = wi + (h/24)(55f(ti, wi) − 59f(ti−1, wi−1) + 37f(ti−2, wi−2) − 9f(ti−3, wi−3))

for each i = 3, 4, ..., N − 1.
If bm ≠ 0 the method is implicit. An example is the fourth order, three-step Adams-Moulton method (AM4)

w0 = α, w1 = α1, w2 = α2

wi+1 = wi + (h/24)(9f(ti+1, wi+1) + 19f(ti, wi) − 5f(ti−1, wi−1) + f(ti−2, wi−2))

for each i = 2, 3, 4, ..., N − 1.
If y(t) is the solution of y′(t) = f(t, y), a ≤ t ≤ b, y(a) = α approximated by

wi+1 = am−1 wi + am−2 wi−1 + ... + a0 wi−m+1 + h[bm f(ti+1, wi+1) + bm−1 f(ti, wi) + ... + b0 f(ti−m+1, wi−m+1)]

the local truncation error at the (i + 1)th step can be defined.
Most importantly, the local truncation error for AB4 is

τ^(AB4)_(i+1)(h) = (251/720) y⁽⁵⁾(µi) h⁴

while the one for AM4 is

τ^(AM4)_(i+1)(h) = −(19/720) y⁽⁵⁾(µi) h⁴
However, AM4 requires only three points while AB4 requires four!
It is common to combine an approximating prediction obtained explicitly with a correction obtained implicitly. These are called predictor-corrector methods. This is how the
Adams Fourth Order Predictor Corrector (A4PC) method works:
1.- Starting with w0, the values w1, w2, w3 are calculated using a single step method like RK4.
2.- Then, AB4 is used to predict w4⁰.
3.- Finally, AM4 is used to correct the prediction, giving w4¹.
4.- To improve the prediction, one can either reduce h (preferred) or iterate around AM4.
Adams Fourth Order Predictor-Corrector (A4PC) Algorithm.
• Give function f(t, y), initial condition α, endpoints a and b.
• Set h = (b − a)/N; t0 = a; w0 = α
• Compute starting values i = 1, 2, 3 using RK4 method.
• For i > 3 use AB4 to predict w and AM4 to correct w.
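A sketch of A4PC with RK4 starting values, an AB4 prediction and a single AM4 correction (the test problem y′ = y − t² + 1, y(0) = 0.5, with exact solution (t + 1)² − 0.5eᵗ, is an assumed choice):

```python
import math

def rk4_step(f, t, w, h):
    K1 = h * f(t, w)
    K2 = h * f(t + h / 2, w + K1 / 2)
    K3 = h * f(t + h / 2, w + K2 / 2)
    K4 = h * f(t + h, w + K3)
    return w + (K1 + 2 * K2 + 2 * K3 + K4) / 6

def a4pc(f, a, b, alpha, N):
    """Adams fourth order predictor-corrector: RK4 starting values,
    AB4 predictor, one AM4 correction per step."""
    h = (b - a) / N
    t = [a + i * h for i in range(N + 1)]
    w = [alpha]
    for i in range(3):                      # starting values w1, w2, w3
        w.append(rk4_step(f, t[i], w[i], h))
    F = [f(t[i], w[i]) for i in range(4)]   # stored derivative values
    for i in range(3, N):
        # AB4 predictor
        wp = w[i] + h / 24 * (55*F[i] - 59*F[i-1] + 37*F[i-2] - 9*F[i-3])
        # AM4 corrector, using f evaluated at the predicted value
        wc = w[i] + h / 24 * (9*f(t[i+1], wp) + 19*F[i] - 5*F[i-1] + F[i-2])
        w.append(wc)
        F.append(f(t[i+1], wc))
    return w[-1]

f = lambda t, y: y - t**2 + 1
exact = (2.0 + 1)**2 - 0.5 * math.exp(2.0)   # exact y(2) for y(0) = 0.5
err = abs(a4pc(f, 0.0, 2.0, 0.5, 100) - exact)
```

Note that after the starting phase each step needs only two new evaluations of f (one for the predictor stage and one for the corrector), versus four for RK4.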
2.7 Variable Step-Size Multistep Methods
Let the A4PC method be used to solve an IVP, producing the predictor approximation w⁰i+1 and the corrector approximation wi+1. Assuming that h is small, subtraction of the truncation error equations leads to

y⁽⁵⁾(µi) ≈ (8/(3h⁵))(wi+1 − w⁰i+1)

Backsubstituting into τ^(AM4)_(i+1)(h) produces

|τ^(AM4)_(i+1)(h)| ≈ 19|wi+1 − w⁰i+1|/(270h)

If now q is selected so that the new step size is qh and the computation is repeated, one obtains, instead of w⁰i+1 and wi+1, the set w⁰*i+1 and w*i+1. One can then select the value of q again and again until |τ^(AM4)_(i+1)(qh)| is less than a previously specified tolerance ε; this leads to

q ≤ 1.5 (εh/|wi+1 − w⁰i+1|)^(1/4)

In practice, the step size reduction is avoided whenever

ε/10 ≤ 19|wi+1 − w⁰i+1|/(270h) ≤ ε
• Give function f(t, y), initial condition α, endpoints a and b, the desired tolerance level
TOL, and the maximum and minimum step sizes hmax and hmin .
• Set t0 = a, w0 = α, h = hmax
• Compute starting values i = 1, 2, 3 using RK4 method.
• For i > 3 use AB4 to predict w = wp and AM4 to correct w = wc .
• Compute σ = 19|wc − wp |/270h. If σ < T OL accept wc and continue adjusting h up
or down as needed.
• Otherwise, adjust h down to qh (or to 0.1h if q < 0.1) and repeat the calculation.
Exercise 11. (Example 5.7.1, BF) Use the Adams variable step-size predictor-corrector (AVSPC) method to solve the IVP y′(t) = f(t, y) = y − t² + 1, y(0) = 0.5.
2.8 Extrapolation Methods
Extrapolation methods appropriately combine lower order accurate formulae to produce higher accuracy formulae. The key idea is Richardson's extrapolation. For instance, use of Euler's method with h0 = h/2 yields the following approximation to y(a + h0),
w1 = w0 + h0 f(a, w0)
Using next the midpoint method to approximate y(a + h)
w2 = w0 + 2h0 f(a + h0 , w1)
Using now endpoint correction yields the following O(h²) approximation to y(t1)

y1,1 = (1/2)[w2 + w1 + h0 f(a + 2h0, w2)]
Now taking h1 = h/4 and proceeding as above yields the following alternative O(h²) approximation to y(t1)

y2,1 = (1/2)[w4 + w3 + h1 f(a + 4h1, w4)]
where

w1 = w0 + h1 f(a, w0)
w2 = w0 + 2h1 f(a + h1, w1)
w3 = w1 + 2h1 f(a + 2h1, w2)
w4 = w2 + 2h1 f(a + 3h1, w3)
The following combination of the above two approximations is now an O(h⁴) approximation to y(t1)

y2,2 = y2,1 + (1/3)(y2,1 − y1,1)
Extrapolation Algorithm.
• Give function f(t, y), initial condition α, endpoints a and b, the desired tolerance level
TOL, and the maximum and minimum step sizes hmax and hmin .
• Define the integers qi , i = 1, 2..., 7 and the spacings hi = h/qi
• Compute approximations using Euler and midpoint methods.
• Compute extrapolations.
• Accept approximation or decrease h.
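The construction above can be sketched directly from the formulas (the test problem y′ = y, y(0) = 1 over one step h = 0.5, with exact value e^h, is an assumed choice); the extrapolated value y2,2 should be markedly more accurate than y1,1:

```python
import math

def euler_midpoint_endpoint(f, a, y0, h, n):
    """Integrate y' = f(t, y) over [a, a + h] using one Euler starting step,
    midpoint steps of size h/n, and the endpoint correction shown above."""
    h0 = h / n
    w = [y0, y0 + h0 * f(a, y0)]                     # w0 and the Euler step w1
    for k in range(1, n):
        w.append(w[k - 1] + 2 * h0 * f(a + k * h0, w[k]))
    return 0.5 * (w[n] + w[n - 1] + h0 * f(a + n * h0, w[n]))

f = lambda t, y: y                  # exact solution: e^t
h = 0.5
y11 = euler_midpoint_endpoint(f, 0.0, 1.0, h, 2)   # substep h0 = h/2
y21 = euler_midpoint_endpoint(f, 0.0, 1.0, h, 4)   # substep h1 = h/4
y22 = y21 + (y21 - y11) / 3                        # Richardson extrapolation
err11 = abs(y11 - math.exp(h))
err22 = abs(y22 - math.exp(h))
```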
2.9 Higher Order Equations and Systems of Equations
A higher order IVP can be rewritten as a system of first order IVPs

dy/dt = f(t, y)

with

y(a) = (α1, α2, ..., αn)

where y(t) = (y1, y2, ..., yn) and f(t, y) = (f1(t, y), f2(t, y), ..., fn(t, y)).
One can show that if f is continuous and Lipschitz in all the variables y, the system of
IVP’s has a unique solution y(t).
Therefore, any of the previously discussed methods can be employed to solve systems of
IVP’s. In Maple the solution of systems is implemented either with the dsolve command or
using rungekutta.
Runge-Kutta Method for Systems Algorithm.
• Give number of equations m, the function f(t, y), initial condition αi , endpoints a and
b, and the number of subintervals N.
• For each variable yj , j = 1, 2, ..., m compute K1,j , K2,j , K3,j , K4,j and the approximations wj
2.10 Stability
An effective numerical method for the approximation of the solution of an initial value problem
must fulfill some basic mathematical requirements in order to be useful in practice. The key
concepts are consistency, stability and convergence. Only methods with these characteristics
are likely to be useful for practical computation.
Consider for instance the problem

dy/dt = λy

where λ may be a complex number, i.e. λ = λR + iλI. This must be solved subject to

y(0) = y0
for a ≤ t ≤ b. Introduce uniformly spaced mesh points such that tn = a + nh, n = 0, 1, 2, ..., N with h = (b − a)/N and use Euler's forward difference method to approximate the solution, i.e.

wn+1 = wn + hλwn = wn(1 + λh)
Because of the recursive character of this relationship one can also write it as

wn = w0(1 + λh)ⁿ = w0 σⁿ

where the complex number

σ = 1 + λRh + iλIh
is called the amplification factor.
For stability of the numerical solution it is required that it remains bounded as n increases
and this is only possible if
|σ| ≤ 1
Consider now specifically the case when λR ≤ 0. Euler's method above will be stable only as long as

|σ|² = (1 + λRh)² + (λIh)² ≤ 1

that is, the region of stability, represented in the complex λh plane, is the unit disk centered at z = −1. As long as λh is inside this disk the method is stable, despite the fact that the stability region of the exact solution is the entire left half-plane. Therefore, if λ is real (and ≤ 0), stability of the numerical method requires that

h ≤ 2/|λ|

and the method is conditionally stable.
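A quick numerical confirmation of the conditional stability bound (λ = −10 is an assumed value, giving the restriction h ≤ 0.2):

```python
def euler_growth(lam, h, n):
    """Return |w_n| for Euler's method applied to y' = lam*y, w_0 = 1,
    i.e. |1 + lam*h|**n."""
    w = 1.0
    for _ in range(n):
        w = w + h * lam * w
    return abs(w)

lam = -10.0                      # exact solution exp(lam*t) decays to zero
stable = euler_growth(lam, 0.15, 50)    # |1 + lam*h| = 0.5 < 1: decays
unstable = euler_growth(lam, 0.25, 50)  # |1 + lam*h| = 1.5 > 1: blows up
```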
A one step method is consistent iff

lim(h→0) max(1≤i≤N) |τi(h)| = lim(h→0) max(1≤i≤N) |[y(ti+1) − y(ti)]/h − f(ti, y(ti))| = 0

I.e. if the difference approximation to y′ approaches f as h → 0.
A one step method is convergent iff

lim(h→0) max(1≤i≤N) |wi − y(ti)| = 0

I.e. if the solution of the difference equation approaches the solution of the differential equation as h → 0.
A numerical method is stable if its results depend continuously on the initial data.
Let the IVP y′(t) = f(t, y), y(a) = α be approximated by a single step method

w0 = α
wi+1 = wi + hφ(ti, wi, h)

with 0 < h ≤ h0. Moreover, let φ(ti, wi, h) be Lipschitz in w with constant L. Then:
1) The method is stable.
2) If the method is consistent then it is convergent.
3) If the local truncation error is bounded by τ(h) for 0 < h ≤ h0, then

|y(ti) − wi| ≤ (τ(h)/L) exp(L(ti − a))
The situation concerning multistep methods is more involved but proceeds along similar
lines.
2.11 Stiff Equations
If no restrictions are placed on the step size of approximation methods used to solve initial
value problems whose solutions contain rapidly decaying (exponential) terms of the form
exp(λt) with λ < 0, the approximation fails. Problems of this type are called stiff problems.
For instance, for the IVP y′ = λy with y(0) = α and λ < 0, control of roundoff error is obtained only if

h < 2/|λ|
Numerical solutions to stiff problems are commonly generated using implicit multistep methods. For instance, use of the implicit trapezoidal formula to approximate the solution to y′ = f yields the (generally nonlinear) equation

F(wj+1) = wj+1 − wj − (h/2)[f(tj+1, wj+1) + f(tj, wj)] = 0
The solution to this can be obtained by Newton's method iterations as

wj+1,k = wj+1,k−1 − F(wj+1,k−1)/F′(wj+1,k−1)
Implicit Trapezoidal with Newton Iteration Algorithm.
• Give function f(t, y), initial condition α, endpoints a and b, tolerance TOL, maximum number of iterations M.
• Set h = (b − a)/N; t = a; w = α.
• Compute w0 = w + (h/2)f(t, w) and use it as the initial Newton iterate w.
• Update w = w − [w − (h/2)f(t + h, w) − w0]/[1 − (h/2)fy(t + h, w)] and iterate to tolerance.
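A sketch of the implicit trapezoidal rule with Newton iteration, applied to the assumed stiff test problem y′ = λy with λ = −20 (note the method remains well behaved at step sizes for which Euler's method would be unstable):

```python
import math

def trapezoid_newton(f, fy, a, b, alpha, N, tol=1e-12, maxit=20):
    """Implicit trapezoid rule; each step solves
    F(x) = x - (h/2) f(t+h, x) - w0 = 0,  w0 = w_j + (h/2) f(t_j, w_j),
    by Newton iteration, starting from x = w0."""
    h = (b - a) / N
    t, w = a, alpha
    for _ in range(N):
        w0 = w + (h / 2) * f(t, w)
        x = w0
        for _ in range(maxit):
            Fx = x - (h / 2) * f(t + h, x) - w0
            dF = 1 - (h / 2) * fy(t + h, x)
            x_new = x - Fx / dF
            if abs(x_new - x) < tol:
                x = x_new
                break
            x = x_new
        t, w = t + h, x
    return w

lam = -20.0                      # moderately stiff assumed test problem
f = lambda t, y: lam * y
fy = lambda t, y: lam
w = trapezoid_newton(f, fy, 0.0, 1.0, 1.0, 50)
exact = math.exp(lam * 1.0)
```

With h = 0.02 we have |λ|h = 0.4, so explicit Euler would be stable here too, but the trapezoid rule stays stable and monotone for any h > 0 on this problem (A-stability).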
3 Boundary Value Problems
While initial value problems require determination of the function subject to specified values of it and maybe of its derivatives at one end of the domain (typically t = 0), in BVPs involving second order ODEs the function values are specified at the two ends of the solution domain (typically x = 0 and x = L). Many problems in engineering and science can be formulated
as BVPs. Examples include: steady state conduction heat transfer in a thin heated wire,
electric potential inside a thin conductor, deflection of a thin elastic thread under load and
many others.
In contrast with initial value problems, boundary value problems often have multiple
solutions or even fail to have a solution.
For instance, consider the problem of determining y(x) such that

d²y/dx² + y = y′′ + y = 0
By inspection (or by use of Frobenius’s method), the general solution of this differential
equation is of the form
y(x) = A sin(x) + B cos(x)
Now, if the boundary conditions are
y(0) = 0
y(π/2) = 1
the unique solution to the BVP is y(x) = sin(x).
Alternatively, if the boundary conditions are
y(0) = 0
y(π) = 0
multiple solutions of the form y(x) = A sin(x) with A arbitrary exist.
Finally, if the boundary conditions are
y(0) = 0
y(π) = 1
no solution y(x) exists.
Three commonly used numerical methods for BVPs are: shooting, finite difference and
finite element. All three can be used both for linear and nonlinear problems. The principles
of the three methods are described below.
3.1 Shooting Methods
Shooting methods are based on the transformation of the original boundary value problem
into an equivalent set of initial value problems. The solution of the original problem is
obtained directly from the solutions of the equivalent set of initial value problems.
3.1.1 Higher Order Equations and Systems of Equations
A higher order IVP can always be rewritten as a system of first order IVPs

dy/dx = f(x, y)

with

y(a) = (α1, α2, ..., αn)

where y(x) = (y1, y2, ..., yn) and f(x, y) = (f1(x, y), f2(x, y), ..., fn(x, y)).
For instance, the third order equation

d³y/dx³ = f(x, y, dy/dx, d²y/dx²)

subject to the initial conditions

y(0) = α1, y′(0) = α2, y′′(0) = α3

by introducing the new variables

y1 = y, y2 = dy/dx, y3 = d²y/dx²

can be transformed into the equivalent system of three first order equations

dy1/dx = y2, subject to y1(0) = α1

dy2/dx = y3, subject to y2(0) = α2

and

dy3/dx = f(x, y1, y2, y3), subject to y3(0) = α3
Uniqueness Theorem. If f is continuous and Lipschitz in all the variables y, the system
of IVP’s has a unique solution y(t).
Therefore, any of the various Initial Value Problem algorithms used to solve single equations can be readily extended to deal with systems of equations. In Maple the solution of
systems is implemented with the dsolve command.
Recall that the fourth order Runge-Kutta method is a single step procedure which produces an approximation wi to the solution y(ti) of the initial value problem represented by
the equation
dy
= f(t, y)
dt
on a ≤ x ≤ b subject to the condition
y(a) = α
given by

wi+1 = wi + (1/6)(k1 + 2k2 + 2k3 + k4)

where

k1 = h f(ti, wi)
k2 = h f(ti + h/2, wi + k1/2)
k3 = h f(ti + h/2, wi + k2/2)
k4 = h f(ti+1, wi + k3)

and h = ∆t is the step size.
Runge-Kutta Method for Systems Algorithm.
• Give number of equations m, the function f(t, y), initial condition αi , endpoints a and
b, and the number of subintervals N.
• For each variable yj , j = 1, 2, ..., m compute K1,j , K2,j , K3,j , K4,j and the approximations wj
3.1.2 Linear Shooting
A boundary value problem (BVP) involving an ordinary differential equation of order two
consists of determining the function y(x) for x ∈ [a, b] such that
y ′′ = f(x, y, y ′)
subject to the boundary conditions
y(a) = α
and
y(b) = β
Theorem of Uniqueness. If the function f(x, y, y′) as well as ∂f/∂y and ∂f/∂y′ are continuous in the set D = {(x, y, y′) : a ≤ x ≤ b, −∞ < y < +∞, −∞ < y′ < +∞}, if ∂f/∂y > 0 for all (x, y, y′) ∈ D, and if |∂f/∂y′| ≤ M for all (x, y, y′) ∈ D where M is a constant, then the BVP has a unique solution.
The shooting method is based on the idea of replacing the original BVP by an equivalent
set of initial value problems (IVPs) which are solved using standard IVP methods.
Corollary on Uniqueness of the Linear Problem. If the BVP is linear, i.e.

y′′ = p(x)y′ + q(x)y + r(x)

in a ≤ x ≤ b, subject to

y(a) = α and y(b) = β

and the coefficients p(x), q(x) and r(x) satisfy

1. p(x), q(x), and r(x) are continuous on [a, b],
2. q(x) > 0 on [a, b],

then the problem has a unique solution given by

y(x) = y1(x) + [(β − y1(b))/y2(b)] y2(x)

where y1(x) and y2(x) are, respectively, the solutions to the following two IVPs

y1′′ = p(x)y1′ + q(x)y1 + r(x), a ≤ x ≤ b, subject to y1(a) = α and y1′(a) = 0

and

y2′′ = p(x)y2′ + q(x)y2, a ≤ x ≤ b, subject to y2(a) = 0 and y2′(a) = 1
Linear Shooting Algorithm
• Give functions p(x), q(x), r(x), endpoints a, b, boundary conditions α, β and the number
of subintervals N.
• Set h = (b − a)/N; u1,0 = α, u2,0 = 0, v1,0 = 0, v2,0 = 1.
• For i = 0, 1, ..., N − 1 set x = a + ih and use the RK method for systems to determine u1,i+1, u2,i+1, v1,i+1, v2,i+1.
• Set w1,0 = α and w2,0 = (β − u1,N)/v1,N.
• For i = 1, 2, ..., N produce w1 = u1,i + w2,0 v1,i ≈ y(x) and w2 = u2,i + w2,0 v2,i ≈ y′(x)
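A sketch of linear shooting (the test problem y′′ = y, y(0) = 0, y(1) = 1, with exact solution y = sinh x / sinh 1, is an assumed choice). The function returns the initial slope s = (β − y1(b))/y2(b) of the combined solution, which should approach the exact y′(0) = 1/sinh 1:

```python
import math

def rk4_system(f, a, b, y0, N):
    """RK4 for a system y' = f(x, y), y given as a list; returns final state."""
    h = (b - a) / N
    x, y = a, list(y0)
    for _ in range(N):
        k1 = f(x, y)
        k2 = f(x + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h/6 * (c1 + 2*c2 + 2*c3 + c4)
             for yi, c1, c2, c3, c4 in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def linear_shoot_slope(p, q, r, a, b, alpha, beta, N):
    """Linear shooting: y(x) = y1(x) + [(beta - y1(b))/y2(b)] y2(x).
    Returns the slope s = y'(a) of the combined solution."""
    ode1 = lambda x, u: [u[1], p(x)*u[1] + q(x)*u[0] + r(x)]
    ode2 = lambda x, v: [v[1], p(x)*v[1] + q(x)*v[0]]
    u = rk4_system(ode1, a, b, [alpha, 0.0], N)   # y1(a) = alpha, y1'(a) = 0
    v = rk4_system(ode2, a, b, [0.0, 1.0], N)     # y2(a) = 0, y2'(a) = 1
    return (beta - u[0]) / v[0]

p = lambda x: 0.0; q = lambda x: 1.0; r = lambda x: 0.0
s = linear_shoot_slope(p, q, r, 0.0, 1.0, 0.0, 1.0, 100)
```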
3.1.3 More General Boundary Conditions
With small modifications, the shooting method described above can be extended and applied to problems involving more general boundary conditions. Specifically, consider the BVP consisting of determining the function y(x) that satisfies the second order ODE

−y′′ + p(x)y′ + q(x)y = r(x)

in x ∈ [a, b], subject to

a0 y(a) + a1 y′(a) = α
b0 y(b) + b1 y′(b) = β

with |a0| + |b0| ≠ 0.
It can be shown that the solution can be expressed as

y(x) = y(x; s) = y1(x) + s y2(x)

where

s = [β − (b0 y1(b) + b1 y1′(b))]/[b0 y2(b) + b1 y2′(b)]

and where y1(x) and y2(x) are, respectively, the solutions to the following two IVPs

−y1′′ + p(x)y1′ + q(x)y1 = r(x), subject to y1(a) = −αc1 and y1′(a) = −αc0

and

−y2′′ + p(x)y2′ + q(x)y2 = 0, subject to y2(a) = a1 and y2′(a) = a0

Here, c0 and c1 are any constants such that

a1 c0 − a0 c1 = 1

Furthermore, for the above to be true (otherwise, the trivial solution is obtained) it is required that

b0 y2(b) + b1 y2′(b) ≠ 0
3.1.4 Nonlinear Shooting
If the BVP is nonlinear, its solution cannot be obtained by linear combination of the solutions of the associated IVPs. Instead one must produce a sequence of solutions y(x, t)
corresponding to an IVP of the form
y ′′(x, t) = f(x, y(x, t), y ′(x, t))
in a ≤ x ≤ b, subject to

y(a, t) = α

and

y′(a, t) = t

where t is a parameter of the method whose values must be selected so that

y(b, t) − β = 0
If the preceding differential equation is differentiated with respect to t and z(x, t) = (∂y/∂t)(x, t), the second associated IVP is obtained

z′′(x, t) = (∂f/∂y) z(x, t) + (∂f/∂y′) z′(x, t)

in a ≤ x ≤ b, with

z(a, t) = 0

and

z′(a, t) = 1
Therefore, nonlinear shooting proceeds by solving the above two IVPs and using Newton's method to determine subsequent iterate values tk as follows

tk = tk−1 − [y(b, tk−1) − β]/z(b, tk−1)
Nonlinear Shooting Algorithm.
• Give function f(x, y, y ′), endpoints a, b, boundary conditions α, β, the number of subintervals N, the tolerance T OL and the maximum number of iterations M.
• Set h = (b − a)/N; k = 1, T K = (β − α)/(b − a),
• For k ≤ M set w1,0 = α, w2,0 = T K, u1 = 0, u2 = 1.
• For i = 1, 2, ..., N set x = a + (i − 1)h and use RK method for systems to determine
w1,i , w2,i, u1, u2 .
• If |w1,N − β| ≤ T OL stop else set T K = T K − (w1,N − β)/u1 (Newton’s method) and
continue until k = M .
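A sketch of nonlinear shooting (the test problem y′′ = 2y³, y(0) = 1, y(1) = 1/2, with exact solution y = 1/(1 + x), is an assumed choice; the trial slope is initialized to (β − α)/(b − a) as in the algorithm):

```python
def shoot_nonlinear(f, fy, fyp, a, b, alpha, beta, N, t0, maxit=30, tol=1e-9):
    """Nonlinear shooting: integrate the coupled IVPs for y and z = dy/dt
    with RK4, then update the trial slope t by Newton's method,
    t <- t - (y(b, t) - beta) / z(b, t)."""
    h = (b - a) / N

    def rhs(x, s):
        # s = [y, y', z, z']
        return [s[1], f(x, s[0], s[1]),
                s[3], fy(x, s[0], s[1]) * s[2] + fyp(x, s[0], s[1]) * s[3]]

    t = t0
    yb = float("inf")
    for _ in range(maxit):
        x, s = a, [alpha, t, 0.0, 1.0]
        for _ in range(N):
            k1 = rhs(x, s)
            k2 = rhs(x + h/2, [si + h/2*ki for si, ki in zip(s, k1)])
            k3 = rhs(x + h/2, [si + h/2*ki for si, ki in zip(s, k2)])
            k4 = rhs(x + h, [si + h*ki for si, ki in zip(s, k3)])
            s = [si + h/6*(c1 + 2*c2 + 2*c3 + c4)
                 for si, c1, c2, c3, c4 in zip(s, k1, k2, k3, k4)]
            x += h
        yb = s[0]
        if abs(yb - beta) < tol:
            break
        t -= (yb - beta) / s[2]    # Newton update on the trial slope
    return t, yb

# Assumed test problem: y'' = 2 y^3, exact solution y = 1/(1 + x)
f = lambda x, y, yp: 2.0 * y**3
fy = lambda x, y, yp: 6.0 * y**2
fyp = lambda x, y, yp: 0.0
t, yb = shoot_nonlinear(f, fy, fyp, 0.0, 1.0, 1.0, 0.5, 100, t0=-0.5)
```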
3.2 Finite Difference Methods
In the finite difference method one starts with the differential formulation and, by a process of discretization, transforms the problem into a system of interlinked simultaneous algebraic equations that must then be solved in order to determine an approximation to the desired solution. Discretization consists of first introducing a mesh of nodes, subdividing the solution domain into a finite number of subdomains, and then approximating the derivatives in the boundary value problem by appropriate finite difference ratios obtained from truncated Taylor series expansions.
3.2.1 Finite Difference Operators
Effective implementation of finite difference methods requires consideration of finite difference operators. These are mappings on sequences representing the nodal values of the approximated function. For our purposes, a typical sequence is a set of indexed quantities such as y1, y2, ..., yN, yN+1 = {yk}, k = 1, ..., N + 1. These are the assumed values of y at corresponding values of {xk}. For simplicity, the values of the sequence {xk} will be assumed equally spaced, with mesh spacing xk+1 − xk = h for k = 1, 2, 3, ..., N. The following operators are commonly used:
• Identity: I yk = yk
• Shift: S yk = yk+1
• Forward Difference: ∆+ yk = yk+1 − yk
• Backward Difference: ∆− yk = yk − yk−1
• Central Difference: ∆0 yk = yk+1/2 − yk−1/2
• Average: A yk = (1/2)(yk−1/2 + yk+1/2)
• Differential: D yk = dy/dx|x=xk
All these operators can be applied in sequence to create combined operations. Some useful relationships among the above finite difference operators, all of which are readily derived, are as follows:

∆+ = S − I

∆− = I − S⁻¹

∆0 = S^(1/2) − S^(−1/2)

D = (1/h) ln S = (1/h) ln(I + ∆+) = −(1/h) ln(I − ∆−) = (2/h) ln[(1/2)∆0 + (I + (1/4)∆0²)^(1/2)]
  = (1/h)[∆+ − (1/2)∆+² + (1/3)∆+³] + O(h³)

D² = (1/h²)[∆+² − ∆+³ + (11/12)∆+⁴] + O(h³)
   = (1/h²)[∆0² − (1/12)∆0⁴ + (1/90)∆0⁶] + O(h⁶)
D2 =
3.2.2 Linear Problems
Consider the linear BVP

y′′ = p(x)y′ + q(x)y + r(x)

in a ≤ x ≤ b, subject to

y(a) = α and y(b) = β

Introduce a mesh in [a, b] by dividing the interval into N equal subintervals of size h. This produces two boundary mesh points x1 and xN+1, and N − 1 interior mesh points xi, i = 2, ..., N, for a total of N + 1 mesh points. The values y(x1) = y(a) = α and y(xN+1) = y(b) = β are known, while the values y(xi), i = 2, ..., N are to be determined.
From the Taylor expansions for y(xi+1) and y(xi−1) the following centered difference formula is obtained

y′′(xi) = (1/h²)[y(xi+1) − 2y(xi) + y(xi−1)] − (h²/12) y⁽⁴⁾(ξi)

for some ξi ∈ (xi−1, xi+1).
Similarly, an approximation for y′(xi) is

y′(xi) = (1/(2h))[y(xi+1) − y(xi−1)] − (h²/6) y⁽³⁾(ηi)

for some ηi ∈ (xi−1, xi+1).
If the higher order terms are discarded from the above formulae and the approximations are used in the original differential equation, the second order accurate finite difference approximation to the BVP becomes a system of simultaneous linear algebraic equations

−[(wi+1 − 2wi + wi−1)/h²] + p(xi)[(wi+1 − wi−1)/(2h)] + q(xi)wi = −r(xi)

or, alternatively

−[1 + (h/2)p(xi)]wi−1 + [2 + h²q(xi)]wi − [1 − (h/2)p(xi)]wi+1 = −h²r(xi)

for each interior mesh point, where wi ≈ y(xi).
In matrix notation the system becomes
Aw = b
where A is a tridiagonal matrix.
Theorem of Uniqueness. If p(x), q(x) and r(x) are continuous and q(x) > 0 on [a, b]
the problem Aw = b has a unique solution provided h < 2/L where L = maxa≤x≤b |p(x)|.
Further, if y (4) is continuous on [a, b] the truncation error is O(h2 ).
Linear Finite Difference Method Algorithm.
• Give functions p(x), q(x), r(x), endpoints a, b, boundary conditions α, β, and the number of subintervals N + 1.
• Set h = (b − a)/(N + 1); x = a + h; a1 = 2 + h²q, b1 = −1 + (h/2)p, d1 = −h²r + [1 + (h/2)p]α.
• For i = 2, ..., N − 1 set x = a + ih; ai = 2 + h²q, bi = −1 + (h/2)p, ci = −1 − (h/2)p, di = −h²r.
• Set x = b − h; aN = 2 + h²q, cN = −1 − (h/2)p, dN = −h²r + [1 − (h/2)p]β.
• Solve the tridiagonal system for zi.
• Set w0 = α, wN+1 = β, wN = zN.
• For i = N − 1, ..., 1 set wi = zi − ui wi+1.
• Stop.
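A sketch of the linear finite difference method with a Thomas (tridiagonal) solve, on the assumed test problem y′′ = y, y(0) = 0, y(1) = 1 with exact solution y = sinh x / sinh 1:

```python
import math

def linear_fd(p, q, r, a, b, alpha, beta, n):
    """Second order finite differences for y'' = p y' + q y + r with n interior
    points (h = (b - a)/(n + 1)); the tridiagonal system is solved by the
    Thomas algorithm (forward elimination, then back substitution)."""
    h = (b - a) / (n + 1)
    x = [a + (i + 1) * h for i in range(n)]
    A = [2 + h*h*q(xi) for xi in x]        # diagonal
    B = [-1 + (h/2)*p(xi) for xi in x]     # coefficient of w_{i+1}
    C = [-1 - (h/2)*p(xi) for xi in x]     # coefficient of w_{i-1}
    D = [-h*h*r(xi) for xi in x]
    D[0] -= C[0] * alpha                   # fold boundary values into RHS
    D[-1] -= B[-1] * beta
    for i in range(1, n):                  # forward elimination
        m = C[i] / A[i-1]
        A[i] -= m * B[i-1]
        D[i] -= m * D[i-1]
    w = [0.0] * n                          # back substitution
    w[-1] = D[-1] / A[-1]
    for i in range(n - 2, -1, -1):
        w[i] = (D[i] - B[i] * w[i+1]) / A[i]
    return x, w

p = lambda x: 0.0; q = lambda x: 1.0; r = lambda x: 0.0
x, w = linear_fd(p, q, r, 0.0, 1.0, 0.0, 1.0, 99)
err = max(abs(wi - math.sinh(xi) / math.sinh(1.0)) for xi, wi in zip(x, w))
```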
3.2.3 More General Boundary Conditions
Problems involving boundary conditions other than Dirichlet type are readily handled using
the FDM. Consider for instance the problem of determining the function that satisfies the
ODE
−y ′′ = f(x)
subject to
y ′(a) = 0
and
y(b) = 0
Note that since a derivative condition is involved at x = a, the value of y(a) is not given in advance but must be determined as part of the solution.
As before, introduce first a (uniform) mesh xi, i = 1, 2, ..., N. The boundary condition at x1 requires vanishing of the derivative. Introduce an extra mesh point on the opposite side of the boundary, symmetrically located with respect to node x2. This is a fictitious, non-physical node introduced just for the purpose of obtaining a second order accurate FD formula. Label the location of the fictitious node x−2 and the (fictitious) value of y associated with it y−2. A
central difference approximation for the boundary condition at x1 is then

dy/dx|x1 ≈ (y2 − y−2)/(x2 − x−2) = (y2 − y−2)/(2∆x) = 0

Hence, y−2 = y2.
Now write down the finite difference formula corresponding to the original ODE at location x1, i.e.

(−y−2 + 2y1 − y2)/∆x² = f(x1)
However, since y−2 = y2, this becomes, after rearrangement,

y1 = y2 + (1/2) f(x1) ∆x²
which provides the required relationship for the determination of the unknown value of y at x = a. Note that neither the fictitious node nor the value of y there appears anywhere in the final result. More general boundary conditions are easily handled using the same procedure.
3.2.4 Nonlinear Problems
Let the BVP
y ′′ = f(x, y, y ′)
subject to
y(a) = α
and
y(b) = β
be such that
1) f, ∂f/∂y = fy and ∂f/∂y ′ = fy′ are continuous on D;
2) ∂f/∂y ≥ δ > 0;
3) constants k and L exist such that k = max |∂f/∂y| and L = max |∂f/∂y′|.
Then, a unique solution exists. Introducing finite differences, and since f is nonlinear, the following system of nonlinear algebraic equations is obtained
−(wi+1 − 2wi + wi−1)/h² + f(xi, wi, (wi+1 − wi−1)/(2h)) = 0
for each interior mesh point i = 1, 2, .., N. The nonlinear system has a unique solution as
long as h < 2/L.
The Jacobian matrix associated with this system is tridiagonal and is given by
J(w1, ..., wN)ij =
  −1 + (h/2) fy′   for i = j − 1, j = 2, ..., N
  2 + h² fy        for i = j, j = 1, ..., N
  −1 − (h/2) fy′   for i = j + 1, j = 1, ..., N − 1
Newton’s method applied to this problem requires first the solution of the linear system
J v(k) = b
to produce v(k), which is then added to w(k−1) to obtain w(k), i.e.
w(k) = w(k−1) + v(k)
componentwise for each i = 1, 2, ..., N.
Nonlinear Finite Difference Method Algorithm.
• Give the function f(x, y, y′) and its partial derivatives fy and fy′, endpoints a, b, boundary conditions α, β, the number of subintervals N + 1, the tolerance TOL and the maximum number of iterations M.
• Set h = (b − a)/(N + 1), w0 = α, wN+1 = β.
• For i = 1, ..., N set wi = α + i[(β − α)/(b − a)]h.
• Set k = 1 and, while k ≤ M, compute ti, ai, bi, ci, di for i = 1, 2, ..., N.
• Solve the tridiagonal system for zi.
• Set vN = zN, wN = wN + vN.
• For i = N − 1, ..., 1 compute vi = zi − ui vi+1 and set wi = wi + vi.
• If ‖v‖ ≤ TOL stop; else continue until k = M.
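The Newton iteration above can be sketched as follows. The residual and Jacobian follow the finite difference system given earlier (here b = −F(w), the negative residual, consistent with the update w ← w + v); f, fy, fy′ are supplied by the user, a dense solve stands in for the tridiagonal factorization, and all names are illustrative.

```python
import numpy as np

def nonlinear_fd_bvp(f, fy, fyp, a, b, alpha, beta, N, tol=1e-10, maxit=25):
    """Newton's method for y'' = f(x, y, y'), y(a) = alpha, y(b) = beta,
    on N interior mesh points; fy, fyp are df/dy and df/dy'."""
    h = (b - a) / (N + 1)
    x = a + h * np.arange(1, N + 1)
    w = alpha + (beta - alpha) * (x - a) / (b - a)   # linear initial guess
    for _ in range(maxit):
        wl = np.concatenate(([alpha], w[:-1]))       # w_{i-1}
        wr = np.concatenate((w[1:], [beta]))         # w_{i+1}
        yp = (wr - wl) / (2 * h)                     # centered y' approximation
        F = 2 * w - wl - wr + h**2 * f(x, w, yp)     # nonlinear residual
        J = np.zeros((N, N))
        for i in range(N):
            J[i, i] = 2 + h**2 * fy(x[i], w[i], yp[i])
            if i > 0:
                J[i, i - 1] = -1 - (h / 2) * fyp(x[i], w[i], yp[i])
            if i < N - 1:
                J[i, i + 1] = -1 + (h / 2) * fyp(x[i], w[i], yp[i])
        v = np.linalg.solve(J, -F)                   # Newton correction
        w += v
        if np.max(np.abs(v)) < tol:
            break
    return x, w
```

As a manufactured test case, f(x, y, y′) = y² − x⁴ + 2 with y(0) = 0, y(1) = 1 has the exact solution y = x², which the discrete system reproduces at the nodes.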
3.3 Numerical Approximation Methods based on Variational Formulations
Mathematical representation of physical laws is possible in two alternative but equivalent
formulations
• Differential formulation
• Variational formulation
The differential form is usually obtained by first applying the conservation principle on a
small box of material followed by a limiting process in which the volume of the box is made
infinitesimally small. The variational formulation is the integrated form of the conservation
principle over the entire volume of the material under consideration. Each formulation can
be obtained from the other one by a straightforward process.
For instance, mechanical equilibrium in an elastic body of volume Ω, loaded by conservative forces at its boundary, is described by the equilibrium equations
∂σij/∂xj = 0
where σij(x, y, z) is the stress tensor field, which is to be determined by solving the above equations subject to the imposed force vector (per unit area) Fi at the boundary Γ. In practice, one first determines the displacement field ui, then the strain tensor field eij(x, y, z), and last the associated stress tensor field.
Alternatively, the variational representation of the same problem involves a functional Π given by
Π = ∫_Ω W dΩ − ∫_Γ Fi ui dΓ
where W is the strain energy density function, defined so that σij = ∂W/∂eij.
The desired solution ui (x, y, z) is selected from amongst an appropriately chosen set of
functions and is such that the value of Π is minimal. This is the principle of minimum
potential energy.
As a second example, for a solid material of volume Ω, inside which thermal energy is
generated at a rate g per unit volume and which is maintained at fixed temperature φB = 0
at its boundary Γ, under conditions of thermal equilibrium, the differential representation is
the steady state heat conduction equation
∇ · (k∇φ) + g = 0
inside Ω, where k and φ are the thermal conductivity and the temperature inside the solid.
To determine the function φ(x, y, z) one must solve the above differential equation subject
to the condition
φ = φB = 0
at the boundary Γ.
The corresponding variational representation of the same system involves a functional Π given by
Π = (1/2) ∫_Ω (∇φ)ᵀ k∇φ dΩ − ∫_Ω gφ dΩ
The desired solution φ(x, y, z) is selected from amongst an appropriately chosen set of functions and is such that the value of Π is minimal. This is the principle of conservation of
thermal energy.
Except for the most elementary situations, the solution of most engineering problems
cannot be obtained using classical analytical methods of applied mathematics and numerical
solution procedures must be employed. Two commonly used methods of numerical solution
of engineering problems are the finite difference method and the finite element method.
In the finite element method, one starts with the variational formulation and then by
a process of finite element function representation transforms the problem into a system of
interlinked simultaneous algebraic equations that then must be solved in order to determine
an approximation to the desired solution. The process of finite element function representation is based on an idea first introduced by Rayleigh and further developed by Ritz and
Galerkin. It consists of first representing the desired solution as a linear combination of suitable basis functions and then proceeding to the minimization of the functional, resulting in a set of algebraic equations for the unknown coefficients.
Although both methods transform the original infinite dimensional problem into a finite dimensional system of algebraic equations, the specific form of the resulting systems may or may not be the same when the two methods are applied to the same problem.
Therefore, the implementation of the finite element method for the solution of engineering
problems involves two key steps:
• The variational statement of the problem.
• The approximate solution of the variational problem by means of finite element functions.
3.3.1 Basic Theory of the Finite Element Method
Consider the following differential formulation of a boundary value problem
−d²u/dx² = −u′′ = f
for x ∈ [0, 1] and subject to the boundary conditions
u(0) = 0
and
u′(1) = 0
The task is to determine the function u(x).
The equivalent variational formulation is obtained by introducing a function v(x) satisfying v(0) = 0 and ∫_0^1 v² dx < ∞, but otherwise rather arbitrary, and then integrating the differential equation, i.e.
∫_0^1 −u′′ v dx = ∫_0^1 f v dx
Integrating the left hand side by parts readily yields the variational or weak form
a(u, v) = (f, v)
where
a(u, v) = ∫_0^1 u′v′ dx
and
(f, v) = ∫_0^1 f v dx
We say that the function v belongs to a space of functions V (v ∈ V ) defined as follows
V = {v ∈ L²(0, 1) : a(v, v) < ∞; v(0) = 0}
Then, it can be shown that, as long as f is a continuous function of x (i.e. f ∈ C⁰([0, 1])) and u possesses continuous derivatives up to the second order (i.e. u ∈ C²([0, 1])), the desired solution u(x) ∈ V is the function satisfying
a(u, v) = (f, v)
for all v ∈ V .
The Ritz-Galerkin approximation of the above problem is obtained by introducing a
subspace of functions S ⊂ V and taking the function uS from there rather than from V , such
that
a(uS , v) = (f, v)
for all v ∈ S . It can be shown that this problem has a unique solution which is obtained by
solving the finite dimensional system of algebraic equations
KU = F
where K is the stiffness matrix, which can be regarded as a finite difference operator.
The estimated error involved in the Ritz-Galerkin approximation process is given by
‖u − uS‖E = min{‖u − v‖E : v ∈ S}
where the energy norm ‖ · ‖E is defined as
‖v‖E = √a(v, v)
Moreover, one can also show that
‖u − uS‖ ≤ ε² ‖f‖
where ‖v‖ = √(v, v) is the L² norm and ε is a small number.
In the finite element method one uses piecewise polynomial spaces of functions. For instance, partition the interval [0, 1] into elements joined at the nodes xi such that 0 = x0 < x1 < x2 < ... < xi < ... < xn = 1. Now introduce the basis functions φi with value 1 at node i, zero at all other nodes, and linear between nodes i − 1 and i and between i and i + 1. The interpolant vI is defined as
vI = Σ_{i=1}^n v(xi)φi
Then, it can be shown that
‖u − uI‖E ≤ Ch ‖u′′‖
where h is the size of the largest element and the constant C is independent of h and u.
The equivalence between the solution of a boundary value problem and that of the associated variational problem follows from the uniqueness theorem (Rayleigh-Ritz):
Let p ∈ C¹[0, 1], q, f ∈ C[0, 1] with p(x) ≥ δ > 0 and q(x) ≥ 0 for 0 ≤ x ≤ 1. Then, the function y(x) ∈ C²[0, 1] with y(0) = y(1) = 0 is the solution to the differential equation
−(d/dx)(p(x) dy/dx) + q(x)y = f(x)
if and only if y(x) is the unique function which minimizes the integral
I[y] = ∫_0^1 {p(x)[y′(x)]² + q(x)[y(x)]² − 2f(x)y(x)} dx
In finite element practice, one searches for the solution among a set of functions produced by linear combination of a set of linearly independent and well defined basis functions φi(x), i = 1, 2, ..., n. The solution y(x) is first approximated by Σ_{i=1}^n ciφi(x) and then the coefficients ci are chosen so that I[y] = I[Σ_{i=1}^n ciφi(x)] is minimized, i.e.
∂I/∂cj = 0
for each j = 1, 2, ..., n.
Substituting y(x) ≈ Σ_{i=1}^n ciφi(x) into I[y], differentiating, and imposing the minimization conditions yields the linear system
Ac = b
where
aij = ∫_0^1 [p(x)φ′i(x)φ′j(x) + q(x)φi(x)φj(x)] dx
are the entries of the n × n stiffness matrix A, and
bi = ∫_0^1 f(x)φi(x) dx
is the load vector and c = (c1 , c2, .., cn )t is the vector of unknown coefficients.
The simplest kind of basis functions is the set of piecewise linear polynomials defined by first introducing in [0, 1] a collection of nodes x0, x1, ..., xn+1 with hi = xi+1 − xi and then making
φi(x) =
  0,                    0 ≤ x ≤ xi−1
  (x − xi−1)/hi−1,      xi−1 < x ≤ xi
  (xi+1 − x)/hi,        xi < x ≤ xi+1
  0,                    xi+1 < x ≤ 1
for each i = 1, 2, .., n. This approximation is continuous but it is not differentiable. Note
that φi (x)φj (x) = 0 and φ′i (x)φ′j (x) = 0 except when j is equal to i − 1, i, or i + 1 and this
makes A tridiagonal. Evaluation of the matrix entries requires the calculation of 6 integrals
for each node. One can approximate the functions p, q, and f by first order interpolating
polynomials and then integrate numerically.
Piecewise Linear Rayleigh-Ritz Algorithm.
• Give functions p(x), q(x), f(x), endpoints 0, 1, boundary conditions α = 0, β = 0, and the number of nodes n + 2.
• For each node i = 1, ..., n define the basis function φi(x).
• For i = 1, ..., n − 1 compute the integrals Qj,i for j = 1, ..., 6.
• Compute the entries of the tridiagonal matrix.
• Solve the tridiagonal system for the unknown coefficients ci.
• Compute φ(x) = Σ_{i=1}^n ciφi(x).
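As a minimal sketch of the piecewise linear Rayleigh-Ritz procedure, the following assumes the special case p ≡ 1, q ≡ 0 (so the stiffness integrals reduce to the familiar tridiagonal pattern) and evaluates the load integrals with two-point Gauss quadrature on each supporting element; all names are illustrative, not from the text.

```python
import numpy as np

def ritz_piecewise_linear(f, n):
    """Piecewise linear Rayleigh-Ritz sketch for -u'' = f on [0, 1],
    u(0) = u(1) = 0 (i.e. p = 1, q = 0), with n interior nodes, uniform mesh."""
    h = 1.0 / (n + 1)
    x = h * np.arange(1, n + 1)                      # interior nodes
    # stiffness: a_ij = int phi_i' phi_j' dx  ->  tridiag(-1, 2, -1) / h
    A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1)) / h
    # load: b_i = int f phi_i dx via 2-point Gauss on each supporting element
    g = 1.0 / np.sqrt(3.0)
    b = np.zeros(n)
    nodes = h * np.arange(n + 2)                     # x_0 .. x_{n+1}
    for i in range(1, n + 1):
        for (xl, xr) in ((nodes[i - 1], nodes[i]), (nodes[i], nodes[i + 1])):
            mid, half = 0.5 * (xl + xr), 0.5 * (xr - xl)
            for s in (-g, g):
                xq = mid + half * s
                # hat function phi_i at the quadrature point
                phi = (xq - xl) / h if xr == nodes[i] else (xr - xq) / h
                b[i - 1] += half * f(xq) * phi
    c = np.linalg.solve(A, b)                        # nodal coefficients c_i
    return x, c
```

For f ≡ 2 the exact solution is u = x(1 − x), and the computed nodal coefficients coincide with it (a known nodal-exactness property of linear elements for this model problem).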
Piecewise linear basis functions produce a continuous solution which is, however, not differentiable. Differentiability of the approximation can be obtained by using spline functions, and a Cubic Spline Rayleigh-Ritz algorithm can be constructed.
3.3.2 Rayleigh-Ritz-Galerkin Finite Element Method
Recall that the boundary value problem consisting of finding the function u(x) such that
−d²u/dx² = −u′′ = f
subject to the boundary conditions u(0) = u(1) = 0 is equivalent to the variational problem
consisting of determining the function u(x) such that
a(u, v) = (f, v)
where
a(u, v) = ∫_0^1 −u′′ v dx = ∫_0^1 u′v′ dx
and
(f, v) = ∫_0^1 f v dx
and also to the problem of finding the function u(x) that minimizes the functional
I(u) = (1/2) a(u, v) − (f, v)
where v is a reasonably well behaved but rather arbitrary function.
When one looks for approximate solutions to the integrated forms of the problem by the Galerkin method or by the Ritz method, one seeks solutions in a finite dimensional subspace of the actual solution space.
In the Ritz method, one searches for an approximation to the function that minimizes the functional. The method starts by introducing a set of basis functions {φi(x)}, i = 1, ..., N, and expresses the approximate solution as the trial function uN ≈ u as
uN = Σ_{i=1}^N aiφi
The functional obtained using this approximation then becomes
I(uN) = (1/2) a(uN, v) − (f, v)
Now, the function v, being arbitrary, is selected as the test function
v = vN = uN = Σ_{i=1}^N aiφi
and is substituted into the functional expression to give
I(uN) = (1/2) a(Σ_{i=1}^N aiφi, Σ_{i=1}^N aiφi) − (f, Σ_{i=1}^N aiφi)
where
a(Σ_{i=1}^N aiφi, Σ_{i=1}^N aiφi) = ∫_0^1 (Σ_{i=1}^N ai dφi/dx)(Σ_{i=1}^N ai dφi/dx) dx
and
(f, Σ_{i=1}^N aiφi) = ∫_0^1 f (Σ_{i=1}^N aiφi) dx
Finally, the unknown coefficients ai, i = 1, 2, ..., N, are determined by solving the system of algebraic equations obtained from the extremum conditions, i.e.
∂I/∂a1 = ∂I/∂a2 = ... = ∂I/∂aN = 0
which in matrix notation are simply written as
Ka = F
In the Galerkin method, one seeks directly an approximate solution of a(u, v) = (f, v). As in the Ritz method, in the Galerkin approach one starts by introducing a set of basis functions {φj(x)}, j = 1, ..., N, satisfying the stated boundary conditions and expresses the approximate solution as the trial function uN ≈ u as
uN = Σ_{j=1}^N ajφj
However, the test functions are selected so as to be identical to the basis functions, i.e. {vi} = {φi(x)}, i = 1, ..., N.
The resulting discrete form of the problem is then
Σ_{j=1}^N aj a(φj, φi) = (f, φi)
for all i = 1, 2, ..., N. Here
a(φj, φi) = ∫_0^1 (dφj/dx)(dφi/dx) dx
and
(f, φi) = ∫_0^1 f φi dx
for i = 1, 2, ..., N.
Using matrix notation, the above is simply written as
Ka = F
Solving the algebraic problem yields the values of the coefficients ai , i = 1, 2, ..., N.
The system of equations obtained with the Galerkin method is identical to the one obtained using the Ritz method. However, the Galerkin method is of more general applicability
since it works even when the problem of minimizing the functional has no solution. The finite
element method is obtained by implementing the Galerkin (or Ritz) procedure with a very
particular choice of basis functions.
3.3.3 Numerical Implementation of the Finite Element Method
The finite element method is a direct implementation of the Galerkin (or Rayleigh-Ritz)
procedure in which the chosen basis functions are finite element basis functions (also called
sometimes finite element shape functions). These functions are of very simple form (piecewise
polynomials of low order are most common) and their most distinctive feature is that they
are nonzero only in a small subregion of the computational domain, i.e. they have a local
character. Another important feature of the finite element basis functions is that the basis functions associated with a particular element partially overlap with those of immediately adjacent elements. The accuracy of the approximation can be improved in the finite element method by simply increasing the number of subdomains (i.e. by decreasing their size) without increasing the complexity of the polynomials used to represent the solution inside each individual subregion. Moreover, the matrix associated with the resulting system has a sparse structure and efficient solution methods are applicable. This is the well-known h-formulation of the finite element method. Last, but not least, since the coordinate functions
of adjacent subregions overlap partially with each other, the resulting system of algebraic
equations can be constructed and assembled together by the computer just as it is created.
In implementing the finite element method one starts by using simple interpolation functions as basis functions at the element level and then proceeds to represent the solution in the
entire domain by collecting the contributions associated with each element. This is possible
thanks to the mathematical property that allows computation of integrals over domains as
sums of integrals over sub domains spanning the original domain.
The basis functions involved at the element level are called local basis functions while
the ones that produce the solution in the entire domain are called global basis functions.
Clearly, the two sets of basis functions are closely related.
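The element-by-element construction described above can be sketched for the model operator −u′′: each element contributes a 2 × 2 local stiffness matrix that is scattered into the global matrix at the rows and columns of its two nodes. This is a minimal sketch with illustrative names, assuming a uniform one dimensional mesh.

```python
import numpy as np

def assemble_global_stiffness(num_elems, h):
    """Element-by-element assembly for -u'' on a uniform 1-D mesh:
    each element contributes k_e = (1/h) [[1, -1], [-1, 1]]."""
    n_nodes = num_elems + 1
    K = np.zeros((n_nodes, n_nodes))
    k_e = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # local stiffness
    for e in range(num_elems):
        dofs = [e, e + 1]                 # global node numbers of element e
        for a in range(2):
            for b in range(2):
                K[dofs[a], dofs[b]] += k_e[a, b]     # scatter-add
    return K
```

Because adjacent elements share a node, the interior diagonal entries accumulate 2/h while off-diagonal couplings are −1/h, reproducing the tridiagonal global matrix obtained earlier.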
The role of the local basis functions is to generate the value of the approximated quantity
inside the element from the values at the nodes by interpolation. Assuming again a one dimensional system and the simple subdivision introduced above, consider an arbitrary interior
element ei where the index i denotes the global node number. If the values of the required
solution at xi and xi+1 are u(xi) = ui and u(xi+1) = ui+1, respectively, the approximate value uei(x) of u(x) inside the element is calculated by simple interpolation as follows
uei(x) = ui φi1 + ui+1 φi2
Here, the local finite element basis functions for element ei, φi1 and φi2, are defined as
φi1 = (xi+1 − x)/hi
φi2 = (x − xi)/hi
where hi = xi+1 − xi is the element size and all the positions are measured in the global
coordinate system. Note that the indices 1 and 2 on the local basis functions refer to the
local node numbers for the element.
For simplicity, assume all finite elements are of equal size (uniformly spaced mesh) so
that h1 = h2 = ... = hi = h. Now, the approximate solution on the entire domain uh (x) is
represented as a linear combination of the finite element basis functions, i.e.
uh(x) = u1φ1 + u2φ2 + ... + uiφi + ... + uNφN = Σ_{i=1}^N uiφi
However, note that here, φi , i = 1, 2, ..., N are global finite element basis functions.
There is a simple relationship between local and global finite element basis functions. Specifically, in element e1,
φ1 = φ11
in element e2,
φ2 = { φ12 if x ∈ e1; φ21 if x ∈ e2 }
in element ei,
φi = { φ2i−1 if x ∈ ei−1; φi1 if x ∈ ei }
and in element eN,
φN = φN2
Using the introduced notation, the Galerkin finite element method equations can be expressed as
Σ_{j=1}^N uj a(φj, φi) = (f, φi)
for all i = 1, 2, ..., N. Here
a(φj, φi) = ∫_0^1 (dφj/dx)(dφi/dx) dx
and
(f, φi) = ∫_0^1 f φi dx
for i = 1, 2, ..., N.
Using matrix notation, the above is simply written as
Ku = F
Solving the algebraic problem yields the nodal values ui, i = 1, 2, ..., N. The matrix K is called the finite element stiffness matrix, the column vector F is the force vector, and the column vector u is the vector of unknown nodal values of u.
The same thing can be done to express the Ritz finite element method equations using the new notation. Specifically, the functional I(uh) is given by
I(uh) = (1/2) a(Σ_{i=1}^N uiφi, Σ_{i=1}^N uiφi) − (f, Σ_{i=1}^N uiφi)
where
a(Σ_{i=1}^N uiφi, Σ_{i=1}^N uiφi) = ∫_0^1 (Σ_{i=1}^N ui dφi/dx)² dx
and
(f, Σ_{i=1}^N uiφi) = ∫_0^1 f (Σ_{i=1}^N uiφi) dx
Finally, introduction of the extremum conditions
∂I/∂u1 = ∂I/∂u2 = ... = ∂I/∂uN = 0
yields the system
Ku = F
which is identical to the one obtained using Galerkin’s method.
3.3.4 Illustrative Examples
One Dimensional Problem in Cartesian Coordinates. Consider the boundary value problem
−d²u/dx² = 20x³
subject to the boundary conditions u(0) = u(1) = 0. Find an approximate solution using
the Galerkin finite element method with three equal size elements and the following global
finite element basis functions:
φ1(x) = { 1 − 3x for x ∈ [0, 1/3]; 0 elsewhere }
φ2(x) = { 3x for x ∈ [0, 1/3]; 2 − 3x for x ∈ [1/3, 2/3]; 0 elsewhere }
φ3(x) = { 3x − 1 for x ∈ [1/3, 2/3]; 3 − 3x for x ∈ [2/3, 1]; 0 elsewhere }
φ4(x) = { 3x − 2 for x ∈ [2/3, 1]; 0 elsewhere }
The variational representation of the problem is readily obtained by first multiplying the
given differential equation by a function v and subsequently integrating from x = 0 to x = 1.
After integration by parts, the result is
a(u, v) = (f, v)
where
a(u, v) = ∫_0^1 (du/dx)(dv/dx) dx
and
(f, v) = ∫_0^1 20x³ v dx
Now, let us proceed to the implementation of the finite element method. Here there are
three elements e1 , e2, e3 and four nodes i = 1, 2, 3, 4. The nodal values of u are ui , i = 1, 2, 3, 4
and since u is specified at x = 0 and x = 1, then u1 = u4 = 0 and the only unknowns are u2
and u3 . The representation of the trial function u in terms of the nodal values then becomes
uh = Σ_{i=1}^4 uiφi = u2φ2 + u3φ3 = [φ2, φ3][u2, u3]ᵀ
In the Galerkin method the test function v is given directly in terms of the basis functions. In the present case, since u1 = u4 = 0, v is simply given by
vh = φ2 + φ3 = [1, 1][φ2, φ3]ᵀ
Substituting the above into the variational formulation yields
a(uh, vh) = [1, 1] [k22 k23; k32 k33] [u2, u3]ᵀ
and
(f, v) = [1, 1][F2, F3]ᵀ
where
k22 = ∫_0^{2/3} (dφ2/dx)² dx = 6
k23 = k32 = ∫_{1/3}^{2/3} (dφ2/dx)(dφ3/dx) dx = −3
k33 = ∫_{1/3}^{1} (dφ3/dx)² dx = 6
F2 = ∫_0^1 20x³ φ2 dx = ∫_0^{1/3} 20x³ (3x) dx + ∫_{1/3}^{2/3} 20x³ (2 − 3x) dx = 4/81 + 26/81 = 10/27
and
F3 = ∫_0^1 20x³ φ3 dx = ∫_{1/3}^{2/3} 20x³ (3x − 1) dx + ∫_{2/3}^{1} 20x³ (3 − 3x) dx = 49/81 + 131/81 = 60/27
Therefore, the finite element equation Ku = F reads
[6 −3; −3 6][u2, u3]ᵀ = [10/27, 60/27]ᵀ
Finally, solving for u2 and u3 yields
u2 = 80/243
u3 = 130/243
One-Dimensional Problem in Cylindrical Coordinates. Consider the boundary value problem consisting of finding the function u(r) satisfying
(1/r) d/dr (r du/dr) = 0
subject to
−(du/dr)|r=r1 = q1
at r = r1 , and
u(r2) = uS
The problem is a mathematical model of steady state conduction through a hollow cylindrical shell of a material with unit thermal conductivity, exposed to a heat flux input q1 at its inner radius r1 and maintained at the fixed temperature u2 = u(r2) = uS at its outer radius r2. Here u(r) is the temperature, r is the radial coordinate and k is the thermal conductivity.
The analytical solution to this problem is
u(r) = u2 − q1 r1 ln(r/r2)
Using the calculus of variations one can show that the solution to the above problem is the function that minimizes the following functional
I(u) = (1/2) ∫_0^{2π} ∫_{r1}^{r2} k (du/dr)² r dr dθ − ∫_0^{2π} q1 u1 r1 dθ
where u1 = u(r1) is the yet to be determined temperature on the inner surface of the cylindrical shell and θ is the angular coordinate.
To implement the finite element Ritz-Galerkin method, finite element basis functions are required. The key is to assume a simple, easily integrable form for the solution in terms of the spatial coordinate r and still to be determined parameters. This is then substituted into
the expression for the functional and the minimum of the functional is sought by equating
the derivatives of the functional with respect to the unknown parameters to zero. The final
result is a system of algebraic equations that must then be solved for the values of the
parameters. The procedure is exactly as the classical Ritz method but the basis functions
used are finite element basis functions.
The simplest thing to do at first is to consider a single element representation. In this
case, the finite element is a radial segment spanning the entire shell thickness of the hollow
cylinder. The ends of the finite element are the locations of the inner and outer surfaces.
These are also the two nodes associated with the finite element. Since there is only one
element, global and local quantities are the same and no distinguishing superscripts are
needed.
Assume now that the solution can be approximately expressed as a linear combination of the nodal values of u
uh(r) = φ1(r)u1 + φ2(r)u2 = φu
where φ1, φ2 are the local finite element basis functions,
φ = [φ1, φ2]
is the global basis functions vector (a row matrix) and
u = [u1, u2]ᵀ
is the (column) vector of nodal values of u.
The basis functions are selected to be simple linear functions of the radial distance r and are given by
φ1 = (r2 − r)/(r2 − r1)
φ2 = (r − r1)/(r2 − r1)
As mentioned before, since the element spans the entire domain, φ1 and φ2 are also the global basis functions associated with the (nodal) positions r1 and r2. The derivative of the approximation is then
du/dr = Bu
where the row matrix B is
B = [dφ1/dr, dφ2/dr] = [B1, B2]
Using the given vector notation and substituting into the expression for the functional yields
I(uh) = (1/2) ∫_0^{2π} ∫_{r1}^{r2} uᵀBᵀBu r dr dθ − ∫_0^{2π} u1 q1 r1 dθ
One can now proceed to carry out the integrals. The integral over θ is direct since none of the quantities in the integrands depends on the angular coordinate. Further, the vector operations can be performed to split the integrand in the remaining integral into four terms, i.e.
I = π ∫_{r1}^{r2} [(dφ1/dr)² u1² + (dφ1/dr)(dφ2/dr) u1u2 + (dφ2/dr)(dφ1/dr) u2u1 + (dφ2/dr)² u2²] r dr − 2π u1 q1 r1
The individual integrals are easily obtained and are
∫_{r1}^{r2} (dφ1/dr)² u1² r dr = [(r2 + r1)/(2(r2 − r1))] u1²
∫_{r1}^{r2} (dφ1/dr)(dφ2/dr) u1u2 r dr = −[(r2 + r1)/(2(r2 − r1))] u1u2
∫_{r1}^{r2} (dφ2/dr)(dφ1/dr) u2u1 r dr = −[(r2 + r1)/(2(r2 − r1))] u2u1
∫_{r1}^{r2} (dφ2/dr)² u2² r dr = [(r2 + r1)/(2(r2 − r1))] u2²
Therefore, the functional finally becomes
I = [2π(r2 + r1)/(4(r2 − r1))] (u1² − 2u1u2 + u2²) − 2π u1 q1 r1
or, in matrix notation,
I = (1/2) uᵀKu − uᵀF
where K, the stiffness matrix, is
K = [K11 K12; K21 K22] = [π(r2 + r1)/(r2 − r1)] [1 −1; −1 1]
and the distributed force vector is
F = [2πq1r1, 0]ᵀ
Since u2 = uS is given, the only independent parameter affecting the value of the functional is u1. Therefore, the condition for the functional to attain its minimum value is simply
∂I/∂u1 = 0
Carrying out the differentiation and solving for u1 yields
u1 = uS + 2q1r1(r2 − r1)/(r2 + r1)
In this case, the finite element equation
Ku = F
reduces to a single equation in a single unknown.
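As a quick check of the single-element result, one can compare the finite element estimate of u1 with the analytical value u(r1) = uS − q1 r1 ln(r1/r2); for a thin shell the two agree closely. This is a sketch with illustrative names.

```python
import math

def fe_inner_temperature(uS, q1, r1, r2):
    """One-element Ritz estimate of the inner-wall temperature u1."""
    return uS + 2 * q1 * r1 * (r2 - r1) / (r2 + r1)

def exact_inner_temperature(uS, q1, r1, r2):
    """Analytical value u(r1) = uS - q1 r1 ln(r1 / r2)."""
    return uS - q1 * r1 * math.log(r1 / r2)
```

For uS = 0, q1 = 1, r1 = 1 and r2 = 1.1 the two values differ by less than 0.1 percent of the exact temperature rise, reflecting the fact that the linear element captures the nearly linear profile of a thin shell.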
4 Exercises
1.- Use the shooting method to solve the BVP given by
d²y/dx² = 4(y − x)
subject to
y(0) = 0
y(1) = 2
2.- Construct the following finite difference approximations to the derivative dy/dx of the function y(x) = x sin(x) for x ∈ [−1, 1]. Compare the approximations obtained against the actual value of the derivative.
• ∆+/h
• (∆+ − (1/2)∆+²)/h
• (∆+ − (1/2)∆+² + (1/3)∆+³)/h
• A∆0/h
• A∆0(I − (1/6)∆0²)/h
3.- Construct the following finite difference approximations to the derivative d²y/dx² of the function y(x) = 1/(1 + x) for x ∈ [−1/2, 4]. Compare the approximations obtained against the actual value of the derivative.
• ∆+/h
• (∆+ − (1/2)∆+²)/h
• (∆+ − (1/2)∆+² + (1/3)∆+³)/h
• A∆0/h
• A∆0(I − (1/6)∆0²)/h
3.- Find an approximate solution u(x) to the problem given by
−d²u/dx² = −u′′ = 20x³
subject to the conditions u(0) = u(1) = 0 using the method of finite differences.
4.- Use the finite difference method to solve the BVP given by
d²y/dx² = dy/dx + 2y + cos(x)
subject to
y(0) = −0.3
y(π/2) = −0.1
5.- Find an approximate solution u(x) to the problem given by
−d²u/dx² = −u′′ = 200
subject to the conditions u(0) = u(1) = 0 using the Galerkin finite element method with piecewise linear basis functions.
6.- Find an approximate solution u(x) to the problem given by
−d²u/dx² = −u′′ = 20x³
subject to the conditions u(0) = u(1) = 0 using the Galerkin finite element method with piecewise linear basis functions.
```