CH04.pdf
Ernesto Gutierrez-Miravete
Advanced Mathematics for
Engineering Applications
– An Introduction –
February 26, 2015
Springer
Contents

Part I Part Title

1 Introduction . . . 3
  1.1 Mathematical Formulation of Problems in Engineering and Science . . . 3
    1.1.1 Differential Formulation . . . 5
    1.1.2 Variational Formulation . . . 8
  1.2 Variables and Functions: Real and Complex . . . 10
    1.2.1 Numbers . . . 10
    1.2.2 Variables: Real and Complex . . . 11
    1.2.3 Functions of Real and Complex Variables . . . 12
    1.2.4 Elementary Functions . . . 12
    1.2.5 Derivatives and Integrals of Functions . . . 14
  1.3 Vectors and Matrices . . . 17
    1.3.1 Definition and Properties of Vectors, Matrices and Determinants . . . 17
    1.3.2 Eigenvalues and Eigenvectors . . . 23
    1.3.3 Systems of Linear Algebraic Equations . . . 25
    1.3.4 Direct Solution of Systems of Linear Algebraic Equations . . . 26
    1.3.5 Iterative Solution of Systems of Linear Algebraic Equations . . . 30
  1.4 Interpolation and Approximation . . . 39
    1.4.1 Interpolation and Lagrange Polynomial . . . 39
    1.4.2 Cubic Spline Interpolation . . . 43
  1.5 Numerical Differentiation and Integration . . . 46
    1.5.1 Numerical Differentiation . . . 47
    1.5.2 Numerical Integration . . . 48
  1.6 Equations in a Single Variable and their Solution . . . 50
    1.6.1 Bisection Method . . . 51
    1.6.2 Fixed-Point Iteration Method . . . 52
    1.6.3 Newton-Raphson Method . . . 53
    1.6.4 Secant Method . . . 56
    1.6.5 Convergence Acceleration . . . 56
  1.7 Introduction to Ordinary Differential Equations . . . 57
    1.7.1 First Order Ordinary Differential Equations . . . 58
    1.7.2 Higher Order Ordinary Differential Equations . . . 59
    1.7.3 General and Particular Solutions: Initial and Boundary Value Problems . . . 60
  1.8 Fundamentals of Probability and Statistics . . . 61
    1.8.1 Concepts of Probability . . . 61
    1.8.2 Random Variables, Probability Mass and Density Functions . . . 62
    1.8.3 Cumulative Probability Distribution Function . . . 63
    1.8.4 Expectation and Moment Generating Functions . . . 63
    1.8.5 Law of Large Numbers and the Central Limit Theorem . . . 65
    1.8.6 Examples of Useful Probability Distributions . . . 65
  1.9 Exercises . . . 80
  1.10 References . . . 81
2 Series Solutions of Differential Equations and Special Functions . . . 83
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.2 Power Series and their Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.3 Solving Differential Equations using Power Series . . . . . . . . . . . . . . . 87
2.4 The Method of Frobenius . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.5 Power Series Solutions Leading to Special Functions . . . . . . . . . . . . . 92
2.6 Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
2.7 Legendre Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
2.8 Application Examples of Special Functions . . . . . . . . . . . . . . . . . . . . . 105
2.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
2.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3 Boundary Value Problems, Characteristic Functions and Representations . . . 111
3.1 Boundary Value Problems and Characteristic Functions . . . . . . . . . . . 111
3.2 Some Examples of Characteristic Value Problems . . . . . . . . . . . . . . . 112
3.3 Orthogonality of Characteristic Functions . . . . . . . . . . . . . . . . . . . . . . 116
3.4 Characteristic Function Representations . . . . . . . . . . . . . . . . . . . . . . . . 118
3.5 Fourier Series Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.6 Fourier-Bessel Series Representations . . . . . . . . . . . . . . . . . . . . . . . . . 126
3.7 Legendre Series Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3.8 Fourier Integral Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
3.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
3.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4 Numerical Solution of Initial Value Problems for Ordinary Differential Equations . . . 139
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.2 Fundamental Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.3 Single Step Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.3.1 Euler’s Method . . . 143
4.3.2 Higher Order Taylor Methods . . . 146
4.3.3 Runge-Kutta Methods . . . 147
4.3.4 Runge-Kutta-Fehlberg Method . . . 148
  4.4 Multi-step Methods . . . 150
    4.4.1 Adams-Bashforth and Adams-Moulton Methods . . . 150
    4.4.2 Variable Step-Size Methods . . . 152
    4.4.3 Richardson Extrapolation Method . . . 152
  4.5 Higher Order Equations and Systems of Equations . . . 156
  4.6 Stiff Equations . . . 159
  4.7 Exercises . . . 162
  4.8 References . . . 165
5 Numerical Solution of Boundary Value Problems for Ordinary Differential Equations . . . 167
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5.2 Shooting Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.3 Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5.4 Finite Volume Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.5 Approximation Methods based on Variational Formulations . . . . . . . 185
5.5.1 The classical Ritz Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5.5.2 The Galerkin Finite Element Method . . . . . . . . . . . . . . . . . . . . 187
5.5.3 The Ritz Finite Element Method . . . . . . . . . . . . . . . . . . . . . . . . 197
5.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
6 Multi-variable Calculus . . . 205
  6.1 Functions of More than One Independent Variable . . . 205
6.2 Spatial Differentiation of Vector Functions . . . . . . . . . . . . . . . . . . . . . 205
6.3 Line, Surface and Volume Integration of Vector Functions . . . . . . . . 212
6.4 Basic Concepts of Differential Geometry . . . . . . . . . . . . . . . . . . . . . . . 218
6.5 Introduction to Tensors and Tensor Analysis . . . . . . . . . . . . . . . . . . . . 223
6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
6.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
7 Extrema, Solution of Nonlinear Systems of Algebraic Equations and Optimization . . . 235
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7.2 Extrema of Functions of More than One Variable . . . . . . . . . . . . . . . . 235
7.3 Extrema of Functionals: Calculus of Variations . . . . . . . . . . . . . . . . . . 241
7.4 Numerical Solution of Systems of Nonlinear Algebraic Equations . . 246
7.4.1 Newton’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
7.4.2 Quasi-Newton Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
7.5 Numerical Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
7.5.1 Steepest Descent Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
7.5.2 Non-Gradient Numerical Optimization Methods . . . . . . . . . . 255
7.6 Linear Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
7.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
7.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
8 Analytical Methods for Initial and Boundary Value Problems in Partial Differential Equations . . . 273
8.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
8.2 Partial Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
8.2.1 First Order Quasi-linear Equations . . . . . . . . . . . . . . . . . . . . . . 274
8.2.2 Second Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
8.2.3 Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
8.2.4 The Equations of Mathematical Physics . . . . . . . . . . . . . . . . . 278
8.3 Elliptic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
8.3.1 Laplace’s Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
8.3.2 The Method of Separation of Variables for Elliptic Problems 282
8.4 Parabolic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
8.4.1 The Heat Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
8.4.2 The Method of Separation of Variables for Parabolic
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
8.5 Hyperbolic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.5.1 The Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
8.5.2 The Method of Separation of Variables for Hyperbolic
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
8.6 Other Analytical Solution Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8.6.1 Laplace Transform Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8.6.2 The Method of Variation of Parameters . . . . . . . . . . . . . . . . . . 351
8.6.3 The Method of the Fourier Integral . . . . . . . . . . . . . . . . . . . . . 354
8.6.4 The Method of Green’s Functions . . . . . . . . . . . . . . . . . . . . . . 356
8.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
8.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
9 Numerical Methods for Initial and Boundary Value Problems in Partial Differential Equations . . . 367
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
9.2 Elliptic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
9.2.1 Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
9.2.2 Finite Volume Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
9.2.3 Finite Element Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
9.3 Parabolic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
9.3.1 Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
9.3.2 Finite Volume Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
9.3.3 Finite Element Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
9.4 Hyperbolic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
9.4.1 Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
9.4.2 Finite Volume Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
9.4.3 Finite Element Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
9.5 More Complex Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
9.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
9.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
10 Statistical and Probabilistic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
10.2 Generation of Pseudo-Random Numbers . . . . . . . . . . . . . . . . . . . . . . . 469
10.3 Generation of Random Variates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
10.3.1 The Inverse Transform Method . . . . . . . . . . . . . . . . . . . . . . . . . 475
10.3.2 Inverse Transforms for the Normal and Log-Normal
Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
10.3.3 Inverse Transforms for Empirical Distributions . . . . . . . . . . . 476
10.3.4 Inverse Transforms for Discrete Distributions . . . . . . . . . . . . . 485
10.4 Probabilistic and Statistical Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . 485
10.4.1 Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
10.4.2 Poisson Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
10.4.3 Markov Chains and the Kolmogorov Balance Equations . . . . 488
10.4.4 Queueing Systems and Little’s Formula . . . . . . . . . . . . . . . . . 489
10.4.5 Matching Field Data with Theoretical Distributions . . . . . . . 491
10.4.6 Discrete Event Simulation Modeling . . . . . . . . . . . . . . . . . . . . 492
10.4.7 Reliability Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
10.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
10.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
Chapter 4
Numerical Solution of Initial Value Problems for Ordinary Differential Equations

Abstract This chapter presents methods used for the numerical solution of initial value problems for ordinary differential equations. Powerful single-step and multi-step methods are discussed. The methods described are applied to the solution of initial value problems involving single equations as well as systems of coupled differential equations. The accuracy of the obtained approximations is evaluated using examples.
4.1 Introduction
As described before, many mathematical models of systems of interest in engineering and the sciences are formulated in terms of differential equations. In these equations, the function of interest is given implicitly in terms of one or more independent variables by means of a relationship involving the function and its derivatives.

Ordinary differential equation problems where conditions are specified only at one end of the domain of the independent variable are called Initial Value Problems (IVPs). IVPs are typically associated with the time evolution of a dynamical system from a specified initial condition.

Sometimes, the given differential equation is simple enough that an exact solution can be determined using standard analytical methods. However, except for the most elementary situations, the solution of most engineering problems cannot be obtained using classical analytical methods of applied mathematics, and numerical solution procedures must be employed.
Numerical methods exist to solve both ordinary and partial differential equations.
Commonly used methods of numerical solution of engineering problems are the
finite difference method, the Ritz method and the finite element method.
This chapter contains a summary of useful numerical techniques applicable to
the solution of initial value problems for ordinary differential equations. Numerical
methods are classified into two groups: single step methods and multi-step methods.
An initial value problem in first order ordinary differential equations consists of an ordinary differential equation for the function y(t), with t ∈ [a, b],

$$\frac{dy}{dt} = f(t, y)$$

together with an initial condition

$$y(a) = y_0$$

where f(t, y) and y0 are given.

Initial value problems are also encountered that involve systems of equations, where for the vectors y(t) = (y1, y2, ..., yN) and f(t, y) = (f1(t, y), f2(t, y), ..., fN(t, y)) one has the relationship

$$\frac{d\mathbf{y}}{dt} = \mathbf{f}(t, \mathbf{y})$$

subject to

$$\mathbf{y}(a) = (y_{0,1}, y_{0,2}, \ldots, y_{0,N})$$
Higher order IVPs appear when the given differential equation is of order greater than one (while conditions are specified only at one end of the solution domain). These problems are solved by first transforming them into an equivalent system of first order IVPs. For instance, the problem of determining the function y(t) satisfying

$$\frac{d^2 y}{dt^2} = -\omega^2 y$$

subject to

$$y(0) = y_0 \qquad \text{and} \qquad \left.\frac{dy}{dt}\right|_{t=0} = 0$$

can be transformed into the following system of two first order IVPs

$$\frac{dy_1}{dt} = y_2 \qquad \text{and} \qquad \frac{dy_2}{dt} = -\omega^2 y_1$$

subject respectively to

$$y_1(0) = y_0 \qquad \text{and} \qquad y_2(0) = 0$$
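As a sketch of this reduction (in Python, with an illustrative frequency ω = 2, an illustrative step size, and a simple explicit Euler march of the kind discussed later in this chapter; none of these choices come from the text), the second-order problem becomes a vector-valued first order system:

```python
import math

OMEGA = 2.0   # illustrative frequency, not a value from the text

def f(t, y):
    """Right-hand side of the reduced first order system:
    y[0] holds y1 (the original unknown), y[1] holds y2 = dy1/dt."""
    return [y[1], -OMEGA**2 * y[0]]

def euler_system(f, t0, y0, h, n_steps):
    """March the whole state vector forward with the explicit Euler scheme."""
    t, y = t0, list(y0)
    for _ in range(n_steps):
        dydt = f(t, y)
        y = [yi + h * di for yi, di in zip(y, dydt)]
        t += h
    return y

# With y(0) = 1, y'(0) = 0 the exact solution is y(t) = cos(OMEGA * t)
y_end = euler_system(f, 0.0, [1.0, 0.0], 1e-4, 10_000)  # integrate to t = 1
```

Any of the one-step solvers discussed in this chapter can consume a vector-valued right-hand side of this form unchanged.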
4.2 Fundamental Considerations
Let D ⊂ ℝ² be a set of pairs of numbers (t, y). D is called convex if whenever (t1, y1), (t2, y2) ∈ D, the point ((1 − λ)t1 + λt2, (1 − λ)y1 + λy2) also belongs to D for each λ ∈ [0, 1].

A function of two variables f(t, y) is called Lipschitz in y if there is a number L > 0 such that for any pair (t, yi), (t, yj) ∈ D

$$|f(t, y_i) - f(t, y_j)| \le L\, |y_i - y_j|$$

where L is called the Lipschitz constant for f. If f(t, y) is defined on a convex set and a constant L > 0 exists such that L ≥ |∂f/∂y|, then f is Lipschitz.

All the above is important because if f is continuous and Lipschitz in y on D, then the solution y(t) of the IVP for t ∈ [a, b]

$$\frac{dy}{dt} = f(t, y), \qquad y(a) = \alpha$$

is unique.
When the IVP above is solved numerically, one actually works with the associated perturbed problem defined as

$$\frac{dz}{dt} = f(t, z) + \delta(t), \qquad z(a) = \alpha + \delta_0$$

The perturbed problem has a unique solution if and only if there are constants ε0 > 0 and k > 0 such that

$$|z(t) - y(t)| < k\varepsilon$$

for any number ε with ε0 > ε > 0, ε > |δ(t)| and ε > |δ0|.

If an IVP and its associated perturbed problem have unique solutions, the original problem is called well posed. Accurate numerical solution of an initial value problem is only possible when the problem is well posed.

An effective numerical method for the approximation of the solution of an initial value problem must fulfill some basic mathematical requirements in order to be useful in practice. The key concepts are consistency, stability and convergence; only methods with these characteristics are likely to be useful for practical computation.
Consider for instance the problem

$$\frac{dy}{dt} = \lambda y$$

where λ may be a complex number but is usually real. This differential equation must be solved subject to the initial condition

$$y(0) = y_0$$

for any t such that a ≤ t ≤ b.

For the purpose of numerical approximation, introduce uniformly spaced mesh points tn = a + nh, n = 0, 1, 2, ..., N, with h = (b − a)/N, and use the following finite difference formula as a (good) approximation to the derivative

$$\frac{dy}{dt} \approx \frac{y_{n+1} - y_n}{h}$$

Substituting into the differential equation and solving for yn+1 yields

$$y_{n+1} = y_n + h\lambda y_n = y_n (1 + h\lambda)$$

where yn is the computed numerical approximation of y(tn). Because of the recursive character of this relationship one can also write it as

$$y_n = y_0 (1 + h\lambda)^n = y_0 \sigma^n$$

where the (complex) number σ is called the amplification factor.
A numerical method is stable if its results depend continuously on the initial data. For stability it is then required that the numerical solution remain bounded as n increases, and this is only possible if

$$|\sigma| \le 1$$

Consider now specifically the case λ = λR + iλI with λR ≤ 0. The method above will be stable only as long as

$$|\sigma|^2 = (1 + h\lambda_R)^2 + (h\lambda_I)^2 \le 1$$

that is, the region of stability represented in the complex hλ-plane is the unit circle centered at z = −1. As long as hλ is inside the circle the method will be stable, despite the fact that the region of stability of the exact solution is the entire left half-plane. Moreover, if λ is real (and ≤ 0), stability of the numerical method requires that

$$h \le \frac{2}{|\lambda|}$$

and the method is called conditionally stable.
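This bound can be checked directly (a Python sketch; λ = −10 is an illustrative value, giving the threshold h = 2/|λ| = 0.2):

```python
def euler_model_problem(lam, h, n_steps, y0=1.0):
    """Iterate y_{n+1} = y_n * (1 + h*lam), i.e. explicit Euler for y' = lam*y."""
    y = y0
    for _ in range(n_steps):
        y *= 1.0 + h * lam
    return y

lam = -10.0
y_stable = euler_model_problem(lam, 0.15, 200)    # |sigma| = 0.5 -> decays
y_unstable = euler_model_problem(lam, 0.25, 200)  # |sigma| = 1.5 -> blows up
```

Below the threshold the iterates decay, as the exact solution does; above it they grow without bound even though the exact solution still decays.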
A one step method is consistent if and only if

$$\lim_{h \to 0} \max_{1 \le n \le N} |\tau_n(h)| = \lim_{h \to 0} \max_{1 \le n \le N} \left| \frac{y(t_{n+1}) - y(t_n)}{h} - f(t_n, y(t_n)) \right| = 0$$

i.e. if the finite difference approximation to y′ approaches f as h → 0.

A one step method is convergent if and only if

$$\lim_{h \to 0} \max_{1 \le n \le N} |y_n - y(t_n)| = 0$$

i.e. if the solution of the difference equation approaches the solution of the differential equation as h → 0.
Now, consider the general initial value problem y′(t) = f(t, y), y(a) = y0, and assume the solution is approximated by an explicit single step method

$$y_{n+1} = y_n + h\,\phi(t_n, y_n, h)$$

where 0 < h ≤ h0 and φ(tn, yn, h) ≈ f(t, y). Then, if φ(tn, yn, h) is Lipschitz in y with constant L:

1) The explicit single step method is stable.
2) If the method is consistent then it is also convergent.
3) The error is bounded in terms of the local truncation error τ(h) by

$$|y(t_n) - y_n| \le \frac{\tau(h)}{L}\, e^{L(t_n - a)}$$

The situation concerning multi-step methods is more involved but proceeds along similar lines.
4.3 Single Step Methods
When using single step methods, the approximate solution is obtained by marching
forward one step at a time. The simplest method is the explicit Euler forward method
described above. However, it is not very accurate. Accuracy can be improved by
using various Taylor methods. The fourth order Runge-Kutta method is widely used
since it has good accuracy and is easily implemented. Accuracy can be improved
significantly through the introduction of adaptivity. Implementation of this notion
yields the Runge-Kutta-Fehlberg method.
4.3.1 Euler’s Method
Consider the well posed IVP

$$\frac{dy}{dt} = f(t, y), \qquad a \le t \le b, \qquad y(a) = y_0$$

Create now a set of uniformly spaced mesh points in [a, b] (step size h = (b − a)/N) such that tn = a + nh, n = 0, 1, 2, ..., N. Assuming that the function y(t) possesses at least two continuous derivatives, i.e. y(t) ∈ C²[a, b], Taylor expansion about tn gives

$$y(t_{n+1}) = y(t_n) + (t_{n+1} - t_n)\, y'(t_n) + \frac{(t_{n+1} - t_n)^2}{2} y''(\xi_n) = y(t_n) + h\, y'(t_n) + \frac{h^2}{2} y''(\xi_n) = y(t_n) + h f(t_n, y(t_n)) + \frac{h^2}{2} y''(\xi_n)$$

for some numbers ξn ∈ [tn, tn+1] and with h = tn+1 − tn. Euler’s method is the approximation yn ≈ y(tn) obtained when the remainder term is deleted, i.e.

$$y_0 = y(a), \qquad y_{n+1} = y_n + h f(t_n, y_n)$$

This is called the finite difference scheme for Euler’s explicit method.
It can be shown that if f(t, y) is continuous and Lipschitz with constant L in y on D and |y″(t)| ≤ M < ∞, the error in using Euler’s method is bounded, the bound being

$$|y(t_n) - y_n| \le \frac{hM}{2L}\left[\exp(L(t_n - a)) - 1\right]$$

If roundoff error is included (with per-step roundoff bounded by δ and initial error δ0), there is a corresponding bound on the total error:

$$|y(t_n) - y_n| \le \frac{1}{L}\left(\frac{hM}{2} + \frac{\delta}{h}\right)\left[\exp(L(t_n - a)) - 1\right] + |\delta_0| \exp(L(t_n - a))$$

Therefore, there is an optimum step size $h^* = \sqrt{2\delta/M}$ giving the lowest total error.
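The trade-off can be illustrated numerically (a Python sketch with illustrative values of M and δ, not taken from the text): the h-dependent factor of the bound grows both for very small and very large steps, and is smallest at h*.

```python
import math

M = 4.0        # illustrative bound on |y''|
delta = 1e-8   # illustrative per-step roundoff bound

def error_model(h):
    """The h-dependent factor h*M/2 + delta/h appearing in the total-error bound."""
    return h * M / 2 + delta / h

h_star = math.sqrt(2 * delta / M)   # the optimum step size from the bound
best = error_model(h_star)
```

Shrinking h below h* makes the roundoff term δ/h dominate; enlarging it makes the truncation term hM/2 dominate.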
As an example, consider determining an approximate solution of the initial value problem

$$\frac{dy}{dt} = f(t, y) = y - t^2 + 1, \qquad y(0) = y_0 = 0.5$$

in the range t ∈ [0, 4]. The exact solution of this problem is

$$y(t) = t^2 + 2t + 1 - \frac{1}{2}\exp(t)$$

The figure below shows graphs of the exact solution and the approximate solution obtained using the explicit Euler method with h = 1/5. The approximation is not very good in this case.
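A minimal Python sketch of this experiment (step sizes chosen for illustration) reproduces the first order behavior of the method: reducing h by a factor of ten reduces the error at t = 2 by roughly the same factor.

```python
import math

def f(t, y):
    return y - t**2 + 1              # right-hand side of the example IVP

def exact(t):
    return (t + 1)**2 - 0.5 * math.exp(t)

def euler(f, t0, y0, h, n_steps):
    """Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# error at t = 2 for two step sizes
err_coarse = abs(euler(f, 0.0, 0.5, 0.2, 10) - exact(2.0))
err_fine = abs(euler(f, 0.0, 0.5, 0.02, 100) - exact(2.0))
```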
Fig. 4.1 The exact (’yex-t’) and approximate (’yE-t’) solutions using Euler’s method
4.3.2 Higher Order Taylor Methods
The difference method

$$y_0 = y(a), \qquad y_{n+1} = y_n + h\,\phi(t_n, y_n)$$

for each n = 0, 1, ..., N − 1 has a local truncation error given by

$$\tau_{n+1}(h) = \frac{y(t_{n+1}) - \left(y(t_n) + h\,\phi(t_n, y(t_n))\right)}{h} = \frac{y(t_{n+1}) - y(t_n)}{h} - \phi(t_n, y(t_n))$$

for each n = 0, 1, ..., N − 1.

For example, for Euler’s explicit method the local truncation error is

$$\tau_{n+1}(h) = \frac{y(t_{n+1}) - y(t_n)}{h} - f(t_n, y(t_n)) = \frac{h}{2} y''(\xi_n)$$

so that τn+1(h) = O(h), i.e. the approximation error is of the order of the step size.

Ideally, one would like numerical methods such that τn+1(h) = O(h^p) with p ≫ 1 but with a manageable amount of computation. One such method is the Taylor method of order m:

$$y_0 = y(a), \qquad y_{n+1} = y_n + h\, T^{(m)}(t_n, y_n)$$

for each n = 0, 1, 2, ..., N − 1, where

$$T^{(m)}(t_n, y_n) = f(t_n, y_n) + \frac{h}{2} f'(t_n, y_n) + \ldots + \frac{h^{m-1}}{m!} f^{(m-1)}(t_n, y_n)$$

If Taylor’s method of order m is used to approximate the solution of dy/dt = f(t, y(t)) in a ≤ t ≤ b with y(a) = y0 and y ∈ C^{m+1}[a, b], then with step size h the local truncation error is O(h^m).
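For the running example f(t, y) = y − t² + 1, the total derivative is f′ = ∂f/∂t + f ∂f/∂y = f − 2t, so the Taylor method of order two can be sketched in Python as follows (the step size h = 1/5 matches the Euler experiment, for comparison):

```python
import math

def taylor2(t0, y0, h, n_steps):
    """Taylor method of order 2 for y' = y - t^2 + 1:
    y_{n+1} = y_n + h * T2 with T2 = f + (h/2) * f' and f' = f - 2t."""
    t, y = t0, y0
    for _ in range(n_steps):
        f = y - t**2 + 1
        fp = f - 2.0 * t            # total derivative f' = df/dt + f * df/dy
        y += h * (f + 0.5 * h * fp)
        t += h
    return y

exact = lambda t: (t + 1)**2 - 0.5 * math.exp(t)
err = abs(taylor2(0.0, 0.5, 0.2, 10) - exact(2.0))   # error at t = 2, h = 1/5
```

The second order method is already markedly more accurate than Euler at the same step size.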
4.3.3 Runge-Kutta Methods
Higher order, single step methods which do not require calculation of the derivatives of f have been developed. The Runge-Kutta family of methods is in this class. To motivate them one needs the Taylor polynomial in two variables: if f(t, y) and all its partial derivatives of order ≤ m + 1 are continuous on D, then f(t, y) = Pm(t, y) + Rm(t, y), where Pm(t, y) is the m-th Taylor polynomial in two variables and Rm(t, y) is the remainder.

Consider now the Taylor polynomial

$$T^{(2)}(t, y) = f(t, y) + \frac{h}{2}\frac{\partial f}{\partial t} + \frac{h}{2}\frac{\partial f}{\partial y}\, y' = f\!\left(t + \frac{h}{2},\; y + \frac{h}{2} f(t, y)\right) - R_1$$

where R1 = O(h²). If R1 is discarded and T^{(2)} is used in Taylor’s method formula of order two, one obtains the Runge-Kutta method of order 2, also called the midpoint method:

$$y_0 = y(a), \qquad y_{n+1} = y_n + h\, f\!\left(t_n + \frac{h}{2},\; y_n + \frac{h}{2} f(t_n, y_n)\right)$$

for each n = 0, 1, ..., N − 1.

Many other RK methods are also available. The Runge-Kutta method of order four is the most commonly used. Starting from y0 = y(a), it reads

$$k_1 = h f(t_n, y_n)$$
$$k_2 = h f\!\left(t_n + \frac{h}{2},\; y_n + \frac{1}{2} k_1\right)$$
$$k_3 = h f\!\left(t_n + \frac{h}{2},\; y_n + \frac{1}{2} k_2\right)$$
$$k_4 = h f(t_{n+1},\; y_n + k_3)$$
$$y_{n+1} = y_n + \frac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$$
The method has local truncation error O(h⁴).

Consider again the problem of determining an approximate solution of the initial value problem

$$\frac{dy}{dt} = f(t, y) = y - t^2 + 1, \qquad y(0) = y_0 = 0.5$$

in the range t ∈ [0, 4]. Figure 4.2 shows graphs of the exact solution and the approximate solution obtained using the fourth order Runge-Kutta method with h = 1/5. The approximation is now much better.
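A Python sketch of the classical fourth order Runge-Kutta scheme on the same problem (with the same h = 1/5) shows the gain in accuracy over Euler’s method:

```python
import math

def rk4(f, t0, y0, h, n_steps):
    """Classical fourth order Runge-Kutta method."""
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: y - t**2 + 1
exact = lambda t: (t + 1)**2 - 0.5 * math.exp(t)
err = abs(rk4(f, 0.0, 0.5, 0.2, 10) - exact(2.0))   # error at t = 2
```

At h = 1/5 the Euler error at t = 2 is of order 0.4, while RK4 is accurate to better than a thousandth, at the cost of four function evaluations per step.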
4.3.4 Runge-Kutta-Fehlberg Method
Often, the use of non-uniform step sizes is advantageous. Small step sizes increase accuracy but also computation time, while large step sizes (and faster computation) can be confidently used where the solution changes little.

If the step size is selected so that the approximation stays within a pre-selected value of the (global) truncation error, one obtains an adaptive method. The advantage of adaptive methods is that small step sizes are used only where they are most needed.

Consider two different approximations yn+1, y*n+1 to an IVP, differing only in order and obtained using a fixed step size h:

$$y_0 = y(a), \qquad y_{n+1} = y_n + h\,\phi(t_n, y_n, h)$$

with τn+1(h) = O(h^m), and

$$y_0^* = y(a), \qquad y_{n+1}^* = y_n^* + h\,\phi^*(t_n, y_n^*, h)$$

with τ*n+1(h) = O(h^{m+1}).

Now if the two approximations are close to each other, i.e. yn+1 ≈ y*n+1, then

$$\tau_{n+1}(h) \approx \frac{y_{n+1}^* - y_{n+1}}{h} \approx K h^m$$

Since one wants to be able to vary the step size during the computation, select a factor q such that the new spacing is qh.
Fig. 4.2 The exact (’yex-t’) and approximate (’yRK4-t’) solutions using the Runge-Kutta method
150
4 Numerical Solution of Initial Value Problems
τn+1 (h) ≈ K(qh)n ≈ q
for Ordinary Differential Equations
∗
n yn+1 − yn+1
h
Therefore, to bound τn+1 (qh) by an amount ε , select q such that
q≤(
εh
)1/m
|y∗n+1 − yn+1 |
In practice, when implementing the Runge-Kutta-Fehlberg with m = 4 q is calculated by
q ≤ 0.84(
εh
)1/4
∗
|wi+1 − wi+1 |
In the Runge-Kutta-Fehlberg method one uses order five for y*_{n+1} and order four
for y_{n+1}. In practice, bounds for h, h_min and h_max, are first selected. An initial value h =
h_max is chosen and the approximations y*_{n+1} and y_{n+1} are calculated.
With these, the value of q is computed. If q < 1, the initial choice of h is rejected, a
new (smaller) value h = qh > h_min is selected and the calculation is repeated.
Once q ≥ 1, the calculation proceeds to the next time step. Alternatively, if q > 1,
one can also accept the step and select a new (larger) value h = qh < h_max for the
next one.
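The acceptance-rejection logic above can be sketched in code. The following is a minimal illustration, not the text's algorithm verbatim: it uses the standard Fehlberg 4(5) coefficient pair, and the limits 0.1 and 4 on how fast h may change per step are common safety factors assumed here.

```python
def rkf45(f, a, b, y0, tol=1e-6, hmax=0.25, hmin=1e-6):
    """Adaptive Runge-Kutta-Fehlberg 4(5) marching from a to b."""
    t, y, h = a, y0, hmax
    ts, ys = [t], [y]
    while t < b - 1e-12:
        h = min(h, b - t)
        k1 = h * f(t, y)
        k2 = h * f(t + h/4, y + k1/4)
        k3 = h * f(t + 3*h/8, y + 3*k1/32 + 9*k2/32)
        k4 = h * f(t + 12*h/13, y + 1932*k1/2197 - 7200*k2/2197 + 7296*k3/2197)
        k5 = h * f(t + h, y + 439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104)
        k6 = h * f(t + h/2, y - 8*k1/27 + 2*k2 - 3544*k3/2565
                   + 1859*k4/4104 - 11*k5/40)
        # estimate of the local truncation error of the 4th-order value
        r = abs(k1/360 - 128*k3/4275 - 2197*k4/75240 + k5/50 + 2*k6/55) / h
        if r <= tol:                       # accept the step
            t += h
            y += 25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5
            ts.append(t)
            ys.append(y)
        # q = 0.84 (tol/r)^(1/4); keep the change within [0.1, 4] for safety
        q = 0.84 * (tol / r) ** 0.25 if r > 0 else 4.0
        h *= min(max(q, 0.1), 4.0)
        if h < hmin and t < b - 1e-12:
            raise RuntimeError("minimum step size exceeded")
        h = min(h, hmax)
    return ts, ys
```

Small step sizes are then used only where the error estimate r demands them.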
4.4 Multi-step Methods
In multi-step methods, information at more than two mesh points is used to construct
the approximation. The approximate solution is still obtained by marching forward one
step at a time, but each new value uses information from several previous steps. The
increased complexity is compensated by the increased accuracy of the approximation.
The explicit Adams-Bashforth and the implicit Adams-Moulton
methods were the first multi-step methods introduced and are still widely used. Improvements are obtained by using predictor-corrector or variable step-size approaches,
and high accuracy can also be obtained using Richardson extrapolation. There are many
possible methods available; only a few of these are described and discussed in the
sequel.
4.4.1 Adams-Bashforth and Adams-Moulton Methods
For the problem

dy/dt = f(t, y),    a ≤ t ≤ b,    y(a) = y_0

an m-step approximation method has the form
y_{n+1} = a_{m-1} y_n + a_{m-2} y_{n-1} + ... + a_0 y_{n-m+1}
        + h [b_m f(t_{n+1}, y_{n+1}) + b_{m-1} f(t_n, y_n) + ... + b_0 f(t_{n-m+1}, y_{n-m+1})]

Here, the a_i's and b_i's are constants. Note that in addition to the initial condition, the values y_1, y_2, ..., y_{m-1} must be specified at the outset.
If b_m = 0 the method is called explicit. An example is the fourth-order, four-step
Adams-Bashforth method (AB4)

y_{n+1} = y_n + (h/24) [55 f(t_n, y_n) - 59 f(t_{n-1}, y_{n-1}) + 37 f(t_{n-2}, y_{n-2}) - 9 f(t_{n-3}, y_{n-3})]

for each n = 3, 4, ..., N - 1.
If b_m ≠ 0 the method is implicit. An example is the fourth-order, three-step
Adams-Moulton method (AM4)

y_{n+1} = y_n + (h/24) [9 f(t_{n+1}, y_{n+1}) + 19 f(t_n, y_n) - 5 f(t_{n-1}, y_{n-1}) + f(t_{n-2}, y_{n-2})]

for each n = 2, 3, ..., N - 1.
If y(t) is the solution of y′(t) = f(t, y), a ≤ t ≤ b, y(a) = α, approximated by

y_{n+1} = a_{m-1} y_n + a_{m-2} y_{n-1} + ... + a_0 y_{n-m+1}
        + h [b_m f(t_{n+1}, y_{n+1}) + b_{m-1} f(t_n, y_n) + ... + b_0 f(t_{n-m+1}, y_{n-m+1})]

the local truncation error at the (n+1)-th step can be defined.
In particular, the local truncation error for AB4 is

τ^{AB4}_{n+1}(h) = (251/720) y^{(5)}(μ_n) h^4

while the one for AM4 is

τ^{AM4}_{n+1}(h) = -(19/720) y^{(5)}(μ_n) h^4
However, AM4 requires only three starting points while AB4 requires four.
It is common to combine a prediction obtained explicitly with a correction obtained
implicitly. These are called predictor-corrector methods. The Adams fourth-order
predictor-corrector method starts by computing values y_1, y_2, y_3 using, for instance,
an explicit single-step method. Then the explicit Adams-Bashforth method is applied to
calculate a prediction y^0_4, and the value obtained is used in the implicit
Adams-Moulton method to correct the prediction, yielding y^1_4. The prediction can be
improved by decreasing the value of h.
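The predictor-corrector combination just described can be sketched as follows. The function name abm4 and the use of classical RK4 to generate the starting values y_1, y_2, y_3 are illustrative choices, assuming a smooth scalar f.

```python
def abm4(f, a, b, y0, N):
    """AB4 predictor + AM4 corrector; starting values from classical RK4."""
    h = (b - a) / N
    t = [a + i*h for i in range(N + 1)]
    y = [y0]
    for i in range(3):                               # RK4 starter for y1, y2, y3
        k1 = h*f(t[i], y[i])
        k2 = h*f(t[i] + h/2, y[i] + k1/2)
        k3 = h*f(t[i] + h/2, y[i] + k2/2)
        k4 = h*f(t[i] + h, y[i] + k3)
        y.append(y[i] + (k1 + 2*k2 + 2*k3 + k4)/6)
    for n in range(3, N):
        fn = [f(t[n - j], y[n - j]) for j in range(4)]    # f_n ... f_{n-3}
        # AB4 predictor
        yp = y[n] + h/24*(55*fn[0] - 59*fn[1] + 37*fn[2] - 9*fn[3])
        # AM4 corrector evaluated at the predicted value
        y.append(y[n] + h/24*(9*f(t[n + 1], yp) + 19*fn[0] - 5*fn[1] + fn[2]))
    return t, y
```

Each step costs only two new evaluations of f (one for the predictor's newest point, one for the corrector), which is the usual attraction of Adams-type methods.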
4.4.2 Variable Step-Size Methods
Let the predictor-corrector method described above be used to solve an IVP, producing the predicted and corrected approximations y^0_{i+1} and y_{i+1}. Assuming that h is small, subtraction of the two truncation error expressions leads to

y^{(5)}(μ_i) ≈ (8 / (3 h^5)) (y_{i+1} - y^0_{i+1})

Back substitution into τ^{AM4}_{i+1}(h) produces

τ^{AM4}_{i+1}(h) ≈ 19 |y_{i+1} - y^0_{i+1}| / (270 h)

If now q < 1 is selected so that the new step size is qh and the computation is
repeated, one obtains instead of y^0_{i+1} and y_{i+1} the new set y^{0*}_{i+1} and y^*_{i+1}. One can
then select the value of q again and again until τ^{AM4}_{i+1}(h) is less than a previously
specified tolerance ε, i.e.

q ≤ 1.5 (εh / |y_{i+1} - y^0_{i+1}|)^{1/4}

In practice, the step size reduction is avoided whenever

ε/10 ≤ 19 |y_{i+1} - y^0_{i+1}| / (270 h) ≤ ε
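The two formulas above translate directly into code. This is a minimal sketch; the cap of 4 on the step-increase factor is an assumed safety limit, not part of the text.

```python
def am4_error_estimate(y_corr, y_pred, h):
    """Estimate of the AM4 local truncation error from predictor/corrector."""
    return 19.0 * abs(y_corr - y_pred) / (270.0 * h)

def step_factor(y_corr, y_pred, h, eps):
    """Factor q such that the new step size qh should meet the tolerance eps."""
    diff = abs(y_corr - y_pred)
    if diff == 0.0:
        return 4.0          # error negligible; allow a larger step (assumed cap)
    return min(4.0, 1.5 * (eps * h / diff) ** 0.25)
```

A step is then kept unchanged whenever ε/10 ≤ am4_error_estimate(...) ≤ ε, as stated above.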
4.4.3 Richardson Extrapolation Method
Extrapolation methods appropriately combine O(h) accurate formulae to produce
higher-accuracy formulae. The key idea is Richardson's extrapolation. For instance,
use of Euler's method with h_0 = h/2 yields the following approximation to y(a + h_0):

y_1 = y_0 + h_0 f(a, y_0)

Using next the midpoint method to approximate y(a + h),

y_2 = y_0 + 2 h_0 f(a + h_0, y_1)

Using now endpoint correction yields the following O(h^2) approximation to y(t_1):

y_{1,1} = (1/2) [y_2 + y_1 + h_0 f(a + 2h_0, y_2)]
Now taking h_1 = h/4 and proceeding as above yields the following alternative O(h^2)
approximation to y(t_1):

y_{2,1} = (1/2) [y_4 + y_3 + h_1 f(a + 4h_1, y_4)]

where

y_1 = y_0 + h_1 f(a, y_0)
y_2 = y_0 + 2 h_1 f(a + h_1, y_1)
y_3 = y_1 + 2 h_1 f(a + 2h_1, y_2)
y_4 = y_2 + 2 h_1 f(a + 3h_1, y_3)

The following combination of the above two approximations is then an O(h^4) approximation to y(t_1):

y_{2,2} = y_{2,1} + (1/3)(y_{2,1} - y_{1,1})
To implement the algorithm, one must first specify the function f(t, y), the initial
condition α, the endpoints a and b, the desired tolerance level TOL, and the maximum
and minimum step sizes h_max and h_min. Moreover, the integers q_i, i = 1, 2, ..., 7, and
the spacings h_i = h/q_i must also be defined. One then computes first approximations
using the Euler and midpoint methods and then computes the extrapolations. The process
continues using smaller values of h until the approximation is acceptable.
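The construction of y_{1,1}, y_{2,1} and y_{2,2} above can be sketched as follows. The name extrapolated_step is illustrative, and only the single extrapolation level described in the text is performed.

```python
def extrapolated_step(f, a, y0, h):
    """One step from t = a to t = a + h: two midpoint-method runs (spacings
    h/2 and h/4) with endpoint correction, combined by Richardson extrapolation."""
    def midpoint_run(h0, nsteps):
        # Euler start, midpoint (two-interval) marching, then endpoint correction
        y_prev, y_curr = y0, y0 + h0 * f(a, y0)
        for k in range(1, nsteps):
            y_prev, y_curr = y_curr, y_prev + 2*h0*f(a + k*h0, y_curr)
        return 0.5 * (y_curr + y_prev + h0 * f(a + nsteps*h0, y_curr))
    y11 = midpoint_run(h/2, 2)          # O(h^2) approximation with spacing h/2
    y21 = midpoint_run(h/4, 4)          # O(h^2) approximation with spacing h/4
    return y21 + (y21 - y11) / 3.0      # O(h^4) combination y_{2,2}
```

Repeating the run with further halved spacings would yield additional columns of the extrapolation table, as the q_i of the implementation description suggest.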
A precise comparison of the performance of the methods discussed can be made
using the time-dependent approximation error defined as

e(t) = |y_app(t) - y_ex(t)|

Figure 4.3 shows the computed errors for the Euler and Adams-Bashforth methods.
The figure shows that in both cases the size of the error increases with time, but at
different rates. Clearly, use of the Euler method should be discouraged.
Finally, Figure 4.4 shows the computed errors for the fourth-order Runge-Kutta,
the Runge-Kutta-Fehlberg (using h_min = 0.02 and h_max = 1) and the Richardson
extrapolation methods. Although also increasing with time, the errors are smaller
than those shown in the previous figure. The Runge-Kutta-Fehlberg and Richardson
extrapolation methods produce the most accurate results.
Fig. 4.3 The time-dependent approximation errors obtained with Euler's method ('eE-t') and the Adams-Bashforth method ('eAB-t')
Fig. 4.4 The time-dependent approximation errors obtained with the fourth-order Runge-Kutta method ('eRK4-t'), the Runge-Kutta-Fehlberg method and the Richardson extrapolation method ('eRE-t')
4.5 Higher Order Equations and Systems of Equations
A higher-order IVP can always be rewritten as a system of first-order IVPs

dy/dx = f(x, y)

with

y(a) = (α_1, α_2, ..., α_n)

where y(x) = (y_1, y_2, ..., y_n) and f(x, y) = (f_1(x, y), f_2(x, y), ..., f_n(x, y)).
For instance, the third-order equation

d^3y/dx^3 = f(x, y, dy/dx, d^2y/dx^2)

subject to the initial conditions

y(0) = α_1,    y′(0) = α_2,    y′′(0) = α_3

by introducing the new variables

y_1 = y,    y_2 = dy/dx,    y_3 = d^2y/dx^2
can be transformed into the equivalent system of three first-order equations

dy_1/dx = y_2,    subject to y_1(0) = α_1
dy_2/dx = y_3,    subject to y_2(0) = α_2
dy_3/dx = f(x, y_1, y_2, y_3),    subject to y_3(0) = α_3
It can be shown that if f is continuous and Lipschitz in all the variables y_1, ..., y_n, the
system of IVPs has a unique solution y(x).
Therefore, any of the initial value problem algorithms used to solve single equations can be readily extended to deal with systems of equations.
Recall that the fourth-order Runge-Kutta method is a single-step procedure which
produces an approximation y_i to the solution y(t_i) of the initial value problem represented by the equation

dy/dt = f(t, y)

on a ≤ t ≤ b, subject to the condition

y(a) = α

given by

y_{i+1} = y_i + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)

where

k_1 = h f(t_i, y_i)
k_2 = h f(t_i + h/2, y_i + k_1/2)
k_3 = h f(t_i + h/2, y_i + k_2/2)
k_4 = h f(t_{i+1}, y_i + k_3)

and h = Δt is the step size.
In summary, to implement the Runge-Kutta method for the solution of systems
of initial value problems one must specify the number of equations m, the functions f_j(t, y), the initial conditions α_j, the endpoints a and b, and the number of subintervals
N, for each variable y_j, j = 1, 2, ..., m. One then proceeds to compute the stage values
k_{1,j}, k_{2,j}, k_{3,j}, k_{4,j} and the corresponding approximations y_j.
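A sketch of the extension of RK4 to systems, using plain Python lists for the vector y; the reduction of y′′ + y = 0 to a first-order system follows the recipe of the previous section. All names are illustrative.

```python
import math

def rk4_system(f, a, b, y0, N):
    """Classical RK4 marching for a first-order system y' = f(t, y).
    y is handled as a list of floats; f returns a list of the same length."""
    h = (b - a) / N
    t, y = a, list(y0)
    out = [(t, list(y))]
    for i in range(N):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a1 + 2*a2 + 2*a3 + a4)
             for yi, a1, a2, a3, a4 in zip(y, k1, k2, k3, k4)]
        t = a + (i + 1)*h
        out.append((t, list(y)))
    return out

# y'' + y = 0, y(0) = 0, y'(0) = 1  ->  y1' = y2, y2' = -y1 (solution: sin t, cos t)
sol = rk4_system(lambda t, y: [y[1], -y[0]], 0.0, math.pi, [0.0, 1.0], 100)
```

Each stage applies the scalar RK4 formulas component by component, which is exactly the extension described above.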
As an example, consider the forced vibrations of a single-degree-of-freedom system. This is generally described by the initial value problem consisting of the second-order ordinary differential equation

m d^2z/dt^2 + c dz/dt + k z = f

where z(t) is the displacement of the system from its equilibrium position, c and k
are the damping constant and the stiffness coefficient, respectively, and f is the exciting
force. This problem must be solved subject to the initial conditions

z(0) = z_0,    dz/dt(0) = z′_0
This equation can be replaced by the following equivalent system of two coupled initial value problems for first-order ordinary differential equations

du/dt = f/m - (c/m) u - (k/m) z
dz/dt = u

where u(t) is the velocity, subject to

z(0) = z_0,    u(0) = u_0
When using the fourth-order Runge-Kutta method, the calculation of four intermediate values of z, u and du/dt is required to advance the calculation one time step. The
method involves the following computations. First, the four sets of intermediate
values are calculated:

z_j^{(1)} = z_j
u_j^{(1)} = u_j
(du/dt)_j^{(1)} = F_j/m - (c/m) u_j^{(1)} - (k/m) z_j^{(1)}

z_j^{(2)} = z_j + (Δt/2) u_j^{(1)}
u_j^{(2)} = u_j + (Δt/2) (du/dt)_j^{(1)}
(du/dt)_j^{(2)} = F_j/m - (c/m) u_j^{(2)} - (k/m) z_j^{(2)}

z_j^{(3)} = z_j + (Δt/2) u_j^{(2)}
u_j^{(3)} = u_j + (Δt/2) (du/dt)_j^{(2)}
(du/dt)_j^{(3)} = F_j/m - (c/m) u_j^{(3)} - (k/m) z_j^{(3)}

z_j^{(4)} = z_j + Δt u_j^{(3)}
u_j^{(4)} = u_j + Δt (du/dt)_j^{(3)}
(du/dt)_j^{(4)} = F_j/m - (c/m) u_j^{(4)} - (k/m) z_j^{(4)}

Then, time stepping proceeds as follows:

z_{j+1} = z_j + (Δt/6) (u_j^{(1)} + 2 u_j^{(2)} + 2 u_j^{(3)} + u_j^{(4)})
u_{j+1} = u_j + (Δt/6) ((du/dt)_j^{(1)} + 2 (du/dt)_j^{(2)} + 2 (du/dt)_j^{(3)} + (du/dt)_j^{(4)})
As a specific numerical example, consider the problem where m = 1, c = 20,
k = 200, f(t) is the piecewise function shown in Fig. 4.5, and the system
is initially at rest. In this case

du/dt = f - 20 u - 200 z
dz/dt = u

subject to

z(0) = 0,    u(0) = 0

Figure 4.6 shows the response of the system computed using the Runge-Kutta-Fehlberg method.
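The time stepping above can be sketched in code with a plain RK4 marcher. The piecewise force is given only graphically (Fig. 4.5), so a unit rectangular pulse on [0, 0.5) is assumed here purely for illustration; the parameters m = 1, c = 20 and k = 200 are from the text.

```python
m, c, k = 1.0, 20.0, 200.0            # parameters from the text example

def force(t):
    # Fig. 4.5 defines the force only graphically; a unit rectangular pulse
    # on 0 <= t < 0.5 is ASSUMED here purely for illustration.
    return 1.0 if t < 0.5 else 0.0

def rhs(t, s):
    z, u = s
    return [u, (force(t) - c*u - k*z) / m]   # dz/dt = u, du/dt = (f - cu - kz)/m

def rk4_step(f, t, s, h):
    k1 = f(t, s)
    k2 = f(t + h/2, [si + h/2*ki for si, ki in zip(s, k1)])
    k3 = f(t + h/2, [si + h/2*ki for si, ki in zip(s, k2)])
    k4 = f(t + h, [si + h*ki for si, ki in zip(s, k3)])
    return [si + h/6*(a1 + 2*a2 + 2*a3 + a4)
            for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4)]

h, state = 0.001, [0.0, 0.0]          # system initially at rest
history = [(0.0, 0.0)]                # (t, z) pairs
for i in range(2000):                 # march to t = 2
    state = rk4_step(rhs, i*h, state, h)
    history.append(((i + 1)*h, state[0]))
```

While the assumed pulse acts, z settles near the static deflection f/k = 0.005; after the pulse ends the response decays back to equilibrium.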
4.6 Stiff Equations
If no restrictions are placed on the step size of approximation methods used to solve
initial value problems whose solutions contain rapidly decaying (exponential) terms
of the form exp(λ t) with λ < 0, the approximation fails. Problems of this type are
called stiff problems. For instance, for the IVP y′ = λ y with y(0) = α , control of
roundoff error is obtained only if

h < 2/|λ|

Fig. 4.5 Exciting force for the example discussed
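The bound can be checked numerically: for y′ = λy, explicit Euler gives y_{n+1} = (1 + hλ) y_n, which decays only when |1 + hλ| < 1, i.e. h < 2/|λ| for λ < 0. A minimal sketch:

```python
def euler_final(lam, h, T, y0=1.0):
    """Explicit Euler applied to y' = lam*y: y_{n+1} = (1 + h*lam)*y_n."""
    y = y0
    for _ in range(int(round(T / h))):
        y *= 1.0 + h*lam
    return y

lam = -50.0                              # exact solution exp(-50 t) decays rapidly
stable = euler_final(lam, 0.01, 2.0)     # h = 0.01 < 2/|lam| = 0.04: iterates decay
unstable = euler_final(lam, 0.05, 2.0)   # h = 0.05 > 2/|lam|: |1 + h*lam| = 1.5, grows
```

With the larger step the iterates oscillate with growing magnitude even though the exact solution is essentially zero, which is the failure mode described above.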
Fig. 4.6 Response of the system considered to the exciting force in the previous figure

Numerical solutions to stiff problems are commonly generated using implicit methods. For instance, use of the implicit trapezoidal formula to approximate the solution of y′ = f(t, y) yields the (generally nonlinear) equation

F(y_{j+1}) = y_{j+1} - y_j - (h/2) [f(t_{j+1}, y_{j+1}) + f(t_j, y_j)] = 0

The solution of this equation can be obtained by Newton's method iterations as

y_{j+1,k} = y_{j+1,k-1} - F(y_{j+1,k-1}) / F′(y_{j+1,k-1})
Implicit Trapezoidal with Newton Iteration Algorithm.
• Given the function f(t, y) and its partial derivative f_y(t, y), initial condition α, endpoints a and b, number of steps N, tolerance TOL, maximum number of iterations M.
• Set h = (b - a)/N; t = a; w = α.
• Compute the initial guess w_0 = w + (h/2) f(t, w).
• Iterate

w ← w_0 - [w_0 - (h/2) f(t + h, w_0) - w - (h/2) f(t, w)] / [1 - (h/2) f_y(t + h, w_0)]

replacing w_0 by the new value, until the change is below TOL (or M iterations are reached); then set t ← t + h and repeat.
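The algorithm above can be sketched as follows; the user must supply f_y = ∂f/∂y for the Newton step, and the stiff test problem at the end is an illustrative choice.

```python
def implicit_trapezoid(f, fy, a, b, alpha, N, tol=1e-10, M=25):
    """Implicit trapezoidal rule for y' = f(t, y), y(a) = alpha; the nonlinear
    equation at each step is solved by Newton's method, as in the algorithm."""
    h = (b - a) / N
    t, w = a, alpha
    for i in range(N):
        g = w + h/2*f(t, w)            # the part of F(.) that stays fixed
        w_new = g                      # Euler-type initial guess
        for _ in range(M):
            F = w_new - h/2*f(t + h, w_new) - g
            dF = 1.0 - h/2*fy(t + h, w_new)
            delta = F / dF
            w_new -= delta
            if abs(delta) < tol:
                break
        t, w = a + (i + 1)*h, w_new
    return w

# stiff test y' = -50 y, y(0) = 1: stable even with the large step h = 0.05
w = implicit_trapezoid(lambda t, y: -50.0*y, lambda t, y: -50.0, 0.0, 1.0, 1.0, 20)
```

Note that h = 0.05 violates the explicit Euler bound h < 2/|λ| = 0.04 for this problem, yet the implicit iterates still decay toward zero.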
4.7 Exercises
1.- Construct the following finite difference approximations to the derivative dy/dx
of the function y(x) = x sin(x) for x ∈ [-1, 1]. Compare the approximations obtained
against the actual value of the derivative.

• Δ_+ / h
• (Δ_+ - (1/2) Δ_+^2) / h
• (Δ_+ - (1/2) Δ_+^2 + (1/3) Δ_+^3) / h
• A Δ_0 / h
• A Δ_0 (I - (1/6) Δ_0^2) / h
2.- Construct the following finite difference approximations to the derivative
d^2y/dx^2 of the function y(x) = 1/(1 + x) for x ∈ [-1/2, 4]. Compare the approximations
obtained against the actual value of the derivative.

• Δ_+ / h
• (Δ_+ - (1/2) Δ_+^2) / h
• (Δ_+ - (1/2) Δ_+^2 + (1/3) Δ_+^3) / h
• A Δ_0 / h
• A Δ_0 (I - (1/6) Δ_0^2) / h
3.- Consider the initial value problem

dy/dt = 2t

for t ∈ [0, 1] and subject to the initial condition

y(0) = 0

Determine the maximum step size that can be used for the maximum error not to
exceed 10^{-3}.
4.- Consider the initial value problem consisting of determining the function y(t)
such that

dy/dt = -20 (y - t^2) + 2t

for t ∈ [0, 100] and subject to the initial condition

y(0) = 0

Use the Euler method to compute the maximum approximation error as a function of the mesh spacing h, for h ∈ [10^{-4}, 1].
5.- Consider the initial value problem consisting of determining the function y(t)
such that

dy/dt = y - 1/(1 + t)^2 - 1/(1 + t)

subject to the initial condition

y(0) = 2

The exact solution is

y(t) = exp(t) + 1/(1 + t)

Obtain approximate solutions to this problem using the explicit Euler method, the
fourth-order Runge-Kutta method, the Runge-Kutta-Fehlberg method, the Adams-Bashforth method and Richardson extrapolation. Compare the approximation errors.
6.- Consider the initial value problem consisting of determining the function y(t)
such that

dy/dt = y^3 - 1/(1 + t)^3 - 1/(1 + t)^2

subject to the initial condition

y(0) = 1

The exact solution is

y(t) = 1/(1 + t)

Obtain approximate solutions to this problem using the explicit Euler method, the
fourth-order Runge-Kutta method, the Runge-Kutta-Fehlberg method, the Adams-Bashforth method and Richardson extrapolation. Compare the approximation errors.
7.- Consider the initial value problem consisting of determining the function y(t)
such that

dy/dt = -y^3 + y/(0.01 + t)

for t ∈ [0, 1], subject to the initial condition

y(0) = 1

The exact solution is

y(t) = 3 (1 + 100t) / sqrt(60000 t^3 + 1800 t^2 + 18 t + 9)

Obtain approximate solutions to this problem using the explicit Euler method, the
fourth-order Runge-Kutta method, the Runge-Kutta-Fehlberg method, the Adams-Bashforth method and Richardson extrapolation. Compare the approximation errors.
8.- Consider the initial value problem consisting of determining the function y(t)
such that

d^2y/dt^2 + 0.1 dy/dt + 0.1 y = sin t

subject to the initial conditions

y(0) = 0,    dy/dt(0) = 0

Obtain an approximate solution to this problem using the fourth-order Runge-Kutta
method. Evaluate the approximation error as a function of the mesh spacing.
9.- Consider the initial value problem consisting of determining the function y(t)
such that

d^2y/dt^2 + y = sin(1.1 t)

subject to the initial conditions

y(0) = 0,    dy/dt(0) = 0

Obtain an approximate solution to this problem using the fourth-order Runge-Kutta
method. Evaluate the approximation error as a function of the mesh spacing.
10.- Consider the initial value problem consisting of determining the functions
y_1(t), y_2(t) such that

dy_1/dt = -100 y_1 + y_2
dy_2/dt = -(1/10) y_2

subject to the initial conditions

y_1(0) = 1,    y_2(0) = 1

The exact solution is

y_1(t) = (10/999) exp(-t/10) + (989/999) exp(-100 t)
y_2(t) = exp(-t/10)

Obtain approximate solutions to this problem and evaluate the errors.
4.8 References
1.- Fox, L. and Mayers, D.F., Numerical Solution of Ordinary Differential Equations for Scientists and Engineers, Chapman and Hall, London, 1987.
2.- Gear, C.W., Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, NJ, 1971.
3.- Hairer, E. et al., Solving Ordinary Differential Equations, Vol. I: Nonstiff Problems and Vol. II: Stiff and Differential-Algebraic Problems, Springer-Verlag, Berlin, 1987 and 1993.
4.- Iserles, A., A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Cambridge, 1996.