Consider the following example. Suppose that our o.d.e. is
\begin{displaymath}
y' = - \alpha\,y,
\end{displaymath}
(14)
where $\alpha > 0$, subject to the boundary condition
\begin{displaymath}
y(0) = 1.
\end{displaymath}
(15)
Of course, we can solve this problem analytically to give
\begin{displaymath}
y(x) = \exp(-\alpha\,x).
\end{displaymath}
(16)
Note that the solution is a monotonically decreasing function of $x$.
We can also solve this problem numerically using Euler's method. Appropriate
grid-points are
\begin{displaymath}
x_n = n\,h,
\end{displaymath}
(17)
where $n = 0, 1, 2, \ldots$ and $h$ is the step-length. Euler's method yields
\begin{displaymath}
y_{n+1} = (1-\alpha\,h)\,y_n.
\end{displaymath}
(18)
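As an illustration (not part of the original notes), the following minimal Python sketch implements the update (18) for this particular o.d.e. and compares it against the analytic solution (16). The names euler_decay, alpha, h, and n_steps are arbitrary choices for this example.
\begin{verbatim}
import math

def euler_decay(alpha, h, n_steps):
    """Integrate y' = -alpha*y, y(0) = 1, using Euler's method.

    Returns the grid-points x_n = n*h and the numerical values y_n.
    """
    xs, ys = [0.0], [1.0]
    for n in range(n_steps):
        # Euler update, Eq. (18): y_{n+1} = (1 - alpha*h) * y_n
        ys.append((1.0 - alpha * h) * ys[-1])
        xs.append((n + 1) * h)
    return xs, ys

# Example run with alpha = 1 and a small step-length h = 0.1
xs, ys = euler_decay(alpha=1.0, h=0.1, n_steps=10)
for x, y in zip(xs, ys):
    print(f"x = {x:4.1f}   Euler y = {y:8.5f}   exact exp(-x) = {math.exp(-x):8.5f}")
\end{verbatim}
For a small step-length the numerical values track the exact solution closely, as expected.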
Note one curious fact. If $h > 2/\alpha$ then the factor $1-\alpha\,h$ is less than $-1$, so
$\vert y_{n+1}\vert > \vert y_n\vert$ and successive values alternate in sign.
In other words, if the step-length is made too large then the numerical
solution becomes an oscillatory function of $x$ of
monotonically increasing amplitude:
i.e., the numerical solution diverges from the actual
solution. This type of catastrophic failure of a numerical integration
scheme is called a numerical instability. All simple integration
schemes become unstable if the step-length is made sufficiently large.
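To see this threshold numerically, one can iterate the update (18) with a step-length on either side of $h = 2/\alpha$. The short script below is again only an illustrative sketch, with arbitrarily chosen values of alpha and h.
\begin{verbatim}
# Illustrative check of the stability threshold h = 2/alpha.
alpha = 1.0
for h in (0.5, 2.5):              # h < 2/alpha is stable; h > 2/alpha is not
    y = 1.0
    print(f"h = {h}:")
    for n in range(6):
        y = (1.0 - alpha * h) * y  # Euler update, Eq. (18)
        print(f"  y_{n+1} = {y:10.3f}")
\end{verbatim}
With h = 0.5 the iterates decay monotonically towards zero, whereas with h = 2.5 they alternate in sign and grow in magnitude, which is precisely the oscillatory divergence described above.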