
Numerical instabilities

Consider the following example. Suppose that our o.d.e. is
\begin{displaymath}
y' = - \alpha\,y,
\end{displaymath} (14)

where $\alpha>0$, subject to the boundary condition
\begin{displaymath}
y(0) = 1.
\end{displaymath} (15)

Of course, we can solve this problem analytically to give
\begin{displaymath}
y(x) = \exp(-\alpha\,x).
\end{displaymath} (16)

Note that the solution is a monotonically decreasing function of $x$. We can also solve this problem numerically using Euler's method. Appropriate grid-points are
\begin{displaymath}
x_n = n\,h,
\end{displaymath} (17)

where $n=0,1,2,\cdots$. Euler's method yields
\begin{displaymath}
y_{n+1} = (1-\alpha\,h)\,y_n.
\end{displaymath} (18)

Note one curious fact. Iterating Equation (18) gives $y_n = (1-\alpha\,h)^n$, so the numerical solution grows in magnitude whenever $\vert 1-\alpha\,h\vert>1$: i.e., whenever $h>2/\alpha$. In this case, the factor $1-\alpha\,h$ is less than $-1$, so the numerical solution becomes an oscillatory function of $x$ of monotonically increasing amplitude: i.e., the numerical solution diverges from the actual solution, which decays monotonically to zero. This type of catastrophic failure of a numerical integration scheme is called a numerical instability. All simple integration schemes become unstable if the step-length is made sufficiently large.
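The instability is easy to reproduce numerically. The following sketch (not part of the original text; the values of $\alpha$ and the step-lengths are illustrative choices) iterates Equation (18) once with $h<2/\alpha$ and once with $h>2/\alpha$, showing decay in the first case and oscillatory growth in the second:

```python
def euler(alpha, h, n_steps):
    """Return the Euler iterates of y' = -alpha*y with y(0) = 1,
    i.e., y_n = (1 - alpha*h)^n, via Eq. (18): y_{n+1} = (1 - alpha*h) y_n."""
    y = 1.0
    ys = [y]
    for _ in range(n_steps):
        y = (1.0 - alpha * h) * y  # amplification factor 1 - alpha*h
        ys.append(y)
    return ys

alpha = 1.0

# Stable case: h = 0.5 < 2/alpha, so |1 - alpha*h| = 0.5 < 1 and y_n decays.
stable = euler(alpha, h=0.5, n_steps=20)

# Unstable case: h = 2.5 > 2/alpha, so 1 - alpha*h = -1.5 and y_n
# alternates in sign while growing in magnitude.
unstable = euler(alpha, h=2.5, n_steps=20)

print(abs(stable[-1]))    # tiny: the iterates have decayed toward zero
print(abs(unstable[-1]))  # huge: the iterates have diverged
```

Note that the exact solution $\exp(-\alpha\,x)$ decays for every $h$; the divergence in the second case is purely an artifact of the discretization.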


Richard Fitzpatrick 2006-03-29