Discussion:
Numerical Stability of Euler scheme solving du/dt=u?
Shi Jin
2004-04-13 02:36:43 UTC
Hi there,

I am a little confused about the numerical stability issue. I think there
are at least two kinds of stability: one is D-stability (or zero-stability),
the other is absolute stability (also called eigenvalue stability or time
stability). Let's restrict our attention to absolute stability, which means
that for a fixed time step, as time goes to infinity, the numerical solution
remains bounded.

But this definition is only designed for the model equation
du/dt=c u
where the real part of c is negative: Re(c)<0.

I guess the idea is that stability is a property of the scheme,
independent of the actual problem. If a scheme is stable for the model
equation, it should also be stable for other equations. But is this true?
How can we justify it?

Another question is how to choose a proper time step for a given equation
with a given scheme, say the Euler scheme. The stability region depends on
the actual equation we are solving: we can compute its Jacobian and then
the eigenvalues. But what if the eigenvalues are positive? Then none of
these stability theories apply.

For example, let's try to solve
du/dt=u
u(0)=1
using Euler scheme: u(n+1)=u(n)+dt*u(n).
How can I know whether my numerical solution is stable or not? Since the
actual solution is unbounded, the previous definition is meaningless.
How can I choose a proper time step?
I did some quick numerical experiments, and it turned out that when the
time step is small enough, the numerical solutions are very close to the
actual solution. But when it is not that small, the numerical results are
very different from the exact one. Is this due to numerical accuracy or
stability? I am really confused here.
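For concreteness, a minimal version of the experiment (my own quick script, nothing tuned) is:

```python
import math

def euler(f, u0, dt, n_steps):
    """Forward Euler: u_{n+1} = u_n + dt * f(u_n)."""
    u = u0
    for _ in range(n_steps):
        u = u + dt * f(u)
    return u

# du/dt = u, u(0) = 1, exact solution u(t) = e^t, integrated to t = 1
for n in (10, 100, 1000):
    dt = 1.0 / n
    u = euler(lambda v: v, 1.0, dt, n)
    print(f"dt={dt:7.4f}  u(1)={u:.6f}  error={abs(u - math.e):.2e}")
```

The error at t = 1 shrinks roughly in proportion to dt, which is what first-order accuracy predicts.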

Thank you very much.

Shi Jin
J. W. Peterson
2004-04-13 14:10:07 UTC
Post by Shi Jin
Hi there,
I am a little confused about the numerical stability issue. I think there
are at least two kinds of stability, one is D-Stable(or zero stable), the
other is absolute stable(or eigenvalue stable, time stable). Let's
restrict our mind to the absolute stable definition, which means that for
the fixed time step, as time goes to infinity, the numerical solution is
always bounded.
But this definition is only designed for the model equation
du/dt=c u
where the real part of c is negative: Re(c)<0.
I guess the idea is that the stability is a property of a scheme,
independent of the actual problem. If a scheme is stable for the model
equation, it should be also stable for other equations. But is this true?
How can we justify it?
For a general ODE:
du/dt = f(u,t)
u(0) = u0

a numerical scheme is said to be "convergent" if it is both
"consistent" and stable. Consistency is related to the truncation
error of the method (in the case of Euler, it is O(delta t)), and
stability is related to the function f. Specifically, if f is
Lipschitz continuous on some interval [t0, t1], then stability is
guaranteed on that interval; but this does not suggest a way
to determine the timestep...
Post by Shi Jin
Another thing is how to choose the proper time step for a given equation
using some scheme, say Euler scheme. The stability region is dependent on
the actual equation we are solving. We could find its Jacobian and then
the eigenvalues. But what if the eigenvalues are positive? Then all these
stability theories are no longer working.
For example, let's try to solve
du/dt=u
u(0)=1
using Euler scheme: u(n+1)=u(n)+dt*u(n).
How can I know whether my numerical solution is stable or not? Since the
actual solution is unbounded, the previous definition is meaningless.
How can I choose a proper time step?
I did some quick numerical experiments,it turned out that when the time
step is small enough, the numerical solutions are very close to the actual
solution. But when it is not that small, the numerical results are very
different from the exact one. But is this due to numerical accuracy or
stability? I am really confused here.
Probably numerical accuracy. Note that the error between the true and
numerical solutions is proportional to exp(t), since the first term in
the truncated Taylor series is O(delta_t^2 * y''(t)), and so for long
enough t it will become quite large. If you try a higher-order method,
e.g. RK4, on the same problem with the same timestep, you should observe
improved accuracy.
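A quick sketch of that comparison (my own helper names; classical RK4 coefficients, du/dt = u over [0, 1]):

```python
import math

def euler_step(f, u, dt):
    # One forward Euler step for autonomous du/dt = f(u)
    return u + dt * f(u)

def rk4_step(f, u, dt):
    # One classical 4th-order Runge-Kutta step
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def integrate(step, f, u0, dt, n):
    u = u0
    for _ in range(n):
        u = step(f, u, dt)
    return u

f = lambda u: u        # du/dt = u, exact u(1) = e
dt, n = 0.1, 10
for name, step in (("Euler", euler_step), ("RK4", rk4_step)):
    u = integrate(step, f, 1.0, dt, n)
    print(f"{name:5s} u(1)={u:.8f} error={abs(u - math.e):.2e}")
```

With the same dt = 0.1, RK4's error is smaller by several orders of magnitude, so the large Euler error here is an accuracy effect rather than an instability.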


Hope that helps,
John
Post by Shi Jin
Thank you very much.
Shi Jin
Shi Jin
2004-04-13 18:49:53 UTC
Thanks. I did some experiments, and it seems to be numerical truncation
error.
Can I safely say that when the eigenvalues are positive there is no
stability issue at all? Of course, the Lipschitz condition has to be
satisfied.

Thank you.

Shi Jin
Post by J. W. Peterson
[quoted text snipped.]
J. W. Peterson
2004-04-14 05:05:27 UTC
Post by Shi Jin
Thanks.I did some experiments and it seems to be numerical truncation
error.
Can I safely say that when the eigenvalues are positive, there is no
stability issue at all? Of course, Lipschitz condition has to be
satisfied.
If you mean that the solution is unstable, in the sense of a linear
stability analysis, whenever there exists at least one positive
eigenvalue, then yes, I believe that is correct.
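A concrete way to see this for forward Euler on the scalar test equation du/dt = c*u (a sketch, my own naming):

```python
# Forward Euler on du/dt = c*u multiplies u by g = 1 + dt*c each step.
# The absolute-stability requirement |g| <= 1 means dt*c must lie in
# [-2, 0] for real c.
def amplification(c, dt):
    return 1.0 + dt * c

for c, dt in ((-10.0, 0.1), (-10.0, 0.3), (1.0, 0.1)):
    g = amplification(c, dt)
    verdict = "bounded" if abs(g) <= 1.0 else "growing"
    print(f"c={c:6.1f} dt={dt:.1f}  g={g:+.2f}  numerical solution {verdict}")

# For c > 0 the numerical solution grows no matter how small dt is --
# but so does the exact solution exp(c*t), so only accuracy limits dt.
```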

[rest snipped.]
Carlos Felippa
2004-04-14 00:15:06 UTC
Post by Shi Jin
[quoted text snipped.]
The issue you mentioned (relative versus absolute stability) is one
that engineering students have trouble with. Recently I wrote a
couple of tutorials for a course on fluid-structure interaction I am
teaching. See

http://caswww.colorado.edu/courses.d/FSI.d/FSI.Ch03.d/FSI.Ch03.pdf
http://caswww.colorado.edu/courses.d/FSI.d/FSI.Ch04.d/FSI.Ch04.pdf

and the recommended references there.

One problem few books mention is overstability: the masking of a
physical instability, e.g. airplane flutter, by the numerical
dissipation of an integrator like Backward Euler. Ideally the numerical
scheme should have exactly the same stability region as the differential
equation being integrated. Only a few integrators realize that goal.
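A small sketch of that masking effect (my own formulation; Backward Euler solved by hand for the undamped oscillator x'' + x = 0, whose exact energy is conserved forever):

```python
import math

# State y = (x, v), y' = A y with A = [[0, 1], [-1, 0]].
# Backward Euler requires solving (I - dt*A) y_{n+1} = y_n; for this
# 2x2 system the inverse can be written out explicitly:
def backward_euler_step(x, v, dt):
    d = 1.0 + dt * dt          # det(I - dt*A)
    return (x + dt * v) / d, (v - dt * x) / d

x, v, dt = 1.0, 0.0, 0.1
energy0 = 0.5 * (x * x + v * v)
for _ in range(int(round(2 * math.pi / dt))):   # roughly one period
    x, v = backward_euler_step(x, v, dt)
energy = 0.5 * (x * x + v * v)
print(f"energy after one period: {energy:.4f} (exact: {energy0:.4f})")

# The discrete energy decays by a factor 1/(1 + dt^2) per step, so the
# scheme damps an oscillation the physics says should persist forever.
```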

As to the last point: if the time step is extremely small, roundoff
error can dominate the truncation (discretization) error. In my
experience that is not a very likely occurrence if one works in double
precision, unless you are dealing with a singular perturbation problem
(say, a differential-difference system or a delay differential
equation).
Phil Hobbs
2004-04-17 19:03:27 UTC
Post by Carlos Felippa
[quoted text snipped.]
One problem few books mention is overstability: the masking of physical
instability, eg airplane flutter, by the numerical dissipation of an
integrator like Backward Euler. Ideally the numerical scheme should
have exactly the same stability region as the differential equation
being integrated. Only a few integrators realize that goal.
[rest snipped.]
Overstability is also common in circuit simulation, e.g. the Gear integrator
in SPICE, which will often make an oscillator look like an amplifier.

Cheers,

Phil Hobbs
