# How can I understand why certain sets of random variates lead to the warning message "Unable to meet integration tolerances without reducing the step size below the smallest value allowed (1.136868e-13) at time t=51"?

Vijay on 6 Jul 2020
Commented: Vijay on 13 Jul 2020
My model uses ode15s to solve a system of 9 ODEs. The model has several parameters, a few of which vary in a time-dependent fashion beyond a certain time point (in this case t = 51). When I use the parameter values taken from the literature, the deterministic simulation of this single set runs without issue.
But when I perform intensive simulations with random variates of these parameter values (generated using mvnrnd), a few of the sets (not all) give the warning message "Unable to meet integration tolerances without reducing the step size below the smallest value allowed (1.136868e-13) at time t=51". This limits the solution to t = 50 and ends the intensive simulations early, which defeats the purpose of the simulations.
I have been trying to debug the source code of the ode15s function to see why only a few sets (and only at t = 51, where a few of the parameters become nonlinear with respect to time) create this tolerance-versus-step-size issue. I believe there might be an inversion issue where the determinant of a certain matrix becomes too large. But I couldn't clearly figure out the link with these few sets of parameter values beyond t = 50.
I would appreciate it if experts in the community could shed some light on this.
Vijay
PS: I can postpone the warning message to a certain extent by increasing the tolerances in the odeset options. However, this isn't a sustainable solution to the issue.
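For what it's worth, one way to keep a Monte Carlo sweep running past the failing sets is to check the solver's warning state after each run and record which parameter sets failed, instead of letting one bad set end the sweep. A minimal sketch, assuming hypothetical names odefun, paramSets, and y0 for the model right-hand side, the matrix of random variates, and the initial condition:

```matlab
nSets = size(paramSets, 1);
failed = false(nSets, 1);
for i = 1:nSets
    lastwarn('');   % clear any previous warning state
    [t, y] = ode15s(@(t,A) odefun(t, A, paramSets(i,:)), [0 100], y0);
    [~, warnId] = lastwarn;
    % flag sets that stopped short or raised the tolerance warning
    % (the id substring below is an assumption; inspect lastwarn once
    % to confirm the exact identifier on your MATLAB version)
    if t(end) < 100 || contains(warnId, 'IntegrationTolNotMet')
        failed(i) = true;
    end
end
```

The flagged sets can then be examined separately while the sweep continues over the rest.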

darova on 8 Jul 2020
Here is what I know about differential equations:
x(i+1) = x(i) + f(t(i),x(i))*h; % euler formula
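Expanded into a loop (just an illustration of the formula above, not code from the thread), forward Euler makes it easy to see why a fast-changing right-hand side demands a tiny step size h:

```matlab
f = @(t,x) -50*x;        % an illustrative fast-decaying rate
h = 0.01;                % step size; too large an h makes Euler blow up
t = 0:h:1;
x = zeros(size(t));
x(1) = 1;                % initial condition
for i = 1:numel(t)-1
    x(i+1) = x(i) + f(t(i), x(i))*h;   % Euler formula
end
```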
Vijay on 8 Jul 2020
Thank you so much, darova. I have just contacted Walter. I hope I will find a way to make progress.
Vijay on 9 Jul 2020
I have tentatively identified the issue (still to be confirmed) by changing the solver to ode23tb, so my observations below are with respect to ode23tb.
I observed that 'err' at line 572 becomes inflated when there is a significant difference between y2 and y at line 467. In general, 'err' should stay low, except for a certain number of iterations until h < hmin. Then, based on 'ynew', I figured out that my 7th and/or 8th differential equations are producing this difference (this matches my guess in my comment above https://www.mathworks.com/matlabcentral/answers/560018-how-can-i-understand-why-certain-sets-of-random-variates-lead-to-the-warning-message-unable-to-meet#comment_929627).
Since these 7th and 8th equations in my model are associated with time-varying parameters, it seems the rate constant (k, shown below) is quite sensitive to small changes in t.
if t < 50
    P = P(i,1);
elseif t >= 50 && t < 70
    k = (P(i,2) - P(i,1))/P(i,1)/20;
    P = P(i,j)*exp(k*t);
end

John D'Errico on 9 Jul 2020
Edited: John D'Errico on 9 Jul 2020
Nobody yet wants to explain it?
The standard reason for this error is because you are creating a problem that is too stiff for the ODE solver to handle.
In there you can find some characterizations of what a stiff equation signifies, as well as some examples of classically stiff systems. The one I tend to think of is an atmospheric system model, used as the sun is rising: you see sudden large transients that also decay very quickly (think of turbulence in the system), while other things happen more slowly. Chemical reaction systems can show the same behavior. Systems of springs, where one spring has a very different rate constant from the others, could be an example too.
The ODE solvers in MATLAB (like ode45, for example) will reduce the step size as needed when they enter a problematic part of the solution. But if the needed reduction in step size is too small, then ode45 will give up. Stiff solvers (in MATLAB: ode15s, ode23s, and ode23tb) employ methods specifically designed to handle such problems.
MATLAB even provides a classically quasi-stiff example problem, in vdp1000.m. This has the defining line of:
dydt = [y(2); 1000*(1-y(1)^2)*y(2)-y(1)];
We can try it out.
tic,[t,y] = ode15s(@vdp1000,[0 3000],[2 0]);toc
Elapsed time is 0.076076 seconds.
plot(t,y(:,1),'-o')
ode15s returned with a solution almost instantly. By the density of points in certain regions of the curve, though, you can see that it had to greatly reduce the step size to meet the default integration tolerances. Other solvers, for example ode23tb, have a similar profile. There are still places where the time step suddenly gets small.
tic,[t,y] = ode23tb(@vdp1000,[0 3000],[2 0]);toc
Elapsed time is 0.097514 seconds.
plot(t,y(:,1),'-o')
The spacing between points has changed a wee bit, but not that dramatically. The clearest difference is that it seems to have chosen a relatively large step in some places. But now replace it with a call to ode45: while the other solvers came back instantly for me, ode45 was slow.
tic,[t,y] = ode45(@vdp1000,[0 3000],[2 0]);toc
Elapsed time is 13.815437 seconds.
plot(t,y(:,1),'-o')
Perhaps this next plot will say as much:
semilogy(diff(t))
So ode45 had to work quite diligently over almost the entire curve. Only rarely was it able to come up for air and use a reasonably large step size, although it did not completely give up. A reasonably small step size was adequate here, since the problem is not really that stiff as these things go. But for seriously stiff problems, ode45 can sometimes be forced to reduce the relative step size to eps, or smaller. And then ode45 just gives up, getting too upset to continue. Then you will see a message like this:
"Unable to meet integration tolerances without reducing the step size below the smallest value allowed (1.136868e-13) at time t=51"
So what are you doing? This is difficult to know for sure, and that is likely why nobody has yet answered your question. We don't see your code (not that I even remotely wish to see it). You have a moderately large problem, but one where you then randomly varied some of the parameters as a function of time. A characteristic of random numbers is that some of the time you get draws in the tails: a parameter wanders into a corner where things just go to hell, and no solution is then available.
You appear to be generating systems of ODEs that may sometimes be far too randomly nasty, because of the randomly generated constants in there. For some sets of those constants the problem gets just too stiff. That means the solvers need really, really small time steps. As you saw, even the stiff solvers were sometimes forced to reduce the time step, and that was for only a moderately stiff problem. You even point out that when you use parameter sets comparable to ones found in the literature, things work well.
Sorry, but this is not a bug in the source code, nor something you can fix inside ode15s. Inside the code, you will probably find a point where a sometimes ill-conditioned linear system is solved. When that linear system becomes rank deficient, or at least effectively numerically singular in double precision arithmetic, the step size needed to meet the integration tolerances becomes too small to continue.
Not all problems have a viable numerical solution. And even if you were to work in higher precision, and thus were able to use smaller step sizes where really needed, the solution time would probably become absurdly long because of the infinitesimally small steps required. Higher-precision code also runs much more slowly.

Vijay on 9 Jul 2020
Hi John,
Thank you so much for orienting me toward the crux of the common problems with ODE solvers. It was quite informative and motivated me to learn more.
I am not sure this is an inherent singularity issue. If 'dfdy' in the MATLAB ODE functions (be it ode15s or another) is the key, then note that when I tested some of my other, successful models, there were many iterations where 'dfdy' was rank deficient.
Broadly, the error in my model comes from how the parameters are supplied to the differential equations (which aligns with what you explained above). I am aware that it is not uncommon to supply time-varying parameters. But in this case I have two kinds of parameters: one kind is constant over the entire tspan, while the other kind stays constant until t < 50, then varies until t = 70, then stays unchanged again (as shown in the reference code below). The solver has been retrying the time steps between 48 < t < 51 hundreds of times. Therefore, I think, although the switch is at t == 50, the solver is trying to make sure it hasn't missed any information. In this process it experiences a continual switch of the second kind of parameters between the time-varying and time-invariant regimes. This causes a jump (sometimes huge) in 'dfdt', the step size keeps being reduced, and the 'tolerance warning' triggers when it touches 'hmin'. (All these observations were made with ode23tb.)
I just wonder: if this is not the appropriate way of passing the second kind of parameters (time-varying and non-varying; shown as P_twostate below) into the differential equations, can you advise a better way to do it?
function dAdt = odefun(t, A, tspan, P_constant, P_twostate)
    % P_constant is used unchanged over the whole tspan
    if t < 50
        P_twostate;                          % unchanged below t = 50
    elseif 50 <= t && t <= 70
        P_twostate = P_twostate*exp(k*t);    % time-varying window (k as above)
    elseif t > 70
        P_twostate = P_twostate*exp(k*70);   % frozen at its t = 70 value
    end
end
Thank you so much for your time and suggestions.
John D'Errico on 10 Jul 2020
First, a problem can STILL be stiff, even with parameters that are constant over the entire time domain. Again, look carefully at the example case MATLAB provides, where we see:
dydt = [y(2); 1000*(1-y(1)^2)*y(2)-y(1)];
There is no sudden transition here. Yet still moderately stiff, as I explained in my answer. Suppose I had created my own version, with different constants? In this case, suppose we changed the 1000 in there to 100, or 2000, or 5000? The change in my constants would now be a change in how stiff the system is seen to be.
You need to recognize that a problem need not have transients to be stiff. However, transients can be a cause of stiffness.
What you did is not uncommon: you introduced the transient as a step function, suddenly turning on at some fixed exact point in time, as if a light switch had been flipped.
For example, consider the differential equation:
dx/dt = 1 + H(t - 1)*2
with the initial condition that x(0) = 1. Here, H(t) is the unit Heaviside function, so H(t) is zero for t < 0, and 1 for t >= 0. Effectively, we can write the differential equation in a piecewise form, as:
dx/dt = 1 for 0 <= t <1
dx/dt = 3 for t >= 1
As I said, this is fairly common. In fact, it is pretty easy to solve. A symbolic solution even exists for such a simple problem.
syms x(t)
xsol = dsolve(diff(x) == 1 + heaviside(t - 1)*2,x(0) == 1)
xsol =
(sign(t - 1) + 2)*(t - 1) + 2
What is the prediction at t == 2?
subs(xsol,2)
ans =
5
Note the sign function can take the place of heaviside, with one subtle distinction:
sign(0)
ans =
0
heaviside(0)
ans =
0.5
sign(eps)
ans =
1
heaviside(eps)
ans =
1
That is a minor point though.
What happens if I throw it at a solver like ODE45?
fty = @(t,y) 1 + heaviside(t - 1)*2;
[tout,yout] = ode45(fty,[0,2],1);
yout(end)
ans =
4.95505462827878
The end point is not terribly accurate.
[tout,yout] = ode23tb(fty,[0,2],1);
yout(end)
ans =
4.99808880382504
More than one additional digit is correct there with the stiff solver, because stiff solvers can better handle such transients. As well, if you look at the time steps ode45 generated, you will see that ode45 spends extra time hunting for that exact transition point.
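A common remedy for such a switched-on transient (standard practice, though not stated explicitly in this thread) is to stop and restart the integration at the known breakpoints, so that no step ever straddles t = 50 or t = 70. A minimal sketch, assuming Vijay's odefun has been reduced to the two-argument form ode15s expects, and an initial condition y0:

```matlab
breaks = [0 50 70 100];      % segment boundaries at the parameter switches
tAll = []; yAll = [];
ySeg = y0;
for s = 1:numel(breaks)-1
    [t, y] = ode15s(@odefun, [breaks(s) breaks(s+1)], ySeg);
    tAll = [tAll; t];        %#ok<AGROW>
    yAll = [yAll; y];        %#ok<AGROW>
    ySeg = y(end, :).';      % endpoint becomes the next initial condition
end
```

This way each segment presents the solver with a smooth right-hand side, and it never has to hunt for the discontinuity.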
Vijay on 13 Jul 2020
That's a great tutorial. Thanks a lot @John for the explanation.
I think I got to know the problem now.
The issue is related to some sets of random variates and the subsequent exponentiation.
For instance...
A = [a b];                 % a > 0 && b > 0
rd = (A(2) - A(1))/A(1);   % relative difference between the two levels
k = rd/20;                 % spread over the 20-unit transition window
growth = exp(k*50);        % growth factor at t = 50
Suppose a = 17.4044865017650 and b = 143.273471300743; then exp(k*50) = 7.1126e+07. This sudden spike causes the derivative in the respective differential equation to blow up. This is a super-stiff scenario that pushes the solver, regardless of its efficiency, to reduce the step size below the limit (i.e., 'hmin').
Now I am trying to understand a plausible way to generate random variates for 'a' and 'b' while keeping exp(b-a)/a*1000 < 5.
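One plausible way to enforce such a bound (a sketch, with hypothetical mean mu and covariance Sigma for mvnrnd, and the growth factor written as exp(k*50) as in the snippet above) is simple rejection sampling: redraw any pair whose implied growth factor exceeds the cap.

```matlab
mu = [20 140];                       % hypothetical mean of [a b]
Sigma = [4 1; 1 25];                 % hypothetical covariance
cap = 5;                             % maximum allowed growth factor
nWanted = 1000;
samples = zeros(nWanted, 2);
n = 0;
while n < nWanted
    ab = mvnrnd(mu, Sigma);
    k = (ab(2) - ab(1))/ab(1)/20;    % rate constant as in the model
    if all(ab > 0) && exp(k*50) < cap
        n = n + 1;
        samples(n, :) = ab;          % accept the draw
    end
end
```

Note that rejection truncates the tails, so statistics computed from the accepted samples reflect the truncated distribution, not the original multivariate normal.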