Interpolation schemes that produce positive second derivatives of the interpolant

Given a set of x-values and y-values, the interpolant to this data should have a non-negative second derivative, which is allowed to be discontinuous.
The y-values are strictly increasing, and the interpolant is allowed to be a C1 function.
Take this data set for example:
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
y = [0, 0.0111, 0.0293, 0.0521, 0.0787, 0.1086, 0.1416, 0.1774, 0.2155, 0.2556, 1.0248];
plot(x,y)
The first 10 points are equally spaced; only the last segment is larger. Another characteristic of my y-values is that they are close to linear in the beginning and nonlinear after that.
From the cubic interpolation family, I tried makima, pchip, and csape, but they all produce negative second derivatives at some points.
Is there an interpolation scheme for my purpose? If not, are there easy adjustments to the linear system for the spline coefficients to enforce positive second derivatives?
Thank you!
UPDATE:
As stated in the comments below, I implemented an optimization problem to find the unknown derivatives at the break points using a C1 cubic Hermite interpolant. On the interval [x_i, x_{i+1}], we have
p_i(t) = H1(t)*y_i + H2(t)*y_{i+1} + H3(t)*h_i*y'_i + H4(t)*h_i*y'_{i+1},
where y_i and y_{i+1} are the interpolation values and y'_i and y'_{i+1} the unknown derivatives at the break points.
The basis functions are given by
H1(t) = 1 - 3t^2 + 2t^3
H2(t) = 3t^2 - 2t^3
H3(t) = t - 2t^2 + t^3
H4(t) = -t^2 + t^3,
and t = (x - x_i)/h_i with h_i = x_{i+1} - x_i is the mapping from the "real" interval [x_i, x_{i+1}] to the unit interval [0, 1]. All these equations can be taken from the Wikipedia article on cubic Hermite interpolation.
In the optimization problem, I want to find the unknown derivatives at the break points under the constraints that the second derivative of the interpolant is positive everywhere and that the second derivative is as smooth as possible. That is, the objective function measures the jump of the second derivative at the break points. Also, I decided to implement "second derivative > 0" as a penalty term in the objective function, although these are linear constraints.
I solved the problem using fmincon, but as stated in the comments, linprog or quadprog could work too. If this is indeed a linear optimization problem, I expect that the derivative of the interpolant w.r.t. the interpolation values is the unit spline. My overall goal is to obtain these sensitivities, and the question is how to get them if my problem were non-linear.
Here is my code:
% interpolation points (break points)
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
% interpolation values
y = [0, 0.0198463, 0.0397084, 0.0596019, 0.0795445, 0.0995199, 0.119908, 0.140298, 0.160773, 0.181772, 6.64478];
% create points at which the change of the 2nd derivative is measured
% for every breakpoint x(i) in the interior, i.e., x(2:end-1),
% two points are created x(i)-e and x(i)+e
xObj = zeros(2*(numel(x)-2), 1);
% offset "e" in x(i)+e
offset = min(diff(x))/100;
ctr = 1;
for i = 2:numel(x)-1
    xObj(ctr) = x(i) - offset;
    xObj(ctr+1) = x(i) + offset;
    ctr = ctr + 2;
end
% objective function handle
objectiveFunc = @(yprime)Objective_function(yprime, x, y, xObj);
% start vector
yprime0 = ones(size(x));
% call optimizer
yprime = fmincon(objectiveFunc, yprime0, [], []);
function fval = Objective_function(yprime, pointsx, pointsy, pointsxObj)
    % minimize the jump of the second derivative at the
    % break points (to obtain best smoothness of the 2nd derivative)
    fval = 0.0;
    ctr = 1;
    for i = 1:numel(pointsxObj)/2
        % second derivative at the point (x(i) - e)
        idxL = find(pointsxObj(ctr) >= pointsx, 1, "last");
        hiL = pointsx(idxL + 1) - pointsx(idxL);
        tL = (pointsxObj(ctr) - pointsx(idxL)) / hiL;
        secDerivL = Evaluate_cubicSegment(tL, 2, hiL, ...
            pointsy(idxL), pointsy(idxL + 1), ...
            yprime(idxL), yprime(idxL + 1));
        % second derivative at the point (x(i) + e)
        idxR = find(pointsxObj(ctr+1) >= pointsx, 1, "last");
        hiR = pointsx(idxR + 1) - pointsx(idxR);
        tR = (pointsxObj(ctr+1) - pointsx(idxR)) / hiR;
        secDerivR = Evaluate_cubicSegment(tR, 2, hiR, ...
            pointsy(idxR), pointsy(idxR + 1), ...
            yprime(idxR), yprime(idxR + 1));
        % add the squared jump to the objective function
        fval = fval + (secDerivL - secDerivR)^2;
        ctr = ctr + 2;
    end
    % ----- 2nd derivative > 0 (linear constraints, implemented as a penalty) -----
    % impose the constraint at discrete points in the interpolation domain
    % (roughly 100 points per cubic segment, distributed uniformly)
    nConstraints = 100*(numel(pointsx)-1);
    xCtr = linspace(min(pointsx), max(pointsx)-1e-3, nConstraints);
    for i = 1:nConstraints
        % find the polynomial segment in which xCtr(i) is located
        idx = find(xCtr(i) >= pointsx, 1, "last");
        hi = pointsx(idx+1) - pointsx(idx);
        t = (xCtr(i) - pointsx(idx)) / hi;
        % second derivative of the interpolant; Evaluate_cubicSegment
        % includes the (1/hi)^2 chain-rule factor, which does not change
        % the sign but keeps the scaling consistent
        secondDeriv = Evaluate_cubicSegment(t, 2, hi, ...
            pointsy(idx), pointsy(idx+1), yprime(idx), yprime(idx+1));
        % add a penalty contribution if the second derivative is negative
        tmp = max(0.0, -secondDeriv);
        fval = fval + 10*tmp^2;
    end
end
function H = Evaluate_basisFunctions(t, order)
    if order == 0
        % values
        H1 = 1 - 3*t^2 + 2*t^3;
        H2 = 3*t^2 - 2*t^3;
        H3 = t - 2*t^2 + t^3;
        H4 = -t^2 + t^3;
    elseif order == 1
        % first derivatives
        H1 = -6*t + 6*t^2;
        H2 = 6*t - 6*t^2;
        H3 = 1 - 4*t + 3*t^2;
        H4 = -2*t + 3*t^2;
    elseif order == 2
        % second derivatives
        H1 = -6 + 12*t;
        H2 = 6 - 12*t;
        H3 = -4 + 6*t;
        H4 = -2 + 6*t;
    elseif order == 3
        % third derivatives
        H1 = 12;
        H2 = -12;
        H3 = 6;
        H4 = 6;
    else
        assert(false, "Evaluations up to the 3rd derivative are possible");
    end
    H = [H1; H2; H3; H4];
end
function p = Evaluate_cubicSegment(t, order, hi, yi, yi1, yiprime, yi1prime)
    % note: the output is named "p" to avoid shadowing the built-in "pi"
    H = Evaluate_basisFunctions(t, order);
    chainRuleFactor = (1/hi)^order;
    p = chainRuleFactor*( H(1)*yi + H(2)*yi1 + H(3)*hi*yiprime + H(4)*hi*yi1prime );
end

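Since the second derivative is linear in the unknown derivatives, the "second derivative > 0" conditions can also be passed to fmincon as linear inequality constraints A*yprime <= b instead of a penalty term. Here is a minimal sketch of that idea (the sampling grid xC and the names nPerSeg, A, b are mine):
% build A*yprime <= b from "p''(x) >= 0" at sample points in each segment
nPerSeg = 20;
xC = [];
for i = 1:numel(x)-1
    xC = [xC, linspace(x(i), x(i+1), nPerSeg)]; %#ok<AGROW>
end
xC = unique(xC(xC < x(end)));   % drop duplicates and the right endpoint
A = zeros(numel(xC), numel(x));
b = zeros(numel(xC), 1);
for k = 1:numel(xC)
    idx = find(xC(k) >= x, 1, "last");
    hi  = x(idx+1) - x(idx);
    t   = (xC(k) - x(idx)) / hi;
    H   = Evaluate_basisFunctions(t, 2);   % 2nd derivatives of H1..H4
    % p''(x) = (y(idx)*H(1) + y(idx+1)*H(2))/hi^2
    %        + yprime(idx)*H(3)/hi + yprime(idx+1)*H(4)/hi  >= 0
    A(k, idx)   = -H(3)/hi;
    A(k, idx+1) = -H(4)/hi;
    b(k)        = (y(idx)*H(1) + y(idx+1)*H(2)) / hi^2;
end
yprime = fmincon(objectiveFunc, yprime0, A, b);
With these constraints in place, the penalty loop in Objective_function can be dropped.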
Accepted Answer

Bruno Luong on 23 Jan 2024
Edited: Bruno Luong on 23 Jan 2024
You can use an approximation spline: just put more (denser) knots than data points so that it actually interpolates.
For example, using my FEX BSFK to get a C^2, convex cubic spline interpolant:
Note that the "knee" location of the interpolant seems to be quite unstable.
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
y = [0, 0.0198463, 0.0397084, 0.0596019, 0.0795445, 0.0995199, 0.119908, 0.140298, 0.160773, 0.181772, 1.02478];
% FEX https://www.mathworks.com/matlabcentral/fileexchange/25872-free-knot-spline-approximation?s_tid=srchtitle_BSFK_1
opt = struct('KnotRemoval','none','sigma',1e-10,'shape',struct('p',2,'lo',0,'up',inf));
pp = BSFK(x,y,4,20,[],opt);
% prepare graphical data
xi = linspace(min(x),max(x),1025);
yi = ppval(pp,xi);
xb = pp.breaks(2);
yb = ppval(pp,xb);
% Check if approximation is close to interpolation
norm(y-ppval(pp,x),'inf')/norm(y,'inf') % 2.6938e-05
% Evaluate second derivative
fdd = ppder(ppder(pp));
yddi = ppval(fdd,xi);
all(yddi >= 0) % true
figure(1)
clf
plot(x, y, 'or')
hold on
plot(xi, yi, 'b')
xlabel('x')
ylabel('y')
yyaxis right
plot(xi, yddi) % second derivative is positive
ylabel('y''''')
legend('data', 'spline interpolation', 'second derivative')
Note: you can regularize the interpolant by setting the parameters d and lambda; however, the function will then not interpolate the data as strictly as above:
opt = struct('KnotRemoval','none','sigma',1e-10,'d',2,'lambda', 1e-6,'shape',struct('p',2,'lo',0,'up',inf));
  6 Comments
SA-W on 24 Jan 2024
I see.
So if you use the approach with lsqlin and f'' >= 0, a solution always exists and is unique, as you said above. If I go with your FEX, I can achieve similar results by playing with the regularization. Right?
Bruno Luong on 24 Jan 2024
Yes.
There is a subtlety you should be aware of. My code makes a trade-off between fitness and regularity of the solution via the lambda value. Ideally, if the solution of the non-regularized problem is not unique, you should rather formulate the problem with the interpolation values as hard constraints and minimize the second-derivative energy.


More Answers (4)

Torsten on 18 Jan 2024
Moved: Torsten on 18 Jan 2024
If you do function approximation (e.g. by splines), there are methods to enforce f''>=0. But for function interpolation, it doesn't make sense in my opinion.
  46 Comments
SA-W on 30 Jan 2024
Ok. But where in your proof does fc'' >= m > 0 become relevant? Are the steps and the conclusion not the same if we only assume fc'' >= 0? What we wanted to show is that the constrained solution cannot fulfill fc'' >= m > 0.
Bruno Luong on 30 Jan 2024
Edited: Bruno Luong on 31 Jan 2024
"Are not the steps and the conclusion the same if we only assume fc''>=0 ? "
Because I can only show a lower bound strictly smaller than m (with epsilon). If m is 0 (your case), the whole proof cannot work, since m - epsilon is negative and fh no longer meets the constraint, so I cannot tell whether it is an admissible solution.



John D'Errico on 18 Jan 2024
Edited: John D'Errico on 18 Jan 2024
Are there easy adjustments? Not really. At least not totally trivial. It sounds as if you want an interpolant, NOT an approximant, so you want a curve that truly passes exactly through the points. That may not always be possible of course.
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
y = [0, 0.0111, 0.0293, 0.0521, 0.0787, 0.1086, 0.1416, 0.1774, 0.2155, 0.2556, 1.0248];
plot(x,y,'-o')
In this case, an interpolant with an everywhere non-negative second derivative should work, though I would need to look very carefully at the data to be sure of that.
How would I do it? I would start with a C1 cubic interpolant, probably one formulated in terms of a set of unknown first derivatives at each break point. Then choose the set of first derivatives such that the second derivative is everywhere positive, and such that the integral of the square of the second derivative over the support of the interpolant is minimized. You could write it using a quadratic programming tool, so it would be efficient. That integral of the squared second derivative can be written as a simple quadratic form.
So, not utterly trivial to write, but not hard either. It would depend on your skill in formulating things like quadratic programs and in working with piecewise interpolants.
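For concreteness, here is a minimal sketch of such a quadratic program in the C1 Hermite setting from the question above (it reuses the H1..H4 basis defined there; the names G0, G1, d0, d1, W are mine, and quadprog from the Optimization Toolbox is assumed). Since p'' is linear in t on each cubic segment, requiring p'' >= 0 at both ends of every segment already guarantees p'' >= 0 everywhere:
n  = numel(x);
h  = diff(x(:));                         % segment widths
G0 = zeros(n-1, n); d0 = zeros(n-1, 1);  % p'' at the left end of each segment
G1 = zeros(n-1, n); d1 = zeros(n-1, 1);  % p'' at the right end of each segment
for i = 1:n-1
    % p'' at x(i)+  :  6*(y(i+1)-y(i))/h(i)^2 - (4*yp(i) + 2*yp(i+1))/h(i)
    % p'' at x(i+1)-:  6*(y(i)-y(i+1))/h(i)^2 + (2*yp(i) + 4*yp(i+1))/h(i)
    G0(i, i:i+1) = [-4, -2]/h(i);  d0(i) = 6*(y(i+1) - y(i))/h(i)^2;
    G1(i, i:i+1) = [ 2,  4]/h(i);  d1(i) = 6*(y(i) - y(i+1))/h(i)^2;
end
% for g linear on [0,1]: int_0^1 g(t)^2 dt = (g0^2 + g0*g1 + g1^2)/3
W  = diag(h/3);
Hq = 2*(G0'*W*G0 + G1'*W*G1) + G0'*W*G1 + G1'*W*G0;  % minimize 0.5*yp'*Hq*yp + fq'*yp
fq = 2*(G0'*W*d0 + G1'*W*d1) + G0'*W*d1 + G1'*W*d0;
yp = quadprog(Hq, fq, -[G0; G1], [d0; d1]);          % subject to p'' >= 0
Because g0^2 + g0*g1 + g1^2 = (g0 + g1/2)^2 + (3/4)*g1^2 >= 0, the matrix Hq is positive semidefinite, so this is a convex QP.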
Can it be done? Using my SLM toolbox, I built this curve:
With a second derivative plot as shown below:
As you can see, it is everywhere non-negative as required.
  33 Comments
SA-W on 22 Jan 2024
Yes, but in "usual quartic spline interpolation", I have no control over the second derivative. Then, I do not see a benefit compared to "usual cubic spline interpolation".
And what do you mean by "prescribe the second derivative"? Does it mean "keep it continuous"?
For the cubic Hermite form, we made an ansatz of the form
p_i(t) = H_0(t)y_i + H_1(t)y_{i+1} + H_2(t)y'_i + H_3(t)y'_{i+1}.
A cubic polynomial has four degrees of freedom, so we can prescribe the function values (y_i and y_{i+1}) as well as the first derivatives (y'_i and y'_{i+1}) at the left and right ends to obtain a C1 interpolant.
Similarly, for a quintic polynomial, we can make an ansatz with six summands,
p_i(t) = H_0(t)y_i + H_1(t)y_{i+1} + H_2(t)y'_i + H_3(t)y'_{i+1} + H_4(t)y''_i + H_5(t)y''_{i+1},
which gives two more degrees of freedom to prescribe the second derivatives (y''_i and y''_{i+1}). Since we can prescribe the second derivatives at the breaks, the interpolant is a C2 function.
"I think the whole debate is in vain because proving that a unique solution for an optimization problem exists (even if it's a linear one) will be hopeless."
Probably yes. At least I do not have the mathematical background to come up with a proof of the uniqueness of the problem. However, I know my y-values result from sampling of a convex curve, so I can at least give anything of what we discussed before a try.
Above, you said that you would implement this approach as follows:
"If this is not that important and you only want to test the approach, I'd start with quartic splines, demand continuity of 0th, 1st and 2nd derivatives and positivity of the second derivative in the breakpoints and minimize sum p_i''(0) over the breakpoints in the objective function."
However, it is not clear to me how you can demand continuity of the second derivative with a quartic spline. A quartic spline in Hermite form has only five degrees of freedom, but we need six (two function values, two first derivatives, two second derivatives). Maybe you can comment again on how you want to realize the continuity of the second derivative with a quartic spline.
Torsten on 22 Jan 2024
Edited: Torsten on 22 Jan 2024
Maybe it's a language problem, but the word "prescribe" is wrong for y',y'',... at the breakpoints in usual spline interpolation. The only thing you prescribe are x and y. All other parameters (y',y'',... at the breakpoints) are computed, namely such that the spline function has continuous first, second, third,... derivatives at the breakpoints. So usually you don't have control over the numerical values of y',y'',... at the breakpoints.
But you want to have some sort of control over y'', namely y'' should be >=0. Since the usual cubic spline only has the property that the second derivative is continuous in the breakpoints, you either have to weaken smoothness properties of the cubic spline or add one or more polynomial degree(s) to gain degrees of freedom for the additional conditions p_i''(0) >= 0 in the breakpoints. That's why I suggested the next higher polynomial degree, namely quartic. In usual spline interpolation, for each additional degree of the local polynomials, you can demand 1 degree higher smoothness in the breakpoints. So while in usual spline interpolation, cubic polynomials lead to C^2 functions, quartic polynomials lead to C^3 functions. But maybe if you reduce the demand for quartic splines to C^2 instead of C^3, you gain degrees of freedom for the additional conditions p_i''(0) >= 0. I don't know - it has to be programmed to see what comes out.



David Goodmanson on 26 Jan 2024
Hi SW,
Here is one way to go about it. The data has 10 closely spaced points and one well separated point x(11). Fit the first 10 points with a cubic polynomial; it so happens that the second derivative is positive in that interval. Then fit the last interval, from x(10) to x(11), with a quadratic that has the right values at the end points and the same first derivative at x(10) as the cubic fit (three eqns, three unknowns). That way C1 continuity is obtained, and by eye (and by calculation) the second derivative is going to be positive in that interval.
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
xshort = x(1:end-1);
xfin = x(end);
y = [0, 0.0111, 0.0293, 0.0521, 0.0787, 0.1086, 0.1416, 0.1774, 0.2155, 0.2556, 1.0248];
yshort = y(1:end-1);
yfin = y(end);
% fit all but the last point with a cubic
p = polyfit(xshort,yshort,3)
% make sure that y''> 0
d2ydx2 = 6*p(1)*xshort + 2*p(2);
figure(1)
plot(xshort,d2ydx2) % it does stay positive
grid on
xs1 = xshort(1);
xse = xshort(end);
yse = yshort(end);
% find the derivative at xse, and fit the interval from xse to xfin
% by a quadratic that matches the derivative at xse
dydx = sum(p.*[3*xse^2 2*xse 1 0])
q = [xse^2 xse 1; xfin^2 xfin 1; 2*xse 1 0]\[yse; yfin; dydx];
q = q'
xfit1 = linspace(xs1,xse,50);
yfit1 = polyval(p,xfit1);
xfit2 = linspace(xse,xfin,50);
yfit2 = polyval(q,xfit2);
xfit = [xfit1 xfit2];
yfit = [yfit1 yfit2];
figure(2)
plot(x,y,'o-',xfit,yfit)
grid on
p = -0.0002 0.0058 -0.0186 0.0074
dydx = 0.0496
q = 0.0151 -0.2611 1.3464
  1 Comment
SA-W on 26 Jan 2024
Thank you for that new idea.
Yes, it might be a good approach for cases where the last point is "far away" from the others. But, again, there is no real control over f''.



Alex Sha
Moved: Bruno Luong on 31 Jan 2024
How about replacing interpolation with a fitting function that ensures non-negative second derivatives:
1: For data:
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
y = [0, 0.0111, 0.0293, 0.0521, 0.0787, 0.1086, 0.1416, 0.1774, 0.2155, 0.2556, 1.0248];
Result:
Sum Squared Error (SSE): 1.81612553970697E-5
Root of Mean Square Error (RMSE): 0.00128492148317141
Correlation Coef. (R): 0.999990141249137
R-Square: 0.999980282595468
Parameter Best Estimate
--------- -------------
p1 0.260835290168589
p2 -17.6776599023055
p3 0.00142484062709233
p4 0.400795422709269
2: For data:
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
y = [0, 0.0198463, 0.0397084, 0.0596019, 0.0795445, 0.0995199, 0.119908, 0.140298, 0.160773, 0.181772, 6.64478];
Result:
Sum Squared Error (SSE): 7.99353965884897E-5
Root of Mean Square Error (RMSE): 0.00269571033965396
Correlation Coef. (R): 0.999999018472319
R-Square: 0.999998036945602
Parameter Best Estimate
--------- -------------
p1 -0.468107959471592
p2 -12.9882511576562
p3 2.13196222726725E-6
p4 0.88833547002951
3: For data:
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
y = [0, 0.0198463, 0.0397084, 0.0596019, 0.0795445, 0.0995199, 0.119908, 0.140298, 0.160773, 0.181772, 1.02478];
Result:
Sum Squared Error (SSE): 8.04747420418266E-5
Root of Mean Square Error (RMSE): 0.00270478938924384
Correlation Coef. (R): 0.999953749555241
R-Square: 0.999907501249586
Parameter Best Estimate
--------- -------------
p1 -0.467984356631285
p2 -12.993660082635
p3 9.9621944291978E-6
p4 0.736411490505736
  19 Comments
Bruno Luong on 1 Feb 2024
Edited: Bruno Luong on 1 Feb 2024
" IMO, it affects the quality of the fit a lot, but it is unconstrained optimization."
It should not affect the fit quality. With a transformation you simply reparametrize the feasible space differently. In practice the numerical optimizer may prefer some parametrization or, on the contrary, might fail to find the solution, but that is another issue.
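As a toy illustration of such a transformation (my own example, not the fitting function used in this answer): to fit a convex model y = p1 + p2*exp(p3*x), the constraint p2 > 0, which guarantees y'' = p2*p3^2*exp(p3*x) >= 0, can be eliminated by substituting p2 = exp(q2) and searching without constraints:
% p2 = exp(q(2)) > 0 by construction, so the fitted model stays convex
obj = @(q) sum((q(1) + exp(q(2))*exp(q(3)*x) - y).^2);
q   = fminsearch(obj, [0, log(0.01), 0.3]);
p   = [q(1), exp(q(2)), q(3)];   % recover the original parameters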
Alex Sha on 2 Feb 2024
The fitting function I provided previously will produce negative values of the second derivative in some cases (for example, the second data set). Try the function below, which ensures positive second derivatives everywhere.
Fitting function:
The second derivative of the function:
It is easy to see that this expression is always greater than zero.
1: For data:
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
y = [0, 0.0111, 0.0293, 0.0521, 0.0787, 0.1086, 0.1416, 0.1774, 0.2155, 0.2556, 1.0248];
Result:
Sum Squared Error (SSE): 8.47449963637154E-5
Root of Mean Square Error (RMSE): 0.00277562435832365
Correlation Coef. (R): 0.999949332317241
R-Square: 0.999898667201696
Parameter Best Estimate
--------- -------------
p1 -74.8659942748385
p2 -2.31846362397161
p3 0.056083091099568
p4 4.31132419470017
2: For data:
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
y = [0, 0.0198463, 0.0397084, 0.0596019, 0.0795445, 0.0995199, 0.119908, 0.140298, 0.160773, 0.181772, 6.64478];
Result:
Sum Squared Error (SSE): 1.46665162719541E-7
Root of Mean Square Error (RMSE): 0.000115469461810764
Correlation Coef. (R): 0.999999998124083
R-Square: 0.999999996248166
Parameter Best Estimate
--------- -------------
p1 -22.141281236849
p2 1.89538261915128
p3 0.00134010429376403
p4 1.38071369179591
3: For data:
x = [3, 3.8125, 4.6250, 5.4375, 6.25, 7.0625, 7.8750, 8.6875, 9.5, 10.3125, 16];
y = [0, 0.0198463, 0.0397084, 0.0596019, 0.0795445, 0.0995199, 0.119908, 0.140298, 0.160773, 0.181772, 1.02478];
Result:
Sum Squared Error (SSE): 7.55692231606095E-8
Root of Mean Square Error (RMSE): 8.28850371191159E-5
Correlation Coef. (R): 0.999999954352124
R-Square: 0.99999990870425
Parameter Best Estimate
--------- -------------
p1 -18.4830172399681
p2 1.74240209734898
p3 0.00155605799596946
p4 1.05985027372439

