polyfit - uncertainty on coefficient.

Hi all,
I've got these data points:
x = [0 0 5 5]; y = [0 1 4 7];
I would like to fit these data with polyfit (ax + b) and then derive the "maximum and minimum" lines compatible with the data; that should give the uncertainty on a and b.
What I should obtain in this easy example is: y = 7/5 * x + 0 and y = 3/5 * x + 1.
I've found posts on this forum about using cov and the S output of the function, but the error I get from the sqrt(...) formula doesn't match the numbers above...
Could anyone help me?
Thanks
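[Editor's note: the expected numbers can be checked by hand. The data describe two x-positions with a y-range each (y between 0 and 1 at x = 0, between 4 and 7 at x = 5); for two points, the steepest compatible line joins the bottom of the first range to the top of the second, and the shallowest does the opposite. A quick Python sketch of that arithmetic (illustrative only, not part of the original post):]

```python
# Check of the expected result for the two-point case described above.
# Values are from the thread; variable names are mine.
x1, x2 = 0.0, 5.0
lo1, hi1 = 0.0, 1.0   # y-range at x1
lo2, hi2 = 4.0, 7.0   # y-range at x2

# Steepest line: bottom of the first range to the top of the second.
a_max = (hi2 - lo1) / (x2 - x1)
b_max = lo1 - a_max * x1

# Shallowest line: top of the first range to the bottom of the second.
a_min = (lo2 - hi1) / (x2 - x1)
b_min = hi1 - a_min * x1

print(a_max, b_max)   # 1.4 0.0  ->  y = 7/5*x + 0
print(a_min, b_min)   # 0.6 1.0  ->  y = 3/5*x + 1
```

Note this endpoint-joining shortcut only works with two x-positions; with more points the extreme lines come from a linear program, as in the second answer below.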

4 Comments

I don't think I get it. Are the first N/2 points the hypothetical start points of the line and the final N/2, the hypothetical end points?
What would the result be if you had data like this:
x = [0 0 5 5 5.01];
y = [0 1 4 7 0];
in which the steepest sloped line is between (5,7) and (5.01,0)?
Thanks for your reply.
I should have been clearer. What I actually have are data points with error bars; in the example I've given, they are (0, 0.5±0.5) and (5, 5.5±1.5). I want to find the straight lines compatible with these error bars that have the largest and smallest slopes.
Is that clearer?
Do you have the Statistics Toolbox?
I think so.


 Accepted Answer

If you have the 2012b version of nlinfit, you're in luck! You can use the 'weights' option with it to weight your regression using the error bars in your data.
Error bars are usually the ‘standard errors’ or the ‘95% confidence intervals’, also based on the standard errors. Standard errors are defined as:
SE = std/sqrt(N)
where std is the standard deviation and N is the number of observations (which may differ between data points). The usual weighting is inverse-variance, so the weight vector computed from standard errors becomes:
Wgt = 1./(N .* SE.^2);
Your objective function is:
f = @(B,x) B(1) + B(2).*x;
If you don't have the 2012b version, this gets slightly more complicated. You need two objective functions, one weighted (that you call in nlinfit) and your original unweighted function.
The weighted function becomes:
fw = @(B,x) Wgt .* f(B,x);
When you have fitted your function, use nlparci and nlpredci to generate the confidence intervals for your parameters (‘B’ here) and your fitted function. NOTE that it is necessary to define Wgt and f before you define fw or call nlinfit.
Nonlinear fitting is a bit heuristic, and while your function should be easy to fit, sometimes it's necessary to use different starting guesses to get a good fit. I strongly suggest using rand(2,1) (in your current problem) rather than ever starting with a vector of zeros. Zeros can cause problems in some situations.
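[Editor's note: as a cross-check of the recipe above, here is the same inverse-variance-weighted straight-line fit sketched in Python, with SciPy's curve_fit standing in for nlinfit. The data and standard errors are the thread's example values; the function names and starting guess are illustrative assumptions, not the answer's original code.]

```python
# Sketch: weighted straight-line fit, curve_fit playing the role of nlinfit.
import numpy as np
from scipy.optimize import curve_fit

x  = np.array([0.0, 5.0])
y  = np.array([0.5, 5.5])
SE = np.array([0.5, 1.5])        # standard errors (half the error-bar heights)

def f(x, b1, b2):                # same model as f = @(B,x) B(1) + B(2).*x
    return b1 + b2 * x

# sigma=SE with absolute_sigma=True gives inverse-variance weighting,
# matching Wgt = 1./SE.^2 up to the per-point N factor.
p0 = np.random.default_rng(0).random(2)   # random start, as recommended above
B, cov = curve_fit(f, x, y, p0=p0, sigma=SE, absolute_sigma=True)

print(B)                         # ~ [0.5, 1.0]: the line through both bar centres
print(np.sqrt(np.diag(cov)))     # parameter standard errors (nlparci-style
                                 # 95% intervals are roughly B +/- 1.96*these)
```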

More Answers (1)

Matt J on 2 Oct 2012 (edited 2 Oct 2012)
The problem is a 2-D linear program. If you have the Optimization Toolbox, you can use LINPROG to solve it. Otherwise, the code below, applied to your original example, uses a brute-force approach and relies on my lcon2vert() function, available here.
barwidths=[1,3];
x = [0 5];
y = [0.5 5.5];
x=x(:); y=y(:); barwidths=barwidths(:); %ensure column form
lb=y-barwidths/2; %lower bounds
ub=y+barwidths/2; %upper bounds
%Linear program data
A=[x(:), ones(size(x(:)))];
A=[A;-A];
b=[ub;-lb];
f=[1,0];
%Brute force linear program solve
V=lcon2vert(A,b);
[minslope,imin]=min(V(:,1));
[maxslope,imax]=max(V(:,1));
minParams=V(imin,:),
maxParams=V(imax,:),
Using LINPROG, the final five lines would be replaced by:
minParams = linprog(f,A,b);
maxParams = linprog(-f,A,b);
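[Editor's note: for readers without the Optimization Toolbox or lcon2vert, here is the same 2-D linear program sketched with SciPy's linprog; the constraint matrices mirror the MATLAB A and b above, and everything else is illustrative.]

```python
# Sketch: min/max slope line through the error bars as a linear program.
import numpy as np
from scipy.optimize import linprog

barwidths = np.array([1.0, 3.0])
x = np.array([0.0, 5.0])
y = np.array([0.5, 5.5])
lb = y - barwidths / 2                  # lower bound on a*x_i + b at each point
ub = y + barwidths / 2                  # upper bound

A = np.column_stack([x, np.ones_like(x)])
A_ub = np.vstack([A, -A])               # a*x_i + b <= ub_i and -(a*x_i + b) <= -lb_i
b_ub = np.concatenate([ub, -lb])
c = np.array([1.0, 0.0])                # objective: the slope a

free = [(None, None)] * 2               # linprog defaults to x >= 0; lift that
res_min = linprog(c,  A_ub=A_ub, b_ub=b_ub, bounds=free)
res_max = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=free)

print(res_min.x)   # ~ [0.6, 1.0]  ->  y = 3/5*x + 1
print(res_max.x)   # ~ [1.4, 0.0]  ->  y = 7/5*x + 0
```

This recovers the two extreme lines the question asked for, and unlike the two-point shortcut, it extends unchanged to any number of error-barred points.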

1 Comment

Thanks. I can't click twice on "Accept this answer"...

