What is the necessary and sufficient condition for the fmincon function to find the global optimal solution rather than some other local optimal solution?

I'm not quite clear about this. Through experiments I found that fmincon can find the global optimal solution even when f(x) is nonconvex, which suggests that convexity is a sufficient condition but not a necessary one. Can anyone help me with this question, and also give a proof? A generic answer would be best.

Accepted Answer

Matt J on 2 Apr 2024 at 11:50
Edited: Matt J on 2 Apr 2024 at 12:36
fmincon has no awareness of, and so does not consider, the function's global properties, like whether it is convex or not. It only applies local optimality conditions, based on the function's derivatives.
Moreover, it doesn't apply sufficiency conditions, only necessary ones. As an example, clearly x=0 is not a global or local minimum of f(x)=x^3, but fmincon believes it is regardless:
x = fmincon(@(z) z.^3, 0, [], [])  % empty A,b placeholders: fmincon requires at least four arguments
Initial point is a local minimum that satisfies the constraints. Optimization completed because at the initial point, the objective function is non-decreasing in feasible directions to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 0
This happens because x=0 has zero gradient, and therefore satisfies the first-order necessary condition, but nothing beyond that.
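To see this concretely, here is a quick finite-difference check (my own sketch, not fmincon internals) showing that at x=0 the first-order necessary condition holds while the second-order test is inconclusive:
f = @(x) x.^3;
h = 1e-5;
g = (f(h) - f(-h)) / (2*h)        % ~0: first-order necessary condition holds
H = (f(h) - 2*f(0) + f(-h)) / h^2 % = 0: second derivative also vanishes, so nothing rules x=0 out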
  5 Comments
Matt J on 2 Apr 2024 at 16:11
Edited: Matt J on 2 Apr 2024 at 16:13
Yes, an infinite precision computer with zero error tolerance settings would converge to the global minimum, assuming it is the only feasible stationary point. Convexity is not needed for that.
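For instance (a minimal sketch of this point, my own example), f(x) = x^2/(1+x^2) is nonconvex, yet its only stationary point is the global minimum at x = 0, and fmincon finds it from a generic start:
f = @(x) x.^2 ./ (1 + x.^2);  % nonconvex; unique stationary point at x = 0
x = fmincon(f, 3, [], [])     % converges to (approximately) 0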


More Answers (2)

Torsten on 2 Apr 2024 at 9:43
Moved: Torsten on 2 Apr 2024 at 9:43
The answer is: there is no necessary and sufficient condition for "fmincon" to converge to the global optimum.
I think it would be a great breakthrough in mathematics if such a magic condition existed.
  4 Comments
世轩 张 on 2 Apr 2024 at 12:25
Exactly right, but I can't guarantee that my function is convex. Do you mean that fmincon can still work in my situation? Could you give me some proof? Keep it simple, please.
Torsten on 2 Apr 2024 at 13:07
An algorithm (which assumes infinite precision in all computations involved) can be proven to work, but I've never heard of a proof that a numerical solver will always work under given assumptions.



John D'Errico on 2 Apr 2024 at 12:48
Edited: John D'Errico on 2 Apr 2024 at 12:53
I'm sorry, but no. Fmincon CANNOT be assured of always finding a global solution. In fact, I can think of many ways it will fail. If your experiments have proven anything, it is just that your experiments were inadequate.
Of course, if your objective function has multiple local solutions, it must fail if your starting values start it in the basin of attraction of a local but not the global solution.
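For example (my own two-well sketch, not from the answer), f(x) = (x^2-1)^2 + 0.1x has a local minimum near x = 1 and its global minimum near x = -1, and the starting point decides which one fmincon returns:
f = @(x) (x.^2 - 1).^2 + 0.1*x;     % two basins of attraction
xLocal  = fmincon(f,  0.8, [], [])  % converges to the local minimum near x = 1
xGlobal = fmincon(f, -0.8, [], [])  % converges to the global minimum near x = -1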
Even in the case of a purely linear objective, though, consider a U-shaped domain where the global solution lies at one of the ends of the U. Now start fmincon in the wrong lobe of the U. Since it cannot climb uphill on such a problem, it will be trapped in that local sub-optimum. Remember, fmincon is a constrained solver, so a U-shaped domain is allowable.
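One way to build such a trap (my own construction following that description, not code from the answer): take the ring 1 <= x^2+y^2 <= 4 cut off at y <= 0.5, which forms a U, and a linear objective whose minimum sits at the left tip:
fobj  = @(v) 0.3*v(1) - v(2);                   % linear objective
nlcon = @(v) deal([1 - (v(1)^2 + v(2)^2);       % stay outside the unit circle
                   (v(1)^2 + v(2)^2) - 4], []); % stay inside the radius-2 circle
A = [0 1]; b = 0.5;                             % y <= 0.5 closes off the top of the U
xTrapped = fmincon(fobj, [ 1.5; -0.5], A, b, [], [], [], [], nlcon) % stuck in the right lobe
xGlobal  = fmincon(fobj, [-1.5; -0.5], A, b, [], [], [], [], nlcon) % reaches the left tip
Started in the right lobe, fmincon stops at a KKT point inside that lobe because escaping would mean moving uphill; started in the left lobe, it reaches the global optimum.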
If you decide to remove even that simple failure mode, I can still construct a function that has only one solution, ONE minimizer, on an unbounded domain, and fmincon will provably fail. Consider a function of two variables:
f = @(X) -exp(-(X(1) - 1000)^2 - (X(2) - 1000)^2);
Start it in the vicinity of (X,Y) = (0,0). Near the origin, the gradient of that function, as computed in double precision arithmetic, is an underflow to all zeros. There the function is constant at 0 to far beyond the ability of double precision arithmetic to measure. Of course, the function has a single global minimum at (1000,1000).
f([0,0])
ans = 0
f([1000,1000])
ans = -1
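And fmincon, started at the origin, indeed declares the start point optimal, because the finite-difference gradient it computes underflows to exactly zero (a quick sketch):
g = (f([1e-6, 0]) - f([0, 0])) / 1e-6 % exactly 0: both evaluations underflow
x = fmincon(f, [0, 0], [], [])        % stops immediately at the non-optimal start point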
If the above case is just too difficult, then consider even this simpler example.
f = @(X) 1e-1000*(X(1)^2 + X(2)^2);
So f is a PURELY quadratic function. Apply no constraints at all. So simple that the solution is obvious. The solution lies at the origin. Now, start it at the point (1,1). Again, fmincon will fail when working in double precision arithmetic, REGARDLESS of the tolerance you apply. This is because any computation of f in that vicinity will yield again an underflow to f==0. The gradient in that vicinity is also an underflow to zeros of course.
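The failure here is even more basic than it looks (a quick check): the coefficient 1e-1000 itself underflows to zero in double precision (the smallest representable double is about 4.9e-324), so f evaluates to exactly 0 everywhere and fmincon has nothing to work with:
1e-1000                        % ans = 0: the coefficient already underflows
f([1, 1])                      % ans = 0
x = fmincon(f, [1, 1], [], []) % returns the start point [1, 1] unchanged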
I can surely give other examples, equally easy to construct, where it is easy to show that fmincon must fail. Functions that are not everywhere differentiable would be another failure mode.
So if you want a necessary AND sufficient condition for convergence of fmincon to a global solution, then you need to work only with smooth objectives, simple enough that the domain is convex. You need to consider only problems where there are no issues with double precision arithmetic, where the gradient is always computable, and where underflows or overflows are not an issue. And you need to consider only problems that lack multiple local solutions.
I don't need to prove any of these examples, because a counter-example is sufficient proof in itself.
  1 Comment
世轩 张 on 2 Apr 2024 at 14:18
Thanks for your patience, I understand. My problem doesn't involve the situation where the gradient gets so close to zero that it can fool the computer. So if my problem has no local optimum other than the global optimum, then whether it's convex or not, fmincon will find the global optimum, right?

