Is it possible to store the intermediate values of fmincon?

If I'm running fmincon and I want to see the values of the parameters at each iteration, is there a way to do this?
The documentation is a bit rubbish on this point: I can see the value of the objective function at each iteration, but that doesn't tell me a great deal.
Steven Lord
Steven Lord on 13 Mar 2020
I'm curious why you want to see the values of the parameters at each iteration. How are you planning to use this information? There may already be a way to accomplish your goals that you just aren't aware exists.


Accepted Answer

Matthew Hunt
Matthew Hunt on 17 Mar 2020
So I had to do a lot of searching, and it turns out that this is possible without downloading code. The key was to use 'OutputFcn' in the options set. All I had to do was define a global array, Y, and just use the following function:
function stop = outfun(x,optimValues,state)
global Y;
stop = false;
Y = [Y; x(:).'];   % append the current iterate as a new row
end
to get the values I wanted.
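For anyone following along, the function above gets wired into fmincon through the options. A minimal sketch of the whole pattern, assuming the outfun shown here (myObjective and x0 are placeholders for your own problem):

```matlab
% Hypothetical objective and starting point -- substitute your own.
global Y;
Y = [];   % reset the log before each run

options = optimoptions('fmincon', 'OutputFcn', @outfun);
x0 = [-1 2];
[xfinal, fval] = fmincon(@myObjective, x0, [], [], [], [], [], [], [], options);

% Y now holds one row per iteration: the parameter values fmincon visited.
disp(Y)
```

Note that OutputFcn is called once per iteration, not once per function evaluation, so Y records the accepted iterates rather than every finite-difference probe.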
Alexandra Beaven
Alexandra Beaven on 15 Aug 2022
Why all these snarky comments from "MVPs"? There is a dire need for a tutorial set giving a lot of optimization examples of varying nature: a cookbook or cheat sheet, if you will. For busy people the documentation is all over the place. I have solved all my problems, but it took quite a few hours.


More Answers (4)

John D'Errico
John D'Errico on 13 Mar 2020
Edited: John D'Errico on 13 Mar 2020
The problem is that people using fmincon often have problems with many dimensions, so the display option for fmincon cannot reasonably also show the parameter values at each iteration. As well, the intermediate calls fmincon makes to your objective will include many evaluations where it LOOKS like fmincon is passing in an identical set of numbers. The only difference will typically be a small offset down around 8 significant digits or so, because these calls are used to generate a finite-difference estimate of the gradient of the objective.
However, nothing stops you from storing the intermediate calls in a global array that is grown with each iteration. Note that growing an array naively is inefficient, because you should never grow an array one element at a time. You can get around this to some extent by using some tools I put online to help you grow an array efficiently.
So if you download my incremental array growth tools from the file exchange, you might do something like this:
function z = rosenbrock(xy)
growdata(xy);   % log every point the optimizer evaluates
x = xy(1);
y = xy(2);
z = (1 - x).^2 + 100*(x - y.^2).^2;
Externally, you could now call fmincon like this (note that fmincon needs at least the linear inequality constraint arguments, even if they are empty):
[xfinal,fval] = fmincon(@rosenbrock,[-1 2],[],[])
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
<stopping criteria details>
xfinal =
0.999998011487611 0.999998998818855
fval =
Now, we extract the calls to the objective function with one final call to growdata.
Xset = growdata
Xset =
-1 2
-1.00000001490116 2
-1 2.00000002980232
1003 -3998.00007629395
501 -1998.00003814697
250 -998.000019073486
124.5 -498.000009536743
-0.65571200860354 0.628334669469192
-0.655712023504701 0.628334669469192
-0.65571200860354 0.628334684370353
150.372594502699 32.4715047701911
2.11186827138076 1.21185792851036
2.11186830285005 1.21185792851036
2.11186827138076 1.21185794656845
1.44664021964596 1.12716026103169
1.44664024120258 1.12716026103169
1.44664021964596 1.12716027782768
1.14974747574498 1.0805670609575
1.14974749287755 1.0805670609575
1.14974747574498 1.0805670770592
1.17222513832236 1.083699788281
1.17222515578987 1.083699788281
1.17222513832236 1.08369980442938
1.16986710428275 1.08272565352416
1.16986712171513 1.08272565352416
1.16986710428275 1.08272566965803
1.15914407227683 1.07776100133656
1.15914408954943 1.07776100133656
1.15914407227683 1.07776101739645
1.10997533868462 1.05467340103262
1.10997535522454 1.05467340103262
1.10997533868462 1.05467341674848
1.01295440731929 1.00821723615498
1.01295442241348 1.00821723615498
1.01295440731929 1.00821725117858
1.00939611229392 1.00501021384076
1.00939612733509 1.00501021384076
1.00939611229392 1.00501022881658
1.0008350788412 1.00044181560257
1.0008350937548 1.00044181560257
1.0008350788412 1.00044183051032
1.00002288770236 1.0000127041558
1.00002290260386 1.0000127041558
1.00002288770236 1.00001271905715
0.999998030572635 0.999998995185113
0.999998045473796 0.999998995185113
0.999998030572635 0.999999010086274
0.999997747489818 0.999998866774367
0.999997889031226 0.99999893097974
0.999997959801931 0.999998963082427
0.999997995187283 0.99999897913377
0.999997747489818 0.999998866774367
0.999997886695132 0.999998936401064
0.999997956601434 0.999998971366287
0.999997992186123 0.999998989164777
0.999998011487611 0.999998998818855
0.999998026388773 0.999998998818855
0.999998011487611 0.999999013720016
You can find my incremental array growth tools on the file exchange.
John D'Errico
John D'Errico on 17 Mar 2020
Edited: John D'Errico on 17 Mar 2020
By that definition, any code is "clever" if you won't bother to fully understand how the code works, or even why the code was written in the first place. In fact, unless you have world-class expertise in numerical analysis, you should never trust even the simplest codes written in MATLAB. You should therefore re-write every basic function in MATLAB, so the performance of every code is known and fully understood by you. Of course, I will bet that your implementations of most of those codes would fail miserably in comparison to those written by people with far more expertise than you. For example, even something as simple as a matrix multiplication is a "black box" by your definition, unless you are fully capable of writing that code from scratch and fully understand how the BLAS routines are employed to gain high efficiency. So, really, you should be writing even those matrix multiplies as nested for loops, not trusting those black boxes. As well, don't trust sin(x) or exp(x), as they also use hidden black-box tricks inside to be accurate and efficient.
growdata simply stores the values passed into it in an array that is grown in an efficient manner. You do NOT want to grow an array naively, as that is an operation that takes quadratically increasing time to perform. That is, appending will get slower and slower as the size of the array grows. This is a well-known problem with dynamically grown arrays. Your code will get VERY slow, and then your next plaintive request on this forum will be to ask why your code is so slow.
This is why it is ALWAYS recommended that you preallocate arrays to their final size in MATLAB, IF they will change in size over the course of a loop.
If you do not preallocate an array or vector, then MATLAB is forced to allocate an entirely new array every time the array is grown, and then it copies the entire contents over. Since the array is growing in size, this takes progressively more time at every iteration. If the final size of that array is likely to be thousands of elements, then the CPU cycles needed just to store the array will be O(N^2), where N is the final size of the array.
Since you do not know in advance how many iterations a call to an optimizer will require, if you simply store the data from those iterations in a naively grown array, you will often be unhappy in the end.
So all that growdata did was to store the data passed into it in a moderately efficient manner, then release it at the end. It did the internal storage using persistent variables, which allows the user to not worry about the implications of perhaps creating global arrays. (There are other versions of growdata I provided which can be more efficient yet. But they are not as clean to use for this application, so I did not recommend them.)
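John's actual growdata code is on the File Exchange; the idea behind it can be sketched as a persistent variable holding a buffer whose capacity is doubled whenever it fills, which turns the quadratic cost of naive growth into amortized linear time. The function below is an illustration of the technique only, not John's implementation:

```matlab
function out = logpoint(x)
% Append the row vector x to an internal buffer, doubling its capacity
% whenever it fills up; call with no arguments to retrieve and reset.
% (Illustration of the doubling technique only -- not the FEX growdata code.)
persistent buf n
if isempty(n), n = 0; end
if nargin == 0
    out = buf(1:n, :);          % return only the filled rows
    buf = []; n = 0;            % reset for the next run
    return
end
x = x(:).';                     % force a row vector
if isempty(buf)
    buf = zeros(16, numel(x));  % initial capacity of 16 rows
elseif n == size(buf, 1)
    buf(2*n, 1) = 0;            % indexed assignment doubles the row count
end
n = n + 1;
buf(n, :) = x;
end
```

Appending N points this way copies the buffer only about log2(N) times, so the total cost is O(N) amortized rather than O(N^2).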
You asked for a solution to a problem that you did not know how to solve. I provided a solution that required you to add all of 3 lines of code. My solution did not require global variables to be created, nor does it inefficiently grow arrays as you are doing. But hey, if you found a "simple" solution, then great. Just don't expect it to be very good.
Choosing not to use a better solution just because you don't understand it, nor even why it is better, is choosing to refuse to learn.


Daniel Frisch
Daniel Frisch on 7 Oct 2020
Use my function as 'OutputFcn' for this. It stores all intermediate results in a global variable, and you can work with all those data after the optimization is complete.
Walter Roberson
Walter Roberson on 23 Sep 2021
Daniel's function stores the data as a global variable, so it will still be available after the function terminates.
I recommend that you do one of three things (only one; you do not need more than one of them):
  1. Near the top of your objective function file, add dbstop if naninf, and at the bottom of the file add dbclear if naninf. This will trigger debugging mode as soon as the function creates a NaN or Inf. However, this can slow down your function.
  2. On the bottom line of your objective function (the end line), right-click on the line number and set a conditional breakpoint with the condition ~isfinite(NAMEOFVARIABLE), where NAMEOFVARIABLE is the variable you assigned the objective value to. This will trigger debugging mode at the end of an objective call if the result is Inf or NaN. This can slow your function a bit, but not as much as dbstop would. However, you will not get immediate feedback about which expression triggered the problem, so you then need to work backwards to figure out where the problem came from.
  3. Just before your objective function returns, add the line if ~isfinite(NAMEOFVARIABLE); keyboard; end. This will trigger command-line (debugging) mode if that happens, but should incur less overhead than the first two options I described. When you are finished, use dbcont to continue.
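Option 3 above amounts to a one-line guard at the end of the objective function. A minimal sketch, with a placeholder objective (substitute your own computation and variable name):

```matlab
function z = myObjective(xy)
% Hypothetical objective -- substitute your own computation.
z = (1 - xy(1)).^2 + 100*(xy(1) - xy(2).^2).^2;
% Drop into debug mode if the result is Inf or NaN (option 3 above).
% Type dbcont at the K>> prompt to resume the optimization.
if ~isfinite(z); keyboard; end
end
```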


Steven Lord
Steven Lord on 7 Oct 2020
Another potential approach is to use the memoize function to store the inputs that are passed into the objective function.
function storedValues = example510713
% Memoize the rosenbrock function
objfun = memoize(@rosenbrock);
objfun.CacheSize = 1e5;
% The memoize object does not have an feval method
% so we need to wrap it in an anonymous function so fmincon can call it
wrapper = @(x) objfun(x);
% Since I'm just showing the memoization I don't need the fmincon outputs
[~, ~] = fmincon(wrapper,[-1 2],[],[]);
% Extract data from the memoize object
S = objfun.stats;
in = S.Cache.Inputs;
unwrap1 = [in{:}];
storedValues = vertcat(unwrap1{:});
function z = rosenbrock(xy)
x = xy(1);
y = xy(2);
z = (1 - x).^2 + 100*(x - y.^2).^2;

Alexander Mair
Alexander Mair on 30 Aug 2021
Edited: Walter Roberson on 30 Aug 2021
For those reading this, here is something that is (in my opinion) not too "MATLAB-esque" (the less MATLAB-esque the better).
You can pass a handle object as an extra argument to store your data. I know of no way to pass by reference without a class that inherits from handle.
classdef StoreOptimizationValues < handle
    %Class to store the optimization process
    % Needs to be a handle object so it is passed by reference
    properties
        variableValues
        targetFunctionValues
    end
    methods
        function obj = StoreOptimizationValues(maxIterations,numberVariables)
            obj.variableValues = zeros(maxIterations,numberVariables);
            obj.targetFunctionValues = zeros(maxIterations,1);
        end
        function addValues(obj,index,fval,variables)
            obj.variableValues(index,:) = variables;
            obj.targetFunctionValues(index) = fval;
        end
    end
end
Now you can create your script like this:
maxIterations = 50;
s1 = StoreOptimizationValues(maxIterations+1,size(xstart,1));
options = optimoptions('fminunc', ...
    'OutputFcn', @(x,optimValues,state) outfun(x,optimValues,state,s1));
[x, fval, exitflag, output] = fminunc(@objectiveFunction,xstart,options);

function stop = outfun(x, optimValues, state, store)
stop = false;
switch state
    case 'iter'
        % Store the current values in the handle object
        store.addValues(optimValues.iteration+1, optimValues.fval, x);
end
end
s1 will be passed as a handle to the function on each iteration and will store the values you need.
