# How to implement optimization on data array (for maximum entropy deconvolution)

Mike on 11 Jul 2019
I'm an undergrad working on deconvolving a noisy signal using a known instrument response function. It's a time-of-flight experiment with a chopped beam. We have the response function measured in counts/s over time; it resembles a Gaussian with some Poisson noise. The measured signals are also roughly Gaussian-shaped with Poisson noise. The measured signal is analogous to an "image", though 1-D, with counts/s playing the role of intensity. The measured chopper function plays the role of a point-spread function: instead of "blurring", it adds width/spread to our measured signal.
A naive deconvolution yields a mess in the presence of noise. We've successfully tried a Wiener filter and get a pretty good recovered signal, narrower than the measured one. I also tried MATLAB's built-in Richardson-Lucy deconvolution tool (deconvlucy). We'd now like to try maximum entropy deconvolution, which is a kind of constrained optimization problem. Maxent methods produce a good representation of the data by asserting that the best feasible representation is the one with the largest entropy, subject to some measure of fidelity to the measured data (such as chi-squared).
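For context, the Wiener step we did was roughly like the sketch below (variable names are mine: y is the measured signal, h is the measured response function, and K is a hand-tuned noise-to-signal parameter):

```matlab
% 1-D Wiener deconvolution in the frequency domain (sketch).
% y : measured signal (column vector, counts/s over time)
% h : measured instrument response, same sampling as y
% K : regularization constant ~ noise-to-signal power ratio (tuned by eye)
N = numel(y);
H = fft(h, N);
Y = fft(y, N);
G = conj(H) ./ (abs(H).^2 + K);   % Wiener filter transfer function
f_wiener = real(ifft(G .* Y));    % recovered (deconvolved) signal
```

This works well enough, but we want to compare it against a maxent reconstruction.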
While I understand the principles from the many papers out there, I'm having trouble coding something from scratch or using MATLAB tools as I did with the Wiener filter. I see that MATLAB has an Optimization Toolbox, but I can't tell whether it's appropriate for our needs.
I think our objective function to maximize would be the entropy S = -sum(f*ln(f)), where f is the array of intensities over time representing the deconvolved signal. The constraint would involve the chi-squared between the measured data and a re-convolution of the sought deconvolved signal with the known instrument response function; this chi2 would be constrained to be close to some target value. Other constraints might include the total intensity being the same as measured. The literature describes this as an optimization by Lagrange multipliers, i.e. maximize C = S(f) - lambda*chi2(f, data) by finding the optimal reconstructed values of f.
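In code, I imagine the combined objective looking something like this (a sketch, not working code; y, h, and sigma2 are my placeholder names for the measured data, the response function, and the per-point variances, which for Poisson counts I'd estimate as roughly max(y,1)):

```matlab
% Maxent objective to MINIMIZE (solvers minimize, so negate C):
%   -C(f) = -S(f) + lambda*chi2(f)
%         = sum(f.*log(f)) + lambda * sum((conv(f,h,'same') - y).^2 ./ sigma2)
% f      : trial deconvolved signal (nonnegative column vector)
% lambda : Lagrange-multiplier-like weight, tuned by hand
sigma2 = max(y, 1);   % rough Poisson variance estimate (assumption)
objfun = @(f) sum(f .* log(max(f, realmin))) ...            % negative entropy
            + lambda * sum((conv(f, h, 'same') - y).^2 ./ sigma2);  % misfit
```

The max(f, realmin) guard is just to keep log(f) finite if an element of f hits zero.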
From the documentation and examples for MATLAB's optimization tools, I can only see how to perform optimizations like the ones we did in undergrad calculus: given some analytic function and constraints, find the point(s) where its value is a min/max. However, the result we're looking for is the best representation f, which is an array with many data points, like a distribution. Is the MATLAB Optimization Toolbox appropriate for my needs?
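My (possibly wrong) impression is that fmincon can treat the whole array f as the decision variable, with bounds and linear constraints expressing nonnegativity and total-intensity conservation. A hypothetical sketch of what I mean, with y, h, sigma2, and lambda as placeholder names for the measured signal, response function, variances, and fidelity weight:

```matlab
% Sketch: maxent deconvolution as a single fmincon call over the array f.
% All names are placeholders; nothing here is tested.
sigma2 = max(y, 1);                   % rough Poisson variance estimate
lambda = 1;                           % fidelity weight, to be tuned
obj = @(f) sum(f .* log(max(f, realmin))) ...                % negative entropy
         + lambda * sum((conv(f, h, 'same') - y).^2 ./ sigma2);

f0  = max(y, eps);                    % start from the measured signal, kept positive
lb  = zeros(size(y));                 % f >= 0 so that log(f) is defined
Aeq = ones(1, numel(y));              % linear equality: total intensity conserved
beq = sum(y);

opts  = optimoptions('fmincon', 'Algorithm', 'interior-point', 'Display', 'iter');
f_hat = fmincon(obj, f0, [], [], Aeq, beq, lb, [], [], opts);
```

If that's a reasonable way to pose it, my remaining question is how to choose lambda (or, equivalently, how to enforce chi2 ~ N more directly).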