
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

153.0. "least squares problem " by FUTBAL::BENNISON () Wed Sep 19 1984 16:29

Here's a problem that I know someone has solved, but I haven't been able
to find out how they did it.  I have raw observed data points (x, y) and
I want to do a least squares fit of the data to an equation of the form:

    		y = ax^b + c

Equations y = ax + c and y = ax^b are no sweat, but the c seems to throw
a monkey wrench into the exponential, because you can't then linearize
by taking the log of both sides.  I have some kludgy workarounds that I
can use for the time being, but if anybody has any ideas I'd appreciate
hearing them.  Solving this problem is central to the sub-project on which
I'm currently working.  
    					- Vick Bennison
    					  FUTBAL::BENNISON
    					  DTN 381-2156
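
For concreteness, the quantity to be minimized is the sum of squared
residuals of the model y = ax^b + c.  A minimal Python/numpy sketch of that
objective (the arrays x and y stand in for the observed data):

    import numpy as np

    def sum_sq_error(params, x, y):
        """Sum of squared residuals for the model y = a*x**b + c."""
        a, b, c = params
        residuals = a * x**b + c - y
        return np.sum(residuals**2)

    # Taking logs linearizes y = a*x**b, but the additive c keeps this
    # objective nonlinear in (a, b, c), so it has to be minimized directly.
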
T.R      User               Date                     Lines
153.1    ADVAX::J_ROTH      Thu Sep 20 1984 23:53    21
You don't mention whether you want to minimize the squared error
with respect to y (assuming x is exact) or some combination of the errors in
x and y...  That would have some effect on the solution.

One general approach to such a problem would be to treat it as an optimization
problem.  There are well-known techniques (and FORTRAN routines in the ACM
algorithms collection) available which can seek a solution, some of which are
quadratically convergent once they get near a minimum, in the manner of the
multidimensional Newton's method for finding a root.  I've used some of these
techniques on circuit analysis problems and they do work.

I've found Rosenbrock's method of rotating coordinates to be pretty robust.
Fletcher-Powell is another excellent method (but I find it can get stuck in
a local minimum if you're not careful).  There are methods oriented
specifically toward least squares and least p'th approximation, but I haven't
had extensive experience with them.  Stochastic gradient methods are another
interesting approach.

Hope this helps...  Your equation is simple enough that analytic processing
would probably yield a better way, but the above will work.

- Jim
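
As an illustration of the "treat it as an optimization problem" approach, a
minimal Python sketch that minimizes the sum-of-squares objective with a
derivative-free simplex method (standing in here for Rosenbrock's routine,
which isn't shown); the data below is made up, and any observed (x, y)
arrays would do:

    import numpy as np
    from scipy.optimize import minimize

    def sum_sq_error(params, x, y):
        a, b, c = params
        return np.sum((a * x**b + c - y)**2)

    # Made-up observations from a = 2, b = 1.5, c = 3, plus a little noise.
    x = np.linspace(1.0, 10.0, 50)
    y = 2.0 * x**1.5 + 3.0 + np.random.normal(scale=0.1, size=x.size)

    # Derivative-free minimization from a rough starting guess (a, b, c).
    result = minimize(sum_sq_error, x0=[1.0, 1.0, 0.0], args=(x, y),
                      method='Nelder-Mead')
    a_fit, b_fit, c_fit = result.x
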
153.2    ADVAX::J_ROTH      Fri Sep 21 1984 01:00    12
Upon checking my files, I remembered that the real reason I disliked
Fletcher-Powell minimization was that it required derivatives, and I
had to estimate them numerically - time-consuming and inaccurate.

In your case the quasi-Newton method may be the best.  I have copies
of ACM algorithm 450 (Rosenbrock minimization) and 500 (quasi-Newton)
in a directory somewhere and can supply them if you need them.

Expanding the partial derivatives of the squared error for your equation
looks pretty messy.

- Jim
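
For reference, the kind of numerical derivative estimate being described is
roughly the following central-difference sketch (not the ACM code); it costs
two extra evaluations of the error function per parameter, which is where
the time goes:

    import numpy as np

    def numerical_gradient(f, params, h=1e-6):
        """Central-difference estimate of the gradient of f at params."""
        params = np.asarray(params, dtype=float)
        grad = np.empty_like(params)
        for i in range(params.size):
            step = np.zeros_like(params)
            step[i] = h
            grad[i] = (f(params + step) - f(params - step)) / (2.0 * h)
        return grad
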
153.3    TURTLE::GILBERT    Fri Sep 21 1984 02:59    4
Expanding the partial derivatives of the squared error is pretty easy.
The hard part is making use of them, since some of the partial derivatives
include sums of x^b, or sums of log(x)*x^b, where b is one of the unknown
constants.
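
Written out, the gradient of E = sum((a*x^b + c - y)^2) contains exactly
those sums.  A sketch, assuming all the x values are positive so that
log(x) is defined:

    import numpy as np

    def grad_sum_sq_error(params, x, y):
        """Analytic gradient of E = sum((a*x**b + c - y)**2)."""
        a, b, c = params
        xb = x**b
        r = a * xb + c - y                             # residuals
        dE_da = 2.0 * np.sum(r * xb)                   # sums of x**b
        dE_db = 2.0 * np.sum(r * a * np.log(x) * xb)   # sums of log(x)*x**b
        dE_dc = 2.0 * np.sum(r)
        return np.array([dE_da, dE_db, dE_dc])
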
153.4    ADVAX::J_ROTH      Fri Sep 21 1984 12:47    14
re .3

That's what I really meant - 3 nonlinear equations in 3 unknowns to solve.

Least squares fitting of linear equations is mathematically tractable
because you only have to solve a linear system of equations and can
be assured that the solution represents a minimum.

Variable metric minimization routines can solve the more general class
of problems very efficiently provided you have analytic expressions for
the partial derivatives of the error function - as you do in the problem
stated.

- Jim
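
Putting the pieces together, a sketch of a variable-metric quasi-Newton fit
(BFGS, a descendant of Fletcher-Powell) that supplies the analytic gradient;
the data and the starting guess are made up for illustration:

    import numpy as np
    from scipy.optimize import minimize

    def sum_sq_error(params, x, y):
        a, b, c = params
        return np.sum((a * x**b + c - y)**2)

    def grad_sum_sq_error(params, x, y):
        a, b, c = params
        xb = x**b
        r = a * xb + c - y
        return np.array([2.0 * np.sum(r * xb),
                         2.0 * np.sum(r * a * np.log(x) * xb),
                         2.0 * np.sum(r)])

    # Made-up observations; x must be positive for log(x) to exist.
    x = np.linspace(1.0, 10.0, 50)
    y = 2.0 * x**1.5 + 3.0
    result = minimize(sum_sq_error, x0=[1.0, 1.0, 0.0], args=(x, y),
                      jac=grad_sum_sq_error, method='BFGS')
    a_fit, b_fit, c_fit = result.x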