T.R | Title | User | Personal Name | Date | Lines |
153.1 | | ADVAX::J_ROTH | | Thu Sep 20 1984 23:53 | 21 |
| You don't mention whether you want to minimize the squared error with
respect to y (assuming x is exact), the error in x, or some combination
of the two... That choice would have some effect on the solution.
One general approach to such a problem would be to treat it as an optimization
problem. There are well-known techniques (and FORTRAN routines in the ACM
algorithms collection) available which can seek a solution, some of which are
quadratically convergent once they get near a minimum, in the manner of the
multidimensional Newton's method for root finding. I've used some of these
techniques on circuit analysis problems and they do work.
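To make that concrete, here is a minimal sketch of the optimization framing
in Python, with scipy's general-purpose minimizer standing in for the ACM
FORTRAN routines. The model y = a*x**b + c and the data points are
illustrative assumptions, not the equation from the base note:

    # Nonlinear least-squares fit posed as a general minimization problem.
    import numpy as np
    from scipy.optimize import minimize

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # made-up sample data
    y = np.array([3.1, 4.9, 6.8, 8.5, 10.3])

    def sse(p):
        """Sum of squared errors in y, treating x as exact."""
        a, b, c = p
        return np.sum((a * x**b + c - y) ** 2)

    # Nelder-Mead needs no derivatives; the quadratically convergent
    # quasi-Newton methods need a gradient (see the later replies).
    result = minimize(sse, x0=[1.0, 1.0, 0.0], method='Nelder-Mead')
    print(result.x)   # the fitted (a, b, c)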
I've found Rosenbrock's method of rotating coordinates to be pretty robust.
Fletcher-Powell is another excellent method (but I find it can get stuck in
a local minimum if you're not careful). There are methods oriented
specifically toward least squares and least p'th fitting, but I haven't had
extensive experience with them. Stochastic gradient methods are another
interesting approach.
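As a flavor of the stochastic-gradient idea, here is a toy sketch on the
same assumed model y = a*x**b + c, updating from one randomly chosen data
point at a time; the step size and iteration count are arbitrary:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # made-up data, as above
    y = np.array([3.1, 4.9, 6.8, 8.5, 10.3])

    rng = np.random.default_rng(0)
    a, b, c = 1.0, 1.0, 0.0                    # starting guess
    step = 1e-3                                # ad hoc step size
    for _ in range(20000):
        i = rng.integers(len(x))
        r = a * x[i]**b + c - y[i]             # residual at one sample
        ga = 2 * r * x[i]**b                   # dE/da
        gb = 2 * r * a * np.log(x[i]) * x[i]**b   # dE/db
        gc = 2 * r                             # dE/dc
        a, b, c = a - step * ga, b - step * gb, c - step * gc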
Hope this helps... Your equation is simple enough that an analytic attack
would probably yield a better way, but the above will work.
- Jim
|
153.2 | | ADVAX::J_ROTH | | Fri Sep 21 1984 01:00 | 12 |
| Upon checking my files, I remembered that the real reason I disliked
Fletcher-Powell minimization was that it required derivatives and I
had to estimate them numerically - time-consuming and inaccurate.
In your case the quasi-Newton method may be the best. I have copies
of ACM algorithm 450 (Rosenbrock minimization) and 500 (quasi-Newton)
in a directory somewhere and can supply them if you need them.
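For reference, the numerical derivative estimation that makes
derivative-based methods painful without analytic expressions looks roughly
like this generic central-difference sketch (not the ACM code): every
gradient costs two function evaluations per parameter, and the accuracy
hinges on the step size h.

    import numpy as np

    def num_grad(f, p, h=1e-6):
        """Central-difference estimate of the gradient of f at p."""
        p = np.asarray(p, dtype=float)
        g = np.empty_like(p)
        for i in range(len(p)):
            e = np.zeros_like(p)
            e[i] = h
            g[i] = (f(p + e) - f(p - e)) / (2 * h)
        return g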
Expanding the partial derivatives of the error squared of your equation
looks pretty messy.
- Jim
|
153.3 | | TURTLE::GILBERT | | Fri Sep 21 1984 02:59 | 4 |
| Expanding the partial derivatives of the error squared is pretty easy.
The hard part is making use of them, since some of the partial derivatives
involve sums of x^b, or sums of log(x)*x^b, where b is one of the unknown
constants.
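Spelled out in LaTeX for an assumed model y = a x^b + c (the base note's
exact equation isn't quoted here), with E the sum of squared errors:

    E(a,b,c) = \sum_i \left( a x_i^b + c - y_i \right)^2

    \frac{\partial E}{\partial a} = 2 \sum_i \left( a x_i^b + c - y_i \right) x_i^b
    \frac{\partial E}{\partial b} = 2 \sum_i \left( a x_i^b + c - y_i \right) a x_i^b \ln x_i
    \frac{\partial E}{\partial c} = 2 \sum_i \left( a x_i^b + c - y_i \right)

Setting all three to zero gives three nonlinear equations in a, b, and c;
the sums of x_i^b and \ln(x_i) x_i^b are what keep the system from having
a closed-form solution.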
|
153.4 | | ADVAX::J_ROTH | | Fri Sep 21 1984 12:47 | 14 |
| re .3
That's what I really meant - 3 nonlinear equations in 3 unknowns to solve.
Least-squares fitting of equations that are linear in the unknowns is
mathematically tractable because you only have to solve a linear system of
equations, and you can be assured that the solution represents a minimum.
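For contrast, a sketch of that linear case in Python (fitting y = m*x + k
to made-up data - a single linear solve via np.linalg.lstsq):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # made-up data
    y = np.array([3.1, 4.9, 6.8, 8.5, 10.3])

    # Design matrix for a model linear in the unknowns m and k.
    A = np.column_stack([x, np.ones_like(x)])
    m, k = np.linalg.lstsq(A, y, rcond=None)[0]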
Variable-metric minimization routines can solve the more general class
of problems very efficiently, provided you have analytic expressions for
the partial derivatives of the error function - as you do in the problem
as stated.
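A sketch of what supplying the analytic derivatives buys: BFGS (scipy's
variable-metric workhorse) fed the gradient worked out in .3, again on the
assumed model and made-up data from the earlier sketches:

    import numpy as np
    from scipy.optimize import minimize

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # made-up data, as before
    y = np.array([3.1, 4.9, 6.8, 8.5, 10.3])

    def sse(p):
        a, b, c = p
        return np.sum((a * x**b + c - y) ** 2)

    def grad(p):
        """Analytic gradient of the squared error (the derivatives from .3)."""
        a, b, c = p
        r = a * x**b + c - y
        return np.array([2 * np.sum(r * x**b),
                         2 * np.sum(r * a * np.log(x) * x**b),
                         2 * np.sum(r)])

    result = minimize(sse, x0=[1.0, 1.0, 0.0], method='BFGS', jac=grad)
    print(result.x)   # the fitted (a, b, c)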
- Jim
|