T.R | Title | User | Personal Name | Date | Lines |
791.1 | linear recurrence relations | MATRIX::ROTH | May you live in interesting times | Mon Nov 23 1987 08:08 | 63 |
| What you've been playing with is called a linear recurrence relation.
This is a recurrence of the general form
z[n+1] = c[k]*z[n] + c[k-1]*z[n-1] + ... + c[0]*z[n-k]
where the c's are constants, and the z's are the sequence
you're generating. This is related to the field of digital
signal processing, where such a sequence is known as the impulse
response of a recursive (or infinite impulse response) filter.
You were lucky to have picked such a simple example, since recurrences
in general - nonlinear ones in particular - can be chaotic in behaviour.
It was also fortunate that your first example resulted in an undamped
sine wave - most such sequences will result in a combination of exponentially
decaying or increasing sequences, or exponentially decaying or increasing sinusoids.
Let's analyze your example. The linear recurrence relation is a linear
operator, so its eigenfunctions are exponentials.
You have
z[n+1] = c[1]*z[n] + c[0]*z[n-1]
with coefficients c[0] = -1.0 and c[1] = 1.8,
and initial conditions of z[0] = 5.0 and z[1] = 10.0.
We assume the sequence is of the form z[k] = r^k, where r is some
as yet undetermined number.
Thus
r^(k+1) = c1*r^k + c0*r^(k-1)
Divide thru by r^(k-1) and rewrite this as a quadratic equation
r^2 - c1*r - c0 = 0
Solving for r by the grade school quadratic formula:
r = c1/2 +/- sqrt((c1/2)^2 + c0)
Plugging in the c's,
r = 0.9 +/- sqrt(-0.19)
And we find that r is a complex number with a magnitude of one!
r = 0.9 +/- i*sqrt(0.19)
|r|^2 = 0.9*0.9 + 0.19 = 0.81 + 0.19 = 1.0, so |r| = 1.0
This is why you got an undamped sinusoid in your example. If you
take the real part of successive powers of r, you'll get your sequence
(within a scale factor and a phase shift.)
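Here is a quick numerical check of the analysis (a minimal Python sketch of
my own, not part of the original note; the closed-form fit is mine):

    import cmath

    c1, c0 = 1.8, -1.0
    z0, z1 = 5.0, 10.0

    # characteristic root r (its complex conjugate is the other root)
    r = c1 / 2 + cmath.sqrt((c1 / 2) ** 2 + c0)
    print("r =", r, "  |r| =", abs(r))        # |r| comes out as 1 (up to rounding)

    # fit z[n] = a*r^n + conj(a)*conj(r)^n to the two initial conditions
    a = (z1 - z0 * r.conjugate()) / (r - r.conjugate())

    z = [z0, z1]
    for n in range(1, 30):
        z.append(c1 * z[n] + c0 * z[n - 1])   # the recurrence itself

    for n in range(30):
        closed = 2.0 * (a * r ** n).real      # closed form from the roots
        print(n, round(z[n], 6), round(closed, 6))   # the columns agree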
To generate a sine wave of a given frequency set c1 to 2.0*cos(2*pi/T),
where T is the period of the sinusoid in sample points, and leave c0 = -1.0.
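For example (a sketch of my own; the period T = 12 and the starting values
are arbitrary choices):

    import math

    T  = 12                                   # period in samples
    c1 = 2.0 * math.cos(2.0 * math.pi / T)    # and c0 = -1.0

    # started from z[0] = 0 and z[1] = sin(2*pi/T), the recurrence
    # z[n+1] = c1*z[n] - z[n-1] reproduces sin(2*pi*n/T)
    z = [0.0, math.sin(2.0 * math.pi / T)]
    for n in range(1, 3 * T):
        z.append(c1 * z[n] - z[n - 1])

    for n, zn in enumerate(z):
        print(n, round(zn, 6), round(math.sin(2.0 * math.pi * n / T), 6))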
For more info, see any book on digital signal processing, such as
Oppenheim and Schafer, "Digital Signal Processing", or probably better
Hamming's little book "Introduction to Digital Filters".
- Jim
|
791.2 | use a pocket calculator | VIDEO::OSMAN | type video::user$7:[osman]eric.six | Tue Nov 24 1987 13:05 | 9 |
| Similar patterns can easily be shown on a pocket calculator.
After turning it on, enter 0.5 and then alternately hit SIN and COS
buttons. (I think those two buttons are right, but there are other
pairs that work too). You converge on a single number or a pair of numbers.
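A simulation of the button pushing (a small Python sketch of my own; it
assumes the calculator is set to radians):

    import math

    x = 0.5
    for k in range(30):
        # alternately press SIN and COS, starting with SIN
        x = math.sin(x) if k % 2 == 0 else math.cos(x)
        print(("SIN" if k % 2 == 0 else "COS"), round(x, 6))

    # The display settles into an alternating pair of values: the fixed
    # point of sin(cos(x)) after each SIN, and of cos(sin(x)) after each COS.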
What other pairs of buttons work ?
/Eric
|
791.3 | What other pairs of buttons work? | HERON::BUCHANAN | | Tue Nov 24 1987 14:45 | 3 |
| "On" and "Off"
(Re: .2)
|
791.4 | More than you ever wanted to know about it | SQM::HALLYB | Profitus Interruptus | Wed Nov 25 1987 11:06 | 24 |
| Just hitting COS will do fine, if your calculator is doing radians.
There's a theorem by Banach, quite simple to prove, that
"Every contraction of a complete metric space has a unique fixed point".
A contraction (in the mathematical sense, but then not many women
read this file) can be loosely defined as a function f that satisfies
| f(a) - f(b) | < | a - b | for every "a" and "b" in some domain D.
Even more loosely, but more clearly, think of the function as
taking an interval (a,b) and mapping it into a smaller interval
(f(a),f(b)) INSIDE the interval (a,b).
In the interval (0,1) there are lots of contractions, such as
f(x) = x/47 and (if you stay below 1/2) f(x) = x^2. EXP(1/x) is a 2-button function that
seems to contract for proper values of x (which ones?).
I think you can see that COS(X) is a contraction by looking at the
power series expansion for COS(X), though being rigorous might take
a bit of sweat.
Evidently Eric has discovered that SIN(COS(X)) is also a contraction.
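For what it's worth, a crude numerical check (a Python sketch of my own;
after one press of COS the display lies in [-1,1], so that is where the
pairs are sampled):

    import math, random

    random.seed(1)
    worst_cos, worst_sincos = 0.0, 0.0
    for _ in range(100000):
        a, b = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        if a == b:
            continue
        # ratio |f(a) - f(b)| / |a - b| for each candidate f
        worst_cos    = max(worst_cos, abs(math.cos(a) - math.cos(b)) / abs(a - b))
        worst_sincos = max(worst_sincos,
                           abs(math.sin(math.cos(a)) - math.sin(math.cos(b))) / abs(a - b))

    print("worst observed ratio for COS(X)     :", worst_cos)     # below sin(1) ~ 0.841
    print("worst observed ratio for SIN(COS(X)):", worst_sincos)  # smaller still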
John
|
791.5 | and automorphisms | SDOGUS::HOOKER | SEDin' in San Diego | Sat Nov 28 1987 20:24 | 9 |
| In an even more abstract vein than .-1: in topology there are
functions called automorphisms, that is, maps that go from a set
to itself and are bijective (one-to-one and onto, or equivalently
have an inverse). Topology (or more precisely algebraic topology)
proves that any such function has a fixed point. The consequences
are numerous: the cowlick in one's hair, the area of calm in the eye
of a hurricane, why a polynomial always has at least one root (the
fundamental theorem of algebra), etc.
|
791.6 | | TLE::BRETT | | Sun Nov 29 1987 12:40 | 5 |
| I think you must have missed at least one condition - for example
continuity, since obviously f(x) = (x - 0.5) {+1 if x < 0.5} doesn't
have a fixed point yet is bijective on [0,1].
/Bevin
|
791.7 | Counter to the counterexample | ZFC::DERAMO | Daniel V. D'Eramo | Mon Nov 30 1987 12:29 | 5 |
| Your counterexample does not satisfy |f(a) - f(b)| < |a - b|.
Prove (or disprove!) that that condition is strong enough to
imply continuity for functions from one metric space to another.
Dan
|
791.8 | Not every automorphism has a fixed point. | ZFC::DERAMO | Daniel V. D'Eramo | Mon Nov 30 1987 12:33 | 9 |
| Re .5:
It is not true that "any such" functions have a fixed point, although
it is true for a large class of functions in a large class of spaces.
For example, the set {x | 0 <= x <= 1 or 2 <= x <= 3} can be mapped
[even contracted] into itself bijectively without a fixed point;
just switch halves.
Dan
|
791.9 | | TLE::BRETT | | Mon Nov 30 1987 16:20 | 6 |
| I hadn't realised you intended to inherit the contraction requirement
from earlier replies as one of the conditions.
Sorry.
/Bevin
|
791.10 | Mass confusion reigns! | ZFC::DERAMO | Daniel V. D'Eramo | Mon Nov 30 1987 18:57 | 9 |
| Re .-1
Oops. I responded as I did to .6 because I thought that you were
replying to .4's claim about contractions. If you were replying
to .5, then yes, there are assumptions there that are either
not stated or are assumed as included in his use of certain terms.
I'm not sure which.
Dan
|
791.11 | Fix a couple of oops's | ZFC::DERAMO | Daniel V. D'Eramo | Wed Dec 02 1987 12:33 | 80 |
| First, a retraction of a claim that I made in .8:
>> For example, the set {x | 0 <= x <= 1 or 2 <= x <= 3} can be mapped
>> [even contracted] into itself bijectively without a fixed point;
>> just switch halves.
The function that switches halves that I had in mind is not
a contraction as defined in .4. If f(1) and f(2) are in
different halves then |f(1) - f(2)| >= 1 = |1 - 2|.
It is a continuous bijective function with a continuous
inverse in a complete metric space; but it has no fixed
point. A more obvious example of that is f(x) = x + 1 on
the real line.
But the definition of a contraction given in .4 is not
correct:
>> There's a theorem by Banach, quite simple to prove, that
>>
>> "Every contraction of a complete metric space has a unique fixed point".
>>
>> A contraction (in the mathematical sense, but then not many women
>> read this file) can be loosely defined as a function f that satisfies
>> | f(a) - f(b) | < | a - b | for every "a" and "b" in some domain D.
The simple proof mentioned is this: Let x be any point of
the complete metric space. Define a sequence x0, x1, ... by
taking x0 = x and xn+1 = f(xn). Use the fact that f is a
contraction to show that this is a Cauchy sequence. Since
the space is complete, the sequence converges to some value,
call it y. Then, because f is a contraction, the only
possible value that f(y) can have is y. So f(y) = y and y
is a fixed point of f. The fixed point must be unique,
otherwise if both a and b are fixed points then
| f(a) - f(b) | = | a - b | which can't happen for a
contraction.
This proof does not work for the definition of contraction
given. The ratio | f(a) - f(b) | / | a - b | is less than
one, but can get closer and closer to one as a and b move
down the sequence. Consider the following metric space:
The points are the positive integers, and the metric is
given by d(n,m) = 1 + | 1/n - 1/m |. The only Cauchy
sequences are eventually constant, so the space is complete.
The function f(n) = n + 1 obviously has no fixed point, but
it satisfies the definition d( f(a) , f(b) ) < d(a , b) in .4.
[Actually, after I had worked all of this out I realized
that no function satisfies this definition: if a=b it
reduces to 0 < 0 which is false. I mean here that my f
satisfies it for "a" not equal to "b"]
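A quick check of this counterexample (a Python sketch of my own, with the
definition exactly as above):

    def d(n, m):                       # the metric from the text
        return 1.0 + abs(1.0 / n - 1.0 / m)

    def f(n):
        return n + 1

    # d(f(a), f(b)) < d(a, b) holds for every pair with a != b ...
    for a in range(1, 20):
        for b in range(a + 1, 21):
            assert d(f(a), f(b)) < d(a, b)

    # ... yet the iterates 1, 2, 3, ... never get close to one another,
    # so they are not a Cauchy sequence, and f has no fixed point.
    x = 1
    for k in range(8):
        print(k, x, "d(x, f(x)) =", round(d(x, f(x)), 6))   # always bigger than 1
        x = f(x)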
The correct definition of a contraction [for an arbitrary
distance function d] is:
There is a real number r such that 0 < r < 1 and that
for every a and b in the metric space,
d(f(a),f(b)) <= r * d(a,b)
Now the ratio of the distance between successive terms is
bounded away from one.
With this definition, if x0 is arbitrary and xn+1 = f(xn)
for some contraction f, and c = d(x0,x1), it is easy to
prove by induction that d(xn,xn+1) <= c * r^n and that
d(xn,xm) <= (c * r^min(n,m)) / (1-r). Thus x0, x1, ... is a
Cauchy sequence which by definition [of complete] must
converge in a complete metric space. The limit point will
then be a fixed point of f. Uniqueness follows as before.
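A small numerical illustration of the two bounds (my own sketch; the map
f(x) = (x + 2/x)/2 on [1,2] has |f'(x)| = |1/2 - 1/x^2| <= 1/2 there, so
r = 1/2 works, and the limit is sqrt(2)):

    import math

    r = 0.5
    f = lambda x: (x + 2.0 / x) / 2.0

    x = [2.0]                                # x0 = 2
    for n in range(12):
        x.append(f(x[n]))
    c = abs(x[1] - x[0])                     # c = d(x0, x1)

    for n in range(12):
        step = abs(x[n + 1] - x[n])          # d(xn, xn+1)
        tail = abs(x[n] - math.sqrt(2.0))    # distance to the limit point
        print(n, round(step, 9), "<=", round(c * r ** n, 9),
              "|", round(tail, 9), "<=", round(c * r ** n / (1 - r), 9))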
Dan
P.S.
For some particular spaces, such as the closed unit interval
of the real numbers, *every* continuous function from the
space into itself has a fixed point. The same is true for a
closed disk as a subset of the Euclidean plane. The
fundamental theorem of algebra may be proven from that, as
stated in .5. [In these examples, there may be more than
one fixed point.]
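For the closed unit interval the reason is just the intermediate value
theorem applied to f(x) - x, and a bisection search will actually find a
fixed point. A Python sketch of my own (the sample f is arbitrary and is
not a contraction):

    # f maps [0,1] into itself, so g(x) = f(x) - x is >= 0 at 0 and <= 0
    # at 1; bisection hunts down a point where g changes sign.
    def fixed_point_on_unit_interval(f, tol=1e-12):
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if f(mid) - mid >= 0.0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    f = lambda x: 1.0 - x ** 3        # continuous on [0,1], not a contraction
    x = fixed_point_on_unit_interval(f)
    print(x, f(x))                    # both ~ 0.682328, the root of x^3 + x = 1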
|
791.12 | About that oops in .11 | SQM::HALLYB | Khan or bust! | Wed Dec 02 1987 13:05 | 21 |
| > This proof does not work for the definition of contraction
> given. The ratio | f(a) - f(b) | / | a - b | is less than
> one, but can get closer and closer to one as a and b move
> down the sequence. Consider the following metric space:
> The points are the positive integers, and the metric is
> given by d(n,m) = 1 + | 1/n - 1/m |...
-Noting- that I commented the definition of contraction was
"loosely defined", it's nice to see someone nail it down more
rigorously. Unfortunately, the nail is a bit rusty. My memory
is also a bit rusty, but I recall the definition of a METRIC
on a set S is a function "d" from S cross S into R (the reals)
satisfying:
(1) d(s,s) = 0 for all s in S
(2) d(s,t) = d(t,s) for all s,t in S
(3) d(s,u) <= d(s,t) + d(t,u) for all s,t,u in S. <"Triangle inequality">
The non-counterexample supplied violates (1), since d(m,m) = 1.
John
|
791.13 | Getting forgetful in my old age. | ZFC::DERAMO | Daniel V. D'Eramo | Wed Dec 02 1987 15:57 | 9 |
| Re .-1
>> (1) d(s,s) = 0 for all s in S
>> The non-counterexample supplied violates (1), since d(m,m) = 1.
I forgot about that case. Let d(n,m) = 0 if n=m, otherwise
d(n,m) = 1 + | 1/n - 1/m |.
Dan
|
791.14 | Where O where has Eric gone? | SQM::HALLYB | Khan or bust! | Thu Dec 03 1987 13:53 | 20 |
| > I forgot about that case. Let d(n,m) = 0 if n=m, otherwise
> d(n,m) = 1 + | 1/n - 1/m |.
That seems to work, and shows why the "loose" definition of contraction
is too loose. BTW, metric spaces also have to satisfy two other
properties not mentioned in .12:
d(m,n) = 0 ==> m=n (not just d(m,m) = 0).
d(m,n) >=0 always.
The above metric satisfies these criteria as well. It's a useful
counterexample to have in your bag of goodies.
I wonder, he thought idly, if the loose definition of contraction
would work in a normed space? Norms yield metrics that are more
nicely-behaved than D'Eramo's "d" above. This would mean relaxing
one set of criteria for Banach's theorem, while tightening another.
As such, is it legitimate fodder for status as a separate theorem?
John
|
791.15 | My favorite fixed point | ZFC::DERAMO | Daniel V. D'Eramo | Thu Dec 03 1987 18:54 | 41 |
| My favorite fixed point theorem is the recursion theorem.
Let M0, M1, M2, ... be an "effective enumeration" of all
Turing machines, say with two symbols -- "1" and blank --
and with a single two-way infinite tape. Let f0, f1, f2, ...
be the functions from the nonnegative integers to the
nonnegative integers computed by the Turing machines.
[When I say "effective enumeration" I mean that it is
possible to determine the state transition table for Mn
given n, and vice versa. There must also be a convention
for when a machine has been given input k and computed
output j. The functions f0, f1, f2, ... corresponding to
these conventions are not necessarily defined for all
nonnegative integers. They are called "partial recursive
functions" and if by chance one is defined for all
nonnegative integers then that is also a "total recursive
function."]
Now let g be a total recursive function from the nonnegative
integers to the nonnegative integers. Think of g as changing
the Turing machine Mn into machine Mk where k = g(n). Now
this is quite a transformation. If you encoded your last
lisp program into an integer, cubed the integer, and then
decoded back to a lisp program, you would not expect the
result to run as well as the original.
However, the recursion theorem states that there will be an
integer j such that the function fj corresponding to Mj will
be the same function -- defined and undefined for the same
sets of numbers -- as the function fk corresponding to Mk
where k = g(j). Note that it is not machines j and k = g(j)
that are identical, it is the functions that are identical.
[You can effectively compute j given an index i for a Turing
machine that computes g.]
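Not the theorem itself, but a taste of the self-reference trick its proof
rests on (an aside of my own): a short program whose output is exactly its
own source.

    # The two lines below, run on their own as a Python program, print
    # exactly their own source text.
    s = 's = %r\nprint(s %% s)'
    print(s % s)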
Does this mean that no matter how great a debugger you are,
that there will be at least one program whose behavior you
cannot change?
Dan
|