T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
1698.1 | | AUSSIE::GARSON | | Fri Dec 04 1992 07:08 | 24 |
| re .0
Assuming I've understood the problem, here is a counterexample.
Let S = I[1] R[2] (I'm using [] for the function subscripts)
It can be seen that S(x) = x == 0 ? 1 : 2 (to borrow some notation from C)
while SS(x) = 2 (for all x)
thus S != SS
However since S consists of only two functions there are only two
possibilities for T viz. T = I[1] or T = R[2]. Manifestly S does not
equal either of these functions.
More generally the only counterexamples are S = I[a] R[b] where neither
a nor b is zero.
For all other S, S == SS and hence the antecedent is false.
For all S except S = I[a] (a non-zero), S = R[b] (b non-zero), S equal to
the identity function (i.e. a sequence composed entirely of I[0]s), and the
general counterexample above, S is a constant function (the constant value
depends of course on the functions making up the sequence).
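For what it's worth, the counterexample is easy to run mechanically. Here is
a minimal C sketch, assuming (since the base note's definitions aren't
reproduced here) that I[a] writes a only when the cell holds 0 and R[a]
writes a only when it doesn't, which is the behaviour used above:

    #include <stdio.h>

    /* Assumed semantics, inferred from S(x) = x == 0 ? 1 : 2 above:
       I[a] writes a only when the cell is 0; R[a] only when it isn't. */
    static int I(int a, int x) { return x == 0 ? a : x; }
    static int R(int a, int x) { return x == 0 ? x : a; }

    static int S(int x)  { return I(1, R(2, x)); }  /* S  = I[1] R[2]       */
    static int SS(int x) { return S(S(x)); }        /* SS = S applied twice */

    int main(void)
    {
        for (int x = -3; x <= 3; x++)
            printf("x=%2d  S(x)=%d  SS(x)=%d%s\n",
                   x, S(x), SS(x), S(x) != SS(x) ? "   <- differ" : "");
        return 0;
    }

It reports a difference only at x = 0, in line with the argument above.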
|
1698.2 | re: .1, thanks, a revision | MOVIES::HANCOCK | Peter Hancock | Fri Dec 04 1992 12:22 | 26 |
| Thanks! I was thrown for a bit because I do composition the other
way round from you. Fortunately, the problem stays the same, and your
counterexample applies. (But I'd write it R[2];I[1].)
Actually, my conclusion is wrong: I'd meant to capture R[2];I[1],
which of course I haven't. (Yes, I'm that stupid.)
Can you characterise the S for which S;S = S ?
What if I changed the conjecture to:
REVISED CONJECTURE
IF S ; S != S (where S = S_1, .. , S_k)
THEN For any x,
let y[1] = S_1(x),.., y[k] = S_k(y[k-1])
then for some n in {1,..,k-1}, y[n] = y[n+1]
You might see how I messed up: I was floundering in the general
direction of the idea that there would have to be something
`redundant' about such an S. In your sequence, there is always
a step at which the integer isn't changed. (But the step can
be different for different starting integers.)
Thanks enormously, for the notation, counterexample, and making me switch
my brain on. Any idea about the revised version?
(If it is true, my application works.)
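Not an answer, but as a way of looking at the revised conjecture: here is a
tiny C sketch, with the same assumed I[a]/R[a] behaviour as the sketch in .1,
that just prints the trajectories under the counterexample sequence
R[2];I[1], so the steps at which the value isn't changed are visible.

    #include <stdio.h>

    /* Same assumed I/R semantics as in the sketch in .1. */
    static int I(int a, int x) { return x == 0 ? a : x; }
    static int R(int a, int x) { return x == 0 ? x : a; }

    int main(void)
    {
        /* Trajectories under R[2];I[1], i.e. R applied first. */
        for (int x = -2; x <= 2; x++) {
            int y1 = R(2, x);    /* y[1] = S_1(x)    */
            int y2 = I(1, y1);   /* y[2] = S_2(y[1]) */
            printf("x=%2d : %2d -> %2d -> %2d\n", x, x, y1, y2);
        }
        return 0;
    }

For x = 0 the value is unchanged on the first step, for non-zero x on the
second step; which step is the non-move depends on the starting value.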
|
1698.3 | | AUSSIE::GARSON | | Fri Dec 04 1992 20:40 | 61 |
| re .1
Just one minor revision to the general counterexample
> More generally the only counterexamples are S = I[a] R[b] where neither
> a nor b is zero.
should have the additional condition that a <> b. (If a = b then S would
be a constant function and would not be a counterexample.)
re .2
> Thanks! I was thrown for a bit because I do composition the other
> way round from you. Fortunately, the problem stays the same, and your
> counterexample applies. (But I'd write it R[2];I[1].)
I'm reasonably sure that, by mathematical convention, functions compose
in such a way that the rightmost function is applied first, just as if
I were writing in C or PASCAL f(g(x)). The composition of two functions
is normally written f <little ring> g, but the little ring (a small raised
circle) is not available on my terminal so I am just juxtaposing functions
to indicate composition.
> Can you characterise the S for which S;S = S ?
Yes. SS != S if and only if S satisfies the conditions to be a counter-
example to your original conjecture (further details below). Thus SS !=
S implies that S consists of just the two functions I[a] R[b] which
means you should be able to check your revised conjecture easily. (It
looks false to me.)
> You might see how I messed up: I was floundering in the general
> direction of the idea that there would have to be something
> `redundant' about such an S. In your sequence, there is always
> a step at which the integer isn't changed. (But the step can
> be different for different starting integers.)
Key observations are:
I[0] can be ignored, except when S is entirely composed of I[0]s, in which
case we should leave just one; I[0] by itself is an easy special case
R[0] is a constant function (same as E in the base note)
if S is constant function then SS = S and we are done
...and with a <> 0 and b <> 0 (since we have dealt with I[0] and R[0])
I[a] I[b] = I[b] (thus reducing two or more adjacent Is to one I)
R[a] R[b] = R[a] (ditto for Rs)
I[a] by itself can be dealt with as an easy special case
R[b] ditto
R[a] I[b] is a constant function
thus leaving only the case
I[a] R[b]
and as it happens we need a <> b to get SS != S.
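These reductions are easy to spot-check numerically. A small C sketch, with
the same assumed I[a]/R[a] behaviour as the sketch in .1, testing each
identity over a range of inputs for one choice of non-zero a and b:

    #include <stdio.h>

    /* Same assumed I/R semantics as in the sketch in .1. */
    static int I(int a, int x) { return x == 0 ? a : x; }
    static int R(int a, int x) { return x == 0 ? x : a; }

    int main(void)
    {
        int a = 3, b = 7;   /* any non-zero a, b will do for a spot check */
        for (int x = -5; x <= 5; x++) {
            if (I(a, I(b, x)) != I(b, x)) printf("I[a]I[b] != I[b] at %d\n", x);
            if (R(a, R(b, x)) != R(a, x)) printf("R[a]R[b] != R[a] at %d\n", x);
            if (R(a, I(b, x)) != a)       printf("R[a]I[b] not constant at %d\n", x);
            if (I(0, x) != x)             printf("I[0] not the identity at %d\n", x);
            if (R(0, x) != 0)             printf("R[0] not constant 0 at %d\n", x);
        }
        printf("spot check done\n");
        return 0;
    }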
|
1698.4 | aST = (aS)T, or fgx = f(g(x)) | CSC32::D_DERAMO | Dan D'Eramo, Customer Support Center | Fri Dec 04 1992 22:00 | 12 |
| For functions it is usually as in .-1, with the composition
f o g meaning f after g, i.e., (f o g)(x) = f(g(x)). But in
studying algebraic structures such as groups or rings you will
often see the image of element "a" under permutation or
homomorphism "S" (for the Greek letter sigma) written as
"aS". If then T is tau, you have the composition ST meaning
aST = (aS)T i.e. sigma applied first, then tau.
So like many notations the conventions sometimes depend on
context. :-)
Dan
|
1698.5 | My cup runneth over | MOVIES::HANCOCK | Peter Hancock | Sat Dec 05 1992 18:02 | 44 |
| The argument at the end of .3 establishes that any S is *equivalent*
to one of a few special cases (with length <= 2).
This characterises the S for which SS != S (as I asked).
It's not obvious to me, though perhaps it should be, that this
settles the revised conjecture.
It does seem easy to check it for these special cases.
[ It's true - I'm puzzled .3 says it looks false -
in the only case in which SS!=S, namely I[a]R[b] (in .3's notation)
with a and b non-zero and different,
0 isn't moved by the R, the first step.
non-zero integers may move on the R, but their images aren't moved
on the second (the I) ]
However, although the property S;S != S depends only on the equivalence
class of S (i.e. the composite function), it's not obvious to me that
the property in the conclusion isn't sensitive to the actual sequence
of component functions. i.e. I'm only about four-fifths convinced that
(using the base note's postfix `%')
IF S% = T% and P(S)
THEN P(T)
where P(X) means the `trajectory' of any integer under X includes a repetition (a non-move)
trajectory(x) means (I'll write the compositions .3's way round)
the sequence (y_1,..,y_k),
where k is the length of X,
y_1 = X_1(x), .. , y_k = X_k(y_(k-1))
and (to accord with .3's order of composition),
X=(X_k,..,X_1)
If it's obvious one way or the other, I'd be very grateful for the
insight. (I actually dimly see how it might be proved, and might still be
labouring through it when put out of my misery.)
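If it helps as raw material: here is a C sketch, with the same assumed
I[a]/R[a] behaviour as the sketch in .1, and under one particular reading of
P (namely that some step of the trajectory, counting the step from the
starting value itself, leaves the value unchanged), that compares one pair
of sequences with the same composite over a range of inputs. It is an
instance check only and settles nothing in general.

    #include <stdio.h>

    /* Same assumed I/R semantics; a command is an op letter plus its fixed
       argument, and a sequence is applied left to right (the ';' order). */
    static int I(int a, int x) { return x == 0 ? a : x; }
    static int R(int a, int x) { return x == 0 ? x : a; }

    struct cmd { char op; int arg; };

    static int apply1(struct cmd c, int x)
    {
        return c.op == 'I' ? I(c.arg, x) : R(c.arg, x);
    }

    static int run(const struct cmd *s, int k, int x)   /* composite */
    {
        for (int i = 0; i < k; i++) x = apply1(s[i], x);
        return x;
    }

    /* One reading of P: some step (counting the step from the starting
       value x itself) leaves the value unchanged. */
    static int hasNonMove(const struct cmd *s, int k, int x)
    {
        for (int i = 0; i < k; i++) {
            int y = apply1(s[i], x);
            if (y == x) return 1;
            x = y;
        }
        return 0;
    }

    int main(void)
    {
        struct cmd t1[] = { {'R',2}, {'I',1} };                   /* R[2];I[1]     */
        struct cmd t2[] = { {'R',5}, {'R',2}, {'I',1}, {'I',0} }; /* same composite */
        for (int x = -5; x <= 5; x++) {
            if (run(t1, 2, x) != run(t2, 4, x))
                printf("composites differ at %d\n", x);
            if (hasNonMove(t1, 2, x) != hasNonMove(t2, 4, x))
                printf("P differs between t1 and t2 at %d\n", x);
        }
        printf("instance check done\n");
        return 0;
    }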
--
re composition notation.
Sometimes I use `;' for little-ring-with-the-operands-reversed.
It's more program-like, which sometimes helps - but it's not
at all common. In private life I use juxtaposition for application,
which can save a lot of brackets.
(It takes all sorts to make a world, doesn't it!)
|
1698.6 | | AUSSIE::GARSON | | Sun Dec 06 1992 17:29 | 31 |
| re .5
>The argument at the end of .3 establishes that any S is *equivalent*
>to one of a few special cases (with length <= 2).
>This characterises the S for which SS != S (as I asked).
One point I glossed over (but you seem to have picked up) is that
I[a]R[b] is not the only case where SS != S but that all those S such
that SS != S are equal to a function of that form. Where a conjecture
depends on the actual individual functions making up S this may be
relevant.
Hence a more general form would include any number of I functions
inserted before I[a] and any number of R functions inserted after R[b]
and any number of I[0] functions interspersed arbitrarily.
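A quick numerical illustration with the same assumed I[a]/R[a] behaviour as
the sketch in .1: pad I[1]R[2] as described and check that the padded
sequence still computes the same composite, and still gives SS != S, over a
range of inputs. The only disagreement it should report is SS != S at x = 0.

    #include <stdio.h>

    /* Same assumed I/R semantics as in the sketch in .1. */
    static int I(int a, int x) { return x == 0 ? a : x; }
    static int R(int a, int x) { return x == 0 ? x : a; }

    struct cmd { char op; int arg; };

    static int apply1(struct cmd c, int x)
    {
        return c.op == 'I' ? I(c.arg, x) : R(c.arg, x);
    }

    static int run(const struct cmd *s, int k, int x)
    {
        for (int i = 0; i < k; i++) x = apply1(s[i], x);
        return x;
    }

    int main(void)
    {
        /* I[5] I[1] I[0] R[2] R[7] in .3's notation, written here in the
           order the commands are applied (rightmost function first). */
        struct cmd s[] = { {'R',7}, {'R',2}, {'I',0}, {'I',1}, {'I',5} };
        int k = 5;
        for (int x = -5; x <= 5; x++) {
            if (run(s, k, x) != I(1, R(2, x)))
                printf("padded sequence differs from I[1]R[2] at %d\n", x);
            if (run(s, k, run(s, k, x)) != run(s, k, x))
                printf("SS != S at x=%d\n", x);
        }
        return 0;
    }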
>[ It's true - I'm puzzled .3 says it looks false -
> in the only case in which SS!=S, namely I[a]R[b] (in .3's notation)
> with a and b non-zero and different,
> 0 isn't moved by the R, the first step.
> non-zero integers may move on the R, but their images aren't moved
> on the second (the I) ]
You are right. I mis-interpreted the "for all x, there is an n" to mean
that it had to be the same n that works for all x whereas you meant that
there is an n, dependent on x. Right?
Given that interpretation and the elaboration of my previous paragraph
I think your revised conjecture (in .2) is true.
Must go and do some work...
|
1698.7 | Satisfied customer | MOVIES::HANCOCK | Peter Hancock | Wed Dec 09 1992 17:32 | 26 |
| re .6
> You are right. I mis-interpreted the "for all x, there is an n" to mean
> that it had to be same n that works for all x whereas you meant that
> there is an n, dependent on x. Right?
Right (more like continuity than uniform continuity).
Thanks for your help.
The problem is actually about idempotence of sequences of commands
to a funny kind of storage cell. (Meaning S;S has the same net effect as S.)
I'm interested in things around:
* some data-types have idempotent command sequences "by their nature".
[ As opposed to artificially, an example of an artifice being adding
a sequence number component to a simpler data-type, and sequence
number parameters to its commands. ]
Are there simple ways to check you've got one?
I've a vague intuition that "naturally idempotent data-types" (pardon
slop) are in some sense "storage".
* sequence idempotence is related to "restartability", meaning that
the net effect of a command sequence isn't disturbed by a certain
kind of backward-jumping jitter. Exactly what the relationship is
I don't know.
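One possible reading of the jitter, just to make the notion concrete (it may
well be narrower than what is intended): run the sequence once, then jump
back to some position and re-execute from there to the end, and ask whether
the final value changes. A C sketch with the same assumed I[a]/R[a]
behaviour as the sketch in .1, trying this for one particular sequence:

    #include <stdio.h>

    /* Same assumed I/R cell-command semantics as in the earlier sketches. */
    static int I(int a, int x) { return x == 0 ? a : x; }
    static int R(int a, int x) { return x == 0 ? x : a; }

    struct cmd { char op; int arg; };

    static int apply1(struct cmd c, int x)
    {
        return c.op == 'I' ? I(c.arg, x) : R(c.arg, x);
    }

    static int run(const struct cmd *s, int k, int x)
    {
        for (int i = 0; i < k; i++) x = apply1(s[i], x);
        return x;
    }

    /* Run the sequence once, then "jitter": jump back to position j and
       re-execute from there to the end. */
    static int runWithRestart(const struct cmd *s, int k, int j, int x)
    {
        int y = run(s, k, x);
        for (int i = j; i < k; i++) y = apply1(s[i], y);
        return y;
    }

    int main(void)
    {
        struct cmd s[] = { {'I',1}, {'R',2} };   /* I[1] first, then R[2] */
        int k = 2;
        for (int x = -3; x <= 3; x++)
            for (int j = 0; j < k; j++)
                if (runWithRestart(s, k, j, x) != run(s, k, x))
                    printf("restart at %d disturbs the result for x=%d\n", j, x);
        printf("jitter check done\n");
        return 0;
    }

For this particular sequence nothing is reported, but that is just one
instance, not a statement about the relationship in general.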
If someone knows all about this kind of stuff, or recalls seeing it
discussed somewhere, I'd be grateful to learn.
|