|  |     For three piles, I've found that the following {p,q,r} triples
    are wins for the second player.
	{1,2*q,r},  where 2 <= 2*q <= r < 4*q
	{3,2*r-1,2*r},  where 8 <= 2*r
	{7,15,20}
    NB: I searched all {p,q,r} <= {10,15,20} to find these.
    Then there's also:
	{5,19,27}
 | 
|  | >Dick Carlin
>                                   -< {p,q} >-
>     i) q <= 2p
>	a win for player 2
>
>     ii) q > 2p
>	a win for player 1
    I agree.
				   -< {p,q,r} >-
Peter Gilbert: These are all wins for player 2
>	{1,2*q,r},  where 2 <= 2*q <= r < 4*q
>	{3,2*r-1,2*r},  where 8 <= 2*r
>	{7,15,20}
>    NB: I searched all {p,q,r} <= {10,15,20} to find these.
>	{5,19,27}
	Useful body of results.   The following analysis partially confirms 
them.   I think there's one slight error:
>	{1,2*q,r},  where 2 <= 2*q <= r < 4*q
                                        < 4*q-1, rather.
e.g.: {1,2,3} is a win for player 1 after he makes a pile of 3.
My contribution:
	In order to handle the 3-pile case exhaustively, it seems to be a
decent idea to pursue the 2-pile case a little further.
	Theory of nim-type games:
	(two players, finite, deterministic, last player to move wins.) 
	With each position P can be associated a natural number, v(P), and
v(P) = 0 iff P is a forced loss for the player with the move.
	If Q1,Q2,...,Qk are the legal positions which the player with the
move can transform the game to, v(P) can be defined recursively as the 
smallest natural which doesn't appear in S = {v(Qi), i=1..k}.   i.e., it is
min ( Nat \ S ).
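	(A small Python sketch of this definition, in case it helps; the names
mex, grundy and the toy one-pile game are mine, purely for illustration.)

from functools import cache

def mex(s):
    # min(Nat \ S): the smallest natural number not in the set s.
    v = 0
    while v in s:
        v += 1
    return v

@cache
def grundy(pos, moves):
    # v(P) = mex { v(Q) : Q a legal position reachable from P in one move }.
    # 'moves' maps a position to a tuple of the positions reachable from it.
    return mex({grundy(q, moves) for q in moves(pos)})

# Toy example (not revnim): one pile of n tokens, a move removes 1 or 2.
# grundy(n, take12) comes out as n mod 3, and it is 0 exactly at the
# positions that are a forced loss for the player with the move.
def take12(n):
    return tuple(n - t for t in (1, 2) if n - t >= 0)

print([grundy(n, take12) for n in range(10)])
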
	Now, the most amazing thing about this is that one can *add* games
together.   If I have two games, G & H, G+H is defined as the game where I
play two games at once, and when it's my turn, I can choose which of the
two games to play a move in.   When one game finishes, I can only play on
in the surviving game.   When both games are over, the winner is the
last player to make a move.
	Addition is associative and commutative, with an identity.   Abusing
the notation and letting G denote the starting position of the game G, we have
v(G+H) = v(G) ++ v(H), where ++ denotes bitwise addition without carries
(i.e., xor).   This is just a 
generalization of familiar nim theory.
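	(Again only a sketch, and only for a toy game rather than revnim: a
brute-force check that the value of a sum of games is the ++ (xor) of the
component values.  The pair-of-positions representation of G+H is mine.)

from functools import cache
from itertools import count

def take12(n):
    # Toy component game: one pile of n tokens, remove 1 or 2 per move.
    return tuple(n - t for t in (1, 2) if n - t >= 0)

def sum_game(pos):
    # A position of G+H is a pair; a move is a move in exactly one component.
    g, h = pos
    return tuple((g2, h) for g2 in take12(g)) + tuple((g, h2) for h2 in take12(h))

@cache
def v(pos, moves):
    # v(P) = mex of the values of the positions reachable from P.
    vals = {v(q, moves) for q in moves(pos)}
    return next(w for w in count() if w not in vals)

for a in range(8):
    for b in range(8):
        assert v((a, b), sum_game) == v(a, take12) ^ v(b, take12)
print("v(G+H) = v(G) ++ v(H) checks out on these small cases")
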
	Practice: revnim. r >= q >= p >= 1
	In the 3-pile case, we can work out e=v({p,q}).   Then if 
r-q > e, player 1 can win the game by placing a pile of r-e 
tokens.
	Proof: This is a symmetry-breaking move, and so we can then regard the 
game as {e} + {p,q}.   By construction v({e} + {p,q}) = 0.
	But what happens if r-q <= v({p,q})?   Well, the situation is more
complicated, and it's best to find an explicit formula for v({p,q}) first, so
we know where we stand.
	Answer out of a hat:
	Given q >= p.
	Over the naturals, define x << y iff there exists z such that 
z+x = z++x = y.   So x = y is not precluded.   v({p,q}) is the minimum natural 
w such that q-2p <= w << q-p.   (Note that q-2p may be negative.)
	This implies, amongst other things, that q-2p <= v({p,q}) <= q-p.   So 
r-q-1 >= q-p is sufficient (but not necessary) to ensure that the game is a 
win for player 1, i.e., p - 2q + r >= 1.
	Less crudely, we have shown that for all r > q + v({p,q}) the game is
a victory for the first player.
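	(The formula above, transcribed literally into Python; the names
submask and v2 are mine, and nothing here independently checks the formula
against the game itself.)

from itertools import count

def submask(x, y):
    # x << y in the sense above: some z has z + x = z ++ x = y, which is the
    # same as saying the set bits of x are a subset of those of y (x = y allowed).
    return x & y == x

def v2(p, q):
    # Claimed value of the two-pile game {p,q} with q >= p >= 1:
    # the least natural w with q - 2p <= w and w << q - p.
    assert q >= p >= 1
    return next(w for w in count() if w >= q - 2*p and submask(w, q - p))

# Per the text, r > q + v2(p, q) is then enough for a first-player win,
# and in particular p - 2q + r >= 1 always suffices.
print(v2(1, 2), v2(3, 8), v2(7, 15))
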
	Now suppose that in fact, r <= q + v({p,q}).   Then the first player
must place a pile of size x <= q.   There are two possibilities.
	(i) x > p
	(ii) x <= p
	But to analyze these, I need to go back to the theory.   My v
formula will evaluate a game, but not a game in progress, with one of the two
piles bigger than the other.
	I'll be back.
Slobotny,
Andrew.
 | 
|  | >	If Q1,Q2,...,Qk are the legal positions which the player with the
>move can transform the game to, v(P) can be defined recursively as the 
>smallest natural which doesn't appear in S = {v(Qi), i=1..k}.   i.e., it is
>min ( Nat \ S ).
	These are called Grundy numbers.  The problem (as noted in .4) is
	that the piles can't always be broken into separate games.
	There are five types of game positions:
	R -----------------------------------------------
	                           X               X     
	Q -------------------------X---------------X-----
	                  X        X     X X     X X     
	P ----------------X--------X-----X-X-----X-X-----
	      X X X   X X X    X X X   X X X   X X X     
	0 -----------------------------------------------
	       T1      T2       T3      T4      T5
	Let p, q, and r be the current numbers of matches in the piles, and
	let P, Q, and R be the limits.  Now T5 can use Grundy numbers directly.
	If
		0 <= p <= P		\
		P <  q <= Q		(i.e., it's T5 above)
		Q <  r <= R		/
	Then
		v_{P,Q,R}(p,q,r) = (P-p) xor (Q-q) xor (R-r)
	(Note that the subscripts give the limits; I'll usually omit them below.)
	Andrew recognized something about T3 and T5 that also applies to T4.
	First, Andrew's result:
		If r > Q then v(p,q,r) = v(p,q) xor (R-r)
	And we also have (for T4):
		if (q > P) and (r > P) then v(p,q,r) = (P-p) xor v_{Q,R}(q,r)
	These follow from the games being separable.
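	(The three formulas above, written out as Python; the two-pile
evaluator v_{P,Q}(p,q) is still unknown in closed form, so it is passed in
as a parameter v2.  Function names are mine.)

def v_T5(p, q, r, P, Q, R):
    # T5: 0 <= p <= P < q <= Q < r <= R, so the piles separate completely and
    # v_{P,Q,R}(p,q,r) = (P-p) xor (Q-q) xor (R-r).
    assert 0 <= p <= P < q <= Q < r <= R
    return (P - p) ^ (Q - q) ^ (R - r)

def v_r_big(p, q, r, P, Q, R, v2):
    # Andrew's observation: if r > Q the third pile separates off,
    # v(p,q,r) = v_{P,Q}(p,q) xor (R-r).
    assert r > Q
    return v2(p, q, P, Q) ^ (R - r)

def v_T4(p, q, r, P, Q, R, v2):
    # T4: q > P and r > P, so the first pile separates off,
    # v(p,q,r) = (P-p) xor v_{Q,R}(q,r).
    assert q > P and r > P
    return (P - p) ^ v2(q, r, Q, R)
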
 | 
|  | 	Let's solve the two-pile game completely before we address the
three-pile game, because in order to determine who wins a three-pile game,
we will generally rely on more detailed Grundy-number information for the
two-pile case.
p <= q are the piles
1 <= P <= Q are the limits
q <= Q
p <= P
Note that:
		v_{P,Q}(p,q) = v_{N-1,2N-1}(p+M,q+M) = u(N,p+M,q+M), say,
where N = Q-P, M = Q-2P-1.   This gets rid of a superfluous degree of 
freedom, by a consistent change of origin.   It also brings out the fact that 
the space of games with two piles is essentially one-dimensional, 
parametrized by N.   Everything else is just variation in starting positions. 
The cost of this simplification is that when M is negative, so may p or q be.  
But the brevity of the following formulae suggests that this is a 'natural'
formulation.
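	(The change of origin, spelled out; the function name is mine.)

def to_u_coords(p, q, P, Q):
    # v_{P,Q}(p,q) = v_{N-1,2N-1}(p+M, q+M) = u(N, p+M, q+M),
    # with N = Q - P and M = Q - 2P - 1.  When M is negative the shifted
    # piles may be negative too; that is the price of the normalization.
    N = Q - P
    M = Q - 2*P - 1
    return N, p + M, q + M

# e.g. limits (P,Q) = (3,8) give N = 5, and the limits themselves map to
# (P+M, Q+M) = (N-1, 2N-1) = (4, 9).
print(to_u_coords(3, 8, 3, 8))
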
	So the result I gave can be written as:
u(N,p,p) = min (w << N such that w > p)					  [F1]
	For instance, take N=5.   This in binary form is 101, so the valid
w are 101,100,001,000, or 5,4,1,0.   Thus as we examine different values of
p, from its maximum of 4 downwards, u(N,p,p) runs through: 5,4,4,4,1,0,0,0...
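	([F1] transcribed; it reproduces the N=5 walk-through above.)

from itertools import count

def u_diag(N, p):
    # [F1]: u(N,p,p) = min { w : w << N and w > p }.  (w << N means the set
    # bits of w are a subset of those of N; p may be negative here.)
    return next(w for w in count() if w > p and w & N == w)

print([u_diag(5, p) for p in range(4, -4, -1)])   # 5, 4, 4, 4, 1, 0, 0, 0
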
	Now, what I don't have yet is a formula for the off-diagonal terms:
u(N,p,q) = ??? if p < q.
	However, if N = 2^a*N' with N' odd, then I have:
u(N,p,q) = (p mod 2^a xor q mod 2^a) + 2^a * u(N', p div 2^a, q div 2^a)  [F2]
	This is quite clean, and what it means is that we really 'only' have
left to solve the case N odd.   The final handle we currently have is the behaviour
when symmetry is broken:
	For q >= N, u(N,p,q) = (N-p) xor (2N-q) 				  [F3]
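	(A partial evaluator stringing [F1]-[F3] together, as far as they go:
[F3] once symmetry is broken, [F2] to strip the even part of N, [F1] on the
diagonal, and None in the case that is still open, N odd off the diagonal.
The treatment of negative p, q simply follows floor div/mod conventions,
which may or may not be what is intended.)

from itertools import count

def u(N, p, q):
    # Partial evaluation of u(N,p,q), p <= q; returns None in the open case.
    assert N >= 1 and p <= q
    if q >= N:                           # [F3]: symmetry already broken
        return (N - p) ^ (2*N - q)
    if p == q:                           # [F1]: diagonal (here p <= N-1)
        return next(w for w in count() if w > p and w & N == w)
    if N % 2 == 0:                       # [F2]: N = 2^a * N' with N' odd
        a = (N & -N).bit_length() - 1    # a = exact power of 2 dividing N
        t = u(N >> a, p >> a, q >> a)    # p div 2^a, q div 2^a
        if t is None:
            return None
        return ((p % (1 << a)) ^ (q % (1 << a))) + (1 << a) * t
    return None                          # N odd, off the diagonal: unknown
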
	I want to unify [F1]-[F3] into a single formula, but I need more data on
how u behaves in the off-diagonal terms.   It seems quite an unusual
situation: normally one solves these things sequentially, evaluating the
position at move n by knowing all the possible descendants at n+1.   Here we
know what the values are at all symmetric positions, but not what some of
their asymmetric descendants evaluate to.
Andrew.
 |