
Conference rusure::math

Title: Mathematics at DEC
Moderator: RUSURE::EDP
Created: Mon Feb 03 1986
Last Modified: Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 2083
Total number of notes: 14613

1491.0. "A roulette "system"" by VMSDEV::HALLYB (Fish have no concept of fire) Thu Sep 12 1991 14:17

    The following problem came up on an obscure USENET group and it looks
    like it might be interesting to readers of this conference.
    
    Assume you are to play American roulette, betting on a single number.
    The payoff is 35 to 1 ($1 wins $35 additional) while the true probability
    of winning is 1/38.  Also assume you may bet any real number greater
    than 0 and up to your current holdings.  Yes, that includes irrational
    sized wagers.  You need not wager the same amount each time.
    
    Your objective is to double your initial bankroll B.  No extra credit
    is earned for a better-than-double result.  What is the best strategy?
    I.e., what betting strategy maximizes the probability of at least
    doubling your initial bankroll?  Note this problem is (apparently)
    different from trying to maximize your expected return.  Some examples:
    
    The Thag strategy, throw your whole bankroll down on one spin, has a
    probability of 1/38 ~= .0263 of succeeding.
    
    The Solomon strategy, dividing your wager B into two halves, will
    succeed if either bet wins.  This probability is (1-(37/38)^2) ~= .052.
    So the Solomon strategy is almost twice as effective as the Thag.
    
    The Lilliputian strategy, dividing your wager into 1000 equal-sized
    lots, would require 29+ winners out of 1000 wagers.  This probability
    works out to be greater than ~0.26, ten times the Thag strategy, assuming my
    binomial arithmetic code is correct.  The true probability is even
    higher because you would quit after doubling your bankroll -- not play
    out the full 1000 bets in that case.
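    These figures are easy to reproduce; here is a quick sanity check (a
    hypothetical Python sketch, not the binomial code mentioned above, and
    taking the 29-win threshold as stated in this note):

```python
from math import comb

p = 1 / 38  # probability of winning a single-number bet

# Thag: one bet of the whole bankroll.
thag = p

# Solomon: two bets of B/2; either win more than doubles B.
solomon = 1 - (37 / 38) ** 2

# Lilliputian: binomial tail P(X >= 29) over 1000 equal wagers,
# computed as 1 minus the lower tail.
lilliputian = 1 - sum(comb(1000, k) * p**k * (1 - p)**(1000 - k)
                      for k in range(29))
```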
    
    What is the optimal playing strategy, and what is its chance-of-doubling?
    
      John
1491.1. "Start with a BIG roll of bills" by CIVAGE::LYNN (Lynn Yarbrough @WNP DTN 427-5663) Thu Sep 12 1991 14:53
It depends on how much money you have, relative to how much the bank has.

Let me illustrate with a simpler example, a straight 50-50 bet, repeated 
several times; say black vs red.

Assume you use the following [losing] strategy: each time you bet exactly
half your capital, and assume further that you win exactly half the time.
After two bets (one win, one loss) you will have .75 of your starting
capital, *regardless of in what order the wins and losses occur*. After 4
plays you will have .5625, again independent of the order. And so on. You
will never go completely broke, but *you will lose* as long as the bank
starts with significantly more money than you started with. 
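The order-independence claimed here is easy to verify directly (a minimal
Python sketch; the function name is illustrative):

```python
from itertools import permutations

def final_capital(start, outcomes):
    """Bet half the current capital each time on an even-money game."""
    capital = start
    for win in outcomes:
        bet = capital / 2
        capital += bet if win else -bet
    return capital

# Two wins and two losses leave 0.5625 of the capital in every order.
finals = {final_capital(1.0, order)
          for order in permutations([True, True, False, False])}
```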

But now turn the strategy around: bet half of what the bank has left at each 
turn. Now *you are sure to win*, as long as your original capital starts 
bigger than the bank's, and you can win an arbitrarily large percentage of
the bank's capital, but never break the bank. 

The same principle applies regardless of the betting scheme. If you play 
long enough, the law of large numbers will catch up with whoever starts with 
the smaller bankroll. In the case you describe, your expectation is 35/38-1
for each bet, so you can expect eventually to go broke. By the way, I think
the computations of the base note are not correct. I'm sure someone else
will point them out. 
1491.2. "BZZZZZT! Wrong, thanks for playing." by VMSDEV::HALLYB (Fish have no concept of fire) Thu Sep 12 1991 17:24
> It depends on how much money you have, relative to how much the bank has.
    
    No, that is incorrect.  If it makes you feel better,
    assume the house has an unbounded amount of money, but in fact all
    it needs is 70 * <your initial stake>.  The bettor in .0 is not out 
    to break the house, just double his starting bankroll.
    
> 							By the way, I think
> the computations of the base note are not correct. I'm sure someone else
> will point them out. 
    
    Well I think you're wrong, and I think you're looking to answer 
    the more "classical" questions regarding maximizing expected value.
    That's not what was asked.  .0 shows quite clearly how 3 different 
    betting strategies result in 3 different probabilities of "succeeding".
    Each may have the same (negative) expected value, but that wasn't
    the question.
    
    So then, if there are 3 different strategies with 3 different chances
    of success, what about other strategies?  Some will have higher chances
    of success than others.  Which strategy has the highest probability,
    and what is that probability?
    
      John
1491.3. "I Think This Is It" by VAXRT::BRIDGEWATER (Eclipsing the past) Thu Sep 12 1991 21:53
    I think the optimal way to bet is to keep playing the minimum bet
    necessary so that if you win it you will double your money.  Keep
    doing this as long as possible and the first time you win, you quit,
    having doubled your money (exactly).  If the bet that is called for
    is more than the money you have left, then bet all you have instead.

    I believe this is the best strategy because if you bet more than is
    called for (when you have enough money left) and win, you will have
    more than doubled your starting bankroll and stop betting.  The
    probability of (at least) doubling your bankroll is the same as with
    the called for bet.  However, if you lose that bet your reserves will
    be less than they would have been and thus the probability of eventually
    doubling your bankroll is smaller.

    On the other hand, if you bet less than the called for amount, the
    result of that spin of the wheel is guaranteed to leave you with less
    than the bankroll-doubled amount.  And, your expectation following
    that spin will be reduced from what it was prior to the spin for
    any positive bet because such a bet has a negative expectation.

    When you have less money than what you need to double your bankroll,
    betting all you have seems to give you the best chance to recoup.
    Although this makes gambler's ruin possible in a finite number of
    bets, any other strategy appears to lead to larger probabilities
    of being ruined in the limit.

    I wrote a short program to calculate the probability and it converged
    to approximately 0.4808897287+.  I expect this number is irrational
    and probably transcendental.  I didn't take any precautions against
    floating point error accumulation except by using double precision.

    - Don
1491.4. "You win the Pontiac but not the Miata" by VMSDEV::HALLYB (Fish have no concept of fire) Fri Sep 13 1991 11:29
>    When you have less money than what you need to double your bankroll,
>    betting all you have seems to give you the best chance to recoup.
    
    I agree with your analysis except for this point.  If your original
    bankroll B drops under B/35, then you need *two* wins in order to
    double your money.  Bet-it-all is one strategy, but there are others
    such as "target your winnings to restore you to B" (or B/35 or ...).
    
    Conjecture:  the optimal strategy yields probability (37/38)/2 of success.
    (~= 0.486842).   Similarly, if tripling your money were the goal, the
    optimal strategy would have (37/38)/3 probability of success.
    A handwaving argument can be constructed based on a consistent expectation.
    
      John
1491.5. "Very Interesting Problem" by VAXRT::BRIDGEWATER (Eclipsing the past) Fri Sep 13 1991 13:44
   >I agree with your analysis except for this point.  If your original
   >bankroll B drops under B/35, then you need *two* wins in order to
   >double your money.  Bet-it-all is one strategy, but there are others
   >such as "target your winnings to restore you to B" (or B/35 or ...).
    
    I realized that you need at minimum two wins once your bankroll
    drops below *B/18*.  My algorithm made it easy to calculate the probability
    because a win when bankroll was >B/18 terminated the game, while a loss
    with bankroll <B/18 terminated the game.  (=B/18 terminates the game
    regardless of outcome, but I doubt that it can ever occur.)  This allowed
    the probability of doubling the initial bankroll to be evaluated
    straightforwardly as a single infinite series.

    What I missed and where I think you are right is that maximizing the
    expectation of your bet(s) when <B/18 does not give you the maximum
    probability of eventually doubling your initial bankroll.  Unfor-
    tunately, so far I don't see a method to use to go about finding
    this optimal strategy.  My intuition tells me that there is likely to
    be some fractal-like similarity in the betting strategies as you go
    through the wealth range levels approaching 0 where each "level"
    requires one more win in order to double your initial bankroll.

    I don't think any of my original analysis is solid yet.  For example,
    as your wealth approaches B/18 from above, at some point it may be better
    to bet less than what my algorithm calls for so that it requires more
    than one win to double your bankroll.  Otherwise, if you lose that bet,
    your reserves may be very near 0 and may require significantly more than
    2 wins to enable you to double the initial bankroll.

   >Conjecture:  the optimal strategy yields probability (37/38)/2 of success.
   >(~= 0.486842).   Similarly, if tripling your money were the goal, the
   >optimal strategy would have (37/38)/3 probability of success.
   >A handwaving argument can be constructed based on a consistent expectation.
    
    I'm still trying to find the other view of this game where a probability
    of (37/38)/2 is arrived at.

    This problem seems to indicate an interesting phenomenon -- that with
    a possibly infinite series of strictly bounded positive bets, a game
    with a payoff greater than even-money but with negative expectation
    can be converted to an even-money payoff with a smaller-magnitude
    negative expectation.  Of course, this expectation is still
    negative.  I think this has to do with the expected total amount
    bet before doubling or losing your bankroll being less than
    your bankroll.

    - Don
1491.6. "A Couple More Strategies" by VAXRT::BRIDGEWATER (Eclipsing the past) Mon Sep 16 1991 00:54
    I've tried simulations of a couple of other strategies and they
    appear to be inferior to my original strategy.  I haven't lowered
    the standard error enough to have near certainty and I need to do
    more analysis on how the cutoff level (these are strategies that
    cannot reach gambler's ruin in a finite number of steps) affects
    these results.  "Level" means the minimum number of wins needed to
    reach the original doubling goal.

    The first strategy I tried was to bet the minimum necessary to reach
    the doubling goal until it was no longer possible.  At that point,
    an intermediate goal was set of reaching the amount that the bankroll
    was at just prior to losing the previous bet.  This rule was applied
    recursively as the bankroll reached smaller and smaller levels.  This
    strategy has the consequence that in order to escape a level you have
    to have two consecutive wins.  For if you win once and then lose on
    the next spin, you will be back at the lower level.

    The second strategy was one where intermediate goals were set of
    doubling the amount of the bankroll that existed whenever the
    prior goal first became unreachable.

    In my simulations, the first strategy appeared better than the
    second, and both appeared inferior to my original strategy.

    I did a quick comparison of cutting off the simulation of the first
    strategy above level 7 and above level 10 and the difference in results
    appeared to be extremely small.  I did not do this with the second
    strategy, but I ran it exclusively with a cutoff above level 10.

    These two strategies have a recursive structure that might lend itself
    to direct computation of the probability sought.  I tried briefly to
    do that and failing, I decided to try the simulation route.

    - Don
1491.7. "The first answer is usually right" by VMSDEV::HALLYB (Fish have no concept of fire) Mon Sep 16 1991 10:24
    Right you are, Don!  Apparently there's a theorem about this called
    the Bold Play Theorem, that says always bet the *Maximum* that won't
    place you OVER your target.  So if your bankroll drops below B/18
    the right thing to do is shoot the whole wad (and cross your fingers).
    
    I am informed that:  Billingsley, _Probability and Measure_, discusses
    the Bold Play Theorem for problems similar to this one.
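    The theorem's claim can be probed numerically.  Here is a hypothetical
    Python sketch of an arbitrary "timid" fixed-stake strategy for contrast
    (the 0.05 stake is an assumption, not anything from the thread); its
    success rate comes out well below bold play's ~0.4809:

```python
import random

def timid_trial(stake=0.05, p=1/38, payout=35):
    """Bankroll 1.0, goal 2.0, betting a fixed stake each spin.
    Stop on doubling, or when the bankroll cannot cover the stake."""
    x = 1.0
    while stake <= x < 2.0:
        if random.random() < p:
            x += payout * stake
        else:
            x -= stake
    return x >= 2.0

random.seed(7)
games = 100_000
timid = sum(timid_trial() for _ in range(games)) / games
```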
    
      John
1491.8. "ruminations" by CORREO::BELDIN_R (Pull us together, not apart) Wed Sep 18 1991 16:39
    
    Do the probabilities you calculated represent the probability of ever
    doubling your money or doubling your money before you go bankrupt?
    
    This is clearly an example of a random walk with an absorbing barrier,
    so probabilities must be calculated for specified sequences of
    outcomes, not just a number of outcomes of certain type.
    
    Since one particular state is identified as the initial state and the
    terminal state is specified in terms of it, the process has memory.
    
    I am more interested in your methods than in the results.
    
    Dick
1491.9. by VAXRT::BRIDGEWATER (Eclipsing the past) Wed Sep 18 1991 20:00
    Re: .8

>   Do the probabilities you calculated represent the probability of ever
>   doubling your money or doubling your money before you go bankrupt?
    
    All three strategies for which I calculated probabilities (either
    directly or by simulation) could never let the bankroll get outside
    the range [0,2*B], where B=initial_bankroll.  The two inferior
    strategies, which I simulated, had a range of (0,2*B].

    So, I guess that means that it's the probability of doubling your money
    before going bankrupt.

>   This is clearly an example of a random walk with an absorbing barrier,
>   so probabilities must be calculated for specified sequences of
>   outcomes, not just a number of outcomes of certain type.

    In addition to the barriers at 0 and 2*B, the varying bet sizes that the
    strategies use also make it necessary to pay attention to certain
    sequences of outcomes and not just the number of outcomes of a certain
    type.

>   I am more interested in your methods than in the results.

    I don't think I can describe my methods much better than I did in my
    previous replies to this note unless you have specific questions for
    me.  However, I do have the (uncommented) source code written in a
    preprocessor dialect of FORTRAN that I could publish or mail you.

    - Don
1491.10. "Computing E(x); Self-Similarity" by CLT::TRACE::GILBERT (Ownership Obligates) Thu Sep 19 1991 18:37
    After considerable work on this problem, I went back and found that
    Don had already said almost all there was to say in .3 and .5.  Nice work!
    Here's my effort to say something previously unsaid.

.7> So if your bankroll drops below B/18
.7> the right thing to do is shoot the whole wad (and cross your fingers).

    Normally, gambling odds are biased against the player (so that the house 
    can expect to gain money).  If the odds are biased *for* the player, then
    the Thag shoot-the-whole-wad strategy is not optimal (when the bankroll
    is small relative to the goal).  For example, consider what happens if a
    $1 bet returns $4 with probability 1/2.

.8>   I am more interested in your methods than in the results.

    I think I can explain the methods.  Define p and R as follows:
    a bet of $B returns 0 with probability (1-p), and returns $R*B with
    probability p (in .0, R = 36, and p = 1/38; since R*p < 1, this
    favors the house).

    Let x be the amount of money on hand, (w.l.o.g.) let 1 be the goal amount,
    and let E(x) be the probability that the goal will be reached.  The
    strategy is

	bet B = x if x <= 1/R, and
	bet B = (1-x)/(R-1) if x >= 1/R (since a win here gives x-B + R*B = 1).

    Now the expectation of ultimate success, E(x) for 0<=x<=1, can be expressed:

	E(0) = 0; E(1) = 1

	E(x) = (1-p)*E(0) + p*E(R*x)		for x <= 1/R
	     = (1-p)*E(x-(1-x)/(R-1)) + p*E(1)	for x >= 1/R

    Rewrite this to simplify:

	E(x) = p*E(R*x)				for 0 <= x <= 1/R
	E(x) = p + (1-p)*E((R*x-1)/(R-1))	for 1/R <= x <= 1

    E(x) is efficiently calculated by the following CORDIC-like algorithm
    (for values of p that are not extremely close to 0 or 1):

	term <- 0;  fact <- 1;  amt <- x;
	! Assert: E(x) = term + fact * E(amt)	(Exactly!)

	while fact > some_miniscule_amount do
	    ! Assert: E(x) = term + fact * E(amt)

	    if R*amt <= 1	! we use E(w) = p*E(R*w)
		fact <- fact*p;
		amt <- R*amt;
		! Assert: E(x) = prevterm + prevfact * p * E(R*prevamt)
		!              = term + fact * E(amt)

	    else		! we use E(w) = p + (1-p)*E((R*w-1)/(R-1))
		term <- term + fact*p
		fact <- fact*(1-p);
		amt <- (R*amt-1)/(R-1)
		! Assert: E(x) = prevterm + prevfact *
		!			(p + (1-p)*E((R*prevamt-1)/(R-1))
		!	       = prevterm + prevfact*p + prevfact*(1-p)*E(amt)
		!	       = term + fact * E(amt)

	endwhile
	! Assert: E(x) = term + fact * E(amt)
	! Assert: term <= E(x) <= term + some_miniscule_amount * E(amt)
	!                      <= term + some_miniscule_amount

	Using this, I found E(0.5) = 0.48088972874868297508391716791612.
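	The loop translates almost line for line into a modern language; a
	Python sketch (with eps standing in for some_miniscule_amount):

```python
def expectation(x, p=1/38, R=36, eps=1e-16):
    """E(x): probability of reaching the goal 1 from bankroll x,
    maintaining the invariant E(x) = term + fact * E(amt)."""
    term, fact, amt = 0.0, 1.0, x
    while fact > eps:
        if R * amt <= 1:            # E(w) = p * E(R*w)
            fact *= p
            amt *= R
        else:                       # E(w) = p + (1-p) * E((R*w-1)/(R-1))
            term += fact * p
            fact *= 1 - p
            amt = (R * amt - 1) / (R - 1)
    return term
```

	expectation(0.5) agrees with the value above to double precision.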

.3>    My intuition tells me that there is likely to
.3>    be some fractal-like similarity in the betting strategies as you go
.3>    through the wealth range levels [...]

	Indeed!  The curve for E(x) is defined by two parameters, p and R.
	Three values are easily determined: E(0) = 0, E(1) = 1, and
	E(1/R) = p.  If we plot this:

	     1	+-----YYYYYYYYYY*
		|     Y         Y
		|     Y         Y
	 E(x)	|     Y         Y
		|     Y         Y
		|     Y         Y
	     p	XXXXXX*YYYYYYYYYY
		X     X         |
	     0	*XXXXXX---------+
		0    1/R        1
			x

	Then the curve in the 'X' box [0..1/R,0..p], the curve in the 'Y' box
	[1/R..1,p..1], and the whole region [0..1,0..1] are similar to each
	other!  That is, they are identical, except for vertical and horizontal
	'stretch' and translation.  This is a self-similar fractal curve!

	To see this, we take the equations for E(x), and substitute variables:

	E(w/R) = p*E(w)				for 0 <= w <= 1
	E((w*(R-1)+1)/R) = p + (1-p)*E(w)	for 0 <= w <= 1

	And rearranging:

	E(w) = (1/p) * E(w/R)				for 0 <= w <= 1
	E(w) = (1/(1-p))*E((w*(R-1)+1)/R) - p/(1-p)	for 0 <= w <= 1

	Thus, give or take a stretch and translation, the entire E(w) curve
	(from 0 to 1) is identical to two smaller pieces of the E(w) curve!
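	These two relations can be checked numerically (a Python sketch; E is
	recomputed here with the same CORDIC-like recursion given earlier in
	this reply):

```python
def E(x, p=1/38, R=36, eps=1e-16):
    """Bold-play success probability via the term/fact/amt recursion."""
    term, fact, amt = 0.0, 1.0, x
    while fact > eps:
        if R * amt <= 1:
            fact *= p
            amt *= R
        else:
            term += fact * p
            fact *= 1 - p
            amt = (R * amt - 1) / (R - 1)
    return term

p, R = 1/38, 36
for w in (0.1, 0.37, 0.72, 0.95):
    # E(w/R) = p*E(w)  and  E((w*(R-1)+1)/R) = p + (1-p)*E(w)
    assert abs(E(w / R) - p * E(w)) < 1e-9
    assert abs(E((w * (R - 1) + 1) / R) - (p + (1 - p) * E(w))) < 1e-9
```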