Risk, Ruin, and Trip-Stake Wipeout by Bryce Carlson
03-08-2013 at 05:48 PM
There's hardly anything more discouraging than traveling hundreds or thousands of miles to a prime casino destination, spending hundreds or thousands of dollars on airfare, food and lodging, only to have your entire trip stake wiped out by a bad run of cards because you didn't adequately anticipate your true playing cash needs. That really hurts. Big-time.
Now, of course, no matter how carefully or conservatively you calculate your trip stake, once in a great while a sustained run of really bad luck will still result in trip-stake wipeout. That's just an unfortunate but inevitable part of the game. But for some players this happens far more often than necessary, because they're actually playing with a significantly higher risk of trip-stake ruin than they believe. This happens when they frequently spread to two or more hands but calculate their short-run risk of ruin from the equivalent one-hand, long-run risk of ruin. The subtle fallacy here is that the equivalency holds only for the long run, not the short run. In the short run, the risk of ruin with multiple hands is significantly greater than for one long-run-equivalent hand, because with multiple hands the overall risk of ruin is "front loaded." I discovered this fact several years ago and published it in a paper entitled "Risk, Ruin, and Trip-Stake Wipeout." At Norm's request, I have included it below (>> <<) in the hope that some of you may find it interesting and helpful.
Bryce Carlson
>>
Risk, Ruin, and Trip-Stake Wipeout
by Bryce Carlson
For camouflage purposes and to win at a faster rate it is often desirable to spread to two or more hands rather than just bet all your money on one hand. The trick is to size these multiple-hand bets so that they have the same risk of ruin as one-hand bets would have.
Now, it is well established in the literature that if, instead of betting (x) units on one hand, you want to maintain the same risk of ruin but bet (y) units on each of (n) > 1 hands, then the amounts bet, (x) and (y), are inversely proportional to their respective variances (which, in the case of (y), includes the covariance between the (n) hands). In other words, v(x)/v(y) = (y)/(x).
What I am going to show here is that this is not always true. Depending on the situation, to keep the risk of ruin equal sometimes (x) and (y) are essentially inversely proportional to their respective standard deviations, not their respective variances. To my knowledge, this is a new result, and it has important implications for safely sizing your bets and trip stake when you go out to play.
To understand how this works, let's first take a look at the situation with respect to the usual ruin formula (see Blackjack for Blood, p. 156ff):
(1) R = {(1-E/d1)/(1+E/d1)}^(C/d1)
Where R is risk of ruin, E is expectation, C is total units of capital, and d1 is the sd for one hand of BJ.
Let's say we have an expectation (E) of 2.0%, a stake of $20,000, and we will be making $200 bets so (C) is equal to 100 units. The one hand sd (d1) has a known value of 1.1225 (1.26^.5).
If we plug these values into the equation for (R), above, we get:
(2) R = {(1-0.020/1.1225)/(1+0.020/1.1225)}^(100/1.1225) = 4.18%
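For readers who like to check the arithmetic, here is a minimal Python sketch of formula (1); the function name risk_of_ruin and its argument names are just convenient labels, and the inputs are the values given above.

def risk_of_ruin(E, sd, units):
    # Formula (1): R = ((1 - E/sd) / (1 + E/sd)) ** (units/sd)
    r = E / sd
    return ((1 - r) / (1 + r)) ** (units / sd)

# One hand: E = 2.0%, sd = 1.1225, $20,000 / $200 = 100 units of capital
print(round(risk_of_ruin(0.020, 1.1225, 100), 4))   # 0.0418, i.e. 4.18%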
Now, $200 is the value for the one-hand bet (x); to find the value for the n-hand bet (y), we first have to find the variances v(x) and v(y). v(x) has a known value of 1.26; v(y) includes the covariance for the (n) hands and is calculated from the following equation:
(3) V(n) = 1.26*n + .5*n*(n-1)
where (n) is the number of simultaneous hands played, and .5 is the covariance. In this case, if we assume (n) is equal to two, it reduces to:
(4) V(2) = 1.26*2 + .5*2*1 = 3.52
Now, v(2) is the total variance for both hands; the variance for one hand, v(y), is half of this, or 1.76.
So, based on the inversely proportional relationship, v(x)/v(y) = (y)/(x), we can calculate (y) as follows:
(5) 1.26/1.76 = (y)/$200, so (y) = $143.18
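The same arithmetic as a short Python sketch of equations (3) through (5); the 1.26 variance and .5 covariance are the values assumed above, and the function name per_hand_variance is just a label.

def per_hand_variance(n, var1=1.26, cov=0.5):
    # Equation (3): total variance for n simultaneous hands, then split per hand
    total = var1 * n + cov * n * (n - 1)
    return total / n

v_x = 1.26                    # variance of one hand
v_y = per_hand_variance(2)    # 1.76 per hand for two hands, covariance included
y = 200 * v_x / v_y           # equation (5): v(x)/v(y) = (y)/(x)
print(v_y, round(y, 2))       # 1.76 143.18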
So, if we play two hands of $143.18, instead of one hand of $200, our risk of ruin should be equal. Let's plug this into our equation for ruin (R), above, and see if it's true.
Now, in this case, because we are playing two hands of (y) units, (E) is also doubled and is, therefore, equal to 4.0%. Our stake remains constant at $20,000, but since we will be making individual bets of $143.18, C is equal to 139.68 units ($20,000/$143.18). Also, in this case, the sd has a value of 1.8762 (3.52^.5). Now, plugging all of this into our equation for ruin (R) yields:
(6) R = {(1-0.040/1.8762)/(1+0.040/1.8762)}^(139.68/1.8762) = 4.18%
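A quick Python check of (6), plugging the two-hand numbers straight into the ruin formula:

r = 0.040 / 1.8762            # E/sd for two hands
print(round(((1 - r) / (1 + r)) ** (139.68 / 1.8762), 4))   # 0.0418, same as (2)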
Which is, indeed, the same value for ruin we got for one hand of $200. So, case closed, and the values for (x) and (y) are indeed inversely proportional to their respective variances. Right? Yes. No. And, it all depends.
The key, here, is that in both cases we want to keep the same risk of ruin and we want to win the same amount of money (units). The above formula for ruin (R) assumes you want to win an infinite amount of money, but even if we use the more general-case formula, where target win (W) is finite, the above inversely proportional relationship between (x), (y), and their respective variances still holds as long as we want to win the same amount of money (units) in both cases.
Let's do one more example to show that this equivalency holds. The general-case ruin formula (R) for a finite target win (W) looks like this:
(7) R = {((1+E/d1)/(1-E/d1))^(W/d1) - 1} / {((1+E/d1)/(1-E/d1))^((C+W)/d1) - 1}
Let's assume we want to double our money, so that (W) = (C). Using the numbers for (x) = $200 from (2), above, we get:
(8) R = {(((1+0.020/1.1225)/(1-0.020/1.1225))^(100/1.1225))-1}/
{(((1+0.020/1.1225)/(1-0.020/1.1225))^(200/1.1225))-1} = 4.01%
And, using the numbers for (y) = $143.18 from (6), above, we get:
(9) R = {(((1+0.040/1.8762)/(1-0.040/1.8762))^(139.68/1.8762))-1}/
{(((1+0.040/1.8762)/(1-0.040/1.8762))^(279.36/1.8762))-1} = 4.01%
Which confirms the equivalency of risk of ruin for one hand of $200 and two hands of $143.18, when the target wins (W) are the same. Since, when we calculate ruin for our entire stake, we assume we will play "forever" and wish to win an infinite amount of money, this equivalency, v(x)/v(y) = (y)/(x), is valid and appropriate for such total-bankroll ruin calculations.
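Here is a minimal Python sketch of the general-case formula (7) that reproduces the 4.01% figures in (8) and (9); the function name risk_of_ruin_target and its arguments are just labels for the quantities defined above.

def risk_of_ruin_target(E, sd, C, W):
    # Formula (7): risk of ruin with capital C and finite target win W (both in units)
    a = (1 + E / sd) / (1 - E / sd)
    return (a ** (W / sd) - 1) / (a ** ((C + W) / sd) - 1)

print(round(risk_of_ruin_target(0.020, 1.1225, 100, 100), 4))        # 0.0401, one hand of $200
print(round(risk_of_ruin_target(0.040, 1.8762, 139.68, 139.68), 4))  # 0.0401, two hands of $143.18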
But, as I point out in Blackjack for Blood (p. 160, par. 4), although the main variable in the calculation of risk of ruin for total bankroll is target win (W), this is NOT the case for the calculation of risk of ruin for a trip bankroll. For a trip bankroll, the main variable is number of trials played, and, as it turns out, it makes a big difference.
For example, suppose we are going to go on a BJ-playing trip for two days and expect to play at a rate of 40 large bets per hour, five hours a day, for the two days. This would result in 400 large-bet trials (40x5x2).
Suppose further, that we want to be safe from trip-stake wipeout to a certainty of two standard deviations (note, "endpoint" risk calculations disregarding the so-called "mid-trip barrier" effect [see Blackjack for Blood, p. 162, fn 5] will be used here because it simplifies the math and has no net effect on the results, conclusions or validity of the study).
The formula for percentage sd (standard error), d(r), is as follows:
(10) d(r) = d1/(n^.5)
where d1 is the sd for one hand of BJ, and n is the total number of trials considered.
Since we are assuming 400 trials, and d1 has a known value of 1.1225, this results in:
(11) d(r) = 1.1225/20 = .0561 or 5.61%
This is one d(r); 2d(r) is double this, or 11.225%.
This means that, to a certainty of 2d(r)'s, if our large bets are $200 and we play 400 trials, the risk "overlay" on our expectation (E) would be $8,980 ($200x400x.11225). Since (E), as before, is assumed to be 2.0%, we would need a bankroll of $7,380 to be safe from trip-stake wipeout to a certainty of 2d(r)'s ($8,980-($200x400x.02)).
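The same trip-stake arithmetic as a small Python sketch; the function name trip_stake and its parameter names are just labels, and m is the number of d(r)'s of protection (2 here).

from math import sqrt

def trip_stake(bet, trials, E, sd, m=2):
    # Overlay of m*d(r) minus the expected win, i.e. bet * trials * (m*d(r) - E)
    d_r = sd / sqrt(trials)              # formula (10): standard error per trial
    overlay = bet * trials * m * d_r     # here: $8,980
    expected_win = bet * trials * E      # here: $1,600
    return overlay - expected_win

print(round(trip_stake(200, 400, 0.020, 1.1225)))   # 7380, one hand of $200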
Now, let's take a look at the situation if we play two hands of $143.18. If the risk of ruin is the same, the required trip-stake bankroll should be the same, as well. Let's see:
As we saw before, when we play two hands d1 rises to 1.8762; so, in this case, d(r) is equal to:
(12) d(r) = 1.8762/20 = .0938 or 9.38%
This is one d(r); 2d(r) is double this, or 18.76%.
This means that, to a certainty of 2d(r)'s, if our large bets are two hands of $143.18 each and we play 400 trials, the risk "overlay" on our expectation (E) would be $10,744 ($143.18x400x.1876). Since we are playing two hands of $143.18 each, (E) is also doubled, to 4.0%, which means we would need a bankroll of $8,453 to be safe from trip-stake wipeout to a certainty of 2d(r)'s ($10,744-($143.18x400x.04)).
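And the same Python check run with the two-hand numbers (values as above; the small difference from $8,453 is rounding):

from math import sqrt
bet, trials, E, sd = 143.18, 400, 0.040, 1.8762
overlay = bet * trials * 2 * sd / sqrt(trials)    # ~ $10,745
print(round(overlay - bet * trials * E))          # 8454, within rounding of $8,453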
Obviously, something is very wrong. There is a big difference between $7,380 and $8,453. Two hands of $143.18 appear to be much riskier than one hand of $200. How can this be so? The answer is because the equivalency of ruin for one hand of $200 and two hands of $143.18 assumes the same target win (W). If, instead, you assume the same number of trials -- as you generally do -- and do not care about target win -- which you generally don't -- then this equivalency does NOT hold. In fact, for the $200 (one-hand) bets to be as risky as the $143.18 (two-hand) bets, you would have to multiply the number of trials played at $200 by 2v(x)/v(y) = 2x1.26/1.76 = 1.432, and 1.432x400 = 572.8 trials (note, the general-case equation for this "phase" differential is n*v(x)/v(y)).
Plugging 572.8 trials into the equations, above, for $200 bets, we get a 2d(r) overlay of $10,744; and since (E) is assumed to be 2.0%, to be safe from trip-stake wipeout to a certainty of 2d(r)'s we would need a bankroll of $8,453 ($10,744-($200x572.8x.02)), which is the same figure we computed above for only 400 trials of two hands of $143.18.
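A quick Python check of this "phase" differential (the small differences from 572.8 trials and $8,453 are rounding):

from math import sqrt
trials = 400 * 2 * 1.26 / 1.76                    # n*v(x)/v(y) with n = 2 -> ~572.7 trials
overlay = 200 * trials * 2 * 1.1225 / sqrt(trials)
print(round(trials, 1), round(overlay - 200 * trials * 0.02))   # 572.7 8454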
So, in fact, for an equal number of trials, two hands of $143.18 are way riskier than one hand of $200.
Now, suppose we do not use the assumed relationship, v(x)/v(y) = (y)/(x), but substitute instead the relationship, d(x)/d(y) = (y)/(x). In this case, since d(x) = 1.1225, and d(y) = 1.8762, we have:
(13) 1.1225/1.8762 = (y)/$200, so (y) = $119.657
As we saw before, when we play two hands d1 rises to 1.8762; so, in this case, d(r) is equal to:
(14) d(r) = 1.8762/20 = .0938 or 9.38%
This is one d(r); 2d(r) is double this, or 18.76%.
This means that, to a certainty of 2d(r)'s, if our large bets are two hands of $119.657 each and we play 400 trials, the risk "overlay" on our expectation (E) would be $8,980 ($119.657x400x.1876). Note that this risk overlay ($8,980) is exactly the same figure we got for one hand of $200; this is the key equivalency that demonstrates the validity of this result. Since we are playing two hands of $119.657 each, (E) is also doubled, to 4.0%, which means we would need a bankroll of $7,065.49 to be safe from trip-stake wipeout to a certainty of 2d(r)'s ($8,980-($119.657x400x.04)).
This figure for trip-stake risk ($7,065.49) is slightly lower than the figure we got for one-hand bets of $200 ($7,380) because the size of our per-hand bets has decreased by a factor of only 1.6714 ($200/$119.657), whereas our expectation (E) has effectively doubled from 2.0% to 4.0%. Of course, this disproportionate increase in (E) relative to risk is the main reason we spread to two hands in the first place. To further demonstrate the claimed equivalency, if we arbitrarily decrease (E) from its actual value of 4.0% to the value proportionate to the reduction in per-hand bet size (1.6714), we get a revised value for (E) of 3.343% (1.6714 x 2.0%). If we then adjust the $8,980 2d(r) risk overlay for this value of (E), we get a figure of $7,380 ($8,980-($119.657x400x.03343)) as the necessary bankroll to be safe from trip-stake wipeout to a certainty of 2d(r)'s. Note that this figure ($7,380) is exactly the same figure we got for the 2d(r) risk for one hand of $200.
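Here is the corresponding Python sketch for sd-proportional sizing, reproducing the $119.657 bet, the $8,980 overlay, and the roughly $7,065 trip stake (variable names are just labels):

from math import sqrt
y = 200 * 1.1225 / 1.8762                     # equation (13): d(x)/d(y) = (y)/(x)
overlay = y * 400 * 2 * 1.8762 / sqrt(400)    # 2d(r) overlay for two hands of y
print(round(y, 3), round(overlay), round(overlay - y * 400 * 0.04))   # 119.657 8980 7065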
What this all means is that the usual relationship for keeping risk constant when you spread to two hands, v(x)/v(y) = (y)/(x) is not valid for a constant number of trials (rounds), but only for a constant target win (W). It is therefore appropriate for the calculation of total playing stake, but not for the calculation of trip stakes (except for very long trips). For trip-stake calculations the proper relationship is d(x)/d(y) = (y)/(x). This relationship yields the exact same risk overlay in both cases, and is only very slightly conservative with respect to overall risk of trip-stake wipeout for trips of up to several thousand trials.
Although, for precisely equal risk, the number-of-trials "phase" differential defined by n*v(x)/v(y) always holds for bets sized by v(x)/v(y) = (y)/(x), once the number of trials passes the point of maximum short-term risk (defined in each case by the trial equal to (m^2)*v(n)/(2*E)^2, where (m) is the number of sd's considered), the difference between the shorter-term risk for the trip and the long-term risk at the limit narrows, and it eventually approaches zero at the limit. This means that for very long trips of many thousands of trials, where you will be bringing along virtually your entire stake, the n-hand (y) bets can be properly sized using v(x)/v(y) = (y)/(x), rather than d(x)/d(y) = (y)/(x), without a significant increase in risk of ruin. However, for the vast majority of (shorter) trips, to avoid a dangerously increased risk of trip-stake wipeout the n-hand (y) bets should be sized using d(x)/d(y) = (y)/(x). Alternatively, with equivalent short-term risk, if the n-hand (y) bets for such trips are sized using v(x)/v(y) = (y)/(x), then the trip stake should be increased by a factor f = {v(x)*d(y)}/{v(y)*d(x)} over the trip stake required for n-hand (y) bets sized using d(x)/d(y) = (y)/(x).
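For concreteness, here is a tiny Python check of that correction factor, using the two-hand numbers from this article:

f = (1.26 * 1.8762) / (1.76 * 1.1225)     # f = {v(x)*d(y)} / {v(y)*d(x)}
print(round(f, 4), round(7065.49 * f))    # 1.1966 8455, within rounding of the $8,453 above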
ADDENDUM
Several people who have read this article have asked me how I got the equation, above, defining the trial of maximum short-term risk: (m^2)*v(n)/(2*E)^2. It's a simple but instructive derivation, and for those interested it's included, below.
As shown in the article, the equation for expected result (R) for (N) trials, expectation (E), and (B) units bet per hand for (n) simultaneous hands, with a standard deviation per hand of sd(n), assuming a loss of (m) standard deviations is:
(1) R = B*E*N - B*m*sd(n)*sqrt(N)
Since we are looking for a minimum for (R) with respect to (N), we need to find the derivative dR/dN, and set it to zero for some N = N0:
(2) dR/dN = B*E - B*m*sd(n)/(2*sqrt(N))
Setting dR/dN = 0, and N = N0:
(3) B*E = B*m*sd(n)/(2*sqrt(N0))
(4) N0 = (m*sd(n))^2/(2*E)^2 or,
(5) N0 = (m^2)*v(n)/(2*E)^2
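For those who prefer a numerical check, here is a short Python sketch using the two-hand figures from the article (E = 4.0% per round, v(n) = 3.52, m = 2, and a nominal bet of one unit); worst_case is just a label for equation (1) of this addendum.

from math import sqrt

E, v, m, B = 0.040, 3.52, 2, 1.0    # two-hand example: E per round, v(n), m sd's, 1-unit bet

def worst_case(N):
    # Addendum equation (1): expected result assuming a loss of m standard deviations
    return B * E * N - B * m * sqrt(v) * sqrt(N)

N0 = m**2 * v / (2 * E)**2          # equation (5): 2200 trials for this example
print(round(N0, 1), round(worst_case(N0 - 200), 2), round(worst_case(N0), 2), round(worst_case(N0 + 200), 2))
# 2200.0 -87.81 -88.0 -87.83 (the worst-case result bottoms out at N0)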
And there you have it.
qed
Bryce Carlson
<<