Thanks for these examples; this is very helpful. If I understand the description of TD in particular, you're using the strategy described in Table B10 of BJA3, correct? I am almost, but not quite, able to reproduce your values: my CA can evaluate "user-defined" strategies like this, including e.g. standing on multi-card hard 16 vs. T, but the additional departure of standing *only* after splits would require some code changes. Leaving out that additional departure has only a small effect (i.e., I think the following is "almost" your TD strategy, except that post-split hard 16 vs. T is hit with 2 cards and stood with 3+ cards). Following are the exact expected return and EORs (as a fraction, not percentage, of initial wager):
Code:
E[X] = -7347769581814513817766698807563202595964/2215167626902928405093200851144936079126875
EOR[A] = -811200833345198412983928163283105851343/874011716737209846907521424261267296526250
EOR[2] = 13719468501548752857749982590651293131403031/19657397521136586666797064353060162766171888750
EOR[3] = 660607488875634224558917995367714966800127/786295900845463466671882574122406510646875550
EOR[4] = 18777140316753690879378570835279739836915964/16381164600947155555664220294216802305143240625
EOR[5] = 3888281490917116217617340335271365702861079/2656405070423863063080684372035157130563768750
EOR[6] = 42797883125837886449191199447714781408889/56682230453104344483267198249885129083540625
EOR[7] = 39743536040566670440241604251875546215872057/98286987605682933333985321765300813830859443750
EOR[8] = -322291245645322470232943939841030978605531/2808199645876655238113866336151451823738841250
EOR[9] = -773634316055270571866465880292379703210151/1820129400105239506184913366024089145015915625
EOR[T] = -4492590055762117480462106986344363416380399/4680332743127758730189777226919086372898068750
For comparison, following are the same calculations done in double precision, including all necessary digits to unambiguously "round-trip" the underlying 64-bit values:
Code:
std::cout << std::setprecision(17);
E[X] = -0.0033170264374474123
EOR[A] = -0.00092813496410931035
EOR[2] = 0.00069792903596727074
EOR[3] = 0.00084015125624524456
EOR[4] = 0.0011462640645019395
EOR[5] = 0.0014637381678756084
EOR[6] = 0.00075504938291479778
EOR[7] = 0.00040436213387715775
EOR[8] = -0.00011476792475138137
EOR[9] = -0.00042504357987414865
EOR[T] = -0.00095988689316105907
You can verify that these values, calculated with limited double precision and compared with the exact values above, are accurate to only roughly 12 digits, as you similarly note in your example.
My question is, how did you know that your values are accurate to ~12 digits, and *only* to 12 digits (i.e., that the low-order ~4 digits or so were *inaccurate*), without *exact* values to compare against? Did you just do the average EOR calculation with successively fewer digits until the result was zero (at least to within an ulp -- note that your average EOR *isn't exactly* zero, it's about -1/1.3e13)?

In other words, when double precision affords us roughly 16 (log10(2^53)) digits of precision, what justifies being okay with only 12 digits of accuracy, i.e. being willing to chalk up 4 digits of error *solely* to floating-point rounding?
Consider, for example, my CA from over 20 years ago. At the time I had just made updates to compute exact EVs for SPL1, but still with an approximate-but-fast algorithm for SPL3... and that version passed this "consistent with average EOR == 0" test to only roughly the 8 most significant digits. The lower 8 digits were in error... but why not claim that those lower 8 digits of inaccuracy were similarly due solely to cumulative rounding error, and not to more fundamental algorithm flaws? (I'm not claiming this, merely playing devil's advocate to illustrate my point about the potential benefit of being able to *eliminate* cumulative rounding error as a confounder.)