I made a minor update to my blackjack CA software (on GitHub), to support computing probabilities and expected values to arbitrary (rational) precision, instead of the usual 64-bit double (floating-point) precision.

For example, playing the usual "realizably optimal" CDZ- strategy with 6D, S17, DOA, DAS, SPL3, LS, the exact expected return as a fraction of initial wager is:

Code:
      636491186124390779995774550739146028101753
  - ---------------------------------------------
    192719583540554771243108474049609438884038125

or about -0.33%.

This is mostly just a parlor trick of purely academic interest (especially for the probably-unrealistic specific rules in this example)... but it turns out to have some practical utility for analysis. I made these changes motivated by a couple of parallel discussions about CA algorithms for pair splitting. Suppose we have a new algorithm for computing expected values, perhaps using some new post-split playing strategy, or maybe just a more efficient/faster algorithm, etc. How can we check whether our algorithm is "exact"?

Recall that the true count theorem-- or really the "extended" version of it proved by Thorp to apply to non-linear functions like exact expected return-- essentially says that "the average effect of removal must be zero." That is, if we compute the expected return from playing CDZ- from a full 6-deck shoe (as reported below by my CA, for example, using the above rules):

-0.0033026803733750346

then we should get exactly the same result if we instead compute the average of the expected return after burning the top card of the shoe, but still playing the same "full-shoe" strategy (again as reported below by my CA):

-0.0033026803733748255

These aren't exactly the same, but they should be. So how can we tell if this discrepancy is because of a bug or other flaw in the algorithm, as opposed to simple rounding error due to working in limited double precision arithmetic?
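To see the identity at work without a full blackjack CA, consider a hypothetical toy game: draw one card from a 6-deck shoe and receive a payoff depending only on its rank. The payoffs and the `ev` helper below are made up purely for illustration, but the "average effect of removal is zero" property holds exactly in rational arithmetic, with no rounding tolerance needed:

```python
from fractions import Fraction

# Toy model (not blackjack): draw one card from a 6-deck shoe and receive
# a payoff depending only on its rank. 24 each of ranks ace..9, plus 96
# ten-valued cards.
counts = [24] * 9 + [96]
payoff = [Fraction(k, 7) for k in range(-4, 6)]  # arbitrary rational payoffs

def ev(counts):
    """Exact expected payoff of drawing one card from this shoe."""
    total = sum(counts)
    return sum(Fraction(c, total) * p for c, p in zip(counts, payoff))

full_shoe = ev(counts)

# Average, over every possible burn card, of the EV of the depleted shoe,
# weighted by the probability of burning that card.
total = sum(counts)
burned_avg = sum(
    Fraction(c, total) * ev(counts[:b] + [c - 1] + counts[b + 1:])
    for b, c in enumerate(counts) if c > 0
)

# In exact rational arithmetic the two agree bit-for-bit, not merely to
# within rounding error.
assert full_shoe == burned_avg
```

With doubles in place of `Fraction`, the same comparison would only hold approximately, leaving exactly the ambiguity described above: is a small discrepancy a bug, or just rounding?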

I "typedef"ed the numeric data type in my CA code, to default to double, but to support arbitrary-precision rationals instead. Using the latter, I verified that we always get the exact result above, both off the top of the shoe as well as averaged across possible burn cards (and a few other more complex removal strategies as well).

In other words, if we asked some other analyst to compute the exact return using CDZ- with the above rules, then at least in principle, they should be able to verify exact agreement with the above fraction, if they also used arbitrary-precision rational arithmetic in the implementation of their algorithm... despite possibly disagreeing with the double-precision values above in low-order bits, due to the various rounding error effects of differing order of operations in their (otherwise equivalent) code, using a different compiler and/or a different hardware architecture, etc.
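That order-of-operations sensitivity is easy to demonstrate on its own. The sketch below (an illustration, not the CA) sums the same rational terms in two different orders: in exact rational arithmetic the results are necessarily identical, while the double-precision sums are free to differ in their low-order bits.

```python
from fractions import Fraction
import random

# The same set of rational terms, in two different orders.
terms = [Fraction(1, n) for n in range(1, 51)]
shuffled = terms[:]
random.Random(0).shuffle(shuffled)

# Rational arithmetic: summation order provably cannot matter.
exact_a = sum(terms)
exact_b = sum(shuffled)
assert exact_a == exact_b

# Double precision: each partial sum is rounded, so the order of
# operations can (and often does) change the low-order bits.
double_a = sum(float(t) for t in terms)
double_b = sum(float(t) for t in shuffled)
double_diff = double_a - double_b   # typically a few ulps, not guaranteed zero
```

This is exactly why two correct implementations can disagree in double precision while agreeing perfectly in rationals.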

I wrote a related blog post on this subject, with more details and a simpler example demonstrating the issue and potential problems.
