This is a good point, although I suspect it's actually the former (more on this shortly); in fact, note the "disclaimer" in the same linked/quoted write-up of my past analysis:
"(A word of caution: before anyone runs off quoting this as “the” formula for playing efficiency, note that these particular constants depend on all of the rule variations, number of decks, and penetration assumed at the outset of this discussion.)"
In other words, the provided "definition" of PE(v) = (v+0.004239)/0.001906 is only valid for 6D with 75% pen, S17, DOA, DAS, SPL3, NS... and even then, we still need to compute v, the actual expected return from using whatever playing strategy is under consideration, ranging from fixed basic strategy at one end to optimal (CDZ-) at the other end.
If you want to evaluate this proposed PE metric for, say, 2D with different rules, or 4D with yet another set of rules, or whatever, then three steps are needed:
1. Compute the expected return v_min for fixed basic strategy (the "lower bound" on reasonable achievable EV). Even this requires substantial computing resources, at least for a fixed burn card position (vs. fixed number of hands), to account for the cut-card effect, etc.
2. Compute the expected return v_max for optimal strategy (the "upper bound" on achievable EV). This is hard to do efficiently, and has been the primary goal of my CA over the last 20 years or so.
3. Compute the expected return v for the playing strategy being evaluated (e.g., Hi-Lo I18 indices, or Hi-Opt II full indices, etc.). The ability to compute this *exactly* for any sampled depleted shoe is a more recent addition to my CA.
Given these values, the proposed measure of playing efficiency is (v-v_min)/(v_max-v_min). In other words, how far does the evaluated strategy get you towards optimal (100% PE), with fixed basic strategy as the starting point (0% PE)?
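In code form, the proposed measure is just a normalization. In the sketch below, v_min and v_max are wired to the constants quoted earlier for the 6D game (i.e., v_min = -0.004239 and v_max - v_min = 0.001906), and the evaluated v is a made-up placeholder for illustration:

```python
def playing_efficiency(v, v_min, v_max):
    """Proposed PE: how far the evaluated strategy's overall expected
    return v gets you from fixed basic strategy (v_min, 0% PE) toward
    optimal play (v_max, 100% PE)."""
    return (v - v_min) / (v_max - v_min)

v_min = -0.004239            # fixed basic strategy (quoted 6D example)
v_max = v_min + 0.001906     # optimal (CDZ-) play, same game and rules
v     = -0.003               # hypothetical strategy under evaluation

print(round(playing_efficiency(v, v_min, v_max), 3))   # → 0.65
```

Note that the quoted linear formula PE(v) = (v + 0.004239)/0.001906 is exactly this normalization with those two constants baked in; the point is that the constants themselves are specific to that game.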
If this sounds like a lot of work, it is. But this proposed definition, I think, more closely reflects our intuition, what we really *want* to measure, whether it's computationally hard to do so or not. Griffin's approach of using effects of removal is a good approximation... but it's an approximation born of limitations in algorithms and computing resources, both of which have improved significantly in the intervening years. We can do better now. (We had almost exactly this same discussion some years back in one of the private forums, in that case focusing on betting correlation (BC), whose approximate formula suffers from the same problem. Amusingly, a forum search for posts by ericfarmer with keyword "drunk" will get you there.)
So to answer Three's question: why are my reported numbers in the referenced write-up different from those reported elsewhere, even for the same setup (number of decks, penetration, rules, etc.)? The short answer is that they are computed differently. Most reported figures use the calculation described by Griffin, based on single-card removals, effectively "linearizing" behavior which is decidedly non-linear. My figures use what I argue is a more useful and intuitive, but much more computationally expensive, method of calculation.
Eric
The very best discussion in print about the interaction of BC, PE, and IC can be found in Bryce Carlson's Blackjack for Blood, Chapter 5, pages 59-67. It is crystal clear, very well written, and perfectly accurate.
And no, Moses, of the three, insurance is surely the least important consideration, simply because it doesn't occur frequently enough. Dealer has an ace up 1/13 of the time (slightly higher in high positive counts), and TC >= +3 only about 9% of the time. Together, we insure only about 1 hand out of 145, and when the TC is right at +3, the edge is obviously minimal. By +4 or +5, it's greater, but then, clearly, the frequency is smaller still.
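For anyone who wants to check the arithmetic, here is the back-of-the-envelope version (the 9% frequency for TC >= +3 is taken from the paragraph above; the EV formula is just the 2:1 payoff arithmetic, treating the ace-up and TC events as independent for simplicity):

```python
# Insurance is a side bet paying 2:1: per unit staked it wins 2 with
# probability p (dealer's hole card is ten-valued) and loses 1 otherwise,
# so EV = 2p - (1 - p) = 3p - 1, which is fair exactly at p = 1/3.
def insurance_ev(p_ten):
    return 3 * p_ten - 1

print(insurance_ev(16 / 52))      # off the top: -1/13, never insure

# Frequency of taking insurance: ace up about 1/13 of hands, and
# TC >= +3 about 9% of the time (figures from the post above):
p_insure = (1 / 13) * 0.09
print(1 / p_insure)               # about one hand in 144-145
```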
Don
P.S. I know your insurance threshold is lower, for SD, but you're the only person on the planet who plays SD, so I'm writing for a wider audience.
"Suppose you have a max bet out of $1600. Perfect Insurance picks up two bets that regular insurance would've missed. That is a savings of $3,200. But you'd have to guess correctly on 32 minimum bets to achieve the same value."
You have a strange way of reasoning, as if "perfect insurance" means that you win every insurance bet you make!! The vast majority of the time, insuring with 100% efficiency or with 75% efficiency is going to lead to identical results. Naturally, knowing more is better than knowing less, but the gain from perfect insurance over counts that have, say, 75% IC can't be very much.
Don
I guess part of what I am saying is that "perfect" insurance can lose up to two-thirds of the time and still be the correct bet to make. A "regular" insurance index could make a bet you collect on that "perfect" insurance would tell you to skip, and vice versa. Your statement implies (I infer) that there is no overlap between "perfect" and "regular" indices, when in fact the overlap is substantial, 95+% in many cases.
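For what it's worth, the overlap point is easy to play with in a quick Monte Carlo sketch. This is only a toy model (a fixed depth in a shuffled 6-deck shoe, a +3 Hi-Lo index, and the exact 1/3 ten-density threshold for "perfect" insurance); the numbers it produces are illustrative, not a reproduction of anyone's published figures:

```python
import random

def insurance_agreement(trials=20_000, decks=6, cards_dealt=156, seed=1):
    """Fraction of random depleted shoes on which a 'perfect' insurance
    rule (insure iff remaining ten density > 1/3, i.e., positive EV) and
    a Hi-Lo index rule (insure iff true count >= +3) agree."""
    rng = random.Random(seed)
    # Ranks 1..13, four of each per deck; 1 = ace, 10-13 = ten-valued.
    shoe = [rank for rank in range(1, 14) for _ in range(4 * decks)]
    agree = 0
    for _ in range(trials):
        rng.shuffle(shoe)
        seen, remaining = shoe[:cards_dealt], shoe[cards_dealt:]
        tens = sum(1 for r in remaining if r >= 10)
        perfect = tens / len(remaining) > 1 / 3
        # Hi-Lo tags on cards seen: 2-6 -> +1, tens and aces -> -1.
        running = sum(+1 if 2 <= r <= 6 else
                      -1 if (r == 1 or r >= 10) else 0 for r in seen)
        hilo = running / (len(remaining) / 52) >= 3   # true count test
        agree += (perfect == hilo)
    return agree / trials

print(insurance_agreement())   # agreement fraction between the two rules
```

Most of the agreement comes from the (very common) hands where both rules say "don't insure"; the interesting disagreements live near the +3 threshold.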