Got it!
Yeah, in hindsight, I did ask a stupid question here!
I think I have a better understanding of what is going on here. CDZ* is much like CDZ-, except that certain rules governing CDZ- are relaxed. That relaxation produces 'exceptions', much like doubling [2 6] vs 5 in the given example. This is akin to taking the TD strategy and saying: "Follow this strategy, except stand on 3+-card hard 16 vs T." If I am not mistaken, standing on all multi-card hard 16s vs a Ten yields a small improvement for the basic strategist, no? But even then, not all multi-card hard 16s should be stood on; some are still hit! What's nice is that we can delineate each of these exceptions, see how much EV each one gains us, and choose which to use and which to ignore. Much like post-split strategy, we can choose whether or not to include doubling [2 6] vs 5!
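To make the idea concrete, here is a minimal sketch (all names are mine, the EV figure is a placeholder, and softness/aces are ignored for brevity) of treating composition-dependent departures as an explicit exception list layered on a total-dependent base, so each exception can be toggled and priced individually:

```python
BASE_TD = {(16, 10): 'H'}   # TD basic strategy: hit hard 16 vs Ten

# Each entry: (description, predicate(cards, upcard), action, assumed EV gain).
EXCEPTIONS = [
    ("3+ card hard 16 vs T: stand",
     lambda cards, up: len(cards) >= 3 and sum(cards) == 16 and up == 10,
     'S',
     4e-5),   # placeholder gain, not a computed value
]

def decide(cards, up, enabled=frozenset(range(len(EXCEPTIONS)))):
    """Play `cards` vs upcard `up` using base TD plus any enabled exceptions."""
    for i, (_, pred, action, _) in enumerate(EXCEPTIONS):
        if i in enabled and pred(cards, up):
            return action
    return BASE_TD.get((sum(cards), up), 'H')

def total_gain(enabled):
    """EV improvement over plain TD from the chosen exception set."""
    return sum(EXCEPTIONS[i][3] for i in enabled)
```

With this layout, "choosing which exceptions to use" is literally picking the `enabled` set, e.g. `decide([6, 6, 4], 10)` stands while `decide([6, 6, 4], 10, enabled=frozenset())` falls back to hitting.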
With that said, it seems (correct me if I am *still* wrong) that what you are looking for is the set of post-split exceptions to include that maximizes EV for our zero-memory CD strategy. The issue is that you don't know which ones to use, as there may be multiple such strategies (hence CDZ*), and a full combinatorial analysis would be too much work to find out.
Storing/caching dealer probabilities is definitely an optimization technique worth using. (I know Eric caches his, and now I know you do too!)
With that said, I have been pondering whether we can speed up the computation of dealer probabilities even further. What I noticed is that in some instances a CA recomputes specific probabilities multiple times over for different dealer-probability subsets, so the next optimization would be to store/cache pre-computed shoe-subset probabilities.
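A sketch of the caching idea (my own toy version, not anyone's actual CA): key the memoized dealer recursion on the remaining shoe composition, so any repeated (total, softness, shoe) state is looked up instead of recomputed. Assumptions: S17, rank 1 = ace and rank 10 = ten/face, and no conditioning on the dealer not having blackjack.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def dealer_dist(total, soft, shoe):
    """Distribution over dealer final totals (17..21; bust recorded as 22).

    `shoe` is a tuple of per-rank counts so every argument is hashable and
    the whole state can serve as a cache key.
    """
    if soft and total > 21:
        return dealer_dist(total - 10, False, shoe)   # demote an ace 11 -> 1
    if total > 21:
        return ((22, 1.0),)                           # hard bust
    if total >= 17:                                   # S17: stand on all 17s
        return ((total, 1.0),)
    n = sum(shoe)
    out = {}
    for r in range(1, 11):
        if shoe[r - 1] == 0:
            continue
        p = shoe[r - 1] / n
        next_shoe = tuple(c - (i == r - 1) for i, c in enumerate(shoe))
        if r == 1 and not soft and total + 11 <= 21:
            nt, ns = total + 11, True                 # count the ace as 11
        else:
            nt, ns = total + r, soft
        for t, q in dealer_dist(nt, ns, next_shoe):
            out[t] = out.get(t, 0.0) + p * q
    return tuple(sorted(out.items()))
```

For example, `dealer_dist(10, False, (4,) * 9 + (15,))` gives the S17 outcome distribution for a Ten upcard against a single deck with that Ten removed; the returned tuple is immutable on purpose, since cached values are shared between callers.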
A question I asked Eric a while back:
I speculate that "yes, we can." How much of an improvement is up for debate. "The genesis thought is this: There is a corresponding collection of dealer and player hand subsets that exists for some fixed deck set. That is, for an infinite deck, there are many probable dealer hand subsets for player hand [T6] and player hand [AA245] alike. Now, let's assume that there are a total of 3072 unique player hand subsets and 2200 unique dealer (S17) hand subsets. The product of the two is 6.76*10^6 unique evaluations. Now, do certain evaluations repeat? Say we are evaluating player hand [66] vs dealer [T]. We will see dealer hands like [7T], [8T], [9T], [TT], and [AT]. These same hands can be seen with player hand [56], [44], [23], etc. So far, we have only shaved the 6.76*10^6 down by a few evaluations. Can we go further?"
The first thing that came to mind: what is the connection between player hand subsets and dealer hand subsets when computing standing expectation? It is the given deck subset, right? So instead of recalculating all 2200 dealer hand probabilities 3072 times over, is it possible to reduce that to a much lower number, say, fewer than 10^6?
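One way to probe that question is to count how many node expansions a naive, uncached dealer recursion performs versus how many distinct (total, soft, remaining-shoe) states actually exist; the gap is exactly the work a shoe-subset cache avoids. The setup below is an illustrative assumption on my part: single deck, S17, dealer upcard Ten, no blackjack conditioning.

```python
def explore(total, soft, shoe, seen, counter):
    counter[0] += 1                       # one naive (uncached) expansion
    if soft and total > 21:
        total, soft = total - 10, False   # demote an ace from 11 to 1
    if total >= 17:                       # S17 stand, or hard bust
        return
    for r in range(1, 11):                # 1 = ace, 10 = ten/face
        if shoe[r - 1] == 0:
            continue
        if r == 1 and not soft and total + 11 <= 21:
            nt, ns = total + 11, True
        else:
            nt, ns = total + r, soft
        next_shoe = tuple(c - (i == r - 1) for i, c in enumerate(shoe))
        seen.add((nt, ns, next_shoe))     # distinct states a cache would hold
        explore(nt, ns, next_shoe, seen, counter)

seen, counter = set(), [0]
explore(10, False, (4,) * 9 + (15,), seen, counter)   # Ten upcard already removed
print(counter[0], "naive expansions vs", len(seen), "distinct states")
```

Even in this tiny case the naive count exceeds the distinct-state count, because draw orders like 2-then-3 and 3-then-2 land on the same (15, hard, shoe) state; over a full CA sweep of every player subset, that kind of collapse is where I'd expect the savings to come from.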