Yes, you are right. My program does not take into account A,AA3, only AA3.
I will check this.
Another back-track for me on this. Further investigation of my algorithm shows that my software indeed only examined A,A vs ? as the initial hand. Because playing split aces is not allowed by the rules, examination of A; A,A vs ? never occurred.
Somehow Cac knew this psychically.
Your psychic powers have failed you this time
Each time the hand A,A vs ? is examined during the simulation for this game, anywhere from zero to 22 other aces may have been removed. This is the same as what occurs during actual play of the game, and the results for playing A,A vs ? are the amalgamation of these results. So the contribution of A; A,A vs ? is actually represented (a back-back-track or re-track on my part?) along with all of the other possibilities. Removing an ace from the shoe would mean that anywhere from zero to 21 other aces may have been removed, which does not represent any useful situation.
Just curious. How do the CA algorithms handle the possibility that different numbers of aces may have actually been removed when playing this hand? In general, how do they handle the vast number of remaining deck compositions which are possible when playing any hand?
Computing splits efficiently requires using a fixed strategy. Computing every possible strategy decision encountered in the course of splitting a pair can result in an enormous number of iterations, each of which requires calculating dealer probabilities.
The split strategy known as CDZ- avoids computing post-split strategies altogether by applying pre-split strategies to split hands.
CD means composition dependent
Z means zero memory optimal strategy accounting for cards that have been played
- means pre-split
I came up with a strategy which Eric Farmer calls CDP1. CDP1 uses the optimal strategy of the first split hand and applies it to all subsequent split hands. It is a fixed strategy because all split hands employ the same strategy. I think what Cacarulo is suggesting is something like this where split strategy indexes could potentially differ from indexes using pre-split strategy for splits. For a lot of decks there probably wouldn't be much difference.
In the course of splitting, varying numbers of pair cards are removed. A fixed strategy allows a very accurate estimate of split EV using a relatively small number of shoe states rather than having to consider all of the possible iterations.
The first split algorithm I came up with was recursive and continuously removes pair cards until no more splits are allowed. The theory was that hand1 and hand2 of a split would have equal EV if each could be resplit the same number of times. I proceed as if hand1 EV = hand2 EV and then repair the erroneous EV.
http://www.bjstrat.net/splitAlgorithm.html
Thanks k_c but, unless I'm reading it wrong, your response addresses improving the performance of the computation of basic strategy using some reasonable assumptions about how the split hands will be played. I am familiar with all of this. But computing basic strategy has the property that you know the exact composition of the remaining cards at all times and so it is naturally suited to CA methods.
What I'm wondering about is the CA methods used by Cac, Zen and Gramazeka (and you? and others?) for computing indices. Cac speaks about removing specific cards and Zen references exact EOR numbers, but these only represent the situation off the top of the shoe. However, each true count is represented by a vast number of possible remaining deck compositions multiplied by a variety of penetrations. Cac then goes on to reference an assumed level of penetration (4.5/6 in this case).
I know from experimenting with my own simulation algorithms that the final index numbers are sensitive to all of these factors. For each holding, the expected frequency of occurrence varies with penetration and with the expected frequency of occurrence of remaining deck compositions, from "balanced" to "extreme" for lack of better terms, and it is necessary to collect samples in numbers which reflect these frequencies. Using simulation makes this easy. I'm wondering how all of this comes together when using CA methods to calculate indices.
If anyone is inclined to share insights, I suggest a new thread in the Advanced Strategies, Theory and Math forum.
I can only speak for myself. I can try to illustrate, approximately, A-A versus 6, h17. This does not remove an additional ace, which would be the case if A-A resulted from splitting aces.
Basically,
1. Compute rank probabilities based upon the running count of a given counting system and the number of cards remaining, allowing for cards known to be specifically removed. This can immediately be used to find the RC/cards remaining where insurance is +EV. However, for other indexes only the player's cards are specifically removed, but not the up card, in order to adapt to the way my CA works.
Code:
Count tags {1,-1,-1,-1,-1,-1,0,0,0,1}
Player hand composition: 2, 0, 0, 0, 0, 0, 0, 0, 0, 0: Soft 12, 2 cards
Decks: 6 (possible input for cards remaining: 1 to 312)
Cards remaining before up card (current = 156, no input = no change):
Initial running count (full shoe): 0
Running count (before up card is dealt, no input defaults to 0): -4
No subgroup (removals) are defined
Number of subsets for above conditions: 37
Prob of running count -4 with above removals from 6 decks: 4.78680e-002
p[1]  0.069183
p[2]  0.079342
p[3]  0.079342
p[4]  0.079342
p[5]  0.079342
p[6]  0.079342
p[7]  0.077406
p[8]  0.077406
p[9]  0.077406
p[10] 0.30189
Press any key to continue:
2. Create a shoe for the rank probs/cards remaining and teach my CA to compute using this shoe to get reasonable estimates of EVs for the input counting system. (Here RC = -4 before the up card.)
Code:
Number of decks: 6
Count tags {1,-1,-1,-1,-1,-1,0,0,0,1}
Player hand composition: 2, 0, 0, 0, 0, 0, 0, 0, 0, 0: Soft 12, 2 cards
After player hand is dealt -
Cards remaining: 156, Running count: -4
Subgroup removals: No subgroups are defined
Shoe comp (A-5):  {10.7925, 12.3774, 12.3774, 12.3774, 12.3774}
Shoe comp (6-10): {12.3774, 12.0753, 12.0753, 12.0753, 47.0946}
After up card is dealt -
Cards remaining: 155
Running count (up card 1 to 10): {-5,-3,-3,-3,-3,-3,-4,-4,-4,-5}

Up card    Stand      Hit   Double  Split 1  Split 2  Split 3   Surr  Strat
      1  -72.406  -34.583  -71.762  -23.633                            split
      2  -29.435    8.442   -7.219   46.241                            split
      3  -25.528   10.667   -0.754   50.887                            split
      4  -21.226   12.863    5.990   55.808                            split
      5  -17.099   15.612   12.667   60.977                            split
      6  -12.900   18.219   18.509   65.142                            split
      7  -47.170   16.954  -17.892   45.021                            split
      8  -50.909   10.044  -31.606   33.259                            split
      9  -53.371    0.711  -44.945   21.492                            split
     10  -56.674  -11.666  -52.851    7.861                            split

Overall hand EV vs all upcards: 30.6019
Press c or C for EV conditioned on no dealer blackjack, any other key to exit
3. Go through the range of all possible running counts and record RC indexes. I can only do this for one cards-remaining value at a time.
Code:
____________________________________ h17 ___________________________________
Count tags {1,-1,-1,-1,-1,-1,0,0,0,1}
Composition dependent indices for hand, rules, number of decks, and pen
Player hand composition: 2, 0, 0, 0, 0, 0, 0, 0, 0, 0: Soft 12, 2 cards
Decks: 6 (possible input for cards remaining: 1 to 312)
Cards remaining before up card = 156
No subgroups are defined

            2       3       4       5       6       7       8       9       T       A
Stand       h       h       h       h       h       h       h       h       h       h
Double    >=38    >=22    >=11     >=3    >=-3       -       -       -       -       -
Pair    p>=-36  p>=-37  p>=-39  p>=-41  p>=-45  p>=-30  p>=-26  p>=-24  p>=-25  p>=-16
LS          -       -       -       -       -       -       -       -       -       -
ES          -       -       -       -       -       -       -       -       -       -
Press any key to continue
4. Interpolate. Compute a more accurate true count that applies to the input cards remaining, if desired.
Code:
Let up card = 6 and display hit and dbl EV
RC = -3, hit EV = .18219, dbl EV = .18509, diff =  .00290
RC = -4, hit EV = .18017, dbl EV = .17833, diff = -.00184
interpolate: -3.611814 (approx RC where hit EV = dbl EV)
TC = 52 * (-3.611814) / 155
(156 - 1 to allow for up card because this is relative to how my CA computes)
TC = -1.2 for 155 cards remaining, RC = -3 after up card of 6
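For concreteness, here is a minimal sketch (my own illustration, not k_c's actual code) of the interpolation shown in the output above: linearly interpolate the RC at which hit EV and double EV cross, then convert that RC to a true count for the given cards remaining.
Code:
#include <cstdio>

int main()
{
    // Values taken from the output above: up card 6, 155 cards remaining after the up card.
    double rcHi = -3.0, diffHi =  0.00290;   // dbl EV - hit EV at RC = -3
    double rcLo = -4.0, diffLo = -0.00184;   // dbl EV - hit EV at RC = -4

    // RC at which the difference crosses zero (linear interpolation)
    double rcCross = rcHi - diffHi * (rcHi - rcLo) / (diffHi - diffLo);

    double cardsRemaining = 155.0;           // 156 - 1 to allow for the up card
    double tc = 52.0 * rcCross / cardsRemaining;

    printf("RC crossover ~ %.6f, TC ~ %.1f\n", rcCross, tc);   // ~ -3.611814 and -1.2
    return 0;
}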
I would guess that Cac has multiple methods and Zen uses algebraic approximation using EORs. Not sure what Gramazeka does.
Hope this is helpful.
k_c
It is true that the generation of indexes is a subject to which I have dedicated many years, and for this reason I have developed five perfectly valid methods or algorithms to generate them. Of course, some algorithms are more accurate than others. Among the most accurate I have are the simulation (Monte Carlo) method, which also allows me to generate RA-indices, and two others based on combinatorial analysis that allow me to obtain the most accurate EM-indices. The difference between these last two is that one uses a Gaussian approximation to calculate the probability of a RC as a function of depth, while the other calculates it exactly.
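A minimal sketch of my reading of such a Gaussian approximation (not Cacarulo's actual code): for a balanced count, the running count after d cards have been dealt from an N-card shoe has mean 0 and variance d * s2 * (N - d) / (N - 1), where s2 is the per-card variance of the tags over the full shoe, so P(RC = r) at that depth can be approximated with a continuity-corrected normal distribution.
Code:
#include <cmath>
#include <cstdio>

// Normal CDF via the complementary error function.
static double Phi(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// Gaussian approximation of P(RC = r) after d cards dealt from an N-card shoe,
// for a balanced count whose per-card tag variance over the full shoe is s2.
static double probRCGaussian(int r, int d, int N, double s2)
{
    double sd = std::sqrt((double)d * s2 * (N - d) / (N - 1));
    return Phi((r + 0.5) / sd) - Phi((r - 0.5) / sd);   // continuity correction
}

int main()
{
    // Example: the count used earlier in the thread, tags {1,-1,-1,-1,-1,-1,0,0,0,1},
    // has 40 non-zero tags per 52 cards, so s2 = 40/52.  Six decks: N = 312.
    // k_c's exact subset enumeration above reports 4.78680e-002 for RC = -4 with
    // 156 cards remaining; this crude figure lands in the same neighborhood even
    // though it ignores the two specifically removed aces.
    printf("P(RC = -4) ~ %.4f\n", probRCGaussian(-4, 156, 312, 40.0 / 52.0));
    return 0;
}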
Among the three methods mentioned as the most accurate, penetration is taken into account, which allows generating an index for a given pen such as 4.5/6. I could even generate an index for a given range, say between 156 and 78 cards. Now, the point of all this is that my best index for a certain play must coincide with the one obtained by a commercial program like CVDATA or SBA. With Norman we have done exhaustive comparisons between his and mine and have had no discrepancies at all.
Going back to the s12v3 topic (S17 and H17), there are some differences like the ones I already mentioned, and I see that instead of looking for where the problem may be, the discussion focuses on the methodology I use. I do not intend to enter into that discussion; instead I can offer my results based on simulation and on combinatorial analysis. Both agree.
1) By simulation (4.5/6, TC floored and DR calculated to the nearest half)

a) s12v3 (removing two aces and a three)

S17 ==> index = +8
Code:
9  0.13878383731517  0.95298268109645  0.15605379110787  3.84169861524341   0.01726995379269   20510825
8  0.13314496224470  0.95165358021652  0.13821013936613  3.83753694570823   0.00506517712142   54638958
7  0.12866821974899  0.95065857856909  0.12349410668717  3.83467649711386  -0.00517411306182   73353055
6  0.12352655451792  0.94936534028024  0.10572939340757  3.83057766564323  -0.01779716111035  175473323
5  0.11931293657463  0.94822023537866  0.09073189214944  3.82704646259109  -0.02858104442519  242376517
H17 ==> index = +8
Code:
9  0.14031439495973  0.95278975857870  0.16045575933684  3.84210503478041   0.02014136437710   20510825
8  0.13457313735741  0.95144724392438  0.14245798757729  3.83797655877698   0.00788485021987   54638958
7  0.12999822570444  0.95044181868090  0.12761557102155  3.83514551643418  -0.00238265468289   73353055
6  0.12472556298486  0.94914074773634  0.10964141825706  3.83108682565953  -0.01508414472780  175473323
5  0.12039505048255  0.94798496093580  0.09448997899413  3.82762034657013  -0.02590507148842  242376517

b) s12v3 (removing three aces and a three)

S17 ==> index = +8
Code:
9  0.14063114000671  0.95317559014766  0.16290075003612  3.84183315214897   0.02226961002942   18174858
8  0.13480241717954  0.95193211784268  0.14439017450518  3.83792959763171   0.00958775732564   48979233
7  0.13064566353950  0.95087697457766  0.12978466867655  3.83467020906411  -0.00086099486294   65611309
6  0.12524759406831  0.94961221583436  0.11244530093101  3.83065818120981  -0.01280229313731  158336790
5  0.12109088604987  0.94849882673876  0.09739390777932  3.82737340452408  -0.02369697827055  218408733
H17 ==> index = +7
Code:
9  0.14214493450238  0.95299143465110  0.16713792206795  3.84220971630150   0.02499298756557   18174858
8  0.13621889505701  0.95174567147673  0.14852012892893  3.83835549241859   0.01230123387191   48979233
7  0.13193948012834  0.95067934096544  0.13376282433262  3.83513744558884   0.00182334420427   65611309
6  0.12642562098171  0.94939865207574  0.11622557208593  3.83117075949310  -0.01020004889577  158336790
5  0.12216189633773  0.94827271856387  0.10101586002058  3.82790783370370  -0.02114603631715  218408733

2) By CA (4.5/6, TC floored and DR calculated to the nearest half)

a) s12v3 (removing two aces and a three)

S17 ==> index = +8
Code:
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
|     TC     |      Probability      |        Standing       |        Hitting        |        Doubling       |       Splitting       | 1 | 2 |
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
|     9      |   0.00092539067698442 |  -0.12917022205199663 |   0.14011569055336448 |   0.15957879423090041 |   0.79585853709522969 |   | P |
|     8      |   0.00247047587046877 |  -0.14330615145561731 |   0.13426415395847002 |   0.14120409411534213 |   0.76824582210859671 |   | P |
|     7      |   0.00325427764875370 |  -0.15467241478156299 |   0.12981883444506226 |   0.12642391133666261 |   0.74504830762024199 |   | P |
|     6      |   0.00779282421421669 |  -0.16821356731640061 |   0.12460971566985264 |   0.10885443822433380 |   0.71722305067637682 |   | P |
|     5      |   0.01061278454380245 |  -0.18015164631584346 |   0.12026917144858326 |   0.09329868963364041 |   0.69170924592385552 |   | P |
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
H17 ==> index = +8
Code:
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
|     TC     |      Probability      |        Standing       |        Hitting        |        Doubling       |       Splitting       | 1 | 2 |
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
|     9      |   0.00092539067698442 |  -0.12386813320699951 |   0.14172407755382635 |   0.16402194348223290 |   0.79588400213501842 |   | P |
|     8      |   0.00247047587046877 |  -0.13797529638884171 |   0.13576659253706855 |   0.14550253428373039 |   0.76822783179372156 |   | P |
|     7      |   0.00325427764875370 |  -0.14933683438832471 |   0.13121595198772082 |   0.13057780778124078 |   0.74499259553942065 |   | P |
|     6      |   0.00779282421421669 |  -0.16286842424049261 |   0.12588648167164126 |   0.11283928371851466 |   0.71712039312600928 |   | P |
|     5      |   0.01061278454380245 |  -0.17481572041229607 |   0.12142102483124835 |   0.09710879547536223 |   0.69156229635012800 |   | P |
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+

b) s12v3 (removing three aces and a three)

S17 ==> index = +7
Code:
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
|     TC     |      Probability      |        Standing       |        Hitting        |        Doubling       |       Splitting       | 1 | 2 |
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
|     9      |   0.00082329091908529 |  -0.12566257170554379 |   0.14200507222817793 |   0.16606312627459091 |   0.80573680471920150 |   | P |
|     8      |   0.00222447010469417 |  -0.13981056127953906 |   0.13612228083140540 |   0.14768136602602067 |   0.77811592606438951 |   | P |
|     7      |   0.00292527393256047 |  -0.15113381460414851 |   0.13167269074469948 |   0.13297135589257120 |   0.75501869171403013 |   | P |
|     6      |   0.00706867819074979 |  -0.16469380021947616 |   0.12642803546201123 |   0.11538741587188141 |   0.72717520669471736 |   | P |
|     5      |   0.00961011705001321 |  -0.17660130231233276 |   0.12207341559898841 |   0.09988449383435030 |   0.70174293460253134 |   | P |
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
H17 ==> index = +7
Code:
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
|     TC     |      Probability      |        Standing       |        Hitting        |        Doubling       |       Splitting       | 1 | 2 |
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
|     9      |   0.00082329091908529 |  -0.12056852028635975 |   0.14357563899021553 |   0.17034709524296529 |   0.80574153048601871 |   | P |
|     8      |   0.00222447010469417 |  -0.13468802509635666 |   0.13759214609464263 |   0.15182762543794312 |   0.77807967621833485 |   | P |
|     7      |   0.00292527393256047 |  -0.14600618686184486 |   0.13304265662927098 |   0.13698030253569818 |   0.75494713452083662 |   | P |
|     6      |   0.00706867819074979 |  -0.15955609516394698 |   0.12768346310638320 |   0.11923524917851459 |   0.72705940461724217 |   | P |
|     5      |   0.00961011705001321 |  -0.17147178422333267 |   0.12321013574974345 |   0.10356604455407128 |   0.70158562624762077 |   | P |
+------------+-----------------------+-----------------------+-----------------------+-----------------------+-----------------------+---+---+
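To show how such an index is read off these tables, here is a minimal sketch (my own illustration, not Cacarulo's code) using the hitting and doubling EVs from the last table above: the doubling index is the lowest TC at which doubling is at least as good as hitting.
Code:
#include <cstdio>

struct Row { int tc; double hitEV, dblEV; };

int main()
{
    // TC, hitting EV, doubling EV, copied from the last CA table above.
    Row rows[] = {
        { 5, 0.12321013574974345, 0.10356604455407128 },
        { 6, 0.12768346310638320, 0.11923524917851459 },
        { 7, 0.13304265662927098, 0.13698030253569818 },
        { 8, 0.13759214609464263, 0.15182762543794312 },
        { 9, 0.14357563899021553, 0.17034709524296529 },
    };

    // Scan upward from the lowest TC for the first point where doubling wins.
    for (const Row& r : rows)
        if (r.dblEV >= r.hitEV) {
            printf("double at TC >= %+d\n", r.tc);   // prints: double at TC >= +7
            break;
        }
    return 0;
}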
Sincerely,
Cac
I don't know if I will explain this very well.
I try to test my methods by trying them on a problem with an obvious answer. Use the insurance count (tag non-tens as -1 and tens as +2; tag signs are relative to what remains in the shoe) as the basis for a question.
For this count, here is data listing the minimum RC values for a single deck where prob(ten) >= 1/3, which we should agree on.
Code:
Count tags {-1,-1,-1,-1,-1,-1,-1,-1,-1,2}
Decks: 1
Insurance Data (without regard to hand comp)
No subgroup (removals) are defined

**** Player hand: x-x ****

Cards   RC   TC ref
   48    0     0.00
   47    1     1.11
   46    2     2.26
   45    0     0.00
   44    1     1.18
   43    2     2.42
   42    0     0.00
   41    1     1.27
   40    2     2.60
   39    0     0.00
   38    1     1.37
   37    2     2.81
   36    0     0.00
   35    1     1.49
   34    2     3.06
   33    0     0.00
   32    1     1.63
   31    2     3.35
   30    0     0.00
   29    1     1.79
   28    2     3.71
   27    0     0.00
   26    1     2.00
   25    2     4.16
   24    0     0.00
   23    1     2.26
   22    2     4.73
   21    0     0.00
   20    1     2.60
   19    2     5.47
   18    0     0.00
   17    1     3.06
   16    2     6.50
   15    0     0.00
   14    1     3.71
   13    2     8.00
   12    0     0.00
   11    1     4.73
   10    2    10.40
    9    0     0.00
    8    1     6.50
    7    2    14.86
    6    0     0.00
    5    1    10.40
    4    2    26.00
    3    0     0.00
    2    1    26.00
    1    2   104.00
Using TC as the index metric, obvious answer is buy insurance when TC > 0. Also acceptable, buy insurance when TC >= 0.
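As a cross-check, here is a minimal sketch (my reconstruction, not k_c's program) that reproduces the listing above. With the stated tags applied to the cards that remain in the shoe, t tens among n remaining cards give RC = 2*t - (n - t) = 3*t - n, so prob(ten) >= 1/3 is exactly RC >= 0; the minimum qualifying RC for n cards is therefore 3*ceil(n/3) - n, and the TC column is 52*RC/n.
Code:
#include <cstdio>

int main()
{
    printf("Cards   RC     TC\n");
    for (int n = 48; n >= 1; --n) {          // cards remaining, as in the listing above
        int tMin = (n + 2) / 3;              // smallest t with t/n >= 1/3
        int rc   = 3 * tMin - n;             // cycles 0, 1, 2 down the listing
        printf("%5d  %3d  %8.2f\n", n, rc, 52.0 * rc / n);
    }
    return 0;
}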
For any of the above data points I can get this answer by interpolating to find the RC when prob(ten) = 1/3, using 27, 26, and 25 cards remaining as an example.
27 cards (no interpolation necessary since prob(ten) = 1/3)
RC = 0, prob(ten) = 1/3
TC = 52*0/27 = 0
26 cards single deck insurance (insurance count)
RC = +1, prob(ten) = 9/26
RC = -2, prob(ten) = 8/26
interpolate: RC = 0, prob(ten) = (8 2/3)/26
TC = 52*0/26 = 0
25 cards single deck insurance (insurance count)
RC = +2, prob(ten) = 9/25
RC = -1, prob(ten) = 8/25
interpolate: RC = 0, prob(ten) = (8 1/3)/25
TC = 52*0/25 = 0
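The same interpolation, written out as a small sketch (my own illustration, not k_c's code): given prob(ten) at two bracketing running counts, linearly interpolate the RC at which prob(ten) = 1/3 and convert it to a TC for that many cards remaining.
Code:
#include <cstdio>

// Linear interpolation of the RC at which prob(ten) reaches 1/3,
// given prob(ten) at two bracketing running counts.
static double rcAtOneThird(double rc1, double p1, double rc2, double p2)
{
    return rc1 + (1.0 / 3.0 - p1) * (rc2 - rc1) / (p2 - p1);
}

int main()
{
    // 26 cards remaining: RC = +1 -> prob(ten) = 9/26, RC = -2 -> prob(ten) = 8/26
    double rc26 = rcAtOneThird(+1, 9.0 / 26.0, -2, 8.0 / 26.0);
    printf("26 cards: RC = %.2f, TC = %.2f\n", rc26, 52.0 * rc26 / 26.0);

    // 25 cards remaining: RC = +2 -> prob(ten) = 9/25, RC = -1 -> prob(ten) = 8/25
    double rc25 = rcAtOneThird(+2, 9.0 / 25.0, -1, 8.0 / 25.0);
    printf("25 cards: RC = %.2f, TC = %.2f\n", rc25, 52.0 * rc25 / 25.0);

    // Both cases print 0.00 (up to floating-point rounding), matching the
    // worked results above: TC = 0 at the break-even point.
    return 0;
}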
This is my question:
How would your method of finding TC index work on this data set?
Thanks,
k_c
I'll give you an idea of what I do. Maybe this can help you.
Let's use as an example a deck of 52 cards and the unbalanced count of tens (1 1 1 1 1 1 1 1 1 -2).
1) Remove an ace from the pack (which is the ace the dealer receives)
2) Let's assume that we are going to play until there are 10 cards left (CL)
3) Two nested loops are needed, one loop that goes through all the possible RC from a minimum to a maximum.
It doesn't matter if you don't know the limits; they can be between -50 and +50. The ones that do not apply
will simply be discarded (they are the ones whose probability is equal to zero).
The other loop will go through the different depths between 51 (52 -1) cards and 11 cards (10 have been left out).
4) Within these two nested loops you are going to calculate, through combinatorial analysis, the probability and the
expected value for each combination of RC and depth, and the values obtained are accumulated into two arrays.
The index of the array is going to be the TC, which in turn you get from (RC / depth * 52).
array1 (TC) += probability * EV
array2 (TC) += probability
5) You are going to get the index from array1. To do this, you have to go through that array from the lowest value of TC
until you find the TC at which the expected value becomes positive. That point corresponds to the index you are looking for.
It's actually a little more elaborate but for now I don't want to complicate it for you.
The other array is used to know the frequency of each TC.
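To make the recipe concrete, here is a minimal sketch (my reading of the steps above, not Cacarulo's actual code) for the single-deck insurance example he sets up: exact hypergeometric probabilities for each (cards remaining, RC) pair, insurance EV = 3*prob(ten) - 1, accumulation into two TC-indexed arrays, and the index read off as the lowest TC at which the accumulated EV turns positive. Treating the dealer's ace as already counted (tag +1) and flooring the TC are my assumptions.
Code:
#include <cstdio>
#include <cmath>
#include <map>

// Exact log of "n choose k"; impossible combinations get probability 0.
static double lnChoose(int n, int k)
{
    if (k < 0 || k > n) return -1e300;
    return std::lgamma(n + 1.0) - std::lgamma(k + 1.0) - std::lgamma(n - k + 1.0);
}

int main()
{
    const int TENS = 16, OTHERS = 35;              // 51 cards left after the dealer's ace

    std::map<int, double> sumProbEV, sumProb;      // "array1" and "array2", keyed by TC

    for (int remaining = 51; remaining >= 11; --remaining) {
        int dealt = 51 - remaining;                // cards seen besides the dealer's ace
        for (int rc = -50; rc <= 50; ++rc) {
            // RC = 1 (the ace) + (non-tens dealt) - 2 * (tens dealt), so solve for tens dealt t.
            if ((dealt + 1 - rc) % 3 != 0) continue;
            int t = (dealt + 1 - rc) / 3;
            if (t < 0 || t > TENS || dealt - t < 0 || dealt - t > OTHERS) continue;

            double prob = std::exp(lnChoose(TENS, t) + lnChoose(OTHERS, dealt - t)
                                   - lnChoose(TENS + OTHERS, dealt));
            double ev = (3.0 * (TENS - t) - remaining) / remaining;  // insurance EV, 2:1 payoff

            int tc = (int)std::floor(52.0 * rc / remaining);
            sumProbEV[tc] += prob * ev;            // array1(TC) += probability * EV
            sumProb[tc]   += prob;                 // array2(TC) += probability
        }
    }

    // Step 5: walk up from the lowest TC to the first TC whose accumulated EV is positive.
    for (const auto& [tc, pev] : sumProbEV)
        if (pev > 0.0) {
            printf("insurance index: TC >= %+d (frequency of this TC: %.4f)\n",
                   tc, sumProb[tc] / 41.0);        // average over the 41 depths considered
            break;
        }
    return 0;
}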
Hope this helps.
Sincerely,
Cac
I store my data in a list.
If I construct a list with a specific pen and rc for the insurance count it will contain either 0 or 1 entries (elems) depending upon rc. If elems = 0, probRC = 0.
This is what I presently do:
Code:
list = new subsetList (inputCount, decks, pen, rc, specificRem);
list->getProbRC(rc, probRC, pRank);
long elems = list->elems;
delete list;
list = NULL;
What I used to do was construct a list with all rc possible to use. For more complicated counts too much data crashes the program.
For the insurance count where pen = 26 there are 17 entries in the list and the sum of probRC of each entry = 1. For any count, the sum of probRC of the entries in the list = 1 regardless of the number of entries.
What I could do is something like this:
Code:
list = new subsetList (inputCount, decks, pen, specificRem);
for (rc = minValue; rc <= maxValue; ++rc)
    list->getProbRC(rc, probRC, pRank);
long elems = list->elems;
I can relate to probRC, but probTC? (relative to the simple insurance count)
k_c
Please don't construe my questions as an attack on your methodology. My intent was not to question the validity of your methods (and those of others), but rather to ask questions about these methods in an effort to understand them better. With a better understanding of what we all do, a more focused investigation of the source of any differences can be made.
I appreciate that we all have worked hard on our software over many years and that we do not necessarily want to divulge the details of algorithms that we have created and tested over the course of that time. I also appreciate the details which have been posted nonetheless. There's a lot here to consider and I will be reading each post carefully over the course of the next few days.