
Thread: Theoretical Speedup of Dealer Conditional Probabilities

  1. #1



    Theoretical Speedup of Dealer Conditional Probabilities

    Hey guys!

    So, anyway. Over the last few years, I have been trying to conceptualize a way to speed up computing the dealer's conditional probabilities for each up-card under any rule set. Currently, the CAs provided by MGP, Eric, k_c, and iCountNTrack offer significant speed-ups over any naive implementation by caching dealer probs in a table and referencing them via some quasi-dynamic programming. Furthermore, with Eric's discovery of a faster implementation via his Steiner tree method, any large subset of dealer hand probabilities can be computed in less than a second!

    However, there is still an issue: namely, a near majority of all CA computation is spent finding dealer probabilities...to the tune of around 90% of all computations! That indicates to me that this is where one should focus on finding a newer algorithm to speed things along (and maybe make CA a more integral part of simulating any blackjack game). I have asked Eric several questions, generally to pick his mathy brain, to see if I could get a better idea of the problem. If I recall correctly, Eric even conjectured that one could speed things up considerably by finding matching shoe subsets between both player and dealer hands. This, I think, is where one could find the general speed-up for finding dealer probs. However, the problem remains: "What method, if it exists, enables us to visit each shoe subset *once* and only once to compute all dealer conditional probabilities?"

    That is, if one were to fully enumerate all 33203125 single-deck subsets, how can we compute all 33203125 dealer probabilities by visiting each shoe subset once? There have been several ideas that I have had (none that I am convinced would work):

    1. Represent each Player, Dealer, and Shoe set as a collection of tri-partite graphs and find an association where each subset is hit once.

    2. Construct a Shoe stack and traverse each possible in-order shoe subset once, accumulating conditional probabilities for each dealer hand for each shoe subset.

    3. Implement a lookup table that stores a conditional probability for each dealer *hand* and, rather than recomputing redundant probabilities, just search and accumulate for that hand subset.

    ----------------------------------------------------------------------

    The problem with the first two is trivial: we revisit *all* shoe subsets more than once, failing our objective of computing conditional probabilities only once for each shoe/dealer subset, which is similar to how the naive and current algorithms work.

    The last one is more a matter of the difficulty of building the data structures needed to ensure that we 1) don't revisit shoe subsets and 2) can track pointers easily.

    To see the problem at hand, consider the following: Let us consider the dealer hand {6, 7, 8}. What is the conditional probability of this hand occurring in a single-deck game, right off the top? Got it? Okay, what is the conditional probability of {6, 7, 8} in a single-deck game given that each card A to T has been drawn? When thinking about *how* we are computing this, consider visualizing both the tree for the single dealer hand *and* the given shoe subset tree. The set of vertices for the dealer hand is [{0}, {6}, {6, 7}, {6, 7, 8}], and for the shoe tree we have [{0}, {A}, {2}, ..., {6, 7, 8, T}]. Now, think about how many times we are revisiting each subset in *both* trees. What is their association? Notice any redundant computation? Think about how we are computing {6, 7, 8} for a full shoe and how it relates to computing {6, 7, 8} with an Ace missing. Also consider how similar computing {6, 7, 8} with an Ace missing is to computing it with a missing 2, 3, 4, 5, 9, or Ten.

    There is *something* there, but what is it?
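    To make the redundancy concrete, here is a toy sketch (the names `Shoe` and `seqProb` are my own illustration, not code from any of the CAs mentioned). Removing any card that is *not* a 6, 7, or 8 changes only the denominators of the product, never the numerators:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Toy sketch: ranks 0..9 stand for A,2,...,9,T; a shoe is a count per rank.
using Shoe = std::array<int, 10>;

// Probability that the dealer's next n cards are exactly the ordered
// sequence seq, drawn without replacement from shoe.
double seqProb(Shoe shoe, const int* seq, int n) {
    double total = 0;
    for (int c : shoe) total += c;
    double p = 1.0;
    for (int i = 0; i < n; ++i) {
        p *= shoe[seq[i]] / total;  // chance of drawing this rank now
        --shoe[seq[i]];             // deplete the shoe
        --total;
    }
    return p;
}
```

    For a single deck, removing an Ace (or a 2, 3, 4, 5, 9, or Ten) rescales the probability of drawing 6,7,8 by exactly 52/49; only removing a 6, 7, or 8 changes a numerator. That shared factor across seven of the ten depleted shoes is exactly the kind of redundancy the current algorithms keep recomputing.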

  2. #2


    Quote Originally Posted by dogman_1234 View Post
    Hey guys!

    […]

    There is *something* there, but what is it?
    Hello,

    I was recently contacted by someone who wanted to be able to generate dealer probabilities in an Excel spreadsheet using VBA. I wound up translating my C++ code into VBA. If you want, you can contact me and I could make it available to you. It does nothing but compute dealer probabilities.

    If the rules allow hitting split aces, there are a maximum of 26912 compositions needed to compute overall EV for 3 allowed splits. If only 1 card is allowed to split aces, there are fewer. It's over 25000, but I don't remember exactly how many. I just use 26912. More is OK, but too few causes an access violation.

    k_c
    Last edited by k_c; 04-11-2022 at 07:34 PM.

  3. #3


    Thanks k_c.

    Well, actually, that's not my problem. My problem is that, concerning each of those 26912 shoe subsets, when computing their dealer conditional probabilities, there are *many* redundant computations being performed, as with my example in the OP.

    The question is: how can we compute each of those 26912 shoe subsets (as well as *all* shoe subsets, regardless of the number of decks) by visiting each shoe subset only once?

  4. #4


    Quote Originally Posted by dogman_1234 View Post
    Thanks k_c.

    […]
    What I used to do was to store compositions in a binary search tree. If, in the process of storing a composition, a duplicate was encountered, I could use what was already computed. The search tree was not reset over the course of program execution. Now I use a list. It is not that time-consuming to reset as necessary.

    k_c
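    The caching idea can be sketched roughly like this (illustrative C++, not k_c's actual code; `std::map` stands in for the hand-rolled binary search tree, since it is itself a balanced search tree, and `expensiveDealerCalc` is a placeholder for the real dealer recursion):

```cpp
#include <array>
#include <cassert>
#include <map>

using Shoe = std::array<int, 10>;  // counts per rank A..T

static std::map<Shoe, double> g_cache;  // ordered tree keyed by composition
static int g_misses = 0;                // how many shoes we genuinely computed

// Placeholder for the real dealer-probability computation.
double expensiveDealerCalc(const Shoe& s) {
    ++g_misses;
    double total = 0;
    for (int c : s) total += c;
    return total;
}

// Reuse a previously computed result whenever a duplicate composition shows up.
double lookupOrCompute(const Shoe& s) {
    auto it = g_cache.find(s);
    if (it != g_cache.end()) return it->second;  // duplicate: no recompute
    double v = expensiveDealerCalc(s);
    g_cache.emplace(s, v);
    return v;
}
```

    The point is only that duplicate compositions hit the cache instead of the dealer recursion; a real CA would store the full dealer probability vector as the mapped value.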

  5. #5


    Quote Originally Posted by k_c View Post
    What I used to do was to store compositions in a binary search tree. […]
    Would it be possible to go into detail as to how this algorithm worked? That way I can see whether we are on the same page.

    Also, why a binary tree rather than a k-ary tree? And what are you storing in your list?

  6. #6


    Quote Originally Posted by dogman_1234 View Post
    Would it be possible to go into detail as to how this algorithm worked? […]
    I misspoke, since my old method reset the binary search tree if the number of decks changed.

    I create my own data structures from scratch. Your mention of a k-ary tree is the first time I've heard of it.

    The difference between the binary tree and the list is that the tree starts out empty and removal states are added, whereas the list enumerates all possible removal states for the overall EV. (However, for a specific individual hand I create and delete a one-time-use list based upon the hand comp.) In either case, a dealer object is needed to compute dealer probabilities by going through the (possible) entries in the tree or list. For example, you asked about a player hand of 6,7,8.

    1. deal list of relevant player hands (in this case there is only 1)
    2. link to hands in tree or list
    3. compute dealer probs
    4. compute stand EV for hard 21
    5. hit EV = -1, double EV = -2 (for hitting hard 21)
    6. save and link EVs etc. in a player hand list to compute hit and double EVs for hand totals < 21. Keep a pointer to the tree or list in each player hand object.

    That's the basic idea. Order is important. The dealer object is constant, and that's what is in the Excel VBA code.

    k_c
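    As a minimal illustration of step 4 above (my own sketch, not k_c's code; the dealer distribution used in it is made up for the example):

```cpp
#include <cassert>
#include <cmath>

// Stand EV for a player hard 21, given the dealer's outcome distribution
// p[0..5] = P(17), P(18), P(19), P(20), P(21), P(bust). Dealer blackjack is
// assumed already resolved, as in the outline above.
double standEV21(const double p[6]) {
    // 21 beats dealer 17-20 and a bust; a dealer 21 pushes (contributes 0).
    return p[0] + p[1] + p[2] + p[3] + p[5];  // i.e. 1 - P(dealer 21)
}
```

    Steps 5 and 6 then follow mechanically: hitting or doubling a hard 21 always loses the wager(s), and lower totals reuse the saved EVs through the pointers into the tree or list.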

  7. #7


    Quote Originally Posted by k_c View Post
    I misspoke since my old method reset the binary search tree if number of decks changed.

    I create my own data structures from scratch. Your mention of a k-ary tree is the first time I've heard of it.
    It's actually what I would use in a BJCA. We use a 10-vector to represent the child nodes of the current subset (whether it be a Shoe, Player, or Dealer tree.)

    Does your current implementation still use a binary tree, or is it all just a list? Is it like an associative array, where each index in the list contains a pointer to the next index in the list? If so, you may actually be using a k-ary tree already, if my assumption is correct.

    The difference between the binary tree and list is that the tree starts out empty and removal states are added whereas list enumerates all possible removal states for overall. […]

    k_c
    Sorry, I was asking about a {6, 7, 8} *dealer* tree, relative to a short shoe tree. Apologies if I added confusion.

    I can compute the player EV quite easily. That's not the issue, and not the point of this post. It is about finding a better algorithm to speed up computing *dealer* conditional probabilities...not that I can't compute them.

    Hope that helps!
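    For reference, a 10-ary trie node for compositions might look like the following. This is only a sketch with invented names: one child per rank, so a path of drawn ranks (kept sorted, since hands are unordered compositions) leads to the node caching that subset's value.

```cpp
#include <array>
#include <cassert>
#include <memory>

// One node of a 10-ary trie over rank paths.
struct TrieNode {
    std::array<std::unique_ptr<TrieNode>, 10> child{};  // one slot per rank
    double value = -1.0;  // -1 means "not computed yet"
};

// Walk (creating nodes as needed) the path for the given sorted ranks and
// return the node for that subset.
TrieNode* walk(TrieNode& root, const int* ranks, int n) {
    TrieNode* cur = &root;
    for (int i = 0; i < n; ++i) {
        auto& slot = cur->child[ranks[i]];
        if (!slot) slot = std::make_unique<TrieNode>();
        cur = slot.get();
    }
    return cur;
}
```

    Revisiting the same sorted path lands on the same node, so a subset's cached value is found without any recomputation.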

  8. #8


    In my CA implementation, I don't actually even compute the probabilities of the dealer's hands. Instead I compute the probabilities of the various possible outcomes. For example, let's say the player has a hard 19 and the dealer is showing a 6. For standing, the possible outcomes are win one unit, win zero units (push), and lose one unit. Now let's look at a few possible dealer hands.

    6,A (win one unit)
    6,2,A (win zero units)
    6,2,2,A (lose one unit)

    For the first hand, the outcome is a win of one unit, so you compute the probability of drawing a slug that contains BOTH the player's hand composition and the dealer's hand composition. In this case you will compute the probability of a slug with one Ace, one Six, one 9, and one Ten from whatever deck/shoe composition you had. That computed probability is added to the variable accumulating the probability of winning one unit.

    Similarly for the other two hands, but the computed probabilities must be added to the variables accumulating the matching outcome.

    Once all the dealer hands are played out, you will have the total probabilities of winning one unit, winning zero units, and losing one unit. The EV and standard deviation can then be easily computed.

    So in a nutshell, my approach is to look at all possible slugs that contain both dealer and player cards. I tried computing all these probabilities and storing them so they are only computed once: I tried using
    std::map<std::vector<>, double> and a trie, but wasn't really impressed with the speed-up factor.
    Chance favors the prepared mind
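    The accumulation step described above might be sketched like this (illustrative code with invented names and made-up probabilities, not iCountNTrack's actual implementation):

```cpp
#include <cassert>
#include <cmath>

// Buckets for the three standing outcomes: win one unit, push, lose one unit.
struct Outcomes { double win = 0, push = 0, lose = 0; };

// Fold one played-out dealer hand's probability into its outcome bucket.
void accumulate(Outcomes& o, int unitsWon, double p) {
    if (unitsWon > 0)       o.win  += p;
    else if (unitsWon == 0) o.push += p;
    else                    o.lose += p;
}

// EV and standard deviation read straight off the buckets.
double ev(const Outcomes& o) { return o.win - o.lose; }

double sd(const Outcomes& o) {
    double m = ev(o);
    return std::sqrt(o.win  * (1 - m) * (1 - m)
                   + o.push * m * m
                   + o.lose * (1 + m) * (1 + m));
}
```

    Each dealer hand is visited once and immediately discarded; only the three outcome totals are kept.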

  9. #9


    Quote Originally Posted by iCountNTrack View Post
    In my CA implementation, i don't actually even compute dealer's hands probabilities. Instead i compute the probabilities of the various possible outcomes. […]
    Thanks iCnT.

    Why do you think there was no noticeable speedup in computing conditional probabilities with your algorithm? How many total player+dealer subsets are you computing?

  10. #10


    Quote Originally Posted by dogman_1234 View Post
    It's actually what I would use in a BJCA. […]
    Hi,

    The only conditional dealer probabilities I worry about are conditioned on the dealer having checked for blackjack and not having it, and even this isn't really necessary. The only reason I did this is because it is the traditional condition for which to express values. Other than that, I can list all dealer probabilities versus each up card, including overall probs, for any resulting composition from 1 to 41297762 decks in a fraction of a second, even using VBA. (You might consider up-card probs as conditional versus overall.) Any unconditional player EV and correct strategy can be computed using these.

    k_c
    Last edited by k_c; 04-12-2022 at 07:59 AM.
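    The conditioning mentioned above is just a rescaling; a one-line sketch (my own illustration, not k_c's code):

```cpp
#include <cassert>
#include <cmath>

// Once the dealer has checked and has no natural, each unconditional outcome
// probability is rescaled by 1 / (1 - P(blackjack)).
double conditionOnNoBJ(double pOutcome, double pBlackjack) {
    return pOutcome / (1.0 - pBlackjack);
}
```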

  11. #11


    Quote Originally Posted by k_c View Post
    Hi,

    The only conditional dealer probabilities I worry about are on the condition that dealer has checked for blackjack and doesn't have it and this isn't even really necessary. […]
    I'm sorry. Again, I am not asking about how to compute dealer probabilities, nor am I asking about how to condition probabilities wrt dealer naturals.

    What I am asking is: does there exist a *faster*, more efficient method to compute dealer probabilities than what is offered by all our CAs?

    That's what I want to know! Can we speed up our calculations, since 90 percent of our total compute time is spent computing dealer probabilities? I outlined in the OP several things that indicate redundant computations. Can we take advantage of any optimizations to speed up the CA?

  12. #12


    Quote Originally Posted by dogman_1234 View Post
    I'm sorry. Again, I am not asking about how to compute dealer probabilities, nor am I asking about how to condition probabilities wrt dealer naturals. […]

    Hi again,

    There's no need to be sorry. My algorithm is my algorithm, and it is what it is. I feel that it is efficient. If I knew of any reasonable modification to compute dealer probs even more efficiently, I would use it. I tried to outline some of the things I do outside of dealer probs to limit additional calculation. Basically, I kind of gave up on improving the dealer probs calc itself.

    For me my days off are coming to an end. It's back to work tomorrow.

    k_c

  13. #13


    Take care, mate!




About Blackjack: The Forum

BJTF is an advantage player site based on the principles of comity. That is, civil and considerate behavior for the mutual benefit of all involved. The goal of advantage play is the legal extraction of funds from gaming establishments by gaining a mathematic advantage and developing the skills required to use that advantage. To maximize our success, it is important to understand that we are all on the same side. Personal conflicts simply get in the way of our goals.