

Thread: Proving CA algorithms incorrect (or at least inexact)

  1. #14


    Quote Originally Posted by Cacarulo View Post
    An important point when calculating EORs is that their sum must be exactly zero. But for it to be zero, the strategy must be fixed (this means that we are going to apply exactly the same strategy for each of the removed cards).
    Right; at this point it no longer feels like spam to link to the article motivating this post. Quoting from there, we're only considering "the average expected return after removing a single card, but playing the same fixed 'full-shoe' strategy."
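    (To spell out the identity in question: write E for the full-shoe expected return under that fixed strategy, E_c for the expected return after removing a single card of rank c while still playing the same strategy, and p_c for the probability that a card drawn from the full shoe has rank c. With a fixed strategy, burning a random card doesn't change the expected return, so conditioning on the rank of the burned card gives

    $$E \;=\; \sum_c p_c\,E_c \quad\Longrightarrow\quad \sum_c p_c\,\mathrm{EOR}_c \;=\; 0, \qquad \text{where } \mathrm{EOR}_c = E_c - E,$$

    which is the precise sense in which the probability-weighted EORs must sum to exactly zero.)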

    Quote Originally Posted by Cacarulo View Post
    Without resorting to arbitrary precision, it’s possible to achieve 10 or more exact decimals that add up to zero. Four decimals would already be sufficient, as we must remember that EORs are linear estimates.
    This sounds interesting. Can you explain what is meant here, by "four decimals would already be sufficient"? Is this level of agreement based on past empirical calculation for some specific rules and number of decks? (I'm comparing with the simple example in the linked article, where if we play that game with six decks instead of just one, then there is agreement to four significant digits... but that disagreement (beyond four or five digits) is all due to an intentional bug in the CA algorithm, not even due to limited double precision. That is, if someone presented me with a candidate pair-splitting algorithm whose double-precision average EOR was zero only to within 1e-6 or even 1e-7, it's unclear why I shouldn't be suspicious of errors?)

    Quote Originally Posted by Cacarulo View Post
    In my case, I use three types of strategies: TD, 2-card CD, and OP (CDP1). With the first two, I can obtain precise EORs that can also be simulated, for example, with CVData. With the third strategy, it’s not possible to get EORs that sum to zero. It’s probably possible with CDZ-.
    I'd like to learn more about this as well. What is meant by "it's not possible to get EORs that sum to zero [for CDP1]"? We can measure empirically that it is (using the arbitrary-precision approach, to eliminate rounding errors as a possible source of discrepancy); and even just as a thought experiment, the "different strategy" pre- and post-split doesn't affect the applicability of the theorem, CDP1 is still "fixed" in the sense of being a function mapping a prefix of an arrangement (i.e., shuffling) of a possibly-depleted shoe to an outcome of the round.

  2. #15


    Quote Originally Posted by Zenfighter View Post
    I might be missing the point of your comment

    Exactly. Just an excerpt of what he wrote here online, taken from a private mail I sent him many hours before you gave him your own figure. The final part went like this:

    6dks, s17, das, spl1, ls
    TD EV = - 0.3857506628

    Note, too, that Farmer’s final EV is a composition-dependent number. In this case we get:
    EV = -0.3842840985, for the spl1 case. A tiny Verbesserung (improvement).

    I had to look up "Verbesserung."

    Strange, indeed. [Farm]er having to look after [besser] for a better understanding.

    Just my two cents.

    Zenfighter
    I still don't get it.

  3. #16


    Quote Originally Posted by ericfarmer View Post
    Right; at this point it no longer feels like spam to link to the article motivating this post. Quoting from there, we're only considering "the average expected return after removing a single card, but playing the same fixed 'full-shoe' strategy."
    Yes, the same fixed "full-shoe" strategy.

    This sounds interesting. Can you explain what is meant here, by "four decimals would already be sufficient"? Is this level of agreement based on past empirical calculation for some specific rules and number of decks? (I'm comparing with the simple example in the linked article, where if we play that game with six decks instead of just one, then there is agreement to four significant digits... but that disagreement (beyond four or five digits) is all due to an intentional bug in the CA algorithm, not even due to limited double precision. That is, if someone presented me with a candidate pair-splitting algorithm whose double-precision average EOR was zero only to within 1e-6 or even 1e-7, it's unclear why I shouldn't be suspicious of errors?)
    Maybe a couple of examples will explain it better. Let's consider 6D, S17, DOA, DAS, SPA1, SPL3, LS.
    First, I will calculate the EORs using a total-dependent strategy, meaning that for H/S decisions like T2 vs 4, 93 vs 4, 84 vs 4, 75 vs 4, they will all behave as a total of 12 vs 4.
    For the second example, I will recalculate the EORs, but this time I will use a strategy dependent on the composition of the player’s first two cards. For three or more cards, I will use the total-dependent strategy. The only changes between the first and the second are that in the latter, T2 vs 4 => H (instead of S) and 87 vs T => H (instead of R).

    Code:
    1) TD-EORs
    
    EoR [A] = -0.092812947058
    EoR [2] =  0.069793252937
    EoR [3] =  0.084016043043
    EoR [4] =  0.114628494475
    EoR [5] =  0.146376815919
    EoR [6] =  0.075502946395
    EoR [7] =  0.040436959680
    EoR [8] = -0.011476595397
    EoR [9] = -0.042505038691
    EoR [T] = -0.095989982826
    Mean    = -0.331703578487
    SS      =  0.101240441160
    CHS     =  0.000000000000
    
    2) CD-EORs
    
    EoR [A] = -0.093030633780
    EoR [2] =  0.069366171884
    EoR [3] =  0.082733583717
    EoR [4] =  0.113174590475
    EoR [5] =  0.144334931951
    EoR [6] =  0.072854616924
    EoR [7] =  0.039833411945
    EoR [8] = -0.011700638665
    EoR [9] = -0.042715768874
    EoR [T] = -0.093712566394
    Mean    = -0.331346988003
    SS      =  0.097936489179
    CHS     =  0.000000000000

    As you can see in these two examples, simply using double precision I can get 12 correct decimals, and the sum of the EORs is exactly zero in each case. If I wanted more decimal precision, then the use of arbitrary precision would be necessary.
    My point about using 4 decimals versus 12 is that, since EORs are linear estimates, the extra precision won’t give you any noticeable advantage in your estimate of the edge.
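    For example, here is the kind of first-order estimate I have in mind (just a minimal sketch, not my actual code; the removal counts are invented, and each single-card EOR is simply added once per removed card, without the depth rescaling a more careful estimate would use):

    Code:
    // Minimal sketch: first-order edge estimate from the full-shoe EV and
    // the per-rank TD EORs above (all in percent).  The depletion is made up.
    #include <array>
    #include <cstdio>
    
    int main() {
        const double ev_full = -0.331703578487;              // full-shoe EV (%), the "Mean" above
        const std::array<double, 10> eor = {                 // EORs for A,2,...,9,T (%)
            -0.092812947058, 0.069793252937, 0.084016043043, 0.114628494475,
             0.146376815919, 0.075502946395, 0.040436959680, -0.011476595397,
            -0.042505038691, -0.095989982826 };
        const std::array<int, 10> removed = {1, 0, 0, 0, 2, 0, 0, 0, 0, 3}; // invented depletion
    
        double est = ev_full;
        for (int c = 0; c < 10; ++c) est += removed[c] * eor[c];
        std::printf("estimated edge = %+.4f%%\n", est);       // 4 decimals are plenty here
    }

    Whether the EORs carry 4 decimals or 12, an estimate built this way changes only in digits the linear approximation can't be trusted to anyway.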

    I'd like to learn more about this as well. What is meant by "it's not possible to get EORs that sum to zero [for CDP1]"? We can measure empirically that it is (using the arbitrary-precision approach, to eliminate rounding errors as a possible source of discrepancy); and even just as a thought experiment, the "different strategy" pre- and post-split doesn't affect the applicability of the theorem, CDP1 is still "fixed" in the sense of being a function mapping a prefix of an arrangement (i.e., shuffling) of a possibly-depleted shoe to an outcome of the round.
    Actually, it is possible, but it’s very cumbersome and impractical. To do so, you would have to save the strategy generated with the full shoe and use that same strategy with each removed card. The generated strategy would be impossible to memorize, which is why I say it’s not practical. I don’t think it’s a matter of arbitrary precision.

    Sincerely,
    Cac
    Luck is what happens when preparation meets opportunity.

  4. #17


    Quote Originally Posted by Cacarulo View Post
    I’d be curious to see the expected value that ICountNTrack calculated,
    to determine which strategy comes closest. I also imagine that ICountNTrack’s method is likely feasible only with SPL1.

    Quote Originally Posted by ericfarmer View Post
    Originally Posted by Cacarulo
    I also imagine that ICountNTrack’s method is likely feasible only with SPL1.
    ICountNTrack will have to chime in, but I believe this is correct, although I seem to recall that their code can also crank out EVs for SPL3 for some sufficiently small depleted shoes.
    Sorry, I am joining the party late. Eric, good to see you around with high-quality posts!

    I will comment on the above. My CA actually computes SPL3 (split 3 times to 4 hands) expectation values and standard deviations using a full composition-dependent optimal strategy, where the strategy is recomputed based on all the cards drawn on the other split hands. This is obviously very computationally expensive, even more so because I adopt an algorithm that computes the probability distribution of the round's outcomes and then derives the EV and SD from it. That being said, my CA does not compute the overall pre-deal EV; it is limited to calculating playing strategies (player with at least 2 cards) and dealer with at least one card. Sample output from my CA is attached below. I should also say that for deeply dealt pitch games, I start to see big differences in computed EV vs. fixed post-split strategies.
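    As a rough illustration of how the EV and SD fall out of such an outcome distribution (this is not my CA's code, and the distribution below is invented purely to show the mechanics):

    Code:
    // Given the probabilities of a round's net outcomes (in units of the
    // initial bet), derive the expected value and standard deviation.
    #include <cmath>
    #include <cstdio>
    #include <map>
    
    int main() {
        // Invented distribution for illustration only; a real CA enumerates
        // all outcomes, including the split/double/blackjack multiples.
        const std::map<double, double> dist = {
            {-2.0, 0.05}, {-1.0, 0.45}, {0.0, 0.08},
            { 1.0, 0.36}, { 1.5, 0.04}, {2.0, 0.02} };
    
        double ev = 0.0, ev2 = 0.0;
        for (const auto& [x, p] : dist) { ev += x * p; ev2 += x * x * p; }
        std::printf("EV = %.6f, SD = %.6f\n", ev, std::sqrt(ev2 - ev * ev));
    }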



    https://code.google.com/archive/p/bl...rial-analyzer/

    Finally, I really like Eric's approach of presenting EVs as fractions; this allows us to accurately compare the different algorithms and ensure they are all giving the same answer, without worrying about the limitations of double precision and rounding.
    [Attached image: sample output from iCountNTrack's CA]
    Chance favors the prepared mind

  5. #18


    I'm hoping, especially if you all reread BJA3, pp. 389-390, that we can agree that this is a very interesting THEORETICAL discussion with absolutely no practical value for the human BS player whatsoever.

    Don

  6. #19


    Quote Originally Posted by Cacarulo View Post
    It’s true that neither CDZ- nor CDP1 is fully optimal, though they provide excellent approximations to the exact value. I’d be curious to see the expected value that ICountNTrack calculated,
    to determine which strategy comes closest. I also imagine that ICountNTrack’s method is likely feasible only with SPL1.
    Cac
    I wrote a program whose only purpose is to compute optimal split expected values for splitting once, intended as a service to the blackjack computing community. It can be downloaded at the bottom of this page: http://www.bjstrat.net/software.html

    It takes 7-8 minutes to compute 2-2 for 6 decks. My computer is not that fast, so it may do better on a faster machine, but there will still be a wait. Do not press any key while waiting: my user interface moves to the next screen on a single keypress once output is displayed. If you press a key prematurely, the keystroke remains in a buffer, and when the computation completes the output is displayed and immediately dismissed by that buffered keystroke.

    The user needs to pre-select whether output is displayed unconditionally or conditioned on no dealer blackjack (which only affects output for up cards of ten or ace). It would be better to post-select this option after output, since that wouldn't require another split calculation, but that's the way it presently works.

    k_c
    "Perfection is the enemy of success."
    -Elon Musk-

  7. #20


    Quote Originally Posted by ericfarmer View Post
    I still don't get it.
    It seems to be crystal clear to everybody, but not to you, Eric?

    6dks, s17, das, spl1, ls
    EV TD -0.3857506628%
    EV CD -0.3842840985%
    Difference 0.0014665643%

    That is, the second option yields roughly a 1.5-thousandths-of-a-percent reduction in the negative expectation for these specific rules. A tiny improvement.

    Jetzt ist alles klar, nicht wahr? (Now everything is clear, isn't it?)

    Zenfighter

  8. #21


    Quote Originally Posted by DSchles View Post
    I'm hoping, especially if you all reread BJA3, pp. 389-390, that we can agree that this is a very interesting THEORETICAL discussion with absolutely no practical value for the human BS player whatsoever.

    Don
    You're right. You may have missed the similar comment in the post originating this thread: "This is mostly just a parlor trick of purely academic interest (especially for the probably-unrealistic specific rules in this example)." If you're not a CA developer, there is nothing of interest here.

    However, also from the top post: ".. but it turns out to have some practical utility for analysis." If you're a CA developer, then this idea of average EOR == 0 is of very practical interest, since it gives us a pretty simple "unit test" of the math under the hood. I've experienced that utility firsthand at least twice: once while trying to implement fast-but-exact pair splits (implement a change, test the average EOR, compare with zero; if it's off, then you know you have a bug, even if you have no idea where it is), and again while implementing the exact calculation of the entire distribution (and thus variance) of outcomes of a round, for the analysis that you were involved in several years ago.
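    For concreteness, the test looks roughly like the following skeleton; compute_ev here is just a placeholder for whatever routine is under test (your own engine evaluating a fixed full-shoe strategy), not a reference to any particular CA:

    Code:
    // Skeleton of the "average EOR == 0" consistency check.
    #include <array>
    #include <cmath>
    #include <functional>
    
    using Shoe = std::array<int, 10>;   // card counts for ranks A,2,...,9,T
    
    bool passes_eor_check(Shoe shoe,
                          const std::function<double(const Shoe&)>& compute_ev,
                          double tol = 1e-12) {
        const double ev_full = compute_ev(shoe);
        int total = 0;
        for (int n : shoe) total += n;
    
        double avg_eor = 0.0;
        for (int c = 0; c < 10; ++c) {
            if (shoe[c] == 0) continue;
            const double weight = double(shoe[c]) / total;   // P(removed card has rank c)
            --shoe[c];                                        // remove one card of rank c
            avg_eor += weight * (compute_ev(shoe) - ev_full);
            ++shoe[c];                                        // and put it back
        }
        return std::fabs(avg_eor) < tol;                      // should vanish up to rounding
    }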

  9. #22


    Quote Originally Posted by Cacarulo View Post
    Maybe a couple of examples will explain it better. Let's consider 6D, S17, DOA, DAS, SPA1, SPL3, LS.
    First, I will calculate the EORs using a total-dependent strategy, meaning that for H/S decisions like T2 vs 4, 93 vs 4, 84 vs 4, 75 vs 4, they will all behave as a total of 12 vs 4.
    For the second example, I will recalculate the EORs, but this time I will use a strategy dependent on the composition of the player’s first two cards. For three or more cards, I will use the total-dependent strategy. The only changes between the first and the second are that in the latter, T2 vs 4 => H (instead of S) and 87 vs T => H (instead of R).

    Code:
    1) TD-EORs
    
    EoR [A] = -0.092812947058
    EoR [2] =  0.069793252937
    EoR [3] =  0.084016043043
    EoR [4] =  0.114628494475
    EoR [5] =  0.146376815919
    EoR [6] =  0.075502946395
    EoR [7] =  0.040436959680
    EoR [8] = -0.011476595397
    EoR [9] = -0.042505038691
    EoR [T] = -0.095989982826
    Mean    = -0.331703578487
    SS      =  0.101240441160
    CHS     =  0.000000000000
    
    2) CD-EORs
    
    EoR [A] = -0.093030633780
    EoR [2] =  0.069366171884
    EoR [3] =  0.082733583717
    EoR [4] =  0.113174590475
    EoR [5] =  0.144334931951
    EoR [6] =  0.072854616924
    EoR [7] =  0.039833411945
    EoR [8] = -0.011700638665
    EoR [9] = -0.042715768874
    EoR [T] = -0.093712566394
    Mean    = -0.331346988003
    SS      =  0.097936489179
    CHS     =  0.000000000000

    As you can see in these two examples, simply using double precision I can get 12 correct decimals, and the sum of the EORs is exactly zero in each case. If I wanted more decimal precision, then the use of arbitrary precision would be necessary.
    Thanks for these examples, this is very helpful. If I understand the description of TD in particular, you're using the strategy described in Table B10 of BJA3, correct? I am almost, but not quite, able to reproduce your values: my CA is able to evaluate "user-defined" strategies like this, including e.g. standing on multi-card hard 16 vs. T, but the additional departure of standing *only* after splits would require some code changes. Leaving out that additional departure (i.e., I think the following is "almost" your TD strategy, except that hard 16 vs. T post-split is hit with 2 cards, stand with 3+ cards) has only a small effect; following is the exact expected return and EORs (in fraction, not percentage, of initial wager):

    Code:
    E[X] = -7347769581814513817766698807563202595964/2215167626902928405093200851144936079126875
    
    EOR[A] = -811200833345198412983928163283105851343/874011716737209846907521424261267296526250
    EOR[2] = 13719468501548752857749982590651293131403031/19657397521136586666797064353060162766171888750
    EOR[3] = 660607488875634224558917995367714966800127/786295900845463466671882574122406510646875550
    EOR[4] = 18777140316753690879378570835279739836915964/16381164600947155555664220294216802305143240625
    EOR[5] = 3888281490917116217617340335271365702861079/2656405070423863063080684372035157130563768750
    EOR[6] = 42797883125837886449191199447714781408889/56682230453104344483267198249885129083540625
    EOR[7] = 39743536040566670440241604251875546215872057/98286987605682933333985321765300813830859443750
    EOR[8] = -322291245645322470232943939841030978605531/2808199645876655238113866336151451823738841250
    EOR[9] = -773634316055270571866465880292379703210151/1820129400105239506184913366024089145015915625
    EOR[T] = -4492590055762117480462106986344363416380399/4680332743127758730189777226919086372898068750
    For comparison, following are the same calculations done in double precision, including all necessary digits to unambiguously "round-trip" the underlying 64-bit values:

    Code:
    std::cout << std::setprecision(17);
    
    E[X] = -0.0033170264374474123
    
    EOR[A] = -0.00092813496410931035
    EOR[2] = 0.00069792903596727074
    EOR[3] = 0.00084015125624524456
    EOR[4] = 0.0011462640645019395
    EOR[5] = 0.0014637381678756084
    EOR[6] = 0.00075504938291479778
    EOR[7] = 0.00040436213387715775
    EOR[8] = -0.00011476792475138137
    EOR[9] = -0.00042504357987414865
    EOR[T] = -0.00095988689316105907
    You can verify that these values, calculated with limited double precision and compared with the exact values above, are accurate to only roughly 12 digits, as you similarly note in your example.

    My question is, how did you know that your values are accurate to ~12 digits, and *only* to 12 digits (i.e., that the low-order ~4 digits or so were *inaccurate*), without *exact* values to compare against? Did you just do the average EOR calculation with successively fewer digits until the result was zero (at least to within an ulp-- note that your average EOR *isn't exactly* zero, it's -1/1.3e13)?

    In other words, when double precision affords us roughly 16 (log10(2^53)) digits of precision, what justifies being okay with only 12 digits of accuracy, i.e. being willing to chalk up 4 digits of error *solely* to floating-point rounding?

    Consider, for example, my CA from over 20 years ago. I had just made updates at the time to compute exact EVs for SPL1, but still with an only approximate-but-fast algorithm for SPL3... that passed this "consistent with average EOR==0" test accurate only to roughly 8 most significant digits. The lower 8 digits were in error... but why not claim that those lower 8 digits of inaccuracy were similarly due solely to cumulative rounding error, and not to more fundamental algorithm flaws? (I'm not claiming this, merely presenting this as a devil's advocate to try to illustrate my point of the potential benefit of being able to *eliminate* cumulative rounding error as a confounder.)
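    (For reference, here is the arithmetic behind that -1/1.3e13 parenthetical, as a minimal sketch using your posted 12-decimal TD EORs and the six-deck weights of 24 cards per rank A-9 and 96 tens out of 312:)

    Code:
    // Probability-weighted average of the 12-decimal TD EORs posted earlier
    // (in percent).  The result is about -7.7e-14: "zero" only up to rounding.
    #include <cstdio>
    
    int main() {
        const double eor[10] = {                  // A,2,...,9,T
            -0.092812947058, 0.069793252937, 0.084016043043, 0.114628494475,
             0.146376815919, 0.075502946395, 0.040436959680, -0.011476595397,
            -0.042505038691, -0.095989982826 };
        const double count[10] = {24, 24, 24, 24, 24, 24, 24, 24, 24, 96};
    
        double avg = 0.0;
        for (int c = 0; c < 10; ++c) avg += (count[c] / 312.0) * eor[c];
        std::printf("average EOR = %.3g\n", avg);
    }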

  10. #23


    Quote Originally Posted by ericfarmer View Post
    Thanks for these examples, this is very helpful. If I understand the description of TD in particular, you're using the strategy described in Table B10 of BJA3, correct? I am almost, but not quite, able to reproduce your values: my CA is able to evaluate "user-defined" strategies like this, including e.g. standing on multi-card hard 16 vs. T, but the additional departure of standing *only* after splits would require some code changes. Leaving out that additional departure (i.e., I think the following is "almost" your TD strategy, except that hard 16 vs. T post-split is hit with 2 cards, stand with 3+ cards) has only a small effect; following is the exact expected return and EORs (in fraction, not percentage, of initial wager):

    Code:
    E[X] = -7347769581814513817766698807563202595964/2215167626902928405093200851144936079126875
    
    EOR[A] = -811200833345198412983928163283105851343/874011716737209846907521424261267296526250
    EOR[2] = 13719468501548752857749982590651293131403031/19657397521136586666797064353060162766171888750
    EOR[3] = 660607488875634224558917995367714966800127/786295900845463466671882574122406510646875550
    EOR[4] = 18777140316753690879378570835279739836915964/16381164600947155555664220294216802305143240625
    EOR[5] = 3888281490917116217617340335271365702861079/2656405070423863063080684372035157130563768750
    EOR[6] = 42797883125837886449191199447714781408889/56682230453104344483267198249885129083540625
    EOR[7] = 39743536040566670440241604251875546215872057/98286987605682933333985321765300813830859443750
    EOR[8] = -322291245645322470232943939841030978605531/2808199645876655238113866336151451823738841250
    EOR[9] = -773634316055270571866465880292379703210151/1820129400105239506184913366024089145015915625
    EOR[T] = -4492590055762117480462106986344363416380399/4680332743127758730189777226919086372898068750
    For comparison, following are the same calculations done in double precision, including all necessary digits to unambiguously "round-trip" the underlying 64-bit values:

    Code:
    std::cout << std::setprecision(17);
    
    E[X] = -0.0033170264374474123
    
    EOR[A] = -0.00092813496410931035
    EOR[2] = 0.00069792903596727074
    EOR[3] = 0.00084015125624524456
    EOR[4] = 0.0011462640645019395
    EOR[5] = 0.0014637381678756084
    EOR[6] = 0.00075504938291479778
    EOR[7] = 0.00040436213387715775
    EOR[8] = -0.00011476792475138137
    EOR[9] = -0.00042504357987414865
    EOR[T] = -0.00095988689316105907
    You can verify that these values, calculated with limited double precision and compared with the exact values above, are accurate to only roughly 12 digits, as you similarly note in your example.

    My question is, how did you know that your values are accurate to ~12 digits, and *only* to 12 digits (i.e., that the low-order ~4 digits or so were *inaccurate*), without *exact* values to compare against? Did you just do the average EOR calculation with successively fewer digits until the result was zero (at least to within an ulp-- note that your average EOR *isn't exactly* zero, it's -1/1.3e13)?

    In other words, when double precision affords us roughly 16 (log10(2^53)) digits of precision, what justifies being okay with only 12 digits of accuracy, i.e. being willing to chalk up 4 digits of error *solely* to floating-point rounding?

    Consider, for example, my CA from over 20 years ago. I had just made updates at the time to compute exact EVs for SPL1, but still with an only approximate-but-fast algorithm for SPL3... that passed this "consistent with average EOR==0" test accurate only to roughly 8 most significant digits. The lower 8 digits were in error... but why not claim that those lower 8 digits of inaccuracy were similarly due solely to cumulative rounding error, and not to more fundamental algorithm flaws? (I'm not claiming this, merely presenting this as a devil's advocate to try to illustrate my point of the potential benefit of being able to *eliminate* cumulative rounding error as a confounder.)
    Yes, the strategy comes from Table B10. The crucial detail lies in how we treat 16 vs T: I surrender with a two-card 16, but I stand with any other 16, even after splitting.
    The EV might only improve slightly if you hit with two cards after splitting. This scenario is quite extreme, as it would involve splitting eights repeatedly until you have four
    hands and then receiving eights again. I could potentially adjust the code to handle this, but I’m not sure it would be worth the effort.

    Actually, I’m aware of the accumulated rounding errors that occur with double precision. When testing with 13 or more decimals, the sum stops being zero precisely
    due to precision issues. I'm confident that this wouldn't happen with arbitrary precision, and on that, we completely agree.
    But do we really need more than 12 decimals? Or more than 6 for linear estimators?

    Sincerely,
    Cac
    Luck is what happens when preparation meets opportunity.

  11. #24


    Quote Originally Posted by Cacarulo View Post
    Or more than 6 for linear estimators?

    Sincerely,
    Cac
    My experience suggests that 4 is quite enough.
    "Don't Cast Your Pearls Before Swine" (Jesus)

  12. #25


    Tell Zenfighter that if he writes more often on the forum, I will give him a computer build based on the Intel 13600, and we can outpace iCountNTrack's computer on these work tasks.

    https://cpu.userbenchmark.com/Compar...50X/4134vs4086

    P.S. Don't get lost, my friend. BJ's story is also being written on your behalf! )))
    "Don't Cast Your Pearls Before Swine" (Jesus)

  13. #26


    Quote Originally Posted by Cacarulo View Post
    Yes, the strategy comes from Table B10. The crucial detail lies in how we treat 16 vs T: I surrender with a two-card 16, but I stand with any other 16, even after splitting.
    Thanks, this helps clarify. (This does make the corresponding entry in Table B10 seem confusing, though; the TD entry for H16 vs. T reads "Rh^1", with Note 1 reading "Stand if 16 is multi-card or the result of a pair-split." From the accompanying prose description of how to read the table, I would interpret this as "surrender if you can, otherwise hit, mod the exceptions in the note.")

    At any rate, now that I understand the strategy being implemented, I can reproduce the corresponding expected return:

    -7347790287920686510933096249615294813564/2215167626902928405093200851144936079126875

    or, when computed in double precision, -0.0033170357848692072 (which is accurate to 13 digits). For reference, this gist is the source code that computes these values.

    Quote Originally Posted by Cacarulo View Post
    Actually, I’m aware of the accumulated rounding errors that occur with double precision. When testing with 13 or more decimals, the sum stops being zero precisely
    due to precision issues.
    This doesn't justify calling those first 12 digits accurate, though. Again, I'm not saying they aren't; they are. But given a calculation of average_EOR, the first floor(-log10(abs(average_EOR)))-1 digits aren't necessarily accurate. (For an extreme example, my old CA that approximates resplits yielded an average EOR that was "zero to within 6 digits"... but the expected return is only accurate to 2-3 digits.)

    Quote Originally Posted by Cacarulo View Post
    But do we really need more than 12 decimals? Or more than 6 for linear estimators?
    I feel like I was very unclear in my initial post here. I'm not suggesting that we all start reporting arbitrary-precision results from our CAs, or start reporting 12 or 17 or however many digits of precision, etc. My intent was simply to point out an additional software testing approach that has proved useful to me, and to at least two other developers, as a means of checking for bugs and/or algorithm flaws. I've had-- and found-- both in my code in the past.

