Yeah, that would do it. The code has a bit of a hair trigger anyhow. But, very low variance is definitely going to get a warning.
I have done a lot of work with this and will tell you that normalizing your results prior to subjecting them to the litmus test Dr. Reid prescribes is necessary. While I have not checked out Norm's code, I do believe the formula he included in his book has an error for the XmR graphing. This is the reason I am searching for a copy of Dynamic Blackjack to resolve the situation.
I have an excel file that does the math, creates the graphs, and identifies the signals according to Dr. Reid. In my version I normalized the sessions by advantage and by session time (both of which I have for the session under review). The normalization allows for both variable session times and different games. I believe the signals generated are legitimate (after the formula correction), albeit not conclusive. They do not identify a problem but rather an abnormality in play that requires more detailed investigation. It is still much better than other guidance I have found in this matter.
It is still a work in progress to tighten the analysis in search of guidance regarding abnormal play. Will keep you posted.
PM me if you have more detailed interest.
Stealth
Luck is nothing more than probability taken personally!
After I was accused of not putting the cash out, I was going to do the same exact thing by replicating RRs methodology in Excel. Owing to being lazy and having to go through my entire DB to fix it, I abandoned the project. Now underneath your first post on page 1, you'll see a link from shark that explains what I think you want to know. Give it a shot and post what you find.
Shark's posted link is the excerpt from Dr. Reid that was included in Norm's Modern Blackjack. It is that document that I developed my excel file from, and where I found what I believe to be an error.
Let’s add the “Sigma 1” and “Sigma 2” lines to our XmR chart. Sigma 1 on the X-chart is calculated by the formula Sigma 1 = X-avg +/- .89 * mR and Sigma 2 = X-avg +/- 1.78 * mR.
For reasons beyond the scope of this text, we do not add Sigma 1 and Sigma 2 lines to the mR Chart.
Figure 12.5 displays a complete XmR chart. RED TEXT FROM SHARK'S LINK. The formulas should have used the average moving range rather than the individual mR: Sigma 1 = X-avg +/- .89 * mR_avg and Sigma 2 = X-avg +/- 1.78 * mR_avg.
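To make the correction concrete, here is a minimal Python sketch of the X-chart limits as I read them, with the sigma lines built from the average moving range (mR_avg). This is my own illustration, not Dr. Reid's or Norm's code, and the 0.89 / 1.78 constants are taken straight from the quoted text above.

```python
def xmr_limits(values):
    """Return X-avg and the Sigma 1 / Sigma 2 limit pairs for the X-chart.

    The sigma lines use the AVERAGE moving range (mR_avg), which is the
    correction discussed above, not the individual moving ranges.
    """
    n = len(values)
    x_avg = sum(values) / n
    # moving range: absolute difference between each consecutive pair
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_avg = sum(moving_ranges) / len(moving_ranges)
    return {
        "x_avg": x_avg,
        "sigma1": (x_avg - 0.89 * mr_avg, x_avg + 0.89 * mr_avg),
        "sigma2": (x_avg - 1.78 * mr_avg, x_avg + 1.78 * mr_avg),
    }
```

Feeding in normalized session results would give you the same limit lines the excel file draws.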
As Norm pointed out in his book, the sessions need to be normalized to prevent results from being misrepresented. George if you like and if you have data that can be evaluated, PM me and I will get you the excel file and maybe we can bring some more clarity to the subject. We will need to talk about normalization of the results and a couple of other things in the excel file.
I also would like to understand why these calculations provide a measuring stick of sorts for advantage play that generates signals that can be defined. For example, Signal 2: 2 out of 3 consecutive values beyond Sigma 2. So why 2 out of 3? Why not 4 out of 6, or 7 out of 10? What is the basis for these measures? Sorry, but I am just trying to understand.
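For anyone who wants to see what checking that rule looks like, here is a short Python sketch of Signal 2 as I understand it from the excerpt: 2 out of any 3 consecutive values beyond the same Sigma 2 limit. The function name and the same-side interpretation are my own assumptions, not from the book.

```python
def signal_2of3(values, lower2, upper2):
    """Return starting indices of 3-value windows where at least 2 of the
    3 values fall beyond the same Sigma 2 limit (same side)."""
    hits = []
    for i in range(len(values) - 2):
        window = values[i:i + 3]
        above = sum(v > upper2 for v in window)  # beyond the upper line
        below = sum(v < lower2 for v in window)  # beyond the lower line
        if above >= 2 or below >= 2:
            hits.append(i)
    return hits
```

Running this over the normalized session results, with the Sigma 2 pair from the chart, flags the windows the excel file would mark as signals.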
Thanks for listening.
Stealth
Luck is nothing more than probability taken personally!
The following is from Modern Blackjack (page 620 in the second edition, 512 in the first edition):
Experience
I used this procedure some years ago to look at one team’s results. One problem we found is dealing with trials that were unusually long. Generally, players are advised to keep sessions to about an hour. However, there are circumstances where this simply is not possible — for example, flying to another country on the casino’s dime. They will expect you to play long sessions. We handled this by normalizing long sessions to one hour.
A second consideration is the differing game conditions. If a player is playing different rules, penetrations, and betting ramps, the results must be normalized before they are compared. Fortunately, the team had assigned a factor to each set of conditions. The factor was calculated by creating a SCORE for a standard set of conditions, and then calculating the factor for each set of conditions by dividing the SCORE for that set of conditions by the standard SCORE. So, if a game had a factor of 1.2, then the SCORE was 1.2 times the standard set of conditions. Thus, the win rate was 1.2 times the standard game. To normalize results, each of the actual results was divided by the factor for that game. In this manner, the strength of the game was removed from the results making comparison of results reasonable.
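The two normalizations described above boil down to simple division. Here is a hedged Python sketch; the function name and numbers are hypothetical, and the book does not spell out the exact scaling for session length, so dividing by hours is just one simple reading.

```python
def normalize(result, game_factor, hours=1.0):
    """Remove game strength (SCORE factor) and session length from a raw
    session result so sessions under different conditions can be compared.

    game_factor: this game's SCORE divided by the standard game's SCORE,
    e.g. 1.2 means the game is 1.2x as strong as the standard game.
    """
    return result / game_factor / hours

# e.g. a +240-unit result in a 1.2-factor game over 2 hours comes out
# to roughly 100 units per standard hour of the standard game
per_hour = normalize(240, 1.2, 2.0)
```

Dividing (rather than subtracting) matches the description above: a game that wins 1.2 times the standard rate has that 1.2 removed, leaving results on a common scale.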
Edit: Incidentally, this is the reason that CVData results provide a Performance Factor.
Last edited by Norm; 06-25-2013 at 06:53 AM.
"I don't think outside the box; I think of what I can do with the box." - Henri Matisse
Can't open my copy. Not too long ago, I bought a new 64-bit Dell computer. None of Norm's software would work, so Norm kindly sent me the entire CV suite. Norm, I'm in your debt. Meanwhile, I copied all of my files from the old computer to the new one. Today I tried to open RRs program and it's asking me for a passcode. I guess if you copy it to another computer, it'll automatically generate another random passcode, so I guess I'm SOL too.