
Thread: MGP: Comp speed that doesn't make sense...

  1. #1
    MGP
    Guest

    MGP: Comp speed that doesn't make sense...

    Hi,

    I added some features to my CA a while back and the program slowed down a lot. I thought it was because of the way I changed the probabilities, so I finally got around to changing them back to something very similar.

    The thing that I really don't understand is the following. The program uses fixed-size arrays that are always declared. The arrays are only erased/initialized for the upcards requested. When I calculate the strategies one upcard at a time, the net calculation time is only 43% of the time it takes if I just run all the upcards together. In the original program it was 82%, so overall the program is a lot slower, although both versions take the same time if you do the upcards individually. I could understand it if I declared the arrays based on the number of upcards - but since they're always the same, why is it so much slower? Does anyone have any idea? And if so, is there a way to avoid this problem and make the straight-through run just as fast?

    Thanks in advance,
    MGP

  2. #2
    Norm Wattenberger
    Guest

    Norm Wattenberger: Re: Comp speed that doesn't make sense...

    Might have to do with the cache. If you pull RAM into chip cache in sequential order, values for the current and next operation may be pulled in with one RAM access. Could also be that one method uses pipelines more effectively. Program optimization is quite odd these days.

  3. #3
    MGP
    Guest

    MGP: Re: Comp speed that doesn't make sense...

    > Might have to do with the cache. If you pull
    > RAM into chip cache in sequential order,
    > values for the current and next operation
    > may be pulled in with one RAM access. Could
    > also be that one method uses pipelines more
    > effectively. Program optimization is quite
    > odd these days.

    Thanks Norm, I'm not sure I understand though; well, actually I don't really understand what that means... It's exactly the same program, but the minimum and maximum upcards are just a parameter that you enter to limit the calcs if you want to. There's simply a For-Next loop that cycles through the upcards. It's a lot faster when the min/max are one card at a time vs. cycling through all 10 at once. So would the program optimize it and access the RAM differently based on that fact?

    But mentioning the cache leads to a possibility - with all the upcards, many more values in the arrays are non-zero. Could that explain it? (Or is that what you meant above, hehe.)

    Thanks,
    MGP

  4. #4
    Norm Wattenberger
    Guest

    Norm Wattenberger: Re: Comp speed that doesn't make sense...

    Perhaps. Memory is pulled into the cache in sequential chunks larger than you actually use in one operation. This works well if you use memory locations sequentially, since the succeeding operations find the needed data already in the cache.
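
    To picture it, here's a minimal made-up VB fragment (not from your program - the array name and sizes are invented). Both loops do the same arithmetic, but the first walks the array in the order it sits in memory, so one RAM fetch serves several iterations:

        ' VB stores arrays with the LEFTMOST subscript varying
        ' fastest in memory, so keep the first subscript in the
        ' inner loop to stay sequential.
        Dim Vals() As Double
        Dim i As Long, j As Long
        Dim Total As Double

        ReDim Vals(1 To 1000, 1 To 1000)

        ' Cache-friendly: consecutive iterations touch adjacent
        ' memory, so one cache-line fill serves many iterations.
        For j = 1 To 1000
            For i = 1 To 1000
                Total = Total + Vals(i, j)
            Next i
        Next j

        ' Cache-hostile: same arithmetic, but each access jumps
        ' 1000 elements ahead, so nearly every one misses the cache.
        For i = 1 To 1000
            For j = 1 To 1000
                Total = Total + Vals(i, j)
            Next j
        Next i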

    > Thanks Norm, I'm not sure I understand
    > though; well, actually I don't really
    > understand what that means... It's exactly
    > the same program, but the minimum and
    > maximum upcards are just a parameter that
    > you enter to limit the calcs if you want to.
    > There's simply a For-Next loop that cycles
    > through the upcards. It's a lot faster when
    > the min/max are one card at a time vs.
    > cycling through all 10 at once. So would the
    > program optimize it and access the RAM
    > differently based on that fact?

    > But mentioning the cache leads to a
    > possibility - with all the upcards, many
    > more values in the arrays are non-zero.
    > Could that explain it? (Or is that what you
    > meant above, hehe.)

    > Thanks,
    > MGP

  5. #5
    Saboteur
    Guest

    Saboteur: Do you use "Redim Preserve" ?

    I assume you have ten up-card values. Are you saying that the combined total of processor time for the ten runs in which you specify just one up-card adds up to far less than when you execute one run specifying the entire range of ten up-cards? If that's the situation, then the only difference would seem to be the amount of time it takes to erase/initialize the array nine (or possibly ten) times.

    Do you "hard-code" a particular size for your array? You may trim some processor usage by using Redim Preserve each time you increase the size of your array. I don't know how "expensive" that is (but I'll bet Norm knows). I'm talking now about increasing the size of your array while you're "inside" the examination of each up-card, not Redim-ing it when going from one up-card to the next. The benefit would come when you "erase" a smaller array each time you start a new up-card. I have no idea if the benefit will outweigh the cost of all those Redims.
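
    In code the idea would be something like this (a hypothetical fragment - the array name and numbers are invented; note that Redim Preserve can only resize the last dimension of a multi-dimensional array):

        Dim Vals() As Double

        ' Start small at the beginning of each up-card...
        ReDim Vals(1 To 100)

        ' ...then grow as needed inside the up-card pass.
        ' Preserve keeps the existing contents while resizing,
        ' though it may copy the whole array - that copy is the
        ' "cost" I can't estimate.
        ReDim Preserve Vals(1 To 200)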

    Alternatively, you could just create a three-dimensional array at start-up (i.e., Form Load). Your third subscript would be the up-card value. Specifying only one up-card means that the rest of the array wouldn't be used (but so what?). The array would already be sitting there waiting to be used on those occasions when you specify a range of up-cards (including the entire range). There would be no need to erase or re-initialize anything.

    That might be a better method if your array isn't enormous.
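
    As a sketch (the names and sizes here are invented, since I don't know your actual layout):

        ' Declared once at module level:
        Private Vals() As Double

        Private Sub Form_Load()
            ' First two subscripts are whatever your current
            ' array uses; the third picks out the up-card
            ' (1 = ace through 10 = ten). A single-up-card run
            ' just writes into its own "slice", e.g.
            ' Vals(hand, total, UpCard), and ignores the rest.
            ReDim Vals(1 To 100, 1 To 50, 1 To 10)
        End Sub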

  6. #6
    MGP
    Guest

    MGP: Re: Do you use "Redim Preserve" ?

    Thank you for the reply,

    > I assume you have ten up-card values. Are
    > you saying that the combined total of
    > processor time for the ten runs in which you
    > specify just one up-card adds up to far less
    > than when you execute one run specifying the
    > entire range of ten up-cards?

    Yes, exactly.

    > If that's the
    > situation, then the only difference would
    > seem to be the amount of time it takes to
    > erase/initialize the array nine (or
    > possibly ten) times.

    Unfortunately that doesn't explain it - I have a timing variable that times only the calculation of the stand values; it doesn't include initialization time, non-stand calculation time, printing time, etc.

    > Do you "hard-code" a particular
    > size for your array?

    Yes. It's actually an array of 3083 structures that include storage arrays for dealerprobs and split/double/surrender/etc. values for both the original hands and post-split hands. The entire array of structures pushes the 64K maximum for fixed-size structures. All 10 upcards are always included.

    > You may trim some
    > processor usage by using Redim Preserve
    > each time you increase the size of your
    > array. I don't know how "expensive" that
    > is (but I'll bet Norm knows). I'm talking
    > now about increasing the size of your
    > array while you're "inside" the
    > examination of each up-card, not Redim-ing
    > it when going from one up-card to the
    > next. The benefit would come when you
    > "erase" a smaller array each time you
    > start a new up-card. I have no idea if the
    > benefit will outweigh the cost of all
    > those Redims.

    I'm not ready to learn how to use ReDim statements yet. I have an idea to reduce the post-split data size, but it's going to take a bit to do the millions of replace statements...

    > Alternatively, you could just create a
    > three-dimensional array at start-up (i.e.,
    > Form Load). Your third subscript would be
    > the up-card value. Specifying only one
    > up-card means that the rest of the array
    > wouldn't be used (but so what?). The array
    > would already be sitting there waiting to be
    > used on those occasions when you specify a
    > range of up-cards (including the entire
    > range). There would be no need to erase or
    > re-initialize anything.

    > That might be a better method if your array
    > isn't enormous.

    Thanks, I'm not exactly sure what you're suggesting here, but it sounds like it would increase the size of the structure, and I never erase or re-initialize anything anyway.

    It's still not clear what's causing this weird phenomenon if it's not the size of the non-zero data.

    Thanks again,
    MGP

  7. #7
    Saboteur
    Guest

    Saboteur: Okay

    I see now where you'd already mentioned that you use a fixed-size array. I'm puzzled when you say that you don't erase or initialize anything, because you'd also mentioned that in your first post.

    Does your For-Next loop have any code which references a code-module elsewhere? Most likely a CA program would have to, I'd imagine.

    Do you have Option Explicit set in your program? If not, try it. Does it flag any variables as being undeclared?

    If you want, you can post your code for the For-Next process and any associated procedures here so we can take a look at it.

    I'm pretty much a rookie at this sort of thing myself (in case it wasn't obvious), but programming puzzles intrigue me.

  8. #8
    MGP
    Guest

    MGP: Re: Okay

    > I see now where you'd already mentioned that
    > you use a fixed-size array. I'm puzzled when
    > you say that you don't erase or initialize
    > anything, because you'd also mentioned that
    > in your first post.

    Sorry, I do initialize/erase at the beginning of the macro, but not at any point while I'm timing the calculations.

    > Does your For-Next loop have any code which
    > references a code-module elsewhere? Most
    > likely a CA program would have to, I'd
    > imagine.

    Nope, just another procedure within the same module. But that wouldn't explain it either, since whether I do them separately or together it would be called the same number of times.

    > Do you have Option Explicit set in your
    > program?

    Yep.

    > If not, try it. Does it flag any
    > variables as being undeclared?

    Nope.

    > If you want, you can post your code for the
    > For-Next process and any associated
    > procedures here so we can take a look at it.

    It's really very straightforward. It's a nested loop that cycles through the player's hands, the upcard, and the dealer's hands. The only difference between the methods is that the stand routine is either run 10 times (and then the times are added up), or once. The upcard For-Next simply goes from minupcard to maxupcard.
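
    Schematically it's like this (the names are made up for illustration, but the structure is as described):

        Dim UpCard As Long, Hand As Long

        ' The ONLY difference between the two timing methods is
        ' the values of MinUpCard and MaxUpCard: 1 To 10 in one
        ' run, or N To N in ten separate runs.
        For UpCard = MinUpCard To MaxUpCard
            For Hand = 1 To NumPlayerHands
                ' the stand routine loops over the dealer's
                ' hands internally
                Call CalcStandValues(Hand, UpCard)
            Next Hand
        Next UpCard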

    > I'm pretty much a rookie at this sort of
    > thing myself (in case it wasn't obvious),
    > but programming puzzles intrigue me.

    It's puzzling me too.

    Thanks again,
    MGP

  9. #9
    Keith Collins
    Guest

    Keith Collins: Re: Okay

    > It's really very straightforward. It's a
    > nested loop that cycles through the
    > player's hands, the upcard, and the
    > dealer's hands. The only difference
    > between the methods is that the stand
    > routine is either run 10 times (and then
    > the times are added up), or once. The
    > upcard For-Next simply goes from
    > minupcard to maxupcard.

    Maybe, when you use the loop, you are calculating something more than once that only needs to be calculated once. When you don't use the loop, maybe this is sidestepped. Just a possible guess.

    Keith Collins

  10. #10
    MGP
    Guest

    MGP: Re: Okay

    > Maybe, when you use the loop, you are
    > calculating something more than once that
    > only needs to be calculated once. When you
    > don't use the loop, maybe this is
    > sidestepped. Just a possible guess.

    Thanks, but when I do them one at a time, I just go through the loop once for each upcard, with minupcard = maxupcard, vs. having minupcard = 1 and maxupcard = 10. Like I said, nothing else changes at all except for the number of times through the loop, and that's why I don't understand it at all.

    Thanks again,
    MGP

  11. #11
    Norm Wattenberger
    Guest

    Norm Wattenberger: Are you saying that it is slower

    if you run through the loop only once? This makes sense for very small loops.

  12. #12
    MGP
    Guest

    MGP: Re: Are you saying that it is slower

    > if you run through the loop only once?

    Yep. If I loop through upcards 1 to 10, it takes over twice as long for all the calculations as if I run through the loop 10 separate times.

    > This makes sense for very small loops.

    Why does this make sense?

    Thanks again,
    MGP

  13. #13
    Norm Wattenberger
    Guest

    Norm Wattenberger: Re: Are you saying that it is slower

    Not sure I understand what you are doing. But if you have a loop that fits in the instruction cache, it will run very quickly.
