
Thread: MGP: Comp speed that doesn't make sense...

  1. #14
    MGP
    Guest

    MGP: The numbers and the code

    > Not sure I understand what you are doing.
    > But, if you have a loop that fits in the
    > instruction cache it will run very quickly.

    Ok, below is what the highly abbreviated code looks like:

    for player hands
        for dealer hands
            for upcard = minupcard to maxupcard
                Calculate StandEV
            next upcard
        next dealer hands
    next player hands

    The time it takes for each upcard run individually is listed below, along with the sum of those times; the first column is the time it takes if you just run upcards 1-10 all at once. When running an upcard by itself, e.g. 3, then minupcard = maxupcard = 3. When running all at once, minupcard = 1 and maxupcard = 10.

    The times are really slow, I know, but remember it's VBA and I'm a terrible programmer. I also truncated the decimals for this post, so the sum may be off by a few seconds:

    All at once:             396 s
    Sum of individual runs:  123 s

    Upcard        1     2     3     4     5     6     7     8     9     10
    Stand time    18 s  25 s  21 s  16 s  12 s  9 s   7 s   5 s   4 s   2 s

    Hopefully that explains it better. But you can see the huge difference in time - from 2 minutes to almost 7 minutes.

    Very very weird...

    Thanks,
    MGP

  2. #15
    Keith Collins
    Guest

    Keith Collins: Re: The numbers and the code

    > Ok, below is what the highly abbreviated
    > code looks like:
    > for player hands
    > for dealer hands
    > for upcard=minupcard to maxupcard
    > Calculate StandEV
    > next upcard
    > next dealer hands
    > next player hands

    If your code works something like

    for x = 1 to someNum
        PlayerHand(x) = ....
        for y = 1 to anotherNum
            DealerHand(y) = ....
            for Upcard = Min to Max
                Calc StandEV
            next UpCard
        next y
    next x

    where you are just defining the player and dealer hands, you could be unnecessarily redefining them for each iteration of the UpCard loop as well. Again, I'm just guessing, but with as much difference as you are describing, something like this is happening. The solution is to reorganize the for loops so that no such redefinition occurs.
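
    For instance (a minimal sketch in the spirit of the pseudocode above; BuildDealerHand is a made-up name, not one of MGP's actual routines), the difference is whether the definition sits inside or outside the innermost loop:

    ' Wasteful: the hand is rebuilt on every iteration of the UpCard loop
    for UpCard = Min to Max
        DealerHand(y) = BuildDealerHand(y)
        Calc StandEV
    next UpCard

    ' Better: build the hand once, then reuse it for every upcard
    DealerHand(y) = BuildDealerHand(y)
    for UpCard = Min to Max
        Calc StandEV
    next UpCard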

    Hope this helps,
    Keith Collins

  3. #16
    Saboteur
    Guest

    Saboteur: File Opens and Closes

    Do your Open File and Close File operations occur while the clock is running? In other words, does the open and close time count as part of the Calc StandEV process?

    Are you opening and closing an Excel worksheet during the Calc StandEV process? If so, is there a default number of cells (both horizontally and vertically) that are "created" and "available" when a new worksheet is opened? Does your "A thru 10" mega-run have to constantly go out and add more cells to your worksheet so the combined data of all ten up-cards will fit?

    I've never written macros before, so all of the above is just a guess. But if your mega-run of all ten values creates ~10 times as much output, I can't help but wonder if the extra time comes from making room for it.

  4. #17
    MGP
    Guest

    MGP: Re: File Opens and Closes

    Saboteur:
    > Do your Open File and Close File operations
    > occur while the clock is running? In other
    > words, does Open- and Close- time count as
    > part of the Calc StandEV process?

    > Are you opening and closing an Excel
    > worksheet during the Calc StandEV process?
    > If that's the case, is there a default
    > amount of cells (both horizontally and
    > vertically) that are "created" and
    > "available" when a new worksheet
    > is opened? Does your "A thru 10"
    > mega-run have to constantly go out and add
    > more cells to your worksheet so the combined
    > data of all ten up-cards will fit onto the
    > worksheet?

    > I've never written macros before, so all of
    > the above is just a guess. But if your
    > mega-run of all ten values creates ~10 times
    > as much output, I can't help but wonder if
    > the extra time comes from making room for
    > it.

    All the data is stored/created in memory. All the inputs happen at the very beginning of the program and all outputs at the very end. Neither is included in the calculations being timed. The worksheet is of a fixed size and stays open the whole time.

    Keith:
    > where you are just defining the player
    > and dealer hands, you would be unnecessarily
    > redefining them for each iteration of the
    > UpCard for loop as well. Again, I'm just
    > guessing, but with as much difference as you
    > are describing something like this is
    > happening. The solution is to reorganize
    > the for loops so nothing such as
    > redefinition occurs.

    Nope - the hands are defined and stored in memory before you reach this point. The upcard loop is the innermost one, so even if they were being defined, they would not be redefined more than once. The only time the hands are accessed is during the upcard loop, and the outer loops are just indices that describe which hands to use.

    Thanks for the ideas, though. This really is a weird problem, isn't it...?

    Sincerely,
    MGP

  5. #18
    paranoid android
    Guest

    paranoid android: Re: File Opens and Closes

    Does the slow method use more memory than the faster method? If you're using a lot of memory, you may be running out of physical RAM, forcing the OS to start swapping. From what I gather, it doesn't sound like this is your problem, but I may not have understood you correctly, so I thought I'd throw it out there.

  6. #19
    MGP
    Guest

    MGP: Re: File Opens and Closes

    > Does the slow method use more memory than
    > the faster method? If you're using a lot of
    > memory, you may be running out of physical
    > RAM forcing the OS to start swapping. From
    > what I gather, it doesn't sound like this is
    > your problem, but I don't know if I
    > understood you correctly, so I thought I'd
    > throw that out.

    I'm not sure. I allocate and clear the same size array with both methods prior to any calculations. When I do the calculations for all 10 at once, though, the array is filled with many more non-zero numbers. I was wondering myself if this is the problem. Can that explain it?

    Thanks,
    MGP

  7. #20
    paranoid android
    Guest

    paranoid android: Re: File Opens and Closes

    > I'm not sure. I allocate and clear the same
    > size array with both methods prior to any
    > calculations. When I do calculations for all
    > 10 at once though the array is filled with
    > many more non-zero numbers. I was wondering
    > myself if this is what the problem is. Can
    > that explain it?

    It won't matter what numbers fill the array; zeros aren't faster than non-zeros.

    However, if your array is large and you are only accessing a portion of it with one method versus all of it with the other, that could explain it. Perhaps the array is too big to fit in its entirety into RAM, so some of it gets swapped to the hard drive. When you're only accessing a portion of it, that portion may fit entirely into RAM, so it's still fast. When you have to access all of it, swapping to virtual memory on your hard drive may take place, causing a big slowdown.

    As I believe Norm pointed out, you can get a similar effect when most of your data fits in the cache (the high-speed memory attached to your CPU) versus when it does not.

    How big is your array (structure_size * number_of_elements)?

    If you think this may be your problem, you may be able to verify it by drastically reducing the size of your array and testing your code on a subset of that (if that's possible). If you see the same proportional slowdown, then your problem is probably something else you overlooked in your code.
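
    For instance, here's a quick way to test the slice-versus-whole effect (a minimal, hypothetical VBA sketch, not your actual code; a plain Double array stands in for your 64k structures):

    Sub TimeSliceVsWhole()
        Const N As Long = 3000000          ' about 24 MB of Doubles
        Dim a() As Double
        ReDim a(1 To N)

        Dim i As Long, x As Double, t As Single

        t = Timer
        For i = 1 To N \ 10                ' touch one tenth of the array
            x = a(i)
        Next i
        Debug.Print "One tenth:"; Timer - t; "s"

        t = Timer
        For i = 1 To N                     ' touch all of it
            x = a(i)
        Next i
        Debug.Print "All of it:"; Timer - t; "s"
    End Sub

    If the second loop takes much more than ten times as long as the first, something beyond the raw element count (swapping or cache misses) is costing you.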

  8. #21
    Saboteur
    Guest

    Saboteur: I like that thought

    Yeah, it's definitely something that causes disproportionately more work when the process is large than when it is small (duh).

    What values do you enter as minupcard and maxupcard? You don't use an "A" or a "T", do you?

  9. #22
    Keith Collins
    Guest

    Keith Collins: Re: File Opens and Closes

    > I'm not sure. I allocate and clear the same
    > size array with both methods prior to any
    > calculations. When I do calculations for all
    > 10 at once though the array is filled with
    > many more non-zero numbers. I was wondering
    > myself if this is what the problem is. Can
    > that explain it?

    If you're convinced that both methods work the same except that one uses an aggregate, then perhaps what is happening is what Paranoid Android suggests. I don't think that filling the data structure with more non-zero numbers would give the effect you have described. However, if the data structure is extremely large, it might force the use of virtual memory, which is much slower than RAM. If that turns out to be the case, you might consider breaking the data up into multiple smaller structures, taking care to ensure that only the structure you are currently using occupies RAM.
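
    Something along these lines, perhaps (a rough sketch only; the names and sizes are made up, and I'm assuming the per-upcard data can actually be separated):

    Sub ProcessByUpcard()
        Dim upcard As Long
        Dim block() As Double              ' stand-in for one upcard's data

        For upcard = 1 To 10
            ReDim block(1 To 300000)       ' allocate only this upcard's slice
            ' ... fill block and calculate StandEV for this upcard ...
            Erase block                    ' release it before the next upcard
        Next upcard
    End Sub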

    Good luck,
    Keith Collins

  10. #23
    Keith Collins
    Guest

    Keith Collins: Another thought

    > I'm not sure. I allocate and clear the same
    > size array with both methods prior to any
    > calculations. When I do calculations for all
    > 10 at once though the array is filled with
    > many more non-zero numbers. I was wondering
    > myself if this is what the problem is. Can
    > that explain it?

    The fact that there are more non-zero numbers in the array wouldn't have an effect, but accessing more elements in the aggregate method would, particularly if it forced the use of virtual memory. VBA uses safe arrays, so any unaccessed elements are simply left at 0. Hope I'm not confusing things.

    Keith Collins

  11. #24
    MGP
    Guest

    MGP: A scientific study

    > It won't matter what numbers fill the array.
    > Zeros aren't faster than non-zeros. However,
    > if your array is large, and you are only
    > accessing a portion of it with the one
    > method versus all of it for the other, this
    > could possibly explain it. Perhaps your
    > array is too big to fit in its entirety into
    > RAM, therefore, some of it gets swapped to
    > the hard drive. When you're only accessing a
    > portion of it, perhaps all of that portion
    > can fit into RAM, so it's still fast. When
    > you have to access all of it, swapping with
    > virtual memory on your hard drive may take
    > place, causing a big slowdown. As I believe
    > Norm pointed out, you could get a similar
    > effect when most of your data can fit in the
    > cache (which is high speed memory attached
    > to your CPU) versus when it cannot.

    Ok, what you and Norm are saying makes sense if the program can somehow figure out that the entire array doesn't need to be accessed - right?

    > How big is your array (structure_size *
    > number_of_elements)?

    The structure is about 64 KB. The array is 3085 elements of 64 KB each, so the net size is about 197,440 KB (roughly 193 MB).

    > If you think this may be your problem, you
    > may be able to verify it by drastically
    > reducing the size of your array and testing
    > your code on a subset of that (if that's
    > possible). If you see the same proportional
    > slowdown, then your problem is probably
    > something else you overlooked in your code.

    I was about to do that and then realized it wouldn't answer the question since it only really slows down when the large amount of data is used. It's already faster with subsets even without rewriting anything.

    Since I don't know of any program that can tell me whether virtual or physical memory is being addressed, I did a scientific study. I ran the program once with only an upcard of 10 and listened very carefully - I didn't hear the hard disk running. Then I ran it for upcards 1 to 10 and unfortunately, I still didn't hear the hard disk running any more than its normal background activity.

    Then I moved the upcard loop to be the outermost one. That cut the time by about 25 seconds, but not all the way down. It does suggest to me, though, that what I think Norm was talking about is a partial factor.

    I also went back and looked at an older version of the program, and it didn't have this problem. The main difference between the two versions is that I added a lot more large global variables for data regarding counting/insurance calculations. (These are always declared, but not used during the runs we were timing.) So regardless of my experiment, I think you guys must be right: somehow the program uses the information more efficiently when doing the upcards individually, and the amount of data being accessed spills over into a slower level of memory when all the upcards are calculated together.

    Since I'm assuming there's really no way to control this, I guess I'm stuck with the slower calcs as long as I want to keep all the functions of my CA together. If everything above makes sense, then at least I have a much better understanding of why it got so much slower. If it doesn't, then I'm still lost, lol.

    Thank you everybody very much for your help.

    Sincerely,
    MGP

  12. #25
    Norm Wattenberger
    Guest

    Norm Wattenberger: Re: A scientific study

    To find out if virtual memory is in use, press CTRL-ALT-DEL and see whether the CPU time is at 100% on the Performance tab (if the CPU sits well below 100% while the run is slow, the time is probably going to the disk instead).

    With this large an array, it is quite likely that different methods of working through the data will make different uses of the cache. That can dramatically affect CPU time. If you can, you might try moving the inner FOR/NEXT loop to the outside.
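
    To illustrate the kind of cache effect I mean (a hypothetical sketch, not MGP's code; VBA stores multi-dimensional arrays column-major, so the first index varies fastest in memory):

    Sub CompareTraversalOrder()
        Const R As Long = 2000, C As Long = 2000
        Dim a() As Double                  ' 2000 x 2000 Doubles, about 32 MB
        ReDim a(1 To R, 1 To C)
        Dim i As Long, j As Long, s As Double, t As Single

        t = Timer
        For j = 1 To C
            For i = 1 To R                 ' first index innermost: walks memory sequentially
                s = s + a(i, j)
            Next i
        Next j
        Debug.Print "Sequential:"; Timer - t; "s"

        t = Timer
        For i = 1 To R
            For j = 1 To C                 ' first index outermost: strides through memory
                s = s + a(i, j)
            Next j
        Next i
        Debug.Print "Strided:"; Timer - t; "s"
    End Sub

    If cache is the culprit, the first loop order should come out noticeably faster even though both do exactly the same arithmetic.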

  13. #26
    Saboteur
    Guest

    Saboteur: Just curious...

    When running the all-inclusive version, have you tried displaying the "incremental" values for ProcessorTime as the upcard changes? If the extra time is spread out through all upcards proportionally, you probably won't learn anything, but if the extra time comes from just one or two upcards, it could help you zero in on the problem.
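
    Something like this inside the upcard loop would do it (a sketch only; I'm using VBA's Timer here since I don't know what your ProcessorTime call looks like):

    Dim t As Single
    For upcard = minupcard To maxupcard
        t = Timer
        ' ... Calculate StandEV for this upcard ...
        Debug.Print "Upcard"; upcard; ":"; Timer - t; "s"
    Next upcard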

    Admittedly, it's a long shot! I just can't believe that memory allocation can be that expensive.
