> I'm not sure. I allocate and clear the same
> size array with both methods prior to any
> calculations. When I do calculations for all
> 10 at once though the array is filled with
> many more non-zero numbers. I was wondering
> myself if this is what the problem is. Can
> that explain it?

It won't matter what numbers fill the array; zeros aren't any faster to work with than non-zeros. However, if your array is large and one method only accesses a portion of it while the other accesses all of it, that could explain the difference. Perhaps your array is too big to fit in its entirety into RAM, so some of it gets swapped out to the hard drive. When you only access a portion of it, perhaps that portion fits entirely in RAM, so it's still fast. When you have to access all of it, swapping to virtual memory on your hard drive may take place, causing a big slowdown. As I believe Norm pointed out, you can get a similar effect when most of your data fits in the cache (high-speed memory attached to your CPU) versus when it cannot.
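
To make that concrete, here is a rough sketch (not from your code, and assuming a POSIX system for clock_gettime) that does the same total amount of work against working sets of different sizes. The sizes are made-up; the point is just that once the footprint outgrows the cache, and eventually RAM, the same number of accesses takes noticeably longer:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk the whole array 'passes' times and return the sum so the
 * compiler can't optimize the loop away entirely. */
static double touch_all(const double *a, size_t n, int passes)
{
    double sum = 0.0;
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++)
            sum += a[i];
    return sum;
}

int main(void)
{
    /* Footprints of roughly 128 KB, 8 MB, and 512 MB of doubles.
     * These are made-up sizes; adjust them to bracket your own array. */
    size_t sizes[] = { (size_t)1 << 14, (size_t)1 << 20, (size_t)1 << 26 };

    for (int k = 0; k < 3; k++) {
        size_t n = sizes[k];
        int passes = (int)(((size_t)1 << 26) / n);  /* keep total accesses constant */

        double *a = calloc(n, sizeof *a);
        if (!a) { perror("calloc"); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        volatile double s = touch_all(a, n, passes);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        (void)s;

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("footprint %9zu KB: %.3f s\n", n * sizeof *a / 1024, secs);
        free(a);
    }
    return 0;
}
```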

How big is your array (structure_size * number_of_elements)?
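
You can print that footprint directly with sizeof; the struct and count below are just hypothetical stand-ins for whatever your elements actually are:

```c
#include <stdio.h>

/* Hypothetical element type; substitute your real structure here. */
struct element {
    double values[10];
    int    id;
};

#define NUM_ELEMENTS 1000000   /* made-up count; use your actual one */

int main(void)
{
    size_t bytes = sizeof(struct element) * (size_t)NUM_ELEMENTS;
    printf("array footprint: %zu bytes (%.1f MB)\n",
           bytes, bytes / (1024.0 * 1024.0));
    return 0;
}
```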

If you think this may be your problem, you may be able to verify it by drastically reducing the size of your array and testing your code on that smaller array (if that's possible). If you still see the same proportional slowdown even when everything easily fits in memory, then memory isn't the culprit and your problem is probably something else you overlooked in your code.
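
A minimal sketch of that test is below (again assuming POSIX clock_gettime); the two calc_* functions are only placeholder stand-ins for your real one-at-a-time and all-at-once methods:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Placeholder stand-ins for your two methods; the real test would
 * call your actual calculation routines here instead. */
static void calc_one_at_a_time(double *a, size_t n)
{
    for (size_t i = 0; i < n / 10; i++)   /* touches only part of the array */
        a[i] += 1.0;
}

static void calc_all_at_once(double *a, size_t n)
{
    for (size_t i = 0; i < n; i++)        /* touches the whole array */
        a[i] += 1.0;
}

/* Time a single call to fn on array a of length n. */
static double time_call(void (*fn)(double *, size_t), double *a, size_t n)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    fn(a, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    size_t n = 100000;                    /* deliberately small: easily fits in RAM */
    double *a = calloc(n, sizeof *a);
    if (!a) { perror("calloc"); return 1; }

    printf("one at a time: %.6f s\n", time_call(calc_one_at_a_time, a, n));
    printf("all at once:   %.6f s\n", time_call(calc_all_at_once, a, n));

    free(a);
    return 0;
}
```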