heh. actually building the table is really fast.
well, it was pretty easy for me to test. I'll call building a table method A and picking as you go method B.
ITER is how many times to run the test for average results
LIST is how long the pick list is (assume all items in list are selectable)
release build, rand function completely inline (multiply with carry), times given in msec (so smaller is better)
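for anyone wondering what the two methods actually look like, here's a rough sketch. this is my guess at the shape of them, not the actual benchmark code: the MWC constants are the usual Marsaglia ones, and I'm assuming B is a one-pass reservoir-style pick, since all items are equally selectable.

```c
#include <stdlib.h>

/* multiply-with-carry PRNG, fully inline-able (classic Marsaglia constants;
   the original benchmark's exact generator is unknown) */
static unsigned mwc_z = 362436069, mwc_w = 521288629;
static unsigned mwc_rand(void)
{
    mwc_z = 36969 * (mwc_z & 65535) + (mwc_z >> 16);
    mwc_w = 18000 * (mwc_w & 65535) + (mwc_w >> 16);
    return (mwc_z << 16) + mwc_w;
}

/* method A: with a lookup table already built, each pick is a single
   random index into it */
static int pick_from_table(const int *table, int table_len)
{
    return table[mwc_rand() % (unsigned)table_len];
}

/* method B: walk the list once, keeping item i with probability 1/(i+1).
   this yields a uniform pick with no extra memory, but costs O(len) and
   one rand() call per item */
static int pick_as_you_go(const int *list, int len)
{
    int chosen = list[0];
    for (int i = 1; i < len; i++)
        if (mwc_rand() % (unsigned)(i + 1) == 0)
            chosen = list[i];
    return chosen;
}
```

this also makes the timing shape obvious: B always touches every element per pick, while A pays the list-walk once when the table is built and then picks in constant time.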
for (ITER = 500,000, LIST = 200)
A = 451
B = 2644
for (ITER = 200, LIST = 500,000)
A = 2244
B = 2483
for (ITER = 20, LIST = 5,000,000)
A = 4967
B = 5498
so interestingly enough, B catches up to A as the list size increases.
{edit} hmm, come to think of it, i'm not entirely sure why method B takes longer in the third run than in the previous two. you'd think B is doing exactly 100M operations in all three cases (ITER x LIST = 100M each time), but presumably once the list stops fitting in cache, just walking it imposes some extra overhead.
so i would say that with more items, method B is definitely better: the memory requirements for A become prohibitive while the run times converge. B is more elegant as well, so I like it for that :) but in terms of actual implementation, A is probably closer to what D2 is doing.
actually this sort of computation isn't expensive anyway. show me a faster way to do the pathfinding & collision detection and you'll really have my attention ;)