Re: Optimizing basic operations

Good point. I'll focus on index sorts. Each group of integers currently has about 25 elements, possibly more if I can make the sort efficient. Think of trillions of these data structures coming in sequentially; from that blitz of data, I'm only interested in the top 100 scores...

One option is to keep a top-100 list and re-sort it for every new eval that comes in. That's probably not efficient. The other extreme is to wait until all the eval scores are in, but that's not efficient either, and there isn't enough memory. Looking at the performance data I posted earlier, there has to be some kind of "sweet spot" between 1 and 1 trillion. Let's call that number "n".

So, I'll wait until my "bucket" has n elements, append the bucket to my current top-100 list, create a list holding only each eval and its index, sort that, and then pull out the new top 100 by index. The evals and data that don't make the top 100 are rejected. In fact, I might first test whether a new eval is at least greater than the lowest eval on my top-100 list before putting it into the bucket at all; that's probably cheaper than sorting it later. (Now you see why I was asking about optimization at all levels.)
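Here's a minimal sketch of that scheme (in Python, just for illustration; the record layout, the names like merge_batch, and the bucket size are placeholders I'm assuming, and n would need tuning against the performance numbers posted earlier):

TOP_K = 100          # size of the running top list
BUCKET_SIZE = 4096   # "n" -- the batch size to tune for the sweet spot

def merge_batch(top, bucket, k=TOP_K):
    # Index sort: sort only (eval, index) pairs so the sort moves
    # small keys, then pull the full records for the new top k.
    combined = top + bucket
    keyed = sorted(((rec[0], i) for i, rec in enumerate(combined)),
                   reverse=True)
    return [combined[i] for _, i in keyed[:k]]

def top_scores(eval_stream):
    # Each record is assumed to be a tuple whose first element is the
    # eval score; the rest is the associated data structure.
    top, bucket = [], []
    cutoff = None  # lowest eval currently on the top-100 list
    for rec in eval_stream:
        # Cheap pre-test: reject anything that can't beat the lowest
        # eval already on the top list.
        if cutoff is not None and rec[0] <= cutoff:
            continue
        bucket.append(rec)
        if len(bucket) >= BUCKET_SIZE:
            top = merge_batch(top, bucket)
            bucket = []
            if len(top) == TOP_K:
                cutoff = top[-1][0]
    if bucket:  # flush the final, partially filled bucket
        top = merge_batch(top, bucket)
    return top

A min-heap would avoid re-sorting the combined list on each merge, but the batch index-sort above matches the scheme as described, and the cutoff pre-test is the cheap rejection mentioned at the end.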
