Re: Optimizing basic operations


My first goal was to look at the sorting process. I noticed that shell_sort performs best in general, but there are still many situations where it does not. For example, if the list to be sorted is already sorted, bubble_sort comes out on top. If the list is completely sorted except for the last entry (the insertion), then, not surprisingly, insertion_sort is faster. There also appear to be worst-case scenarios that expose each sort's weak points.

If I remember correctly, in the allsorts.ex program the size variables were set to nitems = 500 and length of list = 17 (using the repeat statement). But the results can be deceiving, because just changing these values (and not the type of data) will alter the relative ranking of the sorts; some will do better than others depending on the situation.

I now realize that my original question about optimization had a hidden meaning. There is low-level optimization (for example, n += 1 appears faster than n = n + 1) and high-level, situation-based optimization. The bucket_sort works best in certain restricted situations because it avoids the comparison operator. Is there a performance penalty for using ">", compare(), or equal()? Is there a performance penalty if I use a loop to extract elements of a list instead of slicing into it? (Yes, I think so.) While there may be many ways to skin a cat, some are faster than others. Is there a resource guide that addresses some of these issues?
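To make the "situational ranking" point concrete, here is a rough sketch of the kind of comparison allsorts.ex runs. It is Python rather than Euphoria (function names simply mirror the Euphoria ones), and the input sizes and timings are illustrative assumptions, not the original benchmark:

```python
import random
import timeit

def bubble_sort(s):
    # Early-exit bubble sort: a single O(n) pass on already-sorted input.
    s = list(s)
    n = len(s)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if s[j] > s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]
                swapped = True
        if not swapped:
            break
    return s

def insertion_sort(s):
    # Near O(n) when only the last element is out of place.
    s = list(s)
    for i in range(1, len(s)):
        key = s[i]
        j = i - 1
        while j >= 0 and s[j] > key:
            s[j + 1] = s[j]
            j -= 1
        s[j + 1] = key
    return s

def shell_sort(s):
    # Gap-halving Shell sort: the good all-rounder on random data.
    s = list(s)
    gap = len(s) // 2
    while gap > 0:
        for i in range(gap, len(s)):
            key = s[i]
            j = i
            while j >= gap and s[j - gap] > key:
                s[j] = s[j - gap]
                j -= gap
            s[j] = key
        gap //= 2
    return s

if __name__ == "__main__":
    # Three shapes of input that flip the relative ranking (sizes are arbitrary).
    cases = {
        "random": [random.randrange(1000) for _ in range(500)],
        "already sorted": list(range(500)),
        "sorted + 1 insertion": list(range(499)) + [0],
    }
    for name, data in cases.items():
        for sort in (bubble_sort, insertion_sort, shell_sort):
            t = timeit.timeit(lambda: sort(data), number=20)
            print(f"{name:22s} {sort.__name__:15s} {t:.4f}s")
```

On the "already sorted" case the early-exit bubble_sort does a single pass, and on "sorted + 1 insertion" the insertion_sort shifts almost nothing, which is exactly why each one can beat shell_sort in its favourable situation.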
