1. Benchmarks revisited

After fixing the memory leak, I've re-run the binary trees benchmark. I'm on the same machine, but a different operating system. I ran with a parameter of 16, as before, though the official shootout has moved to 20 (they have more powerful equipment than I do).

Language   Version   Time (s)   Alioth Scale   My Scale
g++        4.3.3         0.73        1              1
gcc        4.3.3         2.98        3.2            4.08
euc        r2089         7.11        --             9.74
exu        3.1.1        20.15        --            27
eui        r2089        23.31        --            31
ruby       1.9.0        34.17       43             46
python     2.6.2        50.99      123             69
perl       5.10.0       99.22      236            135
ruby       1.8.7       105.77      187            144


I reran the C, C++, and translated euphoria versions with a parameter of 20:

Language   Version   Time (s)   Alioth Scale   My Scale
g++        4.3.3        16.65        1             1
gcc        4.3.3        58.21        3.2           3.5
euc        r2089       150.36        --            9.03


The gcc estimate got closer to theirs, which is interesting to see. The C++ code used boost::object_pool, which meant that it wasn't going to the OS for memory all the time. exu 3.1.1 does something similar, which I suspect accounts for the difference between 3.1.1 and the current 4.0 build.

Either way, this shows that in this test, at least, ruby is getting closer to euphoria, though perl is pretty far down.

Matt

PS: The last benchmarks, for reference:

Language     Version          Time (s)   Scale
Java         6                    --        1
C            OpenWatcom 1.8        7        4.5
euphoria     euc r2079            11        6.51
D                                 --        8
euphoria     3.1.1                18       10.65
python       psyco                --       14
euphoria     eui r2079            31       18
ruby         1.9                  --       19
JavaScript   SpiderMonkey         --       19
python       IronPython           --       25
python                            --       32
ruby                              --       60
perl         5.8.8               140       83

2. Re: Benchmarks revisited

Could you then please update your website? Python is clearly not slower by a factor of 30. Oh, and please post your EU code and Python code so that I can compare.


3. Re: Benchmarks revisited

Critic said...

Oh, and please post your EU code and Python code so that I can compare.

I got the python code from the shootout. The euphoria code (as I said before) came from Jason Gade's shootout submission.

Matt


4. Re: Benchmarks revisited

Critic said...

Could you then please update your website? Python is clearly not slower by a factor of 30. Oh, and please post your EU code and Python code so that I can compare.

Critic,

If you don't have the Eu code or the Python code, how do you know it's not slower by a factor of 30?! Silly you.

Jeremy


5. Re: Benchmarks revisited

jeremy said...
Critic said...

Could you then please update your website? Python is clearly not slower by a factor of 30. Oh, and please post your EU code and Python code so that I can compare.

Critic,

If you don't have the Eu code or the Python code, how do you know it's not slower by a factor of 30?! Silly you.

Jeremy

Well, if one picked numbers out of this benchmark very optimistically, euc's best number is 9.03 while Python's worst number is 69.

69/9.03 ≈ 7.6

Even if we round up, Python is only slower than Eu by a factor of 8. (And this is comparing interpreted Python with translated Eu.)


6. Re: Benchmarks revisited

jimcbrown said...

Well, if one picked numbers out of this benchmark very optimistically, euc's best number is 9.03 while Python's worst number is 69.

69/9.03 ≈ 7.6

Even if we round up, Python is only slower than Eu by a factor of 8. (And this is comparing interpreted Python with translated Eu.)

Oh... Guess I should have read better :) He's comparing the manual to our bench results, not our bench results to what he thinks. Oops. Sorry Critic, I was wrong to accuse you here.

Jeremy


7. Re: Benchmarks revisited

However, I don't think we should update the manual until we are ready to release 4.0; right now very little attention has been given to optimization, so things are not well optimized yet. These numbers will probably be in quite a state of flux during the beta stages, when we turn focus from new features to bug fixing, testing, optimization, etc.

Jeremy


8. Re: Benchmarks revisited

Matt, what version of Python did you use? As I understand it, the new 3.0 is a bit slower than older versions. We should compare our latest with their latest (even if their latest were slower). BTW... is 3.0 slower? I know it was during the dev stages, but I'm sure they turned their focus to fixing some speed issues during their betas as well.

Jeremy


9. Re: Benchmarks revisited

jeremy said...

Matt, what version of Python did you use? As I understand, the new 3.0 is a bit slower than older versions. We should compare our latest with their latest (even if their latest was faster). BTW... is 3.0 slower? I know it was during the dev stages, but I'm sure they turned focus to fixing some speed issues during their betas also.

I used python 2.6.2 (see the second column in the table) as it's what was on the system to begin with. I installed ruby, but didn't bother getting the latest python.

Actually, given the benchmarks I reported, python 3 was faster than python 2. I guess my table was somewhat incorrect, since I ran python 2.6.2 but marked it down as 3. Whatever.

This is largely a test of allocation and deallocation. As sequences were assigned as elements of other sequences, it also requires a fair amount of reference counting, which was more or less the point of doing this particular benchmark. So, in this test, euphoria certainly wasn't 30 times faster than python, but a lot of time was presumably spent inside malloc() and free() -- I didn't profile, but it's a pretty reasonable assumption.

I don't know enough about how python does garbage collection to make any comments about that.

What I thought was interesting was that the C++ code beat the C code by a lot. I ran on a dual-core machine, and the C code used multiple threads. Using time to capture the running times, the user time for the C code was about twice the real time. The C++ code used a single thread, but because it cached the objects, it didn't have to spend so much time allocating and deallocating, which seems kinda like cheating:

Alioth Shootout said...

Programs that use custom memory pool or free list implementations will be listed as interesting alternative implementations.

Which is definitely what boost::object_pool looks like to me, though it's not listed at the bottom with the others.

Matt

