Re: Benchmarks revisited
- Posted by mattlewis (admin) May 27, 2009
- 1181 views
Matt, what version of Python did you use? As I understand it, the new 3.0 is a bit slower than older versions. We should compare our latest with their latest (even if their latest were slower). BTW... is 3.0 slower? I know it was during the development stages, but I'm sure they turned their focus to fixing some speed issues during the betas as well.
I used Python 2.6.2 (see the second column in the table), as it's what was on the system to begin with. I installed Ruby, but didn't bother getting the latest Python.
Actually, judging by the benchmarks I reported, Python 3 was faster than Python 2. My table was somewhat incorrect, though, since I ran Python 2.6.2 but marked it down as 3. Whatever.
This is largely a test of allocation and deallocation. Since sequences were assigned as elements of other sequences, it also requires a fair amount of reference counting, which was more or less the point of this particular benchmark. So Euphoria certainly wasn't 30 times faster than Python in general; a lot of the time was presumably spent inside malloc() and free(). I didn't profile, but it's a pretty reasonable assumption.
I don't know enough about how python does garbage collection to make any comments about that.
What I thought was interesting was that the C++ code beat the C code by a lot. I ran on a dual core machine, and the C code used multiple threads. Using time to capture the running times, the user time for the C code was about twice the real time. The C++ code used a single thread, but because it cached the objects, it didn't have to spend so much time allocating and deallocating, which seems kinda like cheating:
Programs that use custom memory pool or free list implementations will be listed as interesting alternative implementations.
Which is definitely what boost::object_pool looks like to me, though it's not listed at the bottom with the others.
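The object-caching trick behind something like boost::object_pool can be sketched as a simple free list. This is an illustrative Python sketch of the idea only (NodePool is a made-up name, not the Boost API): released nodes go into a cache, and subsequent allocations reuse them instead of going back to the allocator.

```python
class NodePool:
    """Minimal free-list allocator: released nodes are recycled."""

    def __init__(self):
        self._free = []  # cache of released nodes

    def alloc(self, left=None, right=None):
        if self._free:
            node = self._free.pop()  # reuse a cached node
        else:
            node = [None, None]      # fall back to a fresh allocation
        node[0], node[1] = left, right
        return node

    def release(self, node):
        # Clear the slots so no references are kept alive, then cache it.
        node[0] = node[1] = None
        self._free.append(node)

pool = NodePool()
a = pool.alloc()
pool.release(a)
b = pool.alloc()
print(b is a)  # the second alloc() reuses the released node
```

The benchmark's point of contention is exactly this: reuse sidesteps the per-node malloc()/free() cost that the other implementations are paying.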
Matt