Re: De-allocating memory - time issue

rforno writes:
> I have some additional questions:
>
> 1) Why does the timing for each main iteration increase linearly as the
> iteration number grows, in the following program?
>
> constant COLUMNS = 1000
> constant ROWS = 100
> constant PLANES = 50
> atom t0, t1
> sequence t
> t0 = time()
> t = repeat(repeat(repeat(1.13218966051, COLUMNS ), ROWS), PLANES)
> printf(1, "initialization time = %f\n", time() - t0)
> for i = 1 to PLANES do
>    t1 = time()
>    for j = 1 to ROWS do
>       for k = 1 to COLUMNS do
>          t[i][j][k] = (i * j * k) * 1.31764903217
>       end for
>    end for
>    printf(1, "i = %d, time = %f\n", {i, time() - t1})
> end for
> printf(1, "total time = %f\n", time() - t0)

I confirmed that over 99% of the time is spent inside
Watcom C's (DOS) malloc() routine. It appears as
if Watcom's malloc() is searching through all of the
millions of allocated blocks (mostly floating-point numbers),
so the time increases linearly as more blocks are allocated.
I know Watcom's free() routine simply marks a block as "free"
by setting a bit. That's all it does. So maybe Watcom does not
maintain a linked list of free blocks like most other mallocs do,
but instead searches *all* the blocks looking for one that's free.
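
To make that hypothesis concrete, here is a toy sketch in C of an
allocator whose free() only sets a bit and whose malloc() has to walk
every block ever created to find a free one. This is my own
illustration, not Watcom's actual source; the names and structure are
invented. The point is just that the scan cost grows with the number
of blocks, which matches the linear slowdown above.

   /* Toy allocator, for illustration only -- not Watcom's code.
      free() just sets a bit; there is no free list, so malloc()
      must walk every block ever created to find a free one. */

   #include <stddef.h>

   struct block {
       size_t size;
       int    is_free;        /* free() merely sets this flag */
       struct block *next;    /* every block, in allocation order */
   };

   static struct block *heap = NULL;   /* head of the list of all blocks */

   void *slow_malloc(size_t n)
   {
       /* first-fit scan over *all* blocks: cost is O(number of blocks) */
       for (struct block *b = heap; b != NULL; b = b->next) {
           if (b->is_free && b->size >= n) {
               b->is_free = 0;
               return b + 1;
           }
       }
       /* no free block found: grow the heap (details omitted) */
       return NULL;
   }

   void slow_free(void *p)
   {
       ((struct block *)p - 1)->is_free = 1;   /* that's all it does */
   }

By contrast, a malloc that keeps freed blocks on a separate free list
only has to look at blocks that are actually free, so allocation time
doesn't depend on how many blocks are currently in use.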

When I run this with Euphoria for DOS as compiled by *DJGPP*,
the program runs really fast, with no linear increase in time.

When I run it with exw, it runs really fast, but there's still some
small linear increase.

> 2) I attempted to improve the performance by the following means, without
> succeeding. Its timing also increases linearly with the iteration count.
> Please explain why it doesn't work:

I assume malloc is the culprit here too.

> 3) Finally, I succeeded in radically improving the performance with the
> following approach (on my PC it runs in 92 seconds, against hours with the
> other methods):

You slowly initialize everything, and then you start
replacing values, one by one. The replacements are fast because
for each item that you allocate, another one of the same size
is freed. I have a "storage cache" that ensures that small blocks
will be allocated and freed very quickly in this case. malloc
probably isn't even called.
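
Roughly what that cache does (a simplified sketch of the idea in C, not
the interpreter's actual source; the names and size limit are invented)
is keep freed small blocks on per-size free lists, so the next request
for the same size just pops a block off the list and malloc() never
gets involved:

   /* Simplified sketch of a per-size "storage cache".
      Freed small blocks go onto a free list indexed by size; a later
      allocation of the same size reuses one of them directly. */

   #include <stdlib.h>

   #define MAX_CACHED 64               /* only cache blocks up to this size */

   static void *cache[MAX_CACHED + 1]; /* one free-list head per size */

   void *cached_alloc(size_t n)
   {
       if (n <= MAX_CACHED && cache[n] != NULL) {
           void *p = cache[n];
           cache[n] = *(void **)p;     /* pop the head of the size-n list */
           return p;                   /* malloc() is never called */
       }
       /* cache miss: fall back to the real allocator
          (blocks are padded so a pointer always fits inside them) */
       return malloc(n < sizeof(void *) ? sizeof(void *) : n);
   }

   void cached_free(void *p, size_t n)
   {
       if (n <= MAX_CACHED) {
           *(void **)p = cache[n];     /* push onto the size-n free list */
           cache[n] = p;
       } else {
           free(p);
       }
   }

In the fast version of the program, every new floating-point value that
gets stored replaces one of the same size, so each allocation is served
straight from the cache and the cost stays the same no matter how many
blocks already exist.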

Every implementation of malloc is different.
Everyone approaches the problem thinking he
can do it better than everyone else.
I have no intention of rewriting Watcom's malloc,
even though the source code is out there somewhere.

> 4) But that is not all! If you substitute constant COLUMNS = 999
> in place of constant COLUMNS = 1000, instead of decreasing,
> the execution time grows enormously!

I couldn't duplicate this one, so I can't explain it.

Regards,
   Rob Craig
   Rapid Deployment Software
   http://www.RapidEuphoria.com
