Re: De-allocating memory - time issue


And in case it might be helpful, I did Ricardo's tests, but with only 100 in
the inner loop to reduce the running time, and the multiplication took 355
times longer than simply putting a value into the sequence; that suggests it
would have taken almost 2.5 hours to complete the multiplication test if he'd
let it go on.
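
(As a rough check, assuming the same 355-to-1 ratio holds for the full-size
test: 355 * 23 seconds is about 8,165 seconds, i.e. roughly 2 1/4 hours,
which is in line with that estimate.)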

Dan Moyer

----- Original Message -----
From: <rforno at tutopia.com>
To: "EUforum" <EUforum at topica.com>
Subject: RE: De-allocating memory - time issue


>
> Andy:
> I tried it as follows, and it lasted 23 seconds:
>
> sequence t
> t = repeat(repeat(repeat(1.13218966051, 3000), 100), 50)
> for i = 1 to 50 do
>     for j = 1 to 100 do
>         for k = 1 to 3000 do
>             t[i][j][k] = 1.31764903217
>         end for
>     end for
> end for
>
> But if I perform a floating point calculation like:
> t[i][j][k] = (i * j * k) * 1.31764903217
> it lasts a long while (I stopped it before getting the timing because I was
> tired of waiting...)
> Maybe Rob can explain the difference. Of course, there should be a
> difference, but not so big, I think.
> Regards.
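
A minimal way to get the missing timing, using Euphoria's built-in time()
(this is just a sketch of the test Ricardo describes, not his actual code;
the loop bounds and constants are taken from his example above):

atom t0
sequence t
t = repeat(repeat(repeat(1.13218966051, 3000), 100), 50)

t0 = time()
for i = 1 to 50 do
    for j = 1 to 100 do
        for k = 1 to 3000 do
            t[i][j][k] = 1.31764903217   -- store the same float constant
        end for
    end for
end for
? time() - t0   -- seconds for the plain assignment case

t0 = time()
for i = 1 to 50 do
    for j = 1 to 100 do
        for k = 1 to 3000 do
            t[i][j][k] = (i * j * k) * 1.31764903217  -- fresh float each pass
        end for
    end for
end for
? time() - t0   -- seconds for the floating point calculation case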
>
> ----- Original Message -----
> From: Andy Serpa <renegade at earthling.net>
> To: EUforum <EUforum at topica.com>
> Sent: Saturday, September 21, 2002 10:03 AM
> Subject: RE: De-allocating memory - time issue
>
>
> > Try it with floating point values.
> >
> >
> > rforno at tutopia.com wrote:
> > > Andy:
> > > I tried your example using the following program. For me, it lasted 21
> > > seconds. I own an AMD 500 with 256 MB of RAM.
> > > So, I suspect your timings are a consequence of using virtual RAM. Another
> > > cause of slowness would be changing the size of the sequence's elements,
> > > say initializing them with 0 and afterwards replacing some data with
> > > non-Euphoria-integer data.
> > > Regards.
> > >
> > > sequence t
> > > t = repeat(repeat(repeat(0, 3000), 100), 50)
> > > for i = 1 to 50 do
> > >     for j = 1 to 100 do
> > >         for k = 1 to 3000 do
> > >             t[i][j][k] = i * j * k
> > >         end for
> > >     end for
> > > end for
> > >
> > > ----- Original Message -----
> > > From: Andy Serpa <renegade at earthling.net>
> > > To: EUforum <EUforum at topica.com>
> > > Sent: Friday, September 20, 2002 3:43 AM
> > > Subject: RE: De-allocating memory - time issue
> > >
> > >
> > > > Well, if you ain't got the RAM, you ain't got the RAM and nothing is
> > > > going to help too much, but I recently wrote some code to simulate
> > > > multi-dimensional sequences in direct memory.
> > > >
> > > > I had a 3-dimensional sequence with dimensions something like 50 x 100 x
> > > > 3000, giving me room to store 15 million atoms. (Which I had enough RAM
> > > > for, so disk-swapping was not the issue.) Just trying to fill it took 10
> > > > minutes or so, even if I pre-allocated it.  And trying to grab or update
> > > > values was incredibly slow.
> > > >
> > > > Seeing as it was of fixed size once created (although the elements
> > > > needed to be updated all the time), I figured I could just poke & peek
> > > > the values directly into memory.  But I needed the ability to reference
> > > > an index in 3 dimensions (e.g. {25,2,1500}), so I wrote some functions
> > > > that calculated the correct index and poked or peeked it for me.
> > > >
> > > > Doing it this way is obviously slower than using a normal sequence of
> > > > normal size, and the emulated sequence needs to be of fixed size, but
> > > > for giant multi-dimensional sequences there is no comparison as my code
> > > > runs the same speed no matter how big the sequence you're emulating (as
> > > > long as you've got the memory).
> > > >
> > > > If anyone is interested, I can prepare the code for public consumption
> > > > and upload it...
> > > >
> > > > -- Andy
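
Here is a minimal sketch of the idea Andy describes: flatten the three
indices into one byte offset and poke/peek 8-byte floats directly. The
dimensions match his example, but the routine names (addr3, put3, get3) are
mine rather than his, and machine.e's allocate/atom_to_float64/
float64_to_atom are assumed to be available.

include machine.e

constant DIM1 = 50, DIM2 = 100, DIM3 = 3000   -- fixed dimensions
constant ELEM = 8                             -- bytes per 64-bit float

atom base
base = allocate(DIM1 * DIM2 * DIM3 * ELEM)    -- one flat block of memory

function addr3(integer i, integer j, integer k)
    -- turn a 1-based {i,j,k} index into a byte address in the flat block
    return base + (((i-1) * DIM2 + (j-1)) * DIM3 + (k-1)) * ELEM
end function

procedure put3(integer i, integer j, integer k, atom x)
    poke(addr3(i, j, k), atom_to_float64(x))  -- store x as 8 bytes
end procedure

function get3(integer i, integer j, integer k)
    return float64_to_atom(peek({addr3(i, j, k), ELEM}))
end function

put3(25, 2, 1500, 1.31764903217)
? get3(25, 2, 1500)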
> > > >
> > > >
> > > > petelomax at blueyonder.co.uk wrote:
> > > > > I have a *HUGE* object of some 10 million entries. (I now know the
> > > > > exact size 'cos I coded a status window to let me know)
> > > > > [please read this all the way thru before responding]
> > > > >
> > > > > Given that I have 48MB of RAM, that 10 million means I am
> > > > > running out of physical memory (4 bytes per integer, yep), and the ole
> > > > > PC begins to disk thrash.
> > > > >
> > > > > So, I code a "Cancel" button, press it and the program waddles on
> > > > > longer than expected.
> > > > >
> > > > > So I trace it to the line
> > > > >
> > > > > stack={}
> > > > >
> > > > > That line alone takes 3 minutes to execute.
> > > > >
> > > > > Now, I kind of understand there are difficulties, in the general
> > > > > case, in deallocating ten million entries, most of which are copies of
> > > > > copies of copies (however many it takes to get to ten million).
> > > > >
> > > > > stack is an array of approx repeat({},55000), of which only the middle
> > > > > 3200 have been updated, however each such that stack[i] is set to
> > > > > stack[i+/-1]&{x,y}.
> > > > >
> > > > > So I guess reference counts are off the scale.
> > > > >
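
A rough sketch of the construction pattern Pete describes, to show where the
ten million entries can come from (the index range and the x/y values are
made up for illustration, and only the i-1 neighbour is used here):

sequence stack
atom x, y
x = 1.5
y = 2.5
stack = repeat({}, 55000)
for i = 26001 to 29200 do           -- a "middle 3200" block of the 55000 slots
    stack[i] = stack[i-1] & {x, y}  -- each entry is a fresh copy, 2 atoms longer
end for
-- the longest entries end up ~6400 atoms; summed over the 3200 updated slots
-- that is roughly 10 million atoms, so "stack = {}" has a huge amount to free
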
> > > > > The annoying point here, then, assuming it *is* the problem, is that
> > > > > the ref count handling is, in this unusual case, wholly irrelevant.
> > > > >
> > > > > I definitely doubt there is any way to improve this; it really is not
> > > > > any kind of major problem in any sense, but on the off-chance, I am
> > > > > just asking if anyone has any suggestions.
> > > > >
> > > > > Thanks
> > > > > Pete
> > > > >
> > > > >
> > <snip>
> >
> >
>
>
>
