Re: Berkeley DB -- anyone care?


Hi Andy,

    It produces the common floating-point inaccuracy, which is very, very
small. I use atom_to_float32()/float32_to_atom() rather than
atom_to_float64(), so every floating-point number takes 4 bytes instead of
8, producing more compressed objects.
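
A minimal sketch of that trade-off, assuming the standard machine.e
routines (the exact error printed depends on the value):

    include machine.e

    atom x, y
    sequence b

    x = 1.1
    b = atom_to_float32(x)   -- pack the atom into 4 bytes
    ? length(b)              -- 4
    y = float32_to_atom(b)   -- unpack; value is rounded to 32-bit precision
    ? x - y                  -- small nonzero error from the 32-bit rounding
    ? length(atom_to_float64(x))  -- 8: the exact round-trip costs twice the space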

Please keep giving us updates on this DB and how it's going. I know I'm
going to be needing it.

Jordah
----- Original Message -----
From: "Andy Serpa" <ac at onehorseshy.com>
To: "EUforum" <EUforum at topica.com>
Sent: Monday, January 06, 2003 7:08 PM
Subject: RE: Berkeley DB -- anyone care?


>
>
> jordah at btopenworld.com wrote:
> > The code in binary.e might seem slower because of the overhead
> > involved in call_proc(). The original routine was clearly tested on
> > #euphoria and was clearly faster, producing more compressed objects.
> >
>
> In my test, the code I'm using was faster but did produce slightly
> bigger objects -- I could try it again.  In practice it would make no
> difference, as they were very close.  The compress/decompress stuff is
> no bottleneck except when used for key comparison, where it is called a
> huge number of times, and even then the gap between one routine and the
> other is negligible (2 seconds over 100,000 calls or something) -- the
> time is lost in simply making the call at all, peeking the object, etc.
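>
> For reference, a rough sketch of the indirect-call pattern in question
> (call_func is the function analogue of the call_proc mentioned above;
> the routine name "compress" here is a hypothetical stand-in):
>
>   function compress(object o)      -- stand-in for the real routine
>       return o
>   end function
>
>   integer rid
>   rid = routine_id("compress")     -- look up the routine by name
>   object packed
>   packed = call_func(rid, {1.1})   -- indirect call; pays dispatch overhead
>
> versus the direct form, packed = compress(1.1), which avoids the
> call_func machinery entirely.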
>
> If your code doesn't preserve accuracy the point is moot, because that
> would make it unsuitable for a database anyway...
>
> -- Andy
>
