Re: EDS - replacing record data slows down database
- Posted by Tone Škoda <tskoda at email.si> Aug 20, 2005
My mistake: using db_dump() I see now that there is one big 95 MB free block. It looks like I deleted a big table before I used db_compress(). In the database of another program I have 1350 free blocks, most of them 10 to 25 bytes in size; that shouldn't slow the database down. For this program I suspected EDS was slowing it down, because sometimes it would take a long time to do a task, but now I see something else must be the problem.

Robert Craig wrote:
>
> Tone Škoda wrote:
> > I tried to create a benchmark test but had trouble creating a highly
> > fragmented database where the file size (or speed) difference between a
> > non-fragmented and a fragmented file would be noticeable. I guess the
> > test should be made with some existing program. But I had an .edb file
> > which was so fragmented that it went from 95 MB to 8 KB when compressed,
> > and it got that fragmented in only a week or so.
>
> You must have a very interesting pattern of allocating and freeing space
> in the database. Maybe you allocate 1 MB, then free it, then allocate 1 KB
> (say), and EDS allocates the 1 KB from the space formerly used by the
> 1 MB, so it only has 0.999 MB left, which means it can't be used for your
> next 1 MB allocation, etc. Or maybe you allocate/free 1 MB, 1.01 MB,
> 1.02 MB, so you can't use a previously-freed block, and EDS can't merge
> the previously-freed blocks because there are small in-use blocks in
> between. This sort of thing sometimes happens with C's malloc and other
> allocation systems. I'd be interested to know what allocation/deallocation
> pattern you have. Some databases that I've been updating daily for years
> stabilize with just a few percent of space on the free list. They
> allocate/free a variety of different sizes, in a fairly random pattern.
>
> Regards,
>    Rob Craig
>    Rapid Deployment Software
>    http://www.RapidEuphoria.com
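
For reference, here is a rough, untested sketch (in Euphoria, against the EDS routines in database.e) of the kind of grow-and-free pattern Rob describes, which should leave an .edb file with many free blocks that are each slightly too small to be reused. The file name frag_test.edb, the table name, and the record sizes are made up for the example, and db_dump()'s exact signature and level of detail may differ between Euphoria versions:

    include database.e

    integer big

    if db_create("frag_test.edb", DB_LOCK_NO) != DB_OK then
        puts(2, "couldn't create frag_test.edb\n")
        abort(1)
    end if
    if db_create_table("stuff") != DB_OK then
        puts(2, "couldn't create table\n")
        abort(1)
    end if

    for i = 1 to 200 do
        -- one big record whose size grows a little every iteration
        if db_insert({"big", i}, repeat('x', 100000 + i * 1000)) != DB_OK then
            exit
        end if
        -- a small record that stays in the file, sitting between the big ones
        if db_insert({"small", i}, repeat('y', 20)) != DB_OK then
            exit
        end if
        -- free the big record: the hole it leaves is slightly too small for
        -- the next big record, and the small record keeps adjacent holes
        -- from being merged
        big = db_find_key({"big", i})
        if big > 0 then
            db_delete_record(big)
        end if
    end for

    db_dump(1)        -- dump the database in readable form to the screen
    if db_compress() != DB_OK then
        puts(2, "db_compress failed\n")
    end if

Running db_dump() before and after db_compress() should show the long free list collapsing and the file shrinking back down.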