Sequences and long files
- Posted by Michael Sabal <mjs at OSA.ATT.NE.JP> May 07, 1998
A couple of weeks ago, someone posted a question about how to handle super-huge database files (which wouldn't take too long to fill these days), access them in a reasonable amount of time, and still be able to use Euphoria sequences, which allow dynamic memory allocation. These two ideas almost seem antithetical: random access of files requires fixed record lengths, but variable-length records preclude random access.

So, I started thinking (a dangerous pastime, I know :). I tend to live in theory, so I'll leave the reality to the smart guys on the list (i.e., I'm not going to attempt to code this off the top of my head!). What makes sense is having two databases. The first would be a sorted version of the complete data, ordered from most recent to oldest, since the most recent data is the most likely to be needed first (except in a warehouse situation). Then the database could be read sequentially, say 1000 records at a time held in memory. This is common sense and is usually what happens anyway.

The slow part is when this file needs updating. Instead of writing changes to the main database file (which means rewriting the whole database file every time), changes would be kept in a second database file, a much smaller file whose access time is minuscule. Then, during an idle time like waiting for user input, copy the changes file into the main database file and save the whole thing. This means adding a time check in the wait_for_input routine for, say, 3 minutes of idleness, but only if the change file exists.

Hence: variable-length records, mostly rapid access time, and not too difficult to program, I would think. Now let me move out of the way and find my fire coat....

Serving Jesus Christ,
Michael J. Sabal
mjs at osa.att.ne.jp
http://home.att.ne.jp/gold/mjs/
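The two-file scheme described above might be sketched roughly as follows. This is only an illustration of the idea, not the author's code; it is in Python for brevity (the same structure would carry over to Euphoria sequences), and every file name, record format, and function name here is my own invention:

```python
# Sketch of the post's scheme: a large newest-first main file plus a small
# "changes" journal that is cheap to append to, merged into the main file
# during idle time. Records are one JSON object per line, keyed by "key".
import json
import os

MAIN = "main.db"        # large sorted file, newest records first
CHANGES = "changes.db"  # small journal of pending updates
CHUNK = 1000            # records held in memory at a time, as in the post

def _load_pending():
    """Read the small journal into a dict; later entries win."""
    pending = {}
    if os.path.exists(CHANGES):
        with open(CHANGES) as f:
            for line in f:
                rec = json.loads(line)
                pending[rec["key"]] = rec
    return pending

def lookup(key):
    """Check the tiny journal first, then scan the main file in chunks."""
    pending = _load_pending()
    if key in pending:
        return pending[key]
    chunk = []
    with open(MAIN) as f:
        for line in f:
            chunk.append(json.loads(line))
            if len(chunk) == CHUNK:          # process 1000 records at a time
                for rec in chunk:
                    if rec["key"] == key:
                        return rec
                chunk = []
    for rec in chunk:                        # leftover partial chunk
        if rec["key"] == key:
            return rec
    return None

def update(record):
    """Appending to the small journal is fast; the main file is untouched."""
    with open(CHANGES, "a") as f:
        f.write(json.dumps(record) + "\n")

def merge_if_idle():
    """Called from the input loop once the user has been idle a while."""
    if not os.path.exists(CHANGES):
        return
    pending = _load_pending()
    with open(MAIN) as f:
        merged = [pending.pop(r["key"], r)
                  for r in (json.loads(line) for line in f)]
    # Brand-new records go in front, keeping the newest-first ordering.
    merged = list(pending.values()) + merged
    with open(MAIN, "w") as f:
        for rec in merged:
            f.write(json.dumps(rec) + "\n")
    os.remove(CHANGES)
```

The key property is the one the post identifies: `update` costs one small append rather than a rewrite of the big file, and the expensive full rewrite in `merge_if_idle` is deferred to a moment when nobody is waiting on it.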