Re: Storing Data for Web Site
- Posted by Matt Lewis <matthewwalkerlewis at gmail.com> Sep 10, 2005
Greg Haberek wrote:
> > You'd think the limit would be 4GB, but I recall that some of the
> > C library routines will fail after 2GB.
> >
> > Windows/Linux/FreeBSD have newer file routines that go beyond
> > 4-byte file offsets. I would need to start using those.
> > EDS would also have to be adjusted to use greater than
> > 4-byte offsets.
>
> Actually, at one point, I was looking into implementing variable-byte
> offsets. That way you could squeeze an extra byte or two out of every
> record. One wouldn't need to bloat the database with 8-byte offsets
> for *every* record. One could also make use of only 2-byte offsets.
> Typical 4-byte offsets would now be 5 bytes, but 8-byte offsets
> (actually 9 bytes) would only be used when necessary.

That seems less efficient than it needs to be. Why not use a system
similar to what Rob implements in compress()/decompress(), and 'only'
allow offsets up to 7 bytes (or whatever)? Frankly, it's hard to
imagine needing a file bigger than that (5 bytes would give you what,
like 512GB -- "No one should ever need more than 640GB" :). That
should leave plenty of room to grow.

It's actually pretty easy (on Windows, at least -- I've never tried it
on Linux) to use large files. SetFilePointer uses two 4-byte integers
to move around in a file:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/createfile.asp

I imagine there's a similar API on Linux. You'd likely break DOS
compatibility, but it would be pretty rare for anyone to use files
that big under DOS, so there's not a whole lot of utility being lost.

Matt Lewis
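
To make the variable-width offset idea above concrete, here is a
minimal sketch in C. This is not actual EDS code, and the function
names are made up for illustration; it assumes a simple layout where
one tag byte holds the width of the offset that follows, capped at
7 bytes as suggested in the reply:

    /* Sketch of variable-width offsets: [width byte][offset bytes].
     * A 4-byte offset costs 5 bytes on disk, but records early in the
     * file get away with 2- or 3-byte offsets. Not actual EDS code. */
    #include <stdio.h>
    #include <stdint.h>

    /* Write 'offset' using the fewest bytes that can hold it,
     * little-endian, preceded by a one-byte width tag. */
    static int write_offset(FILE *f, uint64_t offset)
    {
        unsigned char buf[8];
        int width = 1;
        uint64_t tmp = offset;
        while (tmp >>= 8)        /* count bytes needed */
            width++;
        if (width > 7)
            return -1;           /* beyond the 7-byte cap */
        buf[0] = (unsigned char)width;
        for (int i = 0; i < width; i++)
            buf[1 + i] = (unsigned char)(offset >> (8 * i));
        return fwrite(buf, 1, (size_t)(1 + width), f)
               == (size_t)(1 + width) ? 0 : -1;
    }

    /* Read an offset written by write_offset(); 0 on success. */
    static int read_offset(FILE *f, uint64_t *offset)
    {
        int width = fgetc(f);
        if (width < 1 || width > 7)
            return -1;
        *offset = 0;
        for (int i = 0; i < width; i++) {
            int c = fgetc(f);
            if (c == EOF)
                return -1;
            *offset |= (uint64_t)c << (8 * i);
        }
        return 0;
    }

A 7-byte offset addresses 2^56 bytes, so the cap costs nothing in
practice. The trade-off is that every offset pays one extra tag byte
(a 4-byte offset becomes 5 on disk), which the common short offsets
more than pay back; Rob's compress()/decompress() packs type and
length information into the leading byte itself rather than spending
a whole byte on it, which is tighter but fiddlier to decode.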
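
And a minimal sketch of the large-file seek itself, using a
hypothetical seek_big() helper. On Windows, SetFilePointer takes the
64-bit position split across two 32-bit halves, as the MSDN link above
describes; on Linux, compiling with _FILE_OFFSET_BITS defined as 64
makes off_t and lseek() 64-bit on 32-bit systems:

    /* Sketch only: seek to an absolute position past 4GB. */
    #define _FILE_OFFSET_BITS 64  /* 64-bit off_t on Linux; harmless
                                     on Windows */
    #ifdef _WIN32
    #include <windows.h>

    static int seek_big(HANDLE h, ULONGLONG pos)
    {
        LONG high = (LONG)(pos >> 32);            /* upper 4 bytes */
        DWORD low = SetFilePointer(h, (LONG)(pos & 0xFFFFFFFFu),
                                   &high, FILE_BEGIN);
        /* 0xFFFFFFFF is a legal low half, so check GetLastError() */
        if (low == INVALID_SET_FILE_POINTER && GetLastError() != NO_ERROR)
            return -1;
        return 0;
    }
    #else
    #include <sys/types.h>
    #include <unistd.h>

    /* On Linux it is just lseek() once off_t is 64 bits wide. */
    static int seek_big(int fd, off_t pos)
    {
        return lseek(fd, pos, SEEK_SET) == (off_t)-1 ? -1 : 0;
    }
    #endif

The Windows handle would come from CreateFile as usual; the only real
change for a database engine is carrying offsets in a 64-bit type
instead of a 32-bit one before they reach the seek call.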