Re: block char set

Hello Norm,

    I understand that you are working with a font of 65,000 characters, each
character being 32x32 pixels.  Below is my understanding of what is probably
happening now and what should be happening.

32x32 = 1024 pixels.
I assume that currently each character is stored as 1024 bytes, one byte
per pixel.  When held in Euphoria sequences it gets 4 times worse, since
each sequence element takes 4 bytes: the 1024 bytes for one character
become 4096 bytes of memory.
HD = Hard Drive space used
RAM = Memory used

HD  >>     1024 x 65,000 =  66,560,000 bytes or Roughly  65 MB
RAM >> 4 x 1024 x 65,000 = 266,240,000 bytes or Roughly 260 MB

I suggest the following actions.

1. Font pixels are simply 0 or 1, background or foreground.
   Background = 0
   Foreground = 1

2. 1 byte holds 8 bits.  Knowing this allows us to cut the size by 7/8:
   32x32 = 1024 bits, and 1024/8 = 128 bytes per character (see the
   sketch after these figures).

HD  >>     128 x 65,000 =  8,320,000 bytes or Roughly  8 MB
RAM >> 4 x 128 x 65,000 = 33,280,000 bytes or Roughly 33 MB
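
A minimal sketch of that packing, assuming each glyph arrives as a
sequence of 32 rows of 32 pixel values; pack_glyph() and unpack_glyph()
are names of my own invention, while int_to_bits() and bits_to_int()
are the Euphoria built-ins mentioned in point 4 below:

    function pack_glyph(sequence bitmap)
        -- fold each row of 32 pixels (0 or 1) into 4 bytes
        sequence packed
        packed = {}
        for row = 1 to 32 do
            for b = 0 to 3 do
                packed = append(packed,
                    bits_to_int(bitmap[row][b*8+1..b*8+8]))
            end for
        end for
        return packed  -- 32 rows x 4 bytes = 128 bytes
    end function

    function unpack_glyph(sequence packed)
        -- expand the 128 packed bytes back into 32 rows of 32 pixels
        sequence bitmap, row
        bitmap = {}
        for r = 1 to 32 do
            row = {}
            for b = 1 to 4 do
                row = row & int_to_bits(packed[(r-1)*4+b], 8)
            end for
            bitmap = append(bitmap, row)
        end for
        return bitmap
    end function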

I'm not done yet.

3. Store the fonts directly in memory instead of in sequences, thus
   removing the 4-bytes-per-element overhead.

Now RAM = HD = roughly 8 megabytes.

4. You still need routines to store, retrieve, and use these special
   fonts.

   Hollow Horse Software created routines for storing 2D bitmap-type
   data in memory; they are part of the Mode19 graphics engine package.
   Of course, with your fixed 32x32 design you can simply use poke() to
   put a character into memory and peek() to read it back into a
   sequence, as sketched below.
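
   A hedged sketch of that storage, assuming allocate() from machine.e
   succeeds and that characters are numbered 1 to 65,000; font_base,
   store_glyph() and fetch_glyph() are names of my own invention:

      include machine.e  -- for allocate()

      constant GLYPH_SIZE = 128  -- packed bytes per character
      atom font_base
      font_base = allocate(65000 * GLYPH_SIZE)  -- roughly 8 MB

      procedure store_glyph(integer char_num, sequence packed)
          -- poke the 128 packed bytes into this character's slot
          poke(font_base + (char_num-1)*GLYPH_SIZE, packed)
      end procedure

      function fetch_glyph(integer char_num)
          -- peek the 128 packed bytes back out as a sequence
          return peek({font_base + (char_num-1)*GLYPH_SIZE, GLYPH_SIZE})
      end function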

   Conversion between the compact bit form and the expanded bitmap byte
   form can be done using int_to_bits() and bits_to_int().
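
   Putting the two sketches together, a hypothetical round trip:

      store_glyph(c, pack_glyph(bitmap))     -- pack, stash in memory
      bitmap = unpack_glyph(fetch_glyph(c))  -- fetch, expand again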

   Give me a holler if you want help with any of this.


PS: I can build all of the above with little effort.



+-----------------------+--------------+     +-----------------+
| Hollow Horse Software | ICQ: 9638898 |     | lhilley at cdc.net |
+-----------------------+--------------+-----+-----------------+-----+
| Lucius  L. Hilley III | AIM: LLHIII  | http://www.cdc.net/~lhilley |
+-----------------------+--------------+-----------------------------+

>hello Mathew,
>
>Yes, do increase it to 65,000 character capability.
>
>I don't really understand how your method works; could you please explain
>it a bit further for us?  Pardon me, but it seems that your character-to-
>storage ratio remains pretty constant at about 1:4, whether it is a large
>group of them or not, so there would only remain the access time factor to
>worry about.  Also, why would you have to incur a time penalty for the
>larger size set, unless your branching is always started from the top,
>rather than jumping in at the appropriate nexus (branch top)?
>
>But of all things pertaining to this I am curious, so please don't take
>this as criticism and just explain how this will take place, so I can sleep
>at night.  Seriously, go ahead but don't box yourself in with regards to
>the larger size sets.  I am working on that magnitude of data all of the
>time now; it doesn't seem strange to me.
>
>Advocating for Eu Action,
>Norm
