Re: Profiling Under Windows & String Size
Thank you, Rod.
This was exactly what I was proposing.
Lucius L. Hilley III
lhilley at cdc.net lucius at ComputerCafeUSA.com
+----------+--------------+--------------+----------+
| Hollow   | ICQ: 9638898 | AIM: LLHIII  | Computer |
| Horse    +--------------+--------------+ Cafe'    |
| Software | http://www.cdc.net/~lhilley | USA      |
+----------+-------+---------------------+----------+
                   | http://www.ComputerCafeUSA.com |
                   +--------------------------------+
----- Original Message -----
From: Roderick Jackson <rjackson at CSIWEB.COM>
To: <EUPHORIA at LISTSERV.MUOHIO.EDU>
Sent: Thursday, August 05, 1999 2:39 PM
Subject: Re: Profiling Under Windows & String Size
>
> Kat wrote:
> ><snip>
> >
> >> According to other postings on this list, Unicode is tragically
> >> insufficient for its most notable goal: handling global character sets.
> >> Norm's Chinese characters alone (for his Eu project) number--what, around
> >> 45,000? And then Japanese takes the same number... already the 65,536
> >> characters of Unicode are blown away, by only two languages. Apparently,
> >> Unicode simply CANNOT do what it is trying to, at least not without severe
> >> compromise. That being the case, why try to make use of it? If anything, a
> >> 3-byte scheme (or 4-byte) would make more sense (Super Unicode?), but then
> >> those of us with languages that do just fine under ASCII might start to
> >> balk.
> >
> >And so.. Eu uses 4 bytes per char, and reducing to one byte per char doesn't
> >make a lot of sense if you want Eu used outside the usa in the future.
> >Anyways, it's just my opinion. Maybe the rest of the world will convert to
> >english.
>
> !!!
>
> Well, if you choose to look at it that way, I guess Euphoria is *already*
> Unicode-compatible. All it needs are routines to output to screen (file,
> etc.) using character sets besides ASCII--hence, Norm's work...
>
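(An aside from me: just to sketch what that could look like, since each
element of a Eu sequence is a 4-byte value, a sequence can already hold
full Unicode code points; the missing piece is the output routine.  The
to_utf8() name and the choice of UTF-8 as the byte format below are only
my assumptions for illustration, not anything Rod or Norm proposed.)

-- Sketch only: pack a sequence of Unicode code points (up to #FFFF)
-- into UTF-8 bytes so they can be written with the ordinary puts().
function to_utf8(sequence codepoints)
    sequence bytes
    atom c
    bytes = {}
    for i = 1 to length(codepoints) do
        c = codepoints[i]
        if c < #80 then
            bytes = bytes & c
        elsif c < #800 then
            bytes = bytes & {#C0 + floor(c / #40), #80 + remainder(c, #40)}
        else
            bytes = bytes & {#E0 + floor(c / #1000),
                             #80 + remainder(floor(c / #40), #40),
                             #80 + remainder(c, #40)}
        end if
    end for
    return bytes
end function

sequence hello
hello = {20320, 22909}      -- two Chinese code points, "ni hao"
puts(1, to_utf8(hello))     -- written out as 6 UTF-8 bytes
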
> But just to clarify, I don't think the request was for turning ALL
> sequences into lists of one-byte characters. Consider: if we're using
> literals like
>
> "Hello, you've encounter a Killer App error"
>
> in our code, then I can understand the desire to reduce memory usage
> by 75% (and increase I/O speed) by having them compacted, C-style. I'm
> not sure it's a good idea, but considering you could still construct
> your own sequences of 4-byte integers, it shouldn't affect Eu's ability
> to handle routines based on any Unicode-derived setup.
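
(And a rough sketch of the "compacted, C-style" idea -- again my own
illustration, not anything from Rod's post: an ASCII-only literal can be
packed one byte per character by poking it into allocated memory, a
quarter of the space of a sequence whose elements each take 4 bytes, and
peeked back out unchanged.)

include machine.e   -- for allocate() and free()

sequence msg
atom addr

msg = "Hello, you've encountered a Killer App error"
addr = allocate(length(msg))            -- one byte per character
poke(addr, msg)                         -- pack the string C-style
? equal(msg, peek({addr, length(msg)})) -- prints 1: nothing was lost
free(addr)
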
>
>
> Rod
>