1. OCR Part I...

According to Information Theory (very loosely defined here), there is a
point where Objects ('Things') begin to lose their distinctness-- their
individual definition-- and this is the threshold beneath which these and
all other Things cannot be identified.  This region can also be called
Noise.  To the ancient Greeks* this lower portion was known as Limbo; to
the early Christians**, it was called Purgatory.  We will refer to it as
Noise.

Are we still talking about OCR here?  Yes, we most certainly are.  The
point is that Information Theory covers many large areas of what we use
computers for: Encryption, and its counterpart Decryption; Modulation --
Demodulation [what a modem does]; Signal-To-Noise Ratio [what makes
telephones and satellite communications, Video and CD players work].  All
of these things are totally dependent upon two things-- Signal-To-Noise and
Identity.

I will haphazardly define Identity here as not just that which qualifies
Things as 'TRUE' (in the sense that AND, OR, NOT, NOR and other refinements
of Boolean Logic are True or False), but also how this rigid, binary set of
choices transforms into our own analog world of shades of Truth-- what we
call Discernment.  And Discernment will get us into the territory of OCR.

[Note: those of you out there who still have the mistaken idea that the
study of Philosophy and Language is useless-- throw that outdated concept
out and rearrange your thinking processes.  Much as in the old dictum,
'Software Runs Hardware', and not the other way round as we usually
believe, 'Thinking Runs Engineering'; one cannot create Sense without
Thought.  As for those of you who disagree with this conceptualization of
mine and are already shaking your heads: the fact of the matter is that you
have to use something more than just Cold Logic to do so.]

What does all of this have to do with Optical Character Recognition (OCR)?
Plenty, but not everything.

OCR is a child of its parent: Pattern Recognition.  Much as we want to
persist in thinking about our computers as having mental capabilities...
they don't.  Well, you reply, animals can recognize patterns, can't they?
Yes, that is true-- but only like us, only when it interests them.  Only
when it is important to them.  Computers, on the other hand, don't 'care'
about anything.  So there is still a gigantic gap between us and them.  But
computers can count (actually, they don't 'count'; they 'compute', which is
a different thing), and this is very important.

How does this great canyon of digital darkness get bridged?  Indeed, at a
completely logical level there are no such things as 'Patterns', and so
there can also be no such thing as 'Recognition'-- because one pattern is
as good (as important) as any and all of the others.

Hopeless?  Never hopeless unless helpless.  To illustrate the answer to all
of this I beg your indulgence in my using a parable or story to continue
with an explanation:

There is a group of helpers called Recognition.  They stand on a long
assembly line of strings of data.  The first of these tireless workers,
much in the spirit of Sherlock Holmes, picks over all of the items which it
has been ORDERED (we cannot talk about 'Trained' yet), to reject items as
being 'Impossible' and cull them from the line.  Good!  Further down the
line, stand several other workers, who have been Ordered to divide this
flowing stream of what are now 'Possible' items into two branches...
'Probable' and 'Don't Know'.  This is a very active task, and that is why
there are the several workers standing there at the bifurcation ('split in
two') point, and applying their specialized talents of Sorting to this part
of the job.  And to make certain that all of these workers are doing their
jobs correctly, standing right behind them are Quality Control Officers who
check that the first worker's rejects truly are such, and nothing more, and
who also check the second group of workers' actions to make certain that
they are correct.
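The assembly line above can be sketched in a few lines of code.  This is a
minimal illustration, not anything from the original post: the predicate
functions `is_impossible` and `is_probable` are hypothetical stand-ins for
whatever tests a real recognizer would apply.

```python
# A sketch of the "assembly line" of Recognition workers described above.
# The two predicates are illustrative assumptions, not a real OCR test.

def is_impossible(item):
    # First worker: cull items that cannot be characters at all
    # (here, anything that is not a non-empty string).
    return not (isinstance(item, str) and item.strip())

def is_probable(item):
    # Second group of workers: split the survivors into 'Probable'
    # (looks like a single letter or digit) and 'Don't Know'.
    return len(item) == 1 and item.isalnum()

def recognition_line(stream):
    rejects, probable, dont_know = [], [], []
    for item in stream:
        if is_impossible(item):
            rejects.append(item)      # culled from the line
        elif is_probable(item):
            probable.append(item)     # the confident branch
        else:
            dont_know.append(item)    # deferred for closer inspection
    # Quality Control Officer: every item landed in exactly one bin.
    assert len(rejects) + len(probable) + len(dont_know) == len(stream)
    return rejects, probable, dont_know

rejects, probable, dont_know = recognition_line(["A", "", "7", "??", "b"])
```

The point of the structure is the ordering: the cheap 'Impossible' test runs
first so the more careful sorting only ever sees 'Possible' items.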

It is this 'Partitioning' of the Probables and Don't Knows that should
prick up our attention here (and thanks to all of you who have been able to
put up with my rambling-- to paraphrase Chaucer, this is The Code-Mangler's
Tale!), and on to Part II, wherein Things Get a Bit More Attention.

Norm Goundry


*(perhaps read the great Greek Classic, 'The Nature of Things')
**(as defined in the New Testament)


2. Re: OCR Part I...

Although this is a very interesting subject, I will perhaps respond to it
at some other time.  However, I would like to respond immediately on the
OCR (& compression) part.

I will start with the basic concept of compression.
Compression means favoring one type of data at the cost of another.
What cost?
With lossless compression, the cost is that the other type of data becomes
larger.
With lossy compression, the cost to the other type of data is a decrease in
quality/significance.

So, why does compression work?
Because most files are in some way related to our human world, which is
full of pattern-based, math-based and other types of relationships.
Certain things really are more common than others.

My point is that our human voice, for example, can differ greatly in
certain aspects, but is eventually based upon some relationships of tone
and volume and order.  This relationship is due to nature-- due to the way
our voice works.  The best definition of noise would be that part of the
data we couldn't care less about.  The process would be to separate the
noise from the data we do want.  How do we do this?  In the very same way
lossy compression works.  In theory the two algorithms would be identical:
they would both discard an amount of data.  (The OCR would discard even
more, though.)

However, discarding such noise is more than just 'rounding' the values.
Unfortunately we work with absolute values, while patterns are by
definition relative.  Now, here's a good figurative explanation of what
kind of 'noise' needs to be 'floored' / 'discarded':

Say a person was driving a car, from point X to point Y.
We don't care from where to where, but what angles, which velocities, etc.
Think vectors.

I suggest looking at how tone and volume change as a vector (we'll call
this vectorA).  After which I would round (floor) the vector values to
minimize the number of vectors that represent the 'route' our voice took
from point X to Y.  Now we have a new, less 'noisy' image of our voice
data.  This time, instead of splitting up the voice data into tone and
volume, use the whole data in a vector based upon time & data.  Again,
floor the vector values.
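The flooring idea above can be sketched as follows.  This is a minimal
illustration under my own assumptions: the signal is a list of
(time, value) samples, and "flooring" means snapping each delta vector down
to a coarse grid so that nearly-equal vectors collapse into one.

```python
# Quantize the delta vectors of a sampled signal.  Working with deltas
# keeps things relative (as patterns are); flooring to multiples of
# `step` merges nearby vectors and discards the small wiggles (noise).

def quantize_deltas(samples, step):
    deltas = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        dt = (t1 - t0) // step * step   # floor to the grid
        dv = (v1 - v0) // step * step
        deltas.append((dt, dv))
    return deltas

# Three slightly different steps (value deltas 52, 51, 58)...
noisy = [(0, 0), (10, 52), (20, 103), (30, 161)]
# ...collapse to three copies of the same vector (10, 50).
print(quantize_deltas(noisy, 10))
```

After quantization the 'route' is described by one repeated vector instead
of three distinct ones, which is exactly the lossy discarding the post
describes.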

I never tried it, but I figure the above should work.  Of course, not all
'flooring' should be done so early in the interpretation process.  Some of
it, I guess, should be done to make the sentence right, but this is more
normal wildcard stuff.  If you can get as far as recognizing different
mouth movements (which the above should), you're pretty far.

And yes, I know some highly trained scientists and professionals are
dealing with these issues, and that it isn't as simple as I make it look.
But I'm not able to know what I don't know, so any approximation of when I
do and don't know would be purely guessed and therefore useless anyway.
(Hmm, I don't want to defend ignorance though.. hmm, difficult..)

Ralf


3. Re: OCR Part I...

I'm lost, or I don't fully understand your ideas or theory.

Here are some questions :

How are you going to input the data ?
  ( scanner ? or from some other data source )

What form will the data input be in ?
  ( scalable, fixed, fuzzy, sharp, light, dark, hand written, etc )

You have not described and defined the basic problem to the list
( or I missed it ).
How can the list help you reach a clear view of a solution ?

Thanks Bernie


4. Re: OCR Part I...

Lucius, thanks for the good work on bucket sort. Art

        Norm Goundry:

        Very interesting, tell us more. Art Adamson

At 02:18 PM 5/16/99 -0400, you wrote:
>According to Information Theory (very loosely defined herewithin), at a
>point where Objects ('Things') begin to lose their distinctness-- their

