Re: Robert...EDS - questions/comments

Darn it, Kat, you went and made my brain work (such as it is) this 
morning.

Many many years ago, when I was taking some Operations Research classes, 
we talked about search methods, especially search that may or may not 
have an end point. Unfortunately, I don't remember much.

You probably know more about this than I do, and if not, *somebody* here
does, since I'm sure it's in some 200-level undergrad course somewhere.
Anyway, one thing we talked about is narrowing the scope of the search
with assumptions. For example, if the word starts with a consonant as
spelled, assume it really starts with a consonant. Second, assume that a
wrong beginning consonant is either a phonetic clone of the right one or
adjacent to it on the keyboard. Right there you've slashed your search
quite a bit.
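Something like this rough Python sketch, maybe (the neighbor and
phonetic tables are made-up placeholders, just to show the shape of
the filter):

    # Sketch of the scope-narrowing idea. Both tables are illustrative
    # placeholders I made up, not complete mappings.

    # Letters adjacent on a QWERTY keyboard (partial).
    KEYBOARD_NEIGHBORS = {
        "t": "ryfg",
        "s": "awedxz",
        "c": "xdfv",
    }

    # Beginning consonants that often sound alike (partial).
    PHONETIC_CLONES = {
        "c": "ks",
        "k": "cq",
        "f": "v",
    }

    VOWELS = set("aeiou")

    def candidate_first_letters(word):
        # The true first letter is assumed to be the typed one, a
        # phonetic clone of it, or a key adjacent to it.
        first = word[0].lower()
        candidates = {first}
        candidates.update(KEYBOARD_NEIGHBORS.get(first, ""))
        candidates.update(PHONETIC_CLONES.get(first, ""))
        if first not in VOWELS:
            # If the word starts with a consonant as spelled, assume
            # it really starts with a consonant: drop vowel candidates.
            candidates -= VOWELS
        return candidates

    def narrowed_dictionary(word, dictionary):
        # Keep only dictionary words whose first letter survives the
        # assumptions above. That is the slashed search space.
        firsts = candidate_first_letters(word)
        return [w for w in dictionary if w and w[0] in firsts]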

Another option might be to use heuristics. Replace a letter at random
according to a set of rules like those above, compare the result to a
dictionary of real words, and if it fits, go on. If not, try another
letter. This can be optimized by choosing which letter to replace based
on a table of probabilities of letter consecutiveness and word length.
For example, the most common English word is "the". A combination that
is 3 letters and ends in "he" is highly likely to start with a "t", and
there are a limited number of other possibilities like "then", "they",
"thee", "she", "he", etc.

Of course, I'm only adding to the processor overhead, but it seems to
me that there is some breakeven point where the smaller dictionary size
makes up for the fact that the processor has to choose dictionaries and
make multiple guesses. Like I said, though, I'm sure you know more
about this than I do.

It's been over 10 years since I studied this stuff, so somebody tell me
if I'm just wrong.
