RE: The A.I. Project II


On 7 Nov 2002, at 18:43, C. K. Lester wrote:

> 
> > http://www.emsl.pnl.gov:2080/proj/neuron/neural/what.html
> 
> Site: "Another way of classifying ANN types is by their method of 
> learning (or training), as some ANNs employ supervised training while 
> others are referred to as unsupervised or self-organizing."
> 
> Me: "unsupervised or self-organizing" - obviously they don't know what 
> they're talking about. No intelligent being that we know of is able to 
> go from instinct to intelligence on its own, simply because it must come 
> from an instinct-only stage, during which it is at its most vulnerable.

It works in reverse too. 

> Site: "Supervised training is analogous to a student guided by an 
> instructor."
> 
> Me: The presupposition here is that the student has LEARNED HOW TO 
> LEARN. With AI, though, you cannot just start out with an advanced 
> learner. It's IMPOSSIBLE. An "advanced learner" is way too complex for 
> you to create in the first place. When you try and create an "advanced 
> learner," you've simply created an expert system with all your own 
> biases. The AI entity has to start out much simpler, with the tools 
> (hardware+instincts) to learn (just like the only intelligent creatures 
> we know must do).

I disagree. The parts you don't know are the quantity and degree of the weightings 
of incentives and disincentives, and those gaps in knowledge which 
may become guesses in action. The rest is pattern matching:
"my stomach hurts" 
-- has this happened before?
-- match this to incentives pattern map
-- match up a pattern of actions to what we are in now
"want to cure this, need food"
-- match "food" to the actual item irl
-- match "food" to what's in the environment
-- match a pattern of movement to ingest the food
-- match these patterns to patterns of outcomes
-- (do not walk up to live elephant and begin eating)
-- expect pattern to continue post-ingestion
etc
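The steps above can be sketched as a toy incentive-weighted pattern matcher. This is a minimal illustration of the idea only; the names (`incentive_map`, `known_outcomes`, `react`) are hypothetical, not from any real system:

```python
# Hypothetical sketch of the pattern-matching loop described above.
# All names and weights here are illustrative assumptions.

incentive_map = {
    "stomach hurts": ("eat", +10),        # state -> (action pattern, incentive weight)
    "see live elephant": ("avoid", -100), # strong disincentive
}

known_outcomes = {
    ("stomach hurts", "eat"): "pain stops post-ingestion",
}

def react(state):
    """Match the current state to the incentive map, then to an expected outcome."""
    if state not in incentive_map:
        return None, None                   # no prior pattern: guess, or do nothing
    action, weight = incentive_map[state]
    if weight <= 0:
        return "suppress: " + action, None  # disincentive wins (do not eat the elephant)
    expected = known_outcomes.get((state, action))
    return action, expected

print(react("stomach hurts"))      # ('eat', 'pain stops post-ingestion')
print(react("see live elephant"))  # ('suppress: avoid', None)
```

The point of the sketch is that every step is a lookup against stored patterns; the only "intelligent" part is how the weights got there.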

> Site: "Unsupervised algorithms essentially perform clustering of the 
> data into similar groups based on the measured attributes or features 
> serving as inputs to the algorithms. This is analogous to a student who 
> derives the lesson totally on his or her own."

That's pattern matching, or pattern modification. Truly unexpected pattern 
matches lead people to run naked thru the streets shouting "Eureka!" if they're 
good, and to become snipers if they're bad.
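For what the site means by "clustering of the data into similar groups", a toy one-dimensional k-means makes the idea concrete. This is a minimal sketch, not the site's algorithm; no teacher labels the points, the groups emerge from the data alone:

```python
# Toy unsupervised clustering: 1-D k-means with two clusters.
# No labels are supplied; the split falls out of the distances alone.

def kmeans_1d(points, c1, c2, iters=10):
    """Alternate between assigning points to the nearest center
    and moving each center to the mean of its group."""
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        if g1:
            c1 = sum(g1) / len(g1)
        if g2:
            c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
low, high = kmeans_1d(data, 0.0, 10.0)
print(low)   # [0.8, 1.0, 1.2]
print(high)  # [8.7, 9.0, 9.5]
```

Nothing told the code that "about 1" and "about 9" are different kinds of thing; it derived the grouping "totally on its own", which is all the site's analogy claims.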
 
> Me: What beginning intelligent creature have you ever known was able to 
> learn on its own from the start? Sure, LATER, after it became an 
> advanced learner did it learn how to acquire knowledge itself... but it 
> couldn't start like that.

Only if a learning activity becomes a function of enabling an instinct, resulting
in positive feedback. I have discovered thru personal experience and machine 
programming that if there is no feedback, there is no such function 
generated, no exercising of memory expansion, no change in incentive 
weighting, etc. Hence you cannot have the AI generate its own node/neuron 
weightings. Even if it does get some learning, you'll never be able to predict 
what the outcome is. 
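The no-feedback point can be shown in a few lines. This is a deliberately toy update rule of my own choosing (not a real ANN training routine): a weight is nudged by each feedback signal, so with zero feedback it never moves.

```python
# Toy demonstration: without feedback, a weight never changes.
# The update rule, names, and rate are illustrative assumptions.

def train(weight, feedbacks, rate=0.1):
    """Nudge a single node weight by each feedback signal (+1 good, -1 bad, 0 none)."""
    for f in feedbacks:
        weight += rate * f
    return weight

w0 = 0.5
print(train(w0, [0, 0, 0]))              # 0.5 -- no feedback, no learning
print(round(train(w0, [1, 1, -1]), 2))   # 0.6 -- mixed feedback shifts the weight
```

And the second half of the point holds too: which way the weight drifts depends entirely on the feedback sequence the AI happens to receive, which is why you can't predict the outcome from the hardcoding alone.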

Given one person on a deserted island, with a pocket knife, infected with 
AIDS, what incentive do they have to build an electron microscope to 
research the mechanism of antibodies on HIV, develop a cure, and live 
happily ever after? The AI has to check the weighting. If it has OCD and is 
really bored, it might try to find a way to generate electricity and build the 
EM. Otherwise it might just try to get comfy and *seemingly* violate self-
preservation instincts with euthanasia. We can't predetermine this in our 
hardcoding, or else it won't be an AI. If you hardcode, or build hardware that 
stresses *your* values, you have a slave.

Kat
