RE: The A.I. Project


CK,
I've been sifting through the A.I. thoughts we've gotten so far, and one
thing that seems persistent in what you have been arguing is the simulation
vs. emulation argument, or perhaps better termed pre-programmed vs.
instinct-only programming. I need to know how your theories would play out
from a neural net programming perspective. Would this "core instinct
programming" approach mesh well with neural nets? In other words, would your
A.I. programming approach involve building neural nets?
  I have already seen a kind of pattern emerging from the ideas
presented thus far:
  1. The physiology of the A.I. (including neural nets, evolving or static)
  2. The content of its basic drive behavior (basic instincts)
You are describing what we should (or should not) program into the subject;
I'm just trying to ascertain whether it is possible to program this using
neural nets.
In one context, the prime motivator, or "survival instinct," was to be
depicted as a number, like 10. If we use neural nets, we won't be able to
program attributes like the survival instinct directly as numbers, but
rather as a collection of connected neurons which together form a neural
network. How do you think we should approach neural network programming
with regard to, say, building up a prime motivator?
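  Just to make the question concrete, here is a rough sketch in Python
(not Euphoria, and all the names and numbers here are mine, so take it as
a thought experiment rather than a proposal) of one way a scalar prime
motivator might relate to a neural net: the health number never lives
inside the net at all; it only acts as the reinforcement signal that
nudges the weights.

import numpy as np

rng = np.random.default_rng(0)

# Two sensor inputs: [food nearby, poison nearby]
# Two action outputs: [approach, stay away]
weights = rng.normal(scale=0.1, size=(2, 2))

def choose_action(sensors):
    # forward pass: sensor vector times weight matrix gives action preferences
    prefs = sensors @ weights
    p = np.exp(prefs - prefs.max())   # softmax to get action probabilities
    p /= p.sum()
    return rng.choice(2, p=p), p

health = 10.0    # the "prime motivator" as a plain number, outside the net
learning_rate = 0.5

for step in range(2000):
    sensors = np.array([rng.integers(0, 2), rng.integers(0, 2)], dtype=float)
    action, p = choose_action(sensors)

    # The world adjusts health; the net only ever feels the change as reward.
    reward = 0.0
    if action == 0:          # the creature chose to approach
        if sensors[0]:       # food was nearby
            reward += 1.0
        if sensors[1]:       # poison was nearby
            reward -= 1.5
    health += reward

    # REINFORCE-style nudge: strengthen whatever the net did when health rose,
    # weaken it when health fell. This is where the "instinct" actually lives.
    grad = -p
    grad[action] += 1.0
    weights += learning_rate * reward * np.outer(sensors, grad)

print("final health:", health)
print("learned weights:")
print(weights)

  If something like this is what you have in mind, then in neural net
terms the survival instinct wouldn't be a stored value like 10 so much as
the rule for how experience changes the connections, which sounds close
to your instinct-only programming.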






>From: "C. K. Lester" <cklester at yahoo.com>
>Reply-To: EUforum at topica.com
>To: EUforum <EUforum at topica.com>
>Subject: RE: The A.I. Project
>Date: Thu,  7 Nov 2002 14:46:46 +0000
>
>
> > I would replace survival with "good-bad".
>
>Let's say "positive-negative," just to avoid potential semantic
>confusions.
>
> > If finding food and avoiding poison is most
> > important thing in their life then they wont get bored.
>
>They're not motivated to do either UNLESS you're just going to have this
>be instinct. Finding food will be important IF there's a purpose to it.
>Avoiding poison? Why put poison in the environment at all?
>
>This also goes back to the question: will we EMULATE an AI entity or
>will we create an AI entity?
>
> > I'm not sure if feelings are really needed in AI
> > and if AI creature will have them.
>
>A survival instinct will have some motivation component; otherwise, the
>creature would just exist to die. The creature can't be "apathetic," so
>to speak, or it won't be motivated enough to live. This begs the
>question, does one build "fear" into the survival instinct? (Fear of
>death?) Which will keep it alive (possibly) until such time as it learns
>about and understands what death is.
>
> > I can't just make AI creature and world and
> > enemies and leave it running all night and then
> > next morning I will have real AI :)
>
>Of course not over night, but in the long run this is exactly what you
>should expect. I'm not saying that it necessarily has to be hands-off,
>because you'll want to interact with it, maybe to let it know what's
>positive behavior and what's negative behavior (think about raising a
>child... this is how you will develop an AI entity).
>
> > > Now, since your "enemies" are really just poison pills,
> > > the "survival instinct" of your Pacman will simply be
> > > a "maintain a high number" instinct.
> >
> > What's wrong with that?
>
>No problem- "maintain a high number" is just a less-fatal version of
>"avoid death." I guess I was thinking that "fear of death" would be a
>great motivator.
>
> > I want to achieve intelligence, I'm not
> > interested in what will AI creature feel...
>
>But what if those feelings are necessary for intelligence? I hate to
>keep repeating myself, but "fear of death" might be REQUIRED just to get
>the creature functioning to survive. You can't really start an AI entity
>(can we get an officially approved name for our entity?!) with a "fear
>of death" because I don't think that's an instinct.
>
> > > All I see for your Pacman is it remembering the pattern of
> > > the maze and maybe locations of bite-sized healthy bits.
> >
> > Remembering pattern is quite important.
>
>But it's just a robot with a database at that point. Not really what we
>ultimately want.
>
>As I think about it, human intelligence requires the hardware of a human
>brain... something we don't have and won't have for a very long time.
>The problem is, as we go down the scale of brains (ultimately ending
>with a worm?), we have to question where does intelligence end and
>instinct begin? I mean, how low can we go before we stop seeing
>intelligence and start seeing only instinct?
>
>Would anyone say a worm is intelligent? If not, then we're just creating
>a lifeform (as opposed to an intelligent lifeform). If worms are
>intelligent, then how does that intelligence function on a hardware
>level? Can we answer these questions?!

