1. RE: The A.I. Project

> I would replace survival with "good-bad".

Let's say "positive-negative," just to avoid potential semantic 
confusions.

> If finding food and avoiding poison is most
> important thing in their life then they wont get bored.

They're not motivated to do either UNLESS you're just going to have this 
be instinct. Finding food will be important IF there's a purpose to it. 
Avoiding poison? Why put poison in the environment at all?

This also goes back to the question: will we EMULATE an AI entity or 
will we create an AI entity?

> I'm not sure if feelings are really needed in AI
> and if AI creature will have them.

A survival instinct will have some motivation component; otherwise, the 
creature would just exist to die. The creature can't be "apathetic," so 
to speak, or it won't be motivated enough to live. This raises the 
question: does one build "fear" into the survival instinct? (Fear of 
death?) That fear might keep it alive, possibly, until it learns about 
and understands what death is.

> I can't just make AI creature and world and
> enemies and leave it running all night and then
> next morning I will have real AI :)

Of course not overnight, but in the long run this is exactly what you 
should expect. I'm not saying that it necessarily has to be hands-off, 
because you'll want to interact with it, maybe to let it know what's 
positive behavior and what's negative behavior (think about raising a 
child... this is how you will develop an AI entity).

> > Now, since your "enemies" are really just poison pills,
> > the "survival instinct" of your Pacman will simply be
> > a "maintain a high number" instinct.
> 
> What's wrong with that?

No problem: "maintain a high number" is just a less fatal version of 
"avoid death." I guess I was thinking that "fear of death" would be a 
great motivator.
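
For what it's worth, here's how a "maintain a high number" instinct might 
look in code. This is only a rough Euphoria sketch; the cell types, scores 
and routine names are all invented for illustration, not an agreed design:

-- Hypothetical sketch: the creature's only "instinct" is to keep a single
-- number high, so it scores each neighbouring cell by what that cell
-- would do to the number.

constant FOOD = 1, POISON = 2, NOTHING = 3

function score_move(integer cell_contents)
    if cell_contents = FOOD then
        return 10      -- eating raises the number
    elsif cell_contents = POISON then
        return -50     -- poison drops it sharply, so this move is "feared"
    else
        return 0
    end if
end function

-- the creature simply picks the neighbouring cell with the best score
function pick_move(sequence neighbours)
    integer best, best_score, s
    best = 1
    best_score = score_move(neighbours[1])
    for i = 2 to length(neighbours) do
        s = score_move(neighbours[i])
        if s > best_score then
            best = i
            best_score = s
        end if
    end for
    return best
end function

? pick_move({NOTHING, POISON, FOOD})   -- prints 3 (go for the food)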

> I want to achieve intelligence, I'm not
> interested in what will AI creature feel...

But what if those feelings are necessary for intelligence? I hate to 
keep repeating myself, but "fear of death" might be REQUIRED just to get 
the creature functioning to survive. You can't really start an AI entity 
(can we get an officially approved name for our entity?!) with a "fear 
of death" because I don't think that's an instinct.

> > All I see for your Pacman is it remembering the pattern of
> > the maze and maybe locations of bite-sized healthy bits.
> 
> Remembering pattern is quite important.

But it's just a robot with a database at that point. Not really what we 
ultimately want.

As I think about it, human intelligence requires the hardware of a human 
brain... something we don't have and won't have for a very long time. 
The problem is, as we go down the scale of brains (ultimately ending 
with a worm?), we have to ask where intelligence ends and instinct 
begins. I mean, how low can we go before we stop seeing intelligence and 
start seeing only instinct?

Would anyone say a worm is intelligent? If not, then we're just creating 
a lifeform (as opposed to an intelligent lifeform). If worms are 
intelligent, then how does that intelligence function on a hardware 
level? Can we answer these questions?!


2. RE: The A.I. Project

ck,
I've been sifting through the a.i. thoughts we've gotten so far, and one 
thing that seems persistent in what you have been arguing is the 
simulation vs. emulation argument, or perhaps better termed pre-programmed 
vs. instinct-only programming. I need to know how your theories would play 
out from a neural net programming perspective. Would this "core instinct 
programming" approach mesh well with neural nets? In other words, would 
your a.i. programming approach involve the building of neural nets?
  I have already seen a kind of pattern emerging from the ideas presented 
thus far:
  1. The physiology of the a.i. (including neural nets, evolving or static)
  2. The content of its basic drive behavior (basic instincts)
You are describing what we should program (or not program) into the 
subject; I'm just trying to ascertain whether it is possible to program 
this using neural nets.
In one context the prime motivator, or survival instinct, was to be 
depicted as a number, like 10. If we use neural nets, we won't be able to 
program attributes like the survival instinct in the form of numbers, but 
rather as a series of built-up neurons which together form a neural 
network. How do you think we should go about neural network programming 
in the light of, say, building up a prime motivator?
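
One way to picture the difference (purely an illustration; the weights and 
inputs below are invented, not anyone's agreed design): in a neural-net 
setting the prime motivator would not be a stored number like 10 but the 
output of a small weighted-sum unit, something like this Euphoria sketch:

-- Hypothetical single "survival neuron": the drive is not stored as a
-- literal number, it is recomputed from weighted inputs each cycle.
-- Inputs are {hunger, damage, tiredness}; all weights are invented.

sequence weights
weights = {0.6, 0.9, -0.3}

function survival_drive(sequence inputs)
    atom total
    total = 0
    for i = 1 to length(inputs) do
        total = total + weights[i] * inputs[i]
    end for
    -- crude activation: clip the drive into the range 0..1
    if total < 0 then
        return 0
    elsif total > 1 then
        return 1
    else
        return total
    end if
end function

-- very hungry, slightly damaged, not tired -> fairly strong drive
? survival_drive({0.9, 0.2, 0.1})   -- prints 0.69

A real neural net would have many such units and would learn the weights 
instead of having them typed in, but the point is that the "number" 
becomes a computed activation rather than a stored attribute.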






>From: "C. K. Lester" <cklester at yahoo.com>
>Reply-To: EUforum at topica.com
>To: EUforum <EUforum at topica.com>
>Subject: RE: The A.I. Project
>Date: Thu,  7 Nov 2002 14:46:46 +0000
>
>
> > I would replace survival with "good-bad".
>
>Let's say "positive-negative," just to avoid potential semantic
>confusions.
>
> > If finding food and avoiding poison is most
> > important thing in their life then they wont get bored.
>
>They're not motivated to do either UNLESS you're just going to have this
>be instinct. Finding food will be important IF there's a purpose to it.
>Avoiding poison? Why put poison in the environment at all?
>
>This also goes back to the question: will we EMULATE an AI entity or
>will we create an AI entity?
>
> > I'm not sure if feelings are really needed in AI
> > and if AI creature will have them.
>
>A survival instinct will have some motivation component; otherwise, the
>creature would just exist to die. The creature can't be "apathetic," so
>to speak, or it won't be motivated enough to live. This begs the
>question, does one build "fear" into the survival instinct? (Fear of
>death?) Which will keep it alive (possibly) until such time as it learns
>about and understands what death is.
>
> > I can't just make AI creature and world and
> > enemies and leave it running all night and then
> > next morning I will have real AI :)
>
>Of course not over night, but in the long run this is exactly what you
>should expect. I'm not saying that it necessarily has to be hands-off,
>because you'll want to interact with it, maybe to let it know what's
>positive behavior and what's negative behavior (think about raising a
>child... this is how you will develop an AI entity).
>
> > > Now, since your "enemies" are really just poison pills,
> > > the "survival instinct" of your Pacman will simply be
> > > a "maintain a high number" instinct.
> >
> > What's wrong with that?
>
>No problem- "maintain a high number" is just a less-fatal version of
>"avoid death." I guess I was thinking that "fear of death" would be a
>great motivator.
>
> > I want to achieve intelligence, I'm not
> > interested in what will AI creature feel...
>
>But what if those feelings are necessary for intelligence? I hate to
>keep repeating myself, but "fear of death" might be REQUIRED just to get
>the creature functioning to survive. You can't really start an AI entity
>(can we get an officially approved name for our entity?!) with a "fear
>of death" because I don't think that's an instinct.
>
> > > All I see for your Pacman is it remembering the pattern of
> > > the maze and maybe locations of bite-sized healthy bits.
> >
> > Remembering pattern is quite important.
>
>But it's just a robot with a database at that point. Not really what we
>ultimately want.
>
>As I think about it, human intelligence requires the hardware of a human
>brain... something we don't have and won't have for a very long time.
>The problem is, as we go down the scale of brains (ultimately ending
>with a worm?), we have to question where does intelligence end and
>instinct begin? I mean, how low can we go before we stop seeing
>intelligence and start seeing only instinct?
>
>Would anyone say a worm is intelligent? If not, then we're just creating
>a lifeform (as opposed to an intelligent lifeform). If worms are
>intelligent, then how does that intelligence function on a hardware
>level? Can we answer these questions?!
>
>
>


3. RE: The A.I. Project

Some thoughts about Jdube's original message.

First, Jdube seems to assume that an ordinary programming language is 
suitable for logical (AI) programming. I disagree; I think he is a bit 
optimistic here.

> ... so I would establish a programmed loop of thoughts....

That is easy to program. Remember that the CPU and Windows are running 
loops all the time, and these loops are sending messages ... does that 
count? It is not exactly what we usually call "consciousness".

> ... then as i learns new things....


Introducing learning comes a bit fast here. 

What is learning? If learning means updating data in some database, then 
learning is easy to program. I would suggest that a large program that is 
learning must have some ability to change itself ... producing program 
code that it can execute.

So already at the very start of this discussion I disagree. 

Euphoria isn't suitable for logical programming. But Euphoria could 
probably be made suitable (if one is optimistic) ....

1) A small interpreter like Euphoria can be controlled by another program. 
Source can be generated, loaded and executed. And if the generated program 
never halts? ... well, then it would have to be aborted (not possible 
today ... I don't know).

2) Euphoria is so small today that it can be changed to support new 
functionality ... it is too late to do the same with C.

3) For a program to be changeable I suppose the program syntax has to be 
very simple. Euphoria is maybe the simplest language that is still 
powerful. The simpler the program code is, the more realistic it is for a 
program to change that code.

The simplest program code I can think of is rules (if A then B). Rules are 
so simple that it doesn't matter whether they are kept in a database 
rather than in the program itself; they can be evaluated in that case too.
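
To make that concrete, here is a minimal sketch only (the rule texts are 
invented): "if A then B" rules kept as ordinary Euphoria data and 
evaluated by a plain loop, so "learning" in this narrow sense is just 
appending another rule rather than writing new code:

-- Hypothetical sketch: rules stored as {condition, conclusion} pairs,
-- evaluated by an ordinary loop. Nothing here is an agreed design.

sequence rules, facts
rules = {
    {"hungry",      "look for food"},
    {"sees poison", "move away"}
}
facts = {"hungry"}

-- fire every rule whose condition matches a known fact
procedure evaluate()
    for i = 1 to length(rules) do
        if find(rules[i][1], facts) then
            puts(1, rules[i][2] & "\n")
        end if
    end for
end procedure

-- "learning", in this narrow sense, is just adding data
procedure learn(sequence condition, sequence conclusion)
    rules = append(rules, {condition, conclusion})
end procedure

evaluate()                        -- prints: look for food
learn("tired", "rest")
facts = append(facts, "tired")
evaluate()                        -- now also prints: rest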

Conclusion: trying to simulate worms or things like that will fail because 
Euphoria hasn't been adapted to logical programming. Those who think this 
is just a matter of ordinary programming cannot even create a decent demo 
expert system with Euphoria, I am afraid.

Rom


4. RE: The A.I. Project

Rom,
Thanks for the critique. I'll try to show why I think consciousness is 
amenable to ordinary loop program flow. I'm not adequately sure you proved 
that it COULD NOT be done, and I'm definitely not sure that TONIGHT I can 
prove that it can be done.

How do our thoughts come about? What is consciousness? Have you ever lain 
in bed, not able to sleep, and listened to your mind hum away? Each 
thought comes from a previous thought, maybe not even verbalized, just a 
series of images floating from one picture to the next. You mull them over 
and over, like a loop, and each time it goes around you try another 
option, until you finally come up with something.

An a.i. doesn't "know" anything; just like us, it keeps repeating the 
problem in its "head," matching different solutions with every iteration. 
When it thinks it might have the solution it tries to make it work. If it 
works, great; if not, it now knows by experience that that approach 
failed, so if it's an optimist, it tries again. But not all a.i. would be 
optimistic; if it has failed too many times before, it might quit and 
revert back to its basic lifestyle, which is also just a big programmed 
loop: we wake, work and sleep, over and over...
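
Purely as an invented illustration of that kind of loop (the options and 
the test for success are placeholders, nothing more), in Euphoria:

-- Hypothetical "thought loop": keep cycling over candidate solutions,
-- remember which ones failed, stop when one works or patience runs out.

constant PATIENCE = 3   -- how many full passes before it "gives up"

sequence options, failed
options = {"dig under the wall", "climb the wall", "walk around the wall"}
failed = {}

-- placeholder test: in this toy world only the last option works
function try_option(integer i)
    if i = length(options) then
        return 1
    else
        return 0
    end if
end function

integer solved, attempts
solved = 0
attempts = 0

while not solved and attempts < PATIENCE do
    attempts = attempts + 1
    for i = 1 to length(options) do
        if not find(i, failed) then
            if try_option(i) then
                puts(1, "worked: " & options[i] & "\n")
                solved = 1
                exit
            else
                failed = append(failed, i)   -- learned by experience
            end if
        end if
    end for
end while

if not solved then
    puts(1, "giving up, back to the basic lifestyle loop\n")
end if
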
It's funny that you listed some good reasons why Euphoria WOULD be a good 
language to program this with, yet your initial premise and your 
conclusion stated that it would not.

The question of artificial intelligence modifying its own programming, or 
creating its own programming, has already been debated. We can't modify 
our own programming, so an a.i. definitely cannot modify ITS programming. 
If you could modify your own core programming, YOU wouldn't be YOU, and 
that's the whole premise behind pure a.i.: it exists, in whatever state. 
If it refuses to learn, eat, move or do anything, IT IS STILL AN ENTITY! 
It exists in the loops and cycles of its own consciousness. We invent the 
algorithms; the a.i. determines its own course.

If you remove its enemies, you may find that it sees itself as an enemy. 
There are no guarantees; if you know how it will react, you have a robot. 
We only know that it will react, because for every action there is an 
equal and opposite reaction. Even if you remove its stimuli, it will react 
to itself.

The computer itself is the most complicated machine in existence, yet it 
operates very simply at its lowest level. I suggest that a.i. programming 
is equally simple... at its lowest levels. We just need to find these 
lower levels and distance ourselves from the overall complexity. We've 
already made progress towards something to start coding; we can't possibly 
plan everything ahead of time. We should just do it and find out when we 
get there.







5. RE: The A.I. Project


answer to "dubetyrant"

> What is consciousness?

I don't feel that is a complete mystery. I am more uncertain about 
feelings.

When I am optimistic about logical reasoning, I imagine software that can 
solve problems, like ourselves.

A problem may be deciding what to do when you don't know what to do. I 
suppose what I myself do is to use what I know and create a list of 
possible actions. Then I decide to make a try (it is very easy to convince 
myself this option will work... because I hope so much it will work).

[In the case of stock market analysis (which I do a lot of) it isn't 
difficult to create 10000's of possible candlestick patterns and backtest 
each of these patterns (it takes many nights). That produces a result list 
sorted by predictive power. If the problem is to earn money in the stock 
market, then a very good technical indicator is the solution. The 
candlestick patterns my software can find overnight often surprise me, 
because new, unknown patterns seem to be just as good as the 100-200 
candlesticks found in the literature (which are probably 50% wrong, 
because the speed of the markets has changed a lot since the invention of 
candlestick patterns 60 years ago?).]
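
As a toy illustration only of that generate/backtest/sort idea (the 
scoring below is a made-up stand-in, not real market code):

-- Hypothetical sketch of "generate many patterns, backtest each, sort by
-- predictive power". backtest() is a stand-in; a real version would replay
-- each pattern against historical price data and measure its hit rate.

include sort.e

constant NUM_PATTERNS = 20    -- a real run would use tens of thousands

function backtest(integer pattern_id)
    return remainder(pattern_id * 37, 100) / 100   -- stand-in score, 0..1
end function

sequence results
results = {}
for id = 1 to NUM_PATTERNS do
    results = append(results, {backtest(id), id})
end for

results = sort(results)   -- ascending by score, so the best pattern is last

printf(1, "best pattern: %d (score %f)\n",
       {results[NUM_PATTERNS][2], results[NUM_PATTERNS][1]})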

Also, a problem may be a conflict in my mind. I don't feel it is that 
difficult to put a conflict into a logical database, for example. An 
expert system is about selecting an output given input data. If only one 
output is admissible (like doing something), then if an input causes 
several different outputs (several different actions selected as the 
action)... then there is a conflict in the system (just one type of 
conflict). Can a program initiate a procedure to eliminate this conflict, 
so that a single output is selected given this input case? I think so. 
Then the logical database is changed a bit (and that may affect similar 
cases too).
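
Again only a toy sketch with made-up rules, showing one way a program 
could notice that a single input case selects two different outputs and 
then eliminate the conflict:

-- Hypothetical sketch: the same input mapped to two different outputs is
-- one kind of conflict; resolve it by keeping only one of the rules.

sequence rules
rules = {
    {"enemy near", "run away"},
    {"enemy near", "attack"}      -- conflicts with the rule above
}

-- find two rules with the same input but different outputs
function find_conflict()
    for i = 1 to length(rules) - 1 do
        for j = i + 1 to length(rules) do
            if equal(rules[i][1], rules[j][1]) and not equal(rules[i][2], rules[j][2]) then
                return {i, j}
            end if
        end for
    end for
    return 0
end function

object c
c = find_conflict()
if sequence(c) then
    -- crude resolution: keep the older rule and drop the newer one, so
    -- this input case now selects a single output
    rules = rules[1..c[2]-1] & rules[c[2]+1..length(rules)]
    puts(1, "conflict resolved: kept \"" & rules[c[1]][2] & "\"\n")
end if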

> so an a.i. definitely cannot modify ITS programming.

Some way to escape reprogramming may be to create a program where the 
program logic is represented by data (data-driven). Is a Euphoria program 
data or program? Maybe only the Euphoria interpreter is program? When data 
defines what the program will do... then maybe there is no need for real 
reprogramming? Only the data has to be modified?

I will not say anything is impossible... rather that the solution is in 
the main program, i.e. the Euphoria interpreter. You cannot just apply 
Euphoria to a problem with all kinds of conceptual challenges. You have to 
know Euphoria so well that you can make the necessary changes to the core 
(changes which weren't planned for).

Rom




6. RE: The A.I. Project

The problem with AI is that people assume it just pops up, when, in reality,
there is a Designer.

We need to focus on this issue with a Designer in mind (us being the
designer in this case)...


