1. RE: AI

Sabal.Mike at notations.com wrote:

> 1) Intelligence may be defined as the ability to change behaviors
> (i.e., learn) based on interactions with one's environment.
> 
> 2) Thus, a truly intelligent program must be able to modify its
> own programming; or at least be able to create and execute its
> own programs to accomplish the changed behaviors it deems
> necessary.

Just to clarify...

Can you "modify" your programming? NO! It JUST HAPPENS (the wonders of 
our hardware, eh?). In other words, we need not be CONSCIOUS of the 
change because it happens automagically/automatically/autonomically... 
:)

So, the AI entity need not be able to modify its own programming... It 
just has to happen as a function of its hardware (whether emulated (most 
likely) or not).
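To make the point concrete: behavior can change without the program text ever changing, because "learning" lives in the data the hardware operates on. A toy sketch (in Python, standing in for whatever the AI is written in; the perceptron rule here is illustrative, not a claim about how brains work):

```python
# Toy sketch: behavior changes by adjusting weights, not by rewriting code.
# The update rule below is an ordinary perceptron step; the program text
# never changes, yet the input-to-output mapping does.

def perceptron_train(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # "Learning" is nothing more than nudging these numbers.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn logical AND: same code, new behavior after training.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
```

The program never "modified its own programming"; the change just happened as a function of its (emulated) hardware.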

As an aside, Orson Scott Card is my favorite author.

> PS: A realistic understanding of the true depravity of the human
> soul / spirit / mind would also be prudent in this endeavor.
> No child of mine will have unfettered access to the internet!

Hehe... That's true. ;)


2. RE: AI

On 7 Nov 2002, at 19:01, Chris Bensler wrote:

> 
> AI does not require the ability to dynamically program itself. That's 
> what the neural net is for.

Exsqueeze me? You are separating the pattern matching ability of the neural 
net from the AI? Most electronic neural nets are programmable for new data 
patterns, i.e., learning. The laser holograms, obviously, stop learning once 
the "film" is developed. Neural nets are used mostly where large numbers of 
patterns must be matched quickly: dictionaries of words, photo 
identification, etc. The US military has successfully used holographic 
"neural nets" to identify tanks and planes almost instantly in different 
settings, for targeting weaponry.

> Can you learn to fly? No, because you have no basic skills/properties 
> that would provide that sort of ability. Our anatomy won't allow it. We 
> have no wings, no hollow bones to make us light enough to be able to 
> carry our weight without needing a chest the size of a small car. You 
> can TRY and learn to fly, but will you ever succeed?

Learning to fly is not the same as using the physical abilities you innately 
possess in biology. You don't possess the innate ability to send me text 
through solid copper threads and fiber optics thousands of miles long, do 
you? No. You found a way to learn about hardware separate from yourself to 
do it. It's not human instinct to use email; you are not born with such a 
natal brain function. You learned a new function and how/when/why to execute 
it, with a suitable incentive to use it. But if you could not notice the 
sequence of events, the pattern to use, the pattern of events leading to 
email happening, and execute a non-instinctual sequence of events, you'd 
appear to be less intelligent than CK's worm.

> You cannot reprogram how you see/taste/touch/hear/smell. These are our 
> basic sensory tools, that we use to get input from our environment. 

Wrong. I assure you, if you lose kinesthetic abilities from partial spinal cord 
damage, you can train your biology to use other input to compensate to 
some degree, and with surprising benefits in the right circumstances. The 
new function becomes a default function too.
 
> Think of genes as our basic, unmodifiable properties. These are the 
> properties that make us a unique species. Through inheritance, our genes 
> can be modified, but that is more or less a random act (not talking 
> about chromosome pairing). You can never modify your own genes.

<cough> gene therapy </cough>
 
> Think of instinct as our basic set of tools/actions/reactions that we 
> have at our disposal. You cannot create new instincts for yourself. They 
> would be calculated, conscious decisions, not subconscious, primitive 
> reactions.

But why limit the AI to this? Short-circuiting instinct would be an obvious 
plus for the AI, although you'd lose a slave.

> Think of our consciousness as a neural net. It learns how to react to 
> our sensory perception, and how to more efficiently use the tools that 
> we have at our disposal. When the neural net learns, it modifies 
> existing links, tying relevant interactions to the appropriate reactions. 
> That does not mean that if we jump off a cliff, we can learn to fly. We 
> don't have the basic tools needed for that task.

This is a matter of pattern matching the needs of flying against what's 
available. Got canvas and other cheap airplane parts? Are they in the pattern 
of a tent or a plane? Which one are you going to jump off the cliff with? Part of 
being intelligent is using and modifying the environment, not only using what 
you were born with.

Kat


3. RE: AI

> From: Kat [mailto:kat at kogeijin.com]

> Eu can't innately recode a sequence holding a new function, 
> and run it. Even 
> if you write it out to a new Eu program, and call memsharing 
> code, you 
> didn't/cannot allow in your initial code to store and recall 
> and use the 
> variables returned by the new "thread". At least, not without 
> a preprocessor 
> to scan the whole file and write new code to know the names 
> of all the vars. 
> Then you get into shutting down and restarting the code to use the 
> preprocessor. mIRC can make new vars and use them, and store small 
> programs in vars, and execute them. Or write them out and 
> include them 
> while running, and unload them. Eu should also.

OK, I'll finally bite.  A lot of this discussion seems to me to be bogged
down in semantics, i.e., what does it mean for an AI to reprogram itself?  

Let's take the "I can't reprogram myself to fly" argument.  What does it
mean for me "to fly"?  If you mean with only what is a part of myself, then,
no, I can't.  However, I can use an airplane.  What does that mean?  Well,
it means I use my hands and feet to push pedals and pull sticks and etc.,
and use my eyes to watch the dials and the outside world.  These are all
natural actions for these body parts of mine.  I've just figured out how to
coordinate them with great precision, given the input I've gotten ("Gee,
here's a plane"), combined with my desires ("I want to fly").  Did I
"reprogram" my hands and feet?  No.  They're not doing anything unusual.
Did I reprogram my brain?  Well, maybe, but that could be simply the
equivalent of reweighting a neural net.

So, what might that look like?  Well, I might have a set of input nodes that
correspond to my eyes, another to ears, and so forth.  A set of output nodes
might correspond to what I tell my hands to do, etc.  Now, this is all
really complicated at the details level.  What you might end up with is a
generic neural net app with a database of interconnected nets.  I suppose
this could simulate some short term memory and feedback loops.
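That "input nodes for eyes, output nodes for hands" picture can be sketched as a tiny named-node net. A minimal Python illustration, assuming made-up sense and action names and hand-picked weights rather than trained ones:

```python
# Sketch of the "generic net" idea: named input nodes (senses) feed weighted
# sums to named output nodes (actions). The weights are illustrative, not
# trained; a real system would learn them.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class SenseActNet:
    def __init__(self, weights):
        # weights: {output_name: {input_name: weight}}
        self.weights = weights

    def act(self, senses):
        # Each output node fires on a weighted sum of the sense readings.
        return {
            out: sigmoid(sum(w * senses.get(inp, 0.0)
                             for inp, w in wmap.items()))
            for out, wmap in self.weights.items()
        }

# Hypothetical wiring: the horizon drifting right pulls the stick left.
net = SenseActNet({
    "stick_left":  {"horizon_drift_right": 3.0, "horizon_drift_left": -3.0},
    "stick_right": {"horizon_drift_left": 3.0, "horizon_drift_right": -3.0},
})
out = net.act({"horizon_drift_right": 1.0, "horizon_drift_left": 0.0})
```

Feeding an output back in as next step's input would give the short-term-memory/feedback loops mentioned above.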

Every time I've wanted to play around in this area, I've always been stumped
because I could never find a meaningful, simple enough environment/domain in
which to place an AI.  I think, however, that a profitable approach would be
to start with a well defined set of inputs and outputs (i.e., senses and
actions) that are appropriate for whatever the environment will be, and then
allow the AI to grow by responding to its environment.  Yes, you'll have to
instill some sort of instinct (eat when hungry, scared of loud noises,
attracted to the opposite sex, etc).  Of course, when you've distilled
things into a neural net, I'm not sure how you would translate these things.
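One cheap way to "instill instincts" before any learning happens is a hardwired rule table consulted ahead of learned behavior. A sketch in Python, with the senses and reactions invented for illustration:

```python
# Sketch of "instill some instincts, then let it respond to its environment":
# a priority-ordered table of hardwired reactions, checked before any
# learned behavior gets a say.

INSTINCTS = [
    # (condition on the sense dict, reaction) -- checked in order.
    (lambda s: s.get("loud_noise", 0) > 0.8, "flee"),
    (lambda s: s.get("hunger", 0) > 0.6 and s.get("food_visible", 0) > 0,
     "eat"),
]

def react(senses, learned=None):
    for condition, action in INSTINCTS:
        if condition(senses):
            return action            # instinct preempts learned behavior
    if learned:
        return learned(senses)       # fall through to whatever was learned
    return "idle"
```

The open question in the paragraph above remains: once everything is a neural net, these crisp if-then instincts would have to become weight priors instead of rules.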

Matt Lewis


4. RE: AI

On 8 Nov 2002, at 7:25, Matthew Lewis wrote:

> 
> 
> > From: Kat [mailto:kat at kogeijin.com]
> 
> > Eu can't innately recode a sequence holding a new function, 
> > and run it. Even 
> > if you write it out to a new Eu program, and call memsharing 
> > code, you 
> > didn't/cannot allow in your initial code to store and recall 
> > and use the 
> > variables returned by the new "thread". At least, not without 
> > a preprocessor 
> > to scan the whole file and write new code to know the names 
> > of all the vars. 
> > Then you get into shutting down and restarting the code to use the 
> > preprocessor. mIRC can make new vars and use them, and store small 
> > programs in vars, and execute them. Or write them out and 
> > include them 
> > while running, and unload them. Eu should also.
> 
> OK, I'll finally bite.  A lot of this discussion seems to me to be bogged
> down in semantics, i.e., what does it mean for an AI to reprogram itself?  
> 
> Let's take the "I can't reprogram myself to fly" argument.  What does it
> mean for me "to fly"?  If you mean with only what is a part of myself, then,
> no,
> I can't.  However, I can use an airplane.  What does that mean?  Well, it
> means
> I use my hands and feet to push pedals and pull sticks and etc., and use my
> eyes
> to watch the dials and the outside world.  These are all natural actions for
> these body parts of mine.  I've just figured out how to coordinate them with
> great precision, given the input I've gotten ("Gee, here's a plane"), combined
> with my desires ("I want to fly").  Did I "reprogram" my hands and feet?  No. 
> They're not doing anything unusual. Did I reprogram my brain?  Well, maybe,
> but
> that could be simply the equivalent of reweighting a neural net.

The human brain has compartments, shown to be specialized, but specialized 
by what, we don't know. For instance, the occipital lobes will become audio 
processors when sight is permanently removed, and the auditory section will 
help in sight if hearing is permanently removed. This has been shown by MRI 
and CAT scans, and by radioactive glucose uptake studies that show activity. 
This is what led me to believe in a few smaller neural nets, mediated, and 
called into action, by a central processor. The neural nets are naturally 
reprogrammable, and the central processor needs to know in what order to 
reprogram which, and in what order to call what function. As you learn, you 
are either stacking up if-then statements, case statements, or new 
functions. Either way, it's new code, and you were not programmed to know of 
the new code beforehand, so you could not allow for its dynamic inclusion.
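The "dynamic inclusion" being asked for is easy to show in a language that allows it (Python here, standing in for what Kat wants Eu to do; executing untrusted text this way is unsafe and purely illustrative):

```python
# Sketch of "dynamic inclusion": new code the program did not know at
# startup arrives as text at runtime and is installed into a dispatch
# table. WARNING: exec on untrusted input is unsafe; illustration only.

skills = {}   # name -> callable, grows while the program runs

def learn(name, source):
    namespace = {}
    exec(source, namespace)          # compile the "new code" on the fly
    skills[name] = namespace[name]

def perform(name, *args):
    return skills[name](*args)

# Later, the running program "learns" a function it was never shipped with:
learn("double", "def double(x):\n    return 2 * x")
```

This is roughly the mIRC trick from the earlier quote: store a small program, then execute it without shutting down and restarting.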

> So, what might that look like?  Well, I might have a set of input nodes that
> correspond to my eyes, another to ears, and so forth.  A set of output nodes
> might correspond to what I tell my hands to do, etc.  Now, this is all really
> complicated at the details level.  What you might end up with is a generic
> neural net app with a database of interconnected nets.  I suppose this could
> simulate some short term memory and feedback loops.
> 
> Every time I've wanted to play around in this area, I've always been stumped
> because I could never find a meaningful, simple enough environment/domain in
> which to place an AI.  I think, however, that a profitable approach would be
> to
> start with a well defined set of inputs and outputs (i.e., senses and actions)
> that are appropriate for whatever the environment will be, and then allow the
> AI
> to grow by responding to its environment.  Yes, you'll have to instill some
> sort
> of instinct (eat when hungry, scared of loud noises, attracted to the opposite
> sex, etc).  Of course, when you've distilled things into a neural net, I'm not
> sure how you would translate these things.

I'd translate them less anthropomorphically. 

Kat


5. RE: AI

> From: Kat [mailto:kat at kogeijin.com]

> The human brain has compartments, shown to be specialized, but 
> specialized by what, we don't know. For instance, the 
> occipital lobes will 
> become audio processors when sight is permanently removed, and the 
> auditory section will help in sight if the sound is 
> permanently removed. This 
> has been shown by MRI and CAT scans, and radioactive glucose uptake 
> studies, to show activity. This is what led me to believe in 
> a few smaller 
> neural nets, mediated, and called into action, by a central 
> processor. The 
> neural nets are naturally reprogrammable, and the central 
> processor needs 
> to know in what order to reprogram which, in what order to 
> call what function. 
> As you learn, you are either stacking up if-then statements, case 
> statements, or new functions. Either way, it's new code, and 
> you were not 
> programmed to know of the new code beforehand, so you could 
> not allow for 
> its dynamic inclusion.

Why hardcode if-then statements?  I think you start with a 'database' of
neural nets, perhaps with one top level net, and allow more to be created.
Now, you have a program that's really comprised of neural nets--call it
neuralscript.  Disparate nets could call upon each other (subroutines) and
would be allowed to create new nets, and so forth.  OK, this is pretty hazy,
but if you could start really small, it seems like you should be able to
grow it.
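The hazy "neuralscript" idea can at least be made concrete as a toy: a registry of nets where any net may call another by name, or register a brand-new one. In this Python sketch the "nets" are plain functions standing in for trained networks, and all names are invented:

```python
# Toy "neuralscript": a database of nets, where nets call each other like
# subroutines and are allowed to create (register) new nets at runtime.
# Plain functions stand in for trained neural nets here.

nets = {}

def register(name, fn):
    nets[name] = fn

def run(name, x):
    return nets[name](x)

def top_level(x):
    # The top-level net delegates to a sub-net, growing it on first use.
    if "scale" not in nets:
        register("scale", lambda v: v * 10)   # a newly grown sub-net
    return run("scale", x) + 1

register("top", top_level)
```

Starting really small, as suggested, the registry begins with one top-level net and everything else is grown.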

Matt Lewis


6. RE: AI

On 8 Nov 2002, at 14:23, Matthew Lewis wrote:

> 
> 
> > From: Kat [mailto:kat at kogeijin.com]
> 
> > The human brain has compartments, shown to be specialized, but 
> > specialized by what, we don't know. For instance, the 
> > occipital lobes will 
> > become audio processors when sight is permanently removed, and the 
> > auditory section will help in sight if the sound is 
> > permanently removed. This 
> > has been shown by MRI and CAT scans, and radioactive glucose uptake 
> > studies, to show activity. This is what led me to believe in 
> > a few smaller 
> > neural nets, mediated, and called into action, by a central 
> > processor. The 
> > neural nets are naturally reprogrammable, and the central 
> > processor needs 
> > to know in what order to reprogram which, in what order to 
> > call what function. 
> > As you learn, you are either stacking up if-then statements, case 
> > statements, or new functions. Either way, it's new code, and 
> > you were not 
> > programmed to know of the new code beforehand, so you could 
> > not allow for 
> > its dynamic inclusion.
> 
> Why hardcode if-then statements?  I think you start with a 'database' of
> neural nets, perhaps with one top level net, and allow more to be created.
> Now, you have a program that's really comprised of neural nets--call it
> neuralscript.  Disparate nets could call upon each other (subroutines) and
> would be allowed to create new nets, and so forth.  OK, this is pretty hazy,
> but
> if you could start really small, it seems like you should be able to grow it.

You'd still need a mediator to babysit the interaction between the nets. For 
instance, if you had a fulltime vision nnet, there is no reason the AI 
couldn't have a process to emit a good ole fashioned programmable interrupt 
when it sees a red disk hanging from a device over a traffic intersection. 
You wouldn't want that interrupt to go anywhere else but to a mediator; 
doing anything else with it would be seen by others as random behavior. I 
could probably think up other reasons to moderate the nnets and their 
interconnectivity. Besides, if you commit to hardware, it's just dropping 
another FPGA on the motherboard.
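The mediator idea, as a minimal Python sketch (event names and the traffic example are taken from the paragraph above; the routing scheme is an assumption):

```python
# Sketch of the mediator: sub-nets never signal each other directly. They
# raise "interrupts" to one mediator, which decides who (if anyone) reacts,
# so stray signals can't leak out as random behavior.

class Mediator:
    def __init__(self):
        self.routes = {}             # event name -> handler

    def route(self, event, handler):
        self.routes[event] = handler

    def interrupt(self, event, data):
        # Unrouted events are swallowed instead of reaching random nets.
        handler = self.routes.get(event)
        return handler(data) if handler else None

log = []
m = Mediator()
m.route("red_disk_seen", lambda data: log.append(("brake", data)))

# The vision nnet fires its interrupt; only the mediator sees it.
m.interrupt("red_disk_seen", {"distance_m": 30})
m.interrupt("unknown_event", {})     # ignored: no route registered
```

Committed to hardware, the `Mediator` is the extra FPGA; the routing table is what it would be programmed with.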

Kat

