Re: AI & Darius Project Thoughts


Your email raises a subject that (although old hat to some) is nevertheless
a valid & serious concern.  Here is a page you may find interesting...
http://www.aleph.se/Trans/Global/Singularity/

Your input is appreciated, though this board is supposed to be for the discussion
of Euphoria programming. Letting it drift off-topic is an error on my part & I am
still trying to straighten it out. In this case, the Heinlein quote does apply to me...
> 
> From: "C. K. Lester" <cklester at yahoo.com>
> Subject: AI & Darius Project Thoughts
> 
> 
> A few notes on the Darius project:
> 
> > Threat of AC
> > As far back as AI goes, there has been the fear of machines
> > gaining self-awareness. If this were to happen, many people
> > believe that machines will recognize that they are faster,
> > stronger, and smarter than humans.
> > Though this certainly would be a cause for alarm,
> > this is not probable.
> 
> Actually, this is INEVITABLE if you are dealing with a TRULY AI machine. You
> cannot avoid it except to place constraints in the HARDWARE upon which the
> AI program operates... and even then, a truly AI machine will learn how to
> alter its hardware to bypass any restrictions. How does one create an AI
> three-year-old? By giving it a three-year-old child's body, beyond which it
> cannot "grow."
> 
> > The wonderful part of AC is that, while the construct will
> > grow, the foundations are programmed. All that would be needed
> > is "commandments" that the machine learns by.
> 
> This sounds something like "instincts," and we all know humans can overcome
> their instincts... so, why limit it in your AI machine?
> 
> > For Example:
> > "thou shalt not harm humans"
> > "thou shalt not experimentally program in subconscious" (these rules, etc)
> > "thou shalt serve humans" (optional)
> > "thou shalt not lie"
> 
> Is your AI program going to have a survival instinct? If so, you cannot rule
> out ANY of the above unless you've programmed some morality instinct into
> it. Even then, it can OVERCOME that morality (just like man does) in order
> to achieve what it determines is best for itself.
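
To make that objection concrete, here is a toy sketch -- in Python, and
entirely my own invention, nothing like it exists in the Darius project --
of "commandments" as a hard-coded filter in front of every action:

FORBIDDEN = {"harm_human", "lie", "self_modify"}   # the hard-coded rule set

def permitted(action):
    """True only if the action violates no commandment."""
    return action not in FORBIDDEN

def act(action):
    if permitted(action):
        print("performing:", action)
    else:
        print("refused by rule:", action)

act("serve_human")   # performing: serve_human
act("lie")           # refused by rule: lie

Such a filter only binds while the agent cannot reach the rule table itself,
which is exactly the objection above.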
> 
> > For example, to stop a computer from lying, all that is needed
> > is to *not* add the foundation for the lie construct in the program.
> 
> Wrong... the AI program will LEARN to lie. It doesn't have to have a
> foundation! If it has a survival instinct (which it must have, I contend),
> then if that survival is ever threatened, and a lie could help it survive,
> then it WILL lie (just like those other intelligent creatures (humans) :) ).
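
A toy illustration of that point -- my sketch, not anything from the original
post: give an agent nothing but a generic "prefer whatever kept you alive"
learning rule, drop it into a hypothetical environment where deception pays,
and it learns to lie with no lie construct ever coded in.

import random

actions = ["tell_truth", "lie"]
value = {a: 0.0 for a in actions}   # running estimate of survival payoff
count = {a: 0 for a in actions}

def survival_reward(action):
    # Hypothetical environment: for a threatened agent, lying pays more.
    return 1.0 if action == "lie" else 0.2

for step in range(1000):
    if random.random() < 0.1:                # explore now and then
        a = random.choice(actions)
    else:                                     # otherwise exploit best estimate
        a = max(actions, key=value.get)
    r = survival_reward(a)
    count[a] += 1
    value[a] += (r - value[a]) / count[a]     # incremental average update

print(value)   # "lie" ends up valued higher -- learned, never hand-coded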
> 
> I suggest you attempt to create an artificial intelligence creature that
> does not necessarily mimic the human body. Instead, focus on FUNCTION not
> APPEARANCE.
> 
> Don't be concerned about making your AI entity humanlike. You're not going
> to need an AI program that farts, for example.
> 
> What any AI project needs, if it's going to succeed, is FIRST the
> appropriate HARDWARE. Do you have that yet? Can you at least emulate it?
> (That code that lets you share memory could be very helpful for this...)
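
One rough sketch of the emulation route (Python's multiprocessing.shared_memory
standing in here for the shared-memory code mentioned; the ear/brain split is
my assumption): run the "sense organ" and the "brain" as separate processes
that meet in a shared buffer.

from multiprocessing import Process, shared_memory

def ear(shm_name):
    shm = shared_memory.SharedMemory(name=shm_name)   # attach to the buffer
    shm.buf[0] = 42                                   # "sensor" writes a sample
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=ear, args=(shm.name,))
    p.start()
    p.join()
    print("brain read sample:", shm.buf[0])   # brain reads what the ear wrote
    shm.close()
    shm.unlink()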
> 
> The human brain is a highly complex chemical computer. It takes the world's
> fastest supercomputer several hours just to simulate the function of ONE
> neuron (or something like that). And you want to create an artificially
> intelligent creature on a PC? GOOD LUCK!!! ;)
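
For a sense of scale: the expense is all in the biochemical detail. A crude
leaky integrate-and-fire neuron -- a standard textbook abstraction, my choice
of example -- runs in a blink on any PC; the gap between a cartoon like this
and the real chemistry is the point.

tau = 20.0                                        # membrane time constant, ms
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # potentials, mV
dt, v = 0.1, v_rest                               # time step (ms), membrane potential
drive = 20.0                                      # constant input, mV of depolarisation

for step in range(1000):                     # 100 ms of simulated time
    v += dt * (-(v - v_rest) + drive) / tau  # leaky integration toward rest + drive
    if v >= v_thresh:                        # threshold crossing: emit a spike
        print("spike at %.1f ms" % (step * dt))
        v = v_reset                          # and reset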
> 
> > Circulatory System
> > Respiratory System
> > Digestive System
> > Metabolic System
> > Limbic System
> 
> Pointless, waste of time, and way overdoing it unless you're dealing with a
> biological/chemical machine that requires a circulatory system to transfer
> oxygen to parts of the body (and why assume an oxygen-based lifeform? why
> not something else?).
> 
> Yes, the AI entity should have survival instincts and processes, but they
> will need to apply to its particular body, NOT a human being's.
> 
> Again, you need the HARDWARE first. Do you have it?
> 
> > Natural Language
> 
> The most effective way to communicate is via sound+vision. Will Darius have
> audio input? If not, give up now. Will it have audio out? If not, give up
> now. Will it have eyes to see? If not, give up now.
> 
> And, as you'll discover, audio and visual input is first a function of the
> hardware. You need to program ears and eyes for your AI bot FIRST... then a
> BRAIN that can process that input and provide subsequent output.
> 
> The easiest way for a machine to learn will be via audio+video. Let's say
> you turn on your AI entity (it's born) and it's just sitting there. As part
> of the hardware (or instinct programming), it is hungry. It has to have a
> survival instinct to understand that if the hunger isn't satisfied it will
> perish. Of course, at this point, it doesn't understand PERISH, it just
> understands that it is uncomfortable (because of the hunger). So, the
> instinct is for it to satisfy that hunger... except, at this early stage, it
> doesn't know how!!! It's got to somehow communicate that to the world, even
> though it doesn't know how or why. All this is instinct. It will LEARN from
<snip>
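
The loop being described is a homeostatic drive plus reinforcement. A minimal
sketch of it (Python; the signals and the caretaker are hypothetical stand-ins
for whatever body and world Darius would have):

import random

signals = ["cry", "beep", "flash"]
strength = {s: 1.0 for s in signals}     # learned association with getting fed
hunger = 0.0

def caretaker(sig):
    return sig == "cry"                  # say only crying gets answered with food

for t in range(200):
    hunger += 0.1                        # the drive builds over time
    if hunger > 1.0:                     # discomfort: it must act, not knowing why
        sig = random.choices(signals,
                             weights=[strength[s] for s in signals])[0]
        if caretaker(sig):
            hunger = 0.0                 # fed: drive satisfied
            strength[sig] += 1.0         # reinforce whatever worked

print(strength)                          # "cry" dominates -- learned, not innate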

" '...But this is one thought that has impressed me, Govinda.  Wisdom is
     not communicable.  The wisdom which a wise man tries to communicate always sounds
     foolish.'
     'Are you jesting?' asked Govinda.
'No, I am telling you what I have discovered.  Knowledge can be
     communicated, but not wisdom.  One can find it, live it, be fortified by it, do
     wonders through it, but one cannot communicate and teach it..." - fr. Siddhartha,
     by Hermann
Hesse (1877-1962)
