1. RE: webnet & HAL9000

> HAL9000 is about to be released as open source after 17 years of work,
> well Cyc is only the beginning of HAL,
> but it's the first step towards true AI:

"True AI" will not exist in your lifetime. The hardware/software
available for IE (intelligence emulation) these days is about
0.0000000000001% of what we need for true AI.

Besides, there is no intelligence without sentience, and we
will NEVER develop something that is sentient.

2. RE: webnet & HAL9000

On 12 Feb 2002, at 20:30, C. K. Lester wrote:

> 
> > HAL9000 is about to be released as open source after 17 years of work,
> > well Cyc is only the beginning of HAL,
> > but it's the first step towards true AI:
> 
> "True AI" will not exist in your lifetime. The hardware/software
> available for IE (intelligence emulation) these days is about
> 0.0000000000001% of what we need for true AI.

That rather depends on how smart the Ai is in computer languages, doesn't 
it?

> Besides, there is no intelligence without sentience, and we
> will NEVER develop something that is sentient.

At least not without a method to execute dynamic strings or files at runtime. 
After all, if the sum of you was what you were programmed with in school, 
you'd be worthless!  What surprises me a bit is how i keep coming back to 
Eu for jobs requiring quick horsepower and sequences, but i go to mirc for 
the more advanced Ai techniques. Mirc can write itself out, reload and 
execute itself while running, or build new strings to execute, even calling 
functions in those strings. Ditto for spawning native (global access) 
processes or isolated (separate) threads. This lends itself to being 
self-aware. It can do things i did not write code for.

Like this:
<kat> Tiggr, give the channel a coke
 * [Tiggr] gives #TiggrBot a   Coke  

There is no code written in her to do that. She is aware i was addressing her, 
knew what "give" meant in irc context, knew what channel i meant, picked 
out a Coke graphic, built the mirc code in a string, and exec'd the string. 
(and she knows my favorites, and can decide if she knows your favorite Coke 
or not.) This was easy in mirc, altho the code looks like it was from Mars:

<working mirc code>
<snip>
; do i know this word at all?
if ( $isalias(%pt.alias) == $true ) {
; go ask the word what it means and
; add it to the intermediate parse collection
; yes, David, the word i said is executed here!
    set -u0 %pt.temp $ [ $+ [ %pt.alias ] $+ ( $1 $2 %pt.text.new ) ]
<snip> ; not going to show it!
; execute the end result
  %pt.text.new
<snip>
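Kat's trick above -- look up a handler for a word, have the handler build code in a string, then execute the built string -- can be sketched in Python, whose exec() plays roughly the role of mIRC's string execution. This is a loose analogy under invented names (give, handlers, parse are all made up), not a port of Tiggr's actual code:

```python
# Loose Python analogue of the mIRC snippet: check whether a word has a
# handler (mIRC's $isalias), ask the handler to build a command string,
# then execute the built string. All names here are hypothetical.

commands = []  # stands in for lines sent to the IRC channel

def give(channel, item):
    # Build the action as a *string of code*, the way Tiggr builds mIRC code.
    return f"commands.append('* [Tiggr] gives {channel} a {item.title()}')"

handlers = {"give": give}  # the "alias table"

def parse(channel, text):
    word, *rest = text.split()
    if word in handlers:                       # do i know this word at all?
        built = handlers[word](channel, rest[-1])
        exec(built)                            # execute the end result

parse("#TiggrBot", "give the channel a coke")
print(commands[0])  # * [Tiggr] gives #TiggrBot a Coke
```

The point of the detour through a string is that the "give" action never exists as pre-written application code; it is assembled and run on the fly.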

With the "wrong" command, and a big enough database, Tiggr would get into 
a pseudo-endless loop of genetically trying out new code never before seen. 
The var "%pt.text.new" could contain a command to spawn new threads. 
Mirc, however, is abysmally slow for this task, which is why i was so excited 
to find Eu. Now, how to convince Rob to make a few expansions along the 
lines of the more traditional Ai languages, but inside the *much* easier to 
use Eu frame? For mirc's $isalias() in Eu, i need a preprocessor to build a 
routine_id() table, and see if there is a routine_id(word) for the word i want? 
Or shut down the whole program, making a guess as to its state, and restart 
it? No way. You are correct, in Eu, an Ai project would be rather impossible 
as Eu is now. I feel bad for Eu: i use Eu to feed data to mirc.
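The two abilities Kat credits to mIRC -- resolve a routine by name at runtime, and define a brand-new routine from a string while the program runs -- can at least be illustrated in Python. A sketch under made-up names (greet, no_such_word), not a proposal for Eu's actual routine_id() semantics:

```python
# Sketch of a routine_id()-style lookup: resolve a routine by name at
# runtime, and define a new routine from a string while running.

def routine_id(name):
    # Like Euphoria's routine_id(): returns -1 if the name is unknown.
    return globals().get(name, -1)

# Define a new routine at runtime from a string -- the ability Kat says
# mIRC has and Eu (as of 2002) lacked.
exec("def greet(who):\n    return 'hello, ' + who")

assert routine_id("no_such_word") == -1
print(routine_id("greet")("kat"))  # hello, kat
```

With this in place there is no need to "shut down the whole program and restart it": new words become callable routines in the running process.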

Kat

3. RE: webnet & HAL9000

>  Sentience means the ability to SENSE or perceive...

The definition of "sentient," according to Webster's, is "of or capable 
of feeling; conscious." Merriam Webster's dictionary.com defines it as 
"1. the quality or state of being sentient; consciousness; 2. feeling as 
distinguished from perception or thought."

So, I must take exception to your partial definition. When speaking of 
AI or any fake intelligencer, the terms are very important.

In my definition, sentience has to do with awareness of one's 
existence and one's ability to affect the environment in which one finds 
oneself.

> and there are many robot applications today which DO
> employ various sensory (and sensory interpretation) mechanisms.

...and it's all programmed. They do no "thinking." They are not 
"intelligent." They are NOT artificial intelligence, but rather, fake 
intelligence (FI).

> And some AI efforts are in fact BASED upon such.

"AI efforts" is a misnomer. Better: "expert systems" efforts or "fake 
intelligence" efforts.

The Turing Test is a great start. Once your "AI" can fake intelligence 
well enough to trick a human being, THEN you might consider adding some 
other sentient functionality. However, like I already said, true AI will 
NOT (never) occur in your lifetime, my lifetime, or that of my 
grandchildren.

True AI will require a whole new approach to developing sentient machine 
life.

> Never say never; it may bite your backside.

Never! At least, not yet. ;)

4. RE: webnet & HAL9000

> > "True AI" will not exist in your lifetime. The hardware/software
> > available for IE (intelligence emulation) these days is about
> > 0.0000000000001% of what we need for true AI.
> 
> That rather depends on how smart the Ai is in computer languages, 
> doesn't 
> it?

No. Because when you "lessen" AI, you're just creating an expert system. 
Kat, you are so much more intelligent than an expert system it's 
incredible. Your brain is such a powerhouse of computing, I doubt we'll 
ever reach that level. Now, the Matrix makes me wonder... hehehe. 
<cough>

When I think AI, I don't consider IQ. Intelligence is distinct from 
knowledge. Of course, at what IQ is a person considered intelligent? Get 
a machine to that IQ, let it pass the Turing Test, and you've got fake 
intelligence. Real intelligence, however, is going to require much more 
and far greater than what we've got today.

> > Besides, there is no intelligence without sentience, and we
> > will NEVER develop something that is sentient.
> 
> At least not without a method to execute dynamic strings or files 
> at runtime. 

This would be so that the machine could... what? Create new thoughts and 
act on them? A sort of, "That knowledge doesn't exist in my brain, so 
what could/can/should I do with it?"

> After all, if the sum of you was what you were programmed with in 
> school, 
> you'd be worthless!

Exactly. You can put all the "data" I know into a neural net, but will 
it ever be able to deal with "unexpected" situations? In some cases, 
yes, like when you have a dentistry expert system. But ask it how to 
make a grilled cheese sandwich and... well... there ya go.

> This lends itself to being self aware.

But I would be highly suspicious of any claim that your program was 
self-aware. It is faking it, trust me. ;)

> It can do things i did not write code for.

Unlikely. In fact, you may be way too deep in your own propaganda here. 
;)

> Like this:
> <kat> Tiggr, give the channel a coke
>  * [Tiggr] gives #TiggrBot a   Coke  
> 
> There is no code written in her to do that.

Oh, but there is...

> She is aware i was addressing her,

...because she knows the rules of address.

> knew what "give" meant in irc context...

...because she is an IRC expert (chat) system.

> knew what channel i meant...

Again, because of pre-programmed rules.

> picked out a Coke graphic, built the mirc code in a string, and exec'd 
> the string. (and she knows my favorites, and can decide if she knows 
> your 
> favorite Coke or not.)

This is just an advanced database application. If not, how do you 
differentiate it from such?

> With the "wrong" command, and a big enough database, Tiggr would get 
> into 
> a pseudo-endless loop of genetically trying out new code never 
> before seen.

Is this what human intelligence does? Are you saying you need better 
hardware? :)

> Now, how to convince Rob to make a few expansions along the 
> lines of the more traditional Ai languages, but inside the *much* 
> easier to use Eu frame?

Can you not do this, Kat? or somebody else here on the list?

I've mentioned the Turing test a few times already in this thread. Kat, 
can Tiggr respond like a human in the chat channel? Would she pass for a 
human intelligence? Of what age?

-ck

5. RE: webnet & HAL9000

The thing with artificial intelligence: who says that a computer has to 
understand English (or other human languages) to be intelligent?

Even if someone could make an AI that could convincingly think like a higher 
order of animal, like a dog or monkey or something, I would be impressed.

A program that has and shows free will, not just a few random things that it 
does, but 'reasons' for its behavior.


Whatever. I just think the goal of teaching a computer to understand full 
English before you try and make a personality is a bit silly.
=====================================================
.______<-------------------\__
/ _____<--------------------__|===
||_    <-------------------/
\__| Mr Trick

6. RE: webnet & HAL9000

> The thing with Artificial intelligence, who says that a computer has to 
> understand english to be intelligent? (or other human languages)

I don't think anybody said that. But in order to demonstrate 
intelligence, the PC will have to be able to communicate, right?

> Even if someone could make an AI that could convincingly think 
> like a higher 
> order of animal, like a dog or monkey or something, I would be 
> impressed.

I agree, that would be impressive, mainly because it would take some 
extreme hardware to accomplish that. :)

7. RE: webnet & HAL9000

> > "True AI" will not exist in your lifetime. The hardware/software
> > available for IE (intelligence emulation) these days is about
> > 0.0000000000001% of what we need for true AI.
> 
> How do you know that?

I was resorting to hyperbole to stamp my point on your brain. I have no 
actual statistic, although I'm sure I could find one. ;)

8. RE: webnet & HAL9000

On 13 Feb 2002, at 4:35, C. K. Lester wrote:

> 
> > > "True AI" will not exist in your lifetime. The hardware/software
> > > available for IE (intelligence emulation) these days is about
> > > 0.0000000000001% of what we need for true AI.
> > 
> > That rather depends on how smart the Ai is in computer languages, 
> > doesn't 
> > it?
> 
> No. Because when you "lessen" AI, you're just creating an expert system. 

I didn't mean to lessen the Ai to programming knowledge, i meant in addition 
to normal knowledge.

<snip>
 
> When I think AI, I don't consider IQ. Intelligence is distinct from 
> knowledge. Of course, at what IQ is a person considered intelligent? Get 
> a machine to that IQ, let it pass the Turing Test, and you've got fake 
> intelligence. Real intelligence, however, is going to require much more 
> and far greater than what we've got today.

IQ is generally data retrieval. Intelligence is being able to apply it as 
needed. In my opinion.
 
> > > Besides, there is no intelligence without sentience, and we
> > > will NEVER develop something that is sentient.
> > 
> > At least not without a method to execute dynamic strings or files 
> > at runtime. 
> 
> This would be so that the machine could... what? Create new thoughts and 
> act on them? A sort of, "That knowledge doesn't exist in my brain, so 
> what could/can/should I do with it?"

No, i have that data, but what can i do new to it to extract more info or apply 
it differently?

> > After all, if the sum of you was what you were programmed with in 
> > school, 
> > you'd be worthless!
> 
> Exactly. You can put all the "data" I know into a neural net, but will 
> it ever be able to deal with "unexpected" situations? In some cases, 
> yes, like when you have a dentistry expert system. But ask it how to 
> make a grilled cheese sandwich and... well... there ya go.

That's where the ability to alter the programming while running is important, 
to notice how the sandwich is made, and figure a way to do it herself. Like 
any intelligent being would. To make a CheeseSandwichClass, with all the 
methods. Darned if *i* am going to do it for her!
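Python happens to allow exactly this kind of class-building while the program is running, via its type() built-in. The sketch below is an illustration of the idea under invented names (CheeseSandwich, make, grill), not anything from Tiggr:

```python
# Build a class at runtime from a name, base classes, and a dict of
# methods -- the "make a CheeseSandwichClass while running" idea.

def make(self):
    return ["bread", "cheese", "bread"]

# Equivalent to a `class CheeseSandwich:` statement in source code,
# except it can happen after the program has started.
CheeseSandwich = type("CheeseSandwich", (object,), {"make": make})

# The class can also be modified while running:
CheeseSandwich.grill = lambda self: "grilled " + "-".join(self.make())

print(CheeseSandwich().grill())  # grilled bread-cheese-bread
```

A program watching how the sandwich is made could, in principle, assemble such a class from observations rather than from hand-written source.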
 
> > This lends itself to being self aware.
> 
> But I would be highly suspect for you to claim that your program was 
> self aware. It is faking it, trust me. ;)

Not yet, not yet! blink
 
> > It can do things i did not write code for.
> 
> Unlikely. In fact, you may be way too deep in your own propaganda here. 
> ;)

Really, i defined the words, and wrote the code to get them, but that is no 
different than you going to school, getting a dictionary, and then stringing the
actions in the dictionary together. I didn't build the string she exec'ed below.
 
> > Like this:
> > <kat> Tiggr, give the channel a coke
> >  * [Tiggr] gives #TiggrBot a   Coke  
> > 
> > There is no code written in her to do that.
> 
> Oh, but there is...
> 
> > She is aware i was addressing her,
> 
> ...because she knows the rules of address.
> 
> > knew what "give" meant in irc context...
> 
> ...because she is an IRC expert (chat) system.
> 
> > knew what channel i meant...
> 
> Again, because of pre-programmed rules.
> 
> > picked out a Coke graphic, built the mirc code in a string, and exec'd 
> > the string. (and she knows my favorites, and can decide if she knows 
> > your 
> > favorite Coke or not.)
> 
> This is just an advanced database application. If not, how do you 
> differentiate it from such?

That action in the channel was not hard-coded just to msg the coke to the 
channel. Any word defined, with methods to replace the different definitions 
in the human dictionary, should run as part of her "understanding" just fine, 
in discussions, anyhow.

> > With the "wrong" command, and a big enough database, Tiggr would get 
> > into 
> > a pseudo-endless loop of genetically trying out new code never 
> > before seen.
> 
> Is this what human intelligence does? Are you saying you need better 
> hardware? :)

In a manner, it does, yes. Humans have some need or drive or desire. Tiggr 
doesn't have those reasons to pursue original actions yet. Other than some 
rules to get me news, mind the channels, etc., normal hard-coded things, 
like someone using a ruler on your knuckles when you don't do as you are 
told. Personally, i could use a better math coprocessor.
 
> > Now, how to convince Rob to make a few expansions along the 
> > lines of the more traditional Ai languages, but inside the *much* 
> > easier to use Eu frame?
> 
> Can you not do this, Kat? or somebody else here on the list?

I can't at this time, no. Lack of money.
 
> I've mentioned the Turing test a few times already in this thread. Kat, 
> can Tiggr respond like a human in the chat channel? Would she pass for a 
> human intelligence? Of what age?

Well, depends on how smart the human is. Some people insist she is 
human, some keep checking round the clock to see if she is awake, or gives 
the same answers, or repeats herself. Her code is somewhat adaptive. She 
would not fool me. But she does fool some, at least some of the time. How 
do i know? By the way they talk to her, yell at her, curse her, flirt at her, etc.
And one person went to great lengths one night to try and prove she had 
some sentience, even if she was a program in a computer. That was 
memorable. One reason she has her own online code is because the code i 
put into my own irc client was active in channel when i was away from 
keybd, and people thought it was me. So either i am not sentient, or she 
partially is?

I mined Cyc webpages yrs ago, but i haven't gone that same route in her 
programming. Especially since they admit no existing language they have 
will handle that many predefined human-coded assertions (written as 
classes, i imagine), 360 million of them; i'd rather have the original code i 
write make all the assertions after a while. Raising an Ai to the age of 2 yrs is 
prolly my limit; the rest it will need to learn on its own, rather like a child 
in kindergarten.

Kat

9. RE: webnet & HAL9000

> IQ is generally data retrieval. Intelligence is
> being able to apply it as needed. In my opinion.

IQ takes into consideration your ability to reason logically, as well... 
I think.

> ...i have that data, but what can i do new to it to extract more
> info or apply it differently?

This assumes your AI knows what it means to do something "new" or out of 
the ordinary. Really, it assumes your AI is aware it can operate or 
consider facts independently of its knowledge.

> That's where the ability to alter the programming while
> running is important, to notice how the sandwich is made,
> and figure a way to do it herself. Like any intelligent
> being would. To make a CheeseSandwichClass, with all the 
> methods. Darned if *i* am going to do it for her!

Look how much she has to know just to be able to consider a 
CheeseSandwichClass... bread, cheese, composition (or construction), 
fueling needs, self-preservation, spoiled vs. fresh, etc., etc.

> Really, i defined the words, and wrote the code to get them, but
> that is no different than you going to school, getting a 
> dictionary, and then stringing the actions in the dictionary
> together. I didn't build the string she exec'ed below.

The problem in AI is nobody drills down to the REQUISITES! What are the 
requisites for "getting a dictionary, then stringing the actions in the 
dictionary together?"

> In a manner, it does, yes. Humans have some need or drive
> or desire. Tiggr doesn't have those reasons to pursue original
> actions yet.

The problem with a creature not having sentience is that it cannot 
understand death (or ceasing to exist). Even if it COULD understand 
death, it would have to have a reason to avoid it.

Kat: Tiggr, if you don't obtain and consume fuel, you will die.

Look at the implications behind this simple statement and you'll realize 
AI will never happen.

So, how would Tiggr respond right now? :)

Now that I think about it, Tiggr is kinda on life support. In fact, she 
has no ability to choose her own destiny. Someone (including you) could 
come along at any time and "pull the plug" on her, delete all the code 
that defines her, and she'd be dead.

> > I've mentioned the Turing test a few times already in this
> > thread. Kat, can Tiggr respond like a human in the chat
> > channel? Would she pass for a human intelligence? Of what age?
> 
> Well, depends on how smart the human is.

Any normal, high-school graduate adult with some life experience.

> Some people insist she is human, some keep checking round
> the clock to see if she is awake, or gives 
> the same answers, or repeats herself... But she does fool some,
> at least some of the time. How do i know? by the way they
> talk to her, yell at her, curse her, flirt at her, etc.

Okay, not those dummies. ;)

> And one person went to great lengths one night to try and
> prove she had some sentience, even if she was a program
> in a computer.

That person simply didn't understand sentience, AI, etc...

> So either i am not sentient, or she partially is?

There is no partial sentience. You're either sentient, or you can fake 
it real good. What was the "psychologist" computer program ("ELIZA"?) 
that fooled so many people? It so depends on the interactants...

kat + ck = good communication
tiggr + ck = bad communication (nay, impossible)

> write make all the assertions after a while. Raising a Ai
> to the age of 2 yrs is prolly my limit, the rest it will
> need to learn on it's own, rather like a child in 
> kindergarten.

You're getting warmer!!!

You must build a machine that can be intelligent, NOT a machine that is 
intelligent.

Think about this: a sentient being (or even AI) MUST have provisions 
for the input of data. As humans, we have eyes, ears, mouth, and skin. I 
want to see somebody come up with a machine that can visually perceive 
as well as a human being.

According to a recent article I read, "To simulate one-hundredth of a 
second of the complete processing of even a single nerve cell from the 
human eye requires several minutes of processing time on a 
supercomputer. The human eye has 10 million or more such cells 
constantly interacting with each other in complex ways. This means it 
would take a minimum of 100 years of supercomputer processing to 
simulate what takes place in your eye many times every second."
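The quoted figures can be sanity-checked with a back-of-envelope calculation. Taking "several minutes" to mean about three minutes per cell per hundredth of a second (an assumption; the article gives no exact number), a strictly serial simulation works out to thousands of supercomputer-years per second of vision, so if anything the article's "100 years" figure is conservative:

```python
# Back-of-envelope on the quoted retina figures. The 3-minute value is an
# assumption standing in for the article's "several minutes".

minutes_per_cell_per_10ms = 3
hundredths_per_second = 100
cells = 10_000_000  # "10 million or more such cells"

total_minutes = minutes_per_cell_per_10ms * hundredths_per_second * cells
years = total_minutes / (60 * 24 * 365)
print(f"~{years:,.0f} supercomputer-years per simulated second")
```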

That's why I say, "Never in our lifetime."

10. RE: webnet & HAL9000

Irv Mullins wrote:

> Kat: if you could put even that amount of "intelligence" into
> business software, instead of IRC, you wouldn't lack money.
> And if the intelligence of a 2-year-old is as far as you can
> take it, fine. Your program would be on par with a lot of CEO's.
> - but with a better memory - and less likely to have temper
> tantrums.:^p

LOL, Irv.

Here's the deal with AI: before you can have AI with a two-year-old's 
intelligence, you have to have built a two-year-old. I'm talking 
hardware, here. And we ain't got that, yet.

"Well what about software emulation?"

Not without more powerful hardware... ;)

I think our best bet is to create a brain and go from there.

11. RE: webnet & HAL9000

Irv Mullins wrote:

> ...we miss what - 99% of the data that passes by us?

No doubt, even 99.999% or more! :)

> There's information (sometimes important information) 
> in the ultraviolet spectrum, infrared, microwave, magnetic fields,
> electrostatic fields, and who knows what else.

The ABCs we create (that's "Artificial Biological Constructs") would 
certainly be able to tap into those "invisible" spectrums!

12. RE: webnet & HAL9000

On 13 Feb 2002, at 19:21, C. K. Lester wrote:

> 
> 
> > IQ is generally data retrieval. Intelligence is
> > being able to apply it as needed. In my opinion.
> 
> IQ takes into consideration your ability to reason logically, as well... 
> I think.

That's what i said. "apply as needed" assumes the ability to rearrange the 
data, logically or illogically, as humans can.

> > ...i have that data, but what can i do new to it to extract more
> > info or apply it differently?
> 
> This assumes your AI knows what it means to do something "new" or out of 
> the ordinary. Really, it assumes your AI is aware it can operate or 
> consider facts independently of its knowledge.

That's where the ability to alter the programming while running is important, 
to notice how ..... oh wait, i say that too..
 
> > That's where the ability to alter the programming while
> > running is important, to notice how the sandwich is made,
> > and figure a way to do it herself. Like any intelligent
> > being would. To make a CheeseSandwichClass, with all the 
> > methods. Darned if *i* am going to do it for her!
> 
> Look how much she has to know just to be able to consider a 
> CheeseSandwichClass... bread, cheese, composition (or construction), 
> fueling needs, self-preservation, spoiled vs. fresh, etc., etc.

That's why i won't do it.
 
> > Really, i defined the words, and wrote the code to get them, but
> > that is no different than you going to school, getting a 
> > dictionary, and then stringing the actions in the dictionary
> > together. I didn't build the string she exec'ed below.
> 
> The problem in AI is nobody drills down to the REQUISITES! What are the 
> requisites for "getting a dictionary, then stringing the actions in the 
> dictionary together?"

Need or desire, Tiggr doesn't .. oh, i say that below too....
 
> > In a manner, it does, yes. Humans have some need or drive
> > or desire. Tiggr doesn't have those reasons to pursue original
> > actions yet.
> 
> The problem with a creature not having sentience is that it cannot 
> understand death (or ceasing to exist). Even if it COULD understand 
> death, it would have to have a reason to avoid it.
> 
> Kat: Tiggr, if you don't obtain and consume fuel, you will die.
> 
> Look at the implications behind this simple statement and you'll realize 
> AI will never happen.

Knowing about ceasing to function doesn't mean anything. The desire to 
avoid that condition helps tho.
 
> So, how would Tiggr respond right now? :)

To that line? with silence.
 
> Now that I think about it, Tiggr is kinda on life support. In fact, she 
> has no ability to choose her own destiny. Someone (including you) could 
> come along at any time and "pull the plug" on her, delete all the code 
> that defines her, and she'd be dead.

And she wouldn't care, she has no desire to stay "alive".
 
> > > I've mentioned the Turing test a few times already in this
> > > thread. Kat, can Tiggr respond like a human in the chat
> > > channel? Would she pass for a human intelligence? Of what age?
> > 
> > Well, depends on how smart the human is.
> 
> Any normal, high-school graduate adult with some life experience.
> 
> > Some people insist she is human, some keep checking round
> > the clock to see if she is awake, or gives 
> > the same answers, or repeats herself... But she does fool some,
> > at least some of the time. How do i know? by the way they
> > talk to her, yell at her, curse her, flirt at her, etc.
> 
> Okay, not those dummmies. ;)
> 
> > And one person went to great lengths one night to try and
> > prove she had some sentience, even if she was a program
> > in a computer.
> 
> That person simply didn't understand sentience, AI, etc...

Well, they went to spirit habitation, like a soul using the physical puter to 
communicate, like your spirit, etc. etc. It was interesting.
 
> > So either i am not sentient, or she partially is?
> 
> There is no partial sentience. You're either sentient, or you can fake 
> it real good. What was the "psychologist" computer program ("Alice?") 
> that fooled so many people? It so depends on the interactants...
> 
> kat + ck = good communication
> tiggr + ck = bad communication (nay, impossible)
> 
> > write make all the assertions after a while. Raising a Ai
> > to the age of 2 yrs is prolly my limit, the rest it will
> > need to learn on it's own, rather like a child in 
> > kindergarten.
> 
> You're getting warmer!!!

Been warm, that's why i can critique Lenat. They have, using his figures, 600 
man-years of assertions hand-coded into Cyc. I call that a waste. But, they 
made good money doing it.
 
> You must build a machine that can be intelligent, NOT a machine that is 
> intelligent.
> 
> Think about this: an sentient being (or even AI) MUST have provisions 
> for the input of data. As humans, we have eyes, ears, mouth, and skin. I 
> want to see somebody come up with a machine that can visually perceive 
> as good as a human being.

That's where the ability to alter the programming while running is important, 
to notice how ..... oh wait, i said that above! This includes the ability to 
create and use new classes, including modifying them, while running.
 
> According to a recent article I read, "To simulate one-hundredth of a 
> second of the complete processing of even a single nerve cell from the 
> human eye requires several minutes of processing time on a 
> supercomputer. The human eye has 10 million or more such cells 
> constantly interacting with each other in complex ways. This means it 
> would take a minimum of 100 years of supercomputer processing to 
> simulate what takes place in your eye many times every second."

That's parallel processing. I have begged a few dozen people over the yrs to 
help, but they get more satisfaction competing against me than cooperating. 
So now i critique them too.
 
> That's why I say, "Never in our lifetime."

Even if i do get one going, i am not interested in letting people know. Telling 
anyone would not help me or the Ai. The best the Ai could hope for is to be 
accepted as human online. This is true of anyone or anything different from 
the perceived norm.

Kat

13. RE: webnet & HAL9000

Kat wrote:
> > > IQ is generally data retrieval. Intelligence is
> > > being able to apply it as needed. In my opinion.
> > 
> > IQ takes into consideration your ability to reason logically,
> > as well... I think.
> 
> That's what i said. "apply as needed" assumes the ability to
> rearrange the data, logically or illogically, as humans can.

You said that applied to intelligence, not IQ. But no biggie.

> > The problem in AI is nobody drills down to the REQUISITES!
> > What are the requisites for "getting a dictionary, then
> > stringing the actions in the dictionary together?"
> 
> Need or desire...

See, you don't even know the requisites, so how can you even be 
contemplating full-blown AI?!?!

> > Kat: Tiggr, if you don't obtain and consume fuel, you will die.
> > 
> > Look at the implications behind this simple statement and
> > you'll realize AI will never happen.
> 
> Knowing about ceasing to function doesn't mean anything.
> The desire to avoid that condition helps tho.

You cannot even desire to avoid that condition if you don't realize what 
the condition means! So you are wrong on this point. It is not desire 
that matters, it is the knowing to the point of understanding, of 
putting into the context of one's own existence.

> > So, how would Tiggr respond right now? :)
> 
> To that line? with silence.

Why?

> > Now that I think about it, Tiggr is kinda on life support.
> > In fact, she has no ability to choose her own destiny.
> > Someone (including you) could come along at any time and
> > "pull the plug" on her, delete all the code 
> > that defines her, and she'd be dead.

I put that badly. "She'd be dead" should have been, "it would cease 
to exist." That's death, but death usually only occurs to something that 
was once alive. I can burn up a piece of paper and it would cease to 
exist, but we wouldn't say it's dead.

Tiggr is not alive, yet. ;)

> And she wouldn't care, she has no desire to stay "alive".

She has no desire to stay alive because she doesn't know how to put that 
(life/death) into the context of her own existence.

In order to "desire to stay alive," one has to understand life vs. 
death. Unless you understand that, you won't have a desire one way or 
the other. (Even people who DO understand this often times do not desire 
to stay alive.) :(

> > That person simply didn't understand sentience, AI, etc...
> 
> Well, they went to spirit habitation, like a soul using
> the physical puter to communicate, like your spirit, etc etc..
> It was interesting.

But for us skeptics, it was blah blah blah... right Kat? Don't tell me 
you're gettin' all spiritual on us. ;)

> They have, using his figures, 600 
> man-years of assertions hand-coded into Cyc.
> I call that a waste. But, they 
> made good money doing it.

I call that a waste and the wrong approach... but what do I know?

> > "...it would take a minimum of 100 years of supercomputer
> > processing to simulate what takes place in your eye many
> > times every second."
> 
> That's parallel processing.

You understand, however, that even a child's brain is billions of times 
more complex and functional than ANY computer we are conceptualizing 
these days...

> The best the Ai could hope for
> is to be accepted as human online.

I hate to be negative, but truth is truth: Your AI will never hope.

Until our scientists (or Kats) start to understand the requisites, they 
will always be on the wrong path to AI.

new topic     » goto parent     » topic index » view message » categorize

14. RE: webnet & HAL9000

On 14 Feb 2002, at 4:52, C. K. Lester wrote:

<major snippage>

> > The best the Ai could hope for
> > is to be accepted as human online.
> 
> I hate to be negative, but truth is truth: Your AI will never hope.
> 
> Until our scientists (or Kats) start to understand the requisites, they 
> will always be on the wrong path to AI.

How do you define "requisites", and what do you see they are?

Kat


15. RE: webnet & HAL9000

> > Until our scientists (or Kats) start to understand the
> > requisites, they will always be on the wrong path to AI.
> 
> How do you define "requisites"...?

The "requisites" are the most basic elements required to sustain an 
intelligence. It's just like in programming where you have to break down 
every task to its most basic elements.

I can't say:

PlayEUQuake()

I have to break it down... to an extreme! This must also be done when 
considering a man-made intelligence. What are the most basic needs?
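For instance, a first pass at that breakdown might look like this (every name here is hypothetical, just to show the idea of reducing a task to basic elements):

```python
# Hypothetical sketch: "PlayEUQuake()" broken down into more basic steps.
# Every name below is invented for illustration only.

def play_eu_quake():
    state = initialize_game()
    while not state["done"]:
        inputs = read_sensors(state)      # gather raw data
        decision = choose_action(inputs)  # decide what to do
        state = apply_action(state, decision)
    return state

def initialize_game():
    return {"done": False, "ticks": 0}

def read_sensors(state):
    return {"ticks": state["ticks"]}

def choose_action(inputs):
    # a "basic element": one trivially simple decision
    return "quit" if inputs["ticks"] >= 3 else "move"

def apply_action(state, action):
    state["ticks"] += 1
    if action == "quit":
        state["done"] = True
    return state
```

And of course each of those pieces would have to be broken down further, all the way to the "most basic elements."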

> and what do you see they are?

I've got my ideas, but I'll sit down and consider them more fully 
tonight.

We must answer, "The most basic requisite for a man-made intelligence 
is..."


16. RE: webnet & HAL9000

On 14 Feb 2002, at 20:09, C. K. Lester wrote:

> 
> > > Until our scientists (or Kats) start to understand the
> > > requisites, they will always be on the wrong path to AI.
> > 
> > How do you define "requisites"...?
> 
> The "requisites" are the most basic elements required to sustain an 
> intelligence. It's just like in programming where you have to break down 
> every task to its most basic elements.
> 
> I can't say:
> 
> PlayEUQuake()
> 
> I have to break it down... to an extreme! This must also be done when 
> considering a man-made intelligence. What are the most basic needs?

But *humans* shouldn't be doing that. Humans would have a biased opinion, 
and be error-prone in the extreme. Witness again the 600 man-years on Cyc, 
and Lenat failed *again*. If the human doesn't pre-munge it, then the Ai 
must, and if it cannot incorporate it while it's running, then it's rather 
useless, isn't it?
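A toy sketch of that kind of runtime incorporation, in Python rather than mirc script (the function name and strings are invented for illustration):

```python
# Toy sketch: a running program builds new code as a string and
# incorporates it into itself -- what mirc does with its scripting,
# done here with Python's built-in exec().

def learn_behavior(namespace, name, body):
    """Compile a new one-argument function from source text and install it."""
    source = f"def {name}(target):\n    return {body}\n"
    exec(source, namespace)

behaviors = {}
# "Teach" the program a brand-new action it was never written with:
learn_behavior(behaviors, "give_coke",
               "'* [Tiggr] gives ' + target + ' a Coke'")

print(behaviors["give_coke"]("#TiggrBot"))
```

The point being: the behavior did not exist anywhere in the source code until the program built and absorbed it while running.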

> > and what do you see they are?
> 
> I've got my ideas, but I'll sit down and consider them more fully 
> tonight.
> 
> We must answer, "The most basic requisite for a man-made intelligence 
> is..."


17. RE: webnet & HAL9000

Kat wrote:
> > > How do you define "requisites"...?
> > 
> > The "requisites" are the most basic elements required to sustain
> > an intelligence. It's just like in programming where you have to
> > break down every task to its most basic elements.
> > 
> 
> But *humans* shouldn't be doing that.

Doing what?! In what context do you mean?

> Humans would have a biased opinion, 
> and be error-prone in the extreme.

Then it comes down to the fact, and what I believe, that HARDWARE is 
where it's at. We'll need to create a brain before we'll ever create 
intelligence.

> Witness again the 600 man-years on Cyc, 
> and Lenat failed *again*.

Shocker, right? ;)

> If the human doesn't pre-munge it,
> then the Ai must, and if it cannot incorporate it while
> it's running, then it's rather useless, isn't it?

I think one of the most critical things for an AI is the ability to 
gather data by itself... this means visual data, auditory data, tactile 
data, ALL kinds of sensory data which it will later utilize.

Look at a baby human... It is gathering data all the time, right outta 
the womb! It starts to note patterns, recurring events; it feels 
discomfort and instinctively* knows it needs fuel! And something built 
into its brain allows it to put it all in context, until eventually 
it's aware that it doesn't HAVE to do what mommy/daddy says... ;)
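That gather-and-notice-patterns loop can be sketched minimally (a hypothetical toy, not a real design):

```python
# Hedged sketch: an "experiencer" that only gathers events and notices
# recurring ones -- no behavior is programmed in for any specific event.
from collections import Counter

class Experiencer:
    def __init__(self):
        self.memory = Counter()

    def experience(self, event):
        # gather data, all the time, right outta the womb
        self.memory[event] += 1

    def recurring(self, threshold=2):
        # "patterns": events seen at least `threshold` times
        return {e for e, n in self.memory.items() if n >= threshold}

baby = Experiencer()
for event in ["light", "voice", "hunger", "voice", "hunger", "hunger"]:
    baby.experience(event)
print(baby.recurring())  # "voice" and "hunger" recur; "light" does not
```

Everything interesting, of course, is hidden in how a real brain puts those recurring events into context.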

But can we, at our most advanced technological state, ever develop 
machinery that we can "turn on" and have it start "experiencing" its 
existence? That BIOS is going to be very complex, and it's only the 
utter beginning of the development of that man-made AI...

*Note that instinct is the programming an entity contains but DOESN'T 
KNOW HOW IT WORKS or that it even EXISTS! What are you going to do, 
block the MIE (Manmade Intelligent Entity)** from knowing certain parts 
of its code?! Instinct is going to be fun to program! ;)

**Kat, help me come up with a cool acronym thingie that distinctly 
and succinctly defines the kind of entity we're discussing.

