RE: webnet & HAL9000


On 13 Feb 2002, at 19:21, C. K. Lester wrote:

> 
> 
> > IQ is generally data retrieval. Intelligence is
> > being able to apply it as needed. In my opinion.
> 
> IQ takes into consideration your ability to reason logically, as well... 
> I think.

That's what i said. "apply as needed" assumes the ability to rearrange the 
data, logically or illogically, as humans can.

> > ...i have that data, but what can i do new with it to extract more
> > info or apply it differently?
> 
> This assumes your AI knows what it means to do something "new" or out of 
> the ordinary. Really, it assumes your AI is aware it can operate or 
> consider facts independently of its knowledge.

That's where the ability to alter the programming while running is important, 
to notice how ..... oh wait, i say that too..
 
> > That's where the ability to alter the programming while
> > running is important, to notice how the sandwich is made,
> > and figure a way to do it herself. Like any intelligent
> > being would. To make a CheeseSandwichClass, with all the 
> > methods. Darned if *i* am going to do it for her!
> 
> Look how much she has to know just to be able to consider a 
> CheeseSandwichClass... bread, cheese, composition (or construction), 
> fueling needs, self-preservation, spoiled vs. fresh, etc., etc.

That's why i won't do it.
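
Just to make the scale obvious, here is a rough sketch, in Python and purely 
hypothetical (it is not Tiggr's code, and i won't be handing her anything like 
it), of the kind of hand-coding one little concept takes:

# Hypothetical sketch: every fact below has to be typed in by a human,
# and a real Ai would need thousands of classes like this one.
class CheeseSandwichClass:
    ingredients = ["bread", "cheese", "bread"]   # composition / construction
    shelf_life_days = 2                          # spoiled vs. fresh
    calories = 350                               # fueling needs

    def is_fresh(self, age_days):
        # self-preservation: don't eat spoiled food
        return age_days <= self.shelf_life_days

    def assemble(self):
        # the order of the layers is part of the knowledge too
        return list(self.ingredients)

That is one sandwich. Multiply by everything else she would need to know.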
 
> > Really, i defined the words, and wrote the code to get them, but
> > that is no different than you going to school, getting a 
> > dictionary, and then stringing the actions in the dictionary
> > together. I didn't build the string she exec'ed below.
> 
> The problem in AI is nobody drills down to the REQUISITES! What are the 
> requisites for "getting a dictionary, then stringing the actions in the 
> dictionary together?"

Need or desire, Tiggr doesn't .. oh, i say that below too....
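
To be concrete about "stringing the actions in the dictionary together", it is 
something roughly like this, in Python for illustration only (not the code 
Tiggr actually ran):

# Each word maps to a snippet of code; a "sentence" is just those
# snippets strung together and exec'ed as one little program.
dictionary = {
    "greet":  "print('hello')",
    "count":  "total = 2 + 2",
    "report": "print(total)",
}

sentence = ["greet", "count", "report"]
program = "\n".join(dictionary[word] for word in sentence)

exec(program)   # i wrote the snippets, but not this particular string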
 
> > In a manner, it does, yes. Humans have some need or drive
> > or desire. Tiggr doesn't have those reasons to pursue original
> > actions yet.
> 
> The problem with a creature not having sentience is that it cannot 
> understand death (or ceasing to exist). Even if it COULD understand 
> death, it would have to have a reason to avoid it.
> 
> Kat: Tiggr, if you don't obtain and consume fuel, you will die.
> 
> Look at the implications behind this simple statement and you'll realize 
> AI will never happen.

Knowing about ceasing to function doesn't mean anything. The desire to 
avoid that condition helps tho.
 
> So, how would Tiggr respond right now? :)

To that line? With silence.
 
> Now that I think about it, Tiggr is kinda on life support. In fact, she 
> has no ability to choose her own destiny. Someone (including you) could 
> come along at any time and "pull the plug" on her, delete all the code 
> that defines her, and she'd be dead.

And she wouldn't care, she has no desire to stay "alive".
 
> > > I've mentioned the Turing test a few times already in this
> > > thread. Kat, can Tiggr respond like a human in the chat
> > > channel? Would she pass for a human intelligence? Of what age?
> > 
> > Well, depends on how smart the human is.
> 
> Any normal, high-school graduate adult with some life experience.
> 
> > Some people insist she is human, some keep checking round
> > the clock to see if she is awake, or gives 
> > the same answers, or repeats herself... But she does fool some,
> > at least some of the time. How do i know? by the way they
> > talk to her, yell at her, curse her, flirt at her, etc.
> 
> Okay, not those dummies. ;)
> 
> > And one person went to great lengths one night to try and
> > prove she had some sentience, even if she was a program
> > in a computer.
> 
> That person simply didn't understand sentience, AI, etc...

Well, they went to spirit habitation, like a soul using the physical puter to 
communicate, like your spirit, etc. It was interesting.
 
> > So either i am not sentient, or she partially is?
> 
> There is no partial sentience. You're either sentient, or you can fake 
> it real good. What was the "psychologist" computer program ("Alice?") 
> that fooled so many people? It so depends on the interactants...
> 
> kat + ck = good communication
> tiggr + ck = bad communication (nay, impossible)
> 
> > write make all the assertions after a while. Raising an Ai
> > to the age of 2 yrs is prolly my limit, the rest it will
> > need to learn on its own, rather like a child in 
> > kindergarten.
> 
> You're getting warmer!!!

Been warm, that's why i can critique Lenat. They have, using his figures, 600 
man-years of assertions hand-coded into Cyc. I call that a waste. But they 
made good money doing it.
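
For anyone who hasn't looked at Cyc: those assertions are hand-typed facts plus 
rules that walk over them. A toy sketch in Python, my own illustration and 
nothing like Cyc's real engine:

# Toy assertion store: hand-entered facts plus one inference rule.
# Cyc holds millions of these, every one typed in by a person.
facts = {
    ("isa", "Tiggr", "Program"),
    ("isa", "Program", "Artifact"),
}

def isa(thing, category):
    # true if stated directly, or reachable by following "isa" links
    if ("isa", thing, category) in facts:
        return True
    return any(isa(parent, category)
               for (rel, child, parent) in facts
               if rel == "isa" and child == thing)

print(isa("Tiggr", "Artifact"))   # True, but only because a human typed both facts in

600 man-years of that, and the machine still only knows what it was told.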
 
> You must build a machine that can be intelligent, NOT a machine that is 
> intelligent.
> 
> Think about this: a sentient being (or even AI) MUST have provisions 
> for the input of data. As humans, we have eyes, ears, mouth, and skin. I 
> want to see somebody come up with a machine that can visually perceive 
> as good as a human being.

That's where the ability to alter the programming while running is important, 
to notice how ..... oh wait, i said that above! This includes the ability to 
create and use new classes, including modifying them, while running.
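
For the curious, here is a bare-bones picture, again in Python just for 
illustration (not Tiggr's actual source), of what "create and modify classes 
while running" means:

# Build a class definition as a string at runtime and exec it,
# which gives the program behaviour it was never shipped with.
source = """
class SandwichMaker:
    def make(self):
        return "one cheese sandwich"
"""

namespace = {}
exec(source, namespace)              # the class now exists, created on the fly
maker = namespace["SandwichMaker"]()
print(maker.make())

# Changing it while it runs is just as direct:
namespace["SandwichMaker"].make = lambda self: "one toasted cheese sandwich"
print(maker.make())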
 
> According to a recent article I read, "To simulate one-hundredth of a 
> second of the complete processing of even a single nerve cell from the 
> human eye requires several minutes of processing time on a 
> supercomputer. The human eye has 10 million or more such cells 
> constantly interacting with each other in complex ways. This means it 
> would take a minimum of 100 years of supercomputer processing to 
> simulate what takes place in your eye many times every second."

That's parallel processing. I have begged a few dozen people over the yrs to 
help, but they get more satisfaction competing against me than cooperating. 
So now i critique them too.
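
The arithmetic in that article is easy to check, and it also shows why 
parallel hardware matters. Back-of-the-envelope in Python, assuming "several 
minutes" means about five:

# Rough check of the article's claim, with assumed numbers.
minutes_per_cell = 5             # "several minutes" per cell per 1/100 second (assumed)
cells = 10 * 1000 * 1000         # "10 million or more" cells in the eye

serial_minutes = minutes_per_cell * cells
serial_years = serial_minutes / (60 * 24 * 365)
print(round(serial_years), "years")       # about 95 years for that 1/100 second, done serially

# Split across many machines, the wall-clock time drops in proportion;
# that is the whole case for doing this work in parallel.
machines = 10000
print(round(serial_years / machines * 365, 1), "days")   # about 3.5 days on 10,000 machines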
 
> That's why I say, "Never in our lifetime."

Even if i do get one going, i am not interested in letting people know. Telling 
anyone would not help me or the Ai. The best the Ai could hope for is to be 
accepted as human online. This is true of anyone or anything different from 
the perceived norm.

Kat
