1. a.i

Hey all,

  Pattern recognition can't be considered, in any way,
conducive to reasoning in a computer program. This
CBR is really, at most, a simulation or an emulation.
While it sometimes seems like a real intelligent person,
it can in no way be construed to be intelligent. While it
takes a very different approach, it's no less an emulation than
video game AI.
  I agree with the thought of a project geared towards human
interaction, like a chatterbot but completely different in premise,
completely different. So many people are running around making
their own chatbots with a continuously modified CBR approach. It's
stupid... as long as a computer program sees
a group of words as a string, all you've got is a database. It
has to see words from the user the same way it sees numbers or Euphoria
commands. User input has to be
something the computer can add, multiply, stretch or
otherwise manipulate. Data stored in a computer's RAM
has no meaning to it; we have to find a viable way to make
it process words.
  I'm beginning to see what it's going to take to make it
come alive... I hope you are too.

-Have a happy day-
                                               JDUBE


2. Re: a.i

> I'm beginning to see what it's going to take to make it
> come alive... I hope you are too.

Yes, far greater hardware than we've got at the moment... ;)

If you liken the human brain to the hardware, and our experience as the
software, and our instinct as the BIOS (?), then you'll begin to grasp how
impossible it is to create a HUMAN AI at the moment.

That's why I figure worms would be easier, though not easy. But first we
have to determine if worms are even smart. What are your criteria for
intelligence?

There is a great distinction between "life" and "intelligent life."


3. Re: a.i

Intelligence might have something to do with predicting the future based on
what we know now.

----------------
cheers,
Derek Parnell
----- Original Message -----
From: "C. K. Lester" <cklester at yahoo.com>
To: "EUforum" <EUforum at topica.com>
Subject: Re: a.i


>
> > I'm beginning to see what it's going to take to make it
> > come alive... I hope you are too.
>
> Yes, far greater hardware than we've got at the moment... ;)
>
> If you liken the human brain to the hardware, and our experience as the
> software, and our instinct as the BIOS (?), then you'll begin to grasp how
> impossible it is to create a HUMAN AI at the moment.
>
> That's why I figure worms would be easier, though not easy. But first we
> have to determine if worms are even smart. What are your criteria for
> intelligence?
>
> There is a great distinction between "life" and "intelligent life."


4. Re: a.i

> Intelligence might have something to do with predicting the future
> based on what we know now.

To what extent, then? A worm can "predict" that it will die if it doesn't
eat, so it scrounges for food. (I know it's acting on instinct (or is it?),
but we need more concrete, less ambiguous evidence for intelligence.) I was
going to suggest "survival" as an indicator of intelligence, but it's too
general and easily emulated by non-intelligent entities... like plants. :)


5. Re: a.i

> > Intelligence might have something to do with predicting the future
> > based on what we know now.
>
> To what extent, then? A worm can "predict" that it will die if it doesn't
> eat, so it scrounges for food. (I know it's acting on instinct (or is it?),
> but we need more concrete, less ambiguous evidence for intelligence.) I was
> going to suggest "survival" as an indicator of intelligence, but it's too
> general and easily emulated by non-intelligent entities... like plants. :)

Oops! Maybe plants are intelligent, though. No offense to any
EUPHORIA-programming plants out there.


6. Re: a.i

>
>If you liken the human brain to the hardware, and our experience as the
>software, and our instinct as the BIOS (?), then you'll begin to grasp how
>impossible it is to create a HUMAN AI at the moment.

Negative. Simply put, look at Euphoria, OK? The interpreter understands
a series of commands, a limited set of words generally referred to as
a HIGH LEVEL PROGRAMMING LANGUAGE; the very word "interpreter" itself
points to changing our words into something the computer understands.
What I'm thinking of is a VERY HIGH LEVEL language, with maybe thousands
of words (the average person has, what, 60,000 words in their vocabulary?
We start with the vocabulary of, say, your average 3 year old). But not
just a programming language, that's not the point; rather, use the technology
of a computer language's interpreter so that the program can understand what
we say. You also use this very high level language to program all
your instincts, or you could just use Euphoria. Do you see what I mean?
And what do you think of the idea, would it work?
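
To make that concrete, here is a minimal Euphoria sketch of the idea; the two-word vocabulary, the handler routines and the sample sentence are all invented for illustration:

-- each known word is paired with the routine that "understands" it
procedure do_hello()
    puts(1, "greeting recognised\n")
end procedure

procedure do_food()
    puts(1, "this word is about eating\n")
end procedure

constant VOCAB    = {"hello", "food"}
constant HANDLERS = {routine_id("do_hello"), routine_id("do_food")}

procedure interpret(sequence words)
    integer k
    for i = 1 to length(words) do
        k = find(words[i], VOCAB)
        if k then
            call_proc(HANDLERS[k], {})   -- the word triggers behaviour, like a keyword
        else
            printf(1, "unknown word: %s\n", {words[i]})
        end if
    end for
end procedure

interpret({"hello", "I", "want", "food"})

The point is only that a recognised word maps to an action, the way the interpreter maps a keyword to one, rather than to a stored reply string.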

                          --"Imagination is more important than 
knowledge"--Albert Einstein

                               -JDUBE


7. Re: a.i

> >If you liken the human brain to the hardware, and our experience as the
> >software, and our instinct as the BIOS (?), then you'll begin to grasp how
> >impossible it is to create a HUMAN AI at the moment.
>
> Negative. Simply put, look at Euphoria, OK? The interpreter understands
> a series of commands, a limited set of words generally referred to as
> a HIGH LEVEL PROGRAMMING LANGUAGE; the very word "interpreter" itself
> points to changing our words into something the computer understands.

You cannot liken human intelligence to a programming language, not even
a high-level one.

> What I'm thinking of is a VERY HIGH LEVEL language, with maybe thousands
> of words (the average person has, what, 60,000 words in their vocabulary?
> We start with the vocabulary of, say, your average 3 year old). But not
> just a programming language, that's not the point; rather, use the technology
> of a computer language's interpreter so that the program can understand what
> we say.

The problem is, human intelligence has five modes of input: vision, hearing,
touch, taste, and smell. WITHOUT THESE, any "intelligent" creature will not
be able to "understand what we say." It's that simple. Intelligence is
experiential... it has to be, because it is so complex.

> You also use this very high level language to program all
> your instincts, or you could just use Euphoria. Do you see what I mean?
> And what do you think of the idea, would it work?

It's an approach I'd not heard before... but you'd have to give it sensory
input and then use EUPHORIA to develop the instinct programming to utilize
that input.


8. Re: a.i

> Negative. Simply put, look at Euphoria, OK? The interpreter understands
> a series of commands, a limited set of words generally referred to as
> a HIGH LEVEL PROGRAMMING LANGUAGE; the very word "interpreter" itself
> points to changing our words into something the computer understands.
> What I'm thinking of is a VERY HIGH LEVEL language, with maybe thousands
> of words (the average person has, what, 60,000 words in their vocabulary?
> We start with the vocabulary of, say, your average 3 year old). But not
> just a programming language, that's not the point; rather, use the technology
> of a computer language's interpreter so that the program can understand what
> we say. You also use this very high level language to program all
> your instincts, or you could just use Euphoria. Do you see what I mean?

No, from here I can't see anything but a computer screen :)

Until you find a way to implement a 'context handler', AI isn't going to
work. And the fewer the words available, the more context-dependent
intelligence becomes. A baby can interact reasonably well with "gimme" and
"NO!", but these only work because the context is understood by both the baby
and others.
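
At its crudest, such a context handler can be nothing more than a lookup keyed on (context, word). A minimal Euphoria sketch, with an entirely made-up rule table:

-- rules: {context, word, meaning}; "any" matches every context
constant RULES = {
    {"mealtime", "gimme", "wants food"},
    {"playtime", "gimme", "wants the toy"},
    {"any",      "NO!",   "refuses whatever was just offered"}
}

function meaning_of(sequence context, sequence word)
    for i = 1 to length(RULES) do
        if (equal(RULES[i][1], context) or equal(RULES[i][1], "any"))
           and equal(RULES[i][2], word) then
            return RULES[i][3]
        end if
    end for
    return "meaning unknown without more context"
end function

puts(1, meaning_of("mealtime", "gimme") & '\n')   -- wants food
puts(1, meaning_of("playtime", "gimme") & '\n')   -- wants the toy

The same word comes out meaning different things only because the table carries the context alongside it, which is all the "gimme"/"NO!" example needs.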

Regards,
Irv


9. Re: a.i

>From: Irv Mullins <irv at take.maxleft.com>
>Subject: Re: a.i
>
>
> > Negative. Simply put, look at Euphoria, OK? The interpreter understands
> > a series of commands, a limited set of words generally referred to as
> > a HIGH LEVEL PROGRAMMING LANGUAGE; the very word "interpreter" itself
> > points to changing our words into something the computer understands.
> > What I'm thinking of is a VERY HIGH LEVEL language, with maybe thousands
> > of words (the average person has, what, 60,000 words in their vocabulary?
> > We start with the vocabulary of, say, your average 3 year old). But not
> > just a programming language, that's not the point; rather, use the technology
> > of a computer language's interpreter so that the program can understand what
> > we say. You also use this very high level language to program all
> > your instincts, or you could just use Euphoria. Do you see what I mean?
>
>No, from here I can't see anything but a computer screen :)
>
>Until you find a way to implement a 'context handler', AI isn't going to
>work. And the fewer the words available, the more context-dependent
>intelligence becomes. A baby can interact reasonably well with "gimme" and
>"NO!", but these only work because the context is understood by both the baby
>and others.
>
>Regards,
>Irv
>
Well, congratulations! Welcome to the node-based architecture of a neural
network. Words in context are memorized patterns stored as neural
nets, or "thoughts". The only difference is that the data stored is
not a "string"; it's a related series of "words" that all have an
active meaning to the AI because they are part of a "language".
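
As a toy sketch of "words as related nodes rather than bare strings" in Euphoria (the three words and their links are invented):

-- each node: {word, indices of related nodes}; the links are what carry "meaning"
constant NODES = {
    {"dog",    {2, 3}},   -- dog -> animal, bark
    {"animal", {}},
    {"bark",   {1}}       -- bark -> dog
}

procedure show_relations(sequence word)
    integer n
    sequence links
    n = 0
    for i = 1 to length(NODES) do
        if equal(NODES[i][1], word) then
            n = i
        end if
    end for
    if n = 0 then
        printf(1, "%s is not in the net\n", {word})
        return
    end if
    links = NODES[n][2]
    printf(1, "%s relates to:", {word})
    for i = 1 to length(links) do
        printf(1, " %s", {NODES[links[i]][1]})
    end for
    puts(1, "\n")
end procedure

show_relations("dog")   -- dog relates to: animal bark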





10. Re: a.i

From: "Kat" <kat at kogeijin.com>

> .....
> Would the AI know who these nicks are, normally, given only the
> intelligence in those lines? This is an example of why you have to build
> in methods of determining data, what it means, and instantiating new
> methods to manipulate the data.
>
> Some things no one can predict, and these events raise errors at the
> worst times.

Very true.
If you mean that uninstantiated vars will raise logical errors... very true.

You are approaching what is essentially a design flaw in modern computer
languages, including Euphoria. While it may be too late to do anything
about the design flaws in C, it is maybe not too late to correct the
design flaw in Euphoria. I am afraid this design flaw has to be corrected
before Euphoria can truly be usable for formulating logic.

The design flaw in Euphoria is the same as expressed in this Pascal
documentation:

"A Boolean variable can assume the ordinal values 0 and 1 only, but =
variables of type ByteBool, WordBool, and LongBool can assume other =
ordinal values. An expression of type ByteBool, WordBool, or LongBool is =
considered False when its ordinal value is zero, and True when its =
ordinal value is nonzero. Whenever a ByteBool, WordBool, or LongBool =
value is used in a context where a Boolean value is expected, the =
compiler will automatically generate code that converts any nonzero =
value to the value True."

This is NOT correct boolean logic. Neither Pascal nor Euphoria can
handle uninstantiated vars correctly. "a > 1" is a structure that
implicitly uses a boolean value that cannot always be resolved; that means
the value of the expression is undecidable. Then the result is neither true nor
false! But Pascal and Euphoria will not catch that result.

So the control structure should have been like this:

    if a > 1 then ....    -- a > 1 is true
    ifnot then ....         -- a > 1 is false
    ifnil then ...           -- a > 1 is undecidable

The truth table is like this:

    (a > 1)      result
    true     ->    true
    false    ->    false
    a is uninstantiated    ->  nil
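
One rough way to emulate that third outcome in today's Euphoria is to reserve a sentinel object for "never assigned" and test for it before comparing; a minimal sketch (the NIL marker and the helper name are arbitrary, and it assumes real data never looks like the marker):

constant NIL = {"<nil>"}   -- sentinel meaning "no value yet"

object a
a = NIL                    -- "uninstantiated" in the sense used above

-- three-valued (x > 1): returns 1 (true), 0 (false) or NIL (undecidable)
function greater_than_one(object x)
    if equal(x, NIL) then
        return NIL
    elsif x > 1 then       -- assumes x is an atom once it has been assigned
        return 1
    else
        return 0
    end if
end function

object r
r = greater_than_one(a)
if equal(r, NIL) then
    puts(1, "a > 1 is undecidable\n")
elsif r = 1 then
    puts(1, "a > 1 is true\n")
else
    puts(1, "a > 1 is false\n")
end if

This is only a convention, of course: nothing stops other code from using a NIL-holding object directly, which is exactly the complaint above.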

Rom


11. Re: a.i

On Sat, 23 Nov 2002 09:49:26 +0100, Rom <kjehas at frisurf.no> wrote:

> Neither Pascal nor Euphoria can handle uninstantiated vars correctly.

All you need is this:

object a
integer ablank
ablank = 1

Then anywhere a is modified:
	...
	a = <something>
	ablank = 0
	...

Then, wherever you need to know whether the above has been done:

	if ablank then
		undecidable
	elsif a > 1 then
		true
	else
		false
	end if


12. Re: a.i

Rom wrote:

> This is NOT correct boolean logic. Neither Pascal nor Euphoria can handle
> uninstantiated vars correctly. "a > 1" is a structure that
> implicitly uses a boolean value that cannot always be resolved; that means
> the value of the expression is undecidable. Then the result is neither true nor
> false! But Pascal and Euphoria will not catch that result.

I'm not following what you mean here. If you attempt to read from an
uninstantiated (declared but value undefined) variable in Euphoria, you WILL
get a runtime error.
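
For instance, a minimal demonstration:

integer a    -- declared but never assigned

? a + 1      -- halts here: a run-time error reports that a has not been assigned a value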

-- David Cuny


13. Re: a.i

From: "Pete Lomax" <petelomax at blueyonder.co.uk>

> All you need is this: ...

All I need to do when I want this:

    integer a, b, c, d, e

         a = 2
         b = 5
         c = 10
         e = (a + b) / (c + d)

         if assigned( e) then  print( 1, e)
         end if


.... is to write this:

   integer a, b, c, d, e
   integer ablank, bblank, cblank, dblank, eblank

   ablank = 1
   bblank = 1
   cblank = 1
   dblank = 1
   eblank = 1
   a = 2
   ablank = 0
   b = 5
   bblank = 0
   c = 10
   cblank = 0
   if ablank = 0 and bblank = 0 and cblank = 0 and
      dblank = 0 and (c + d) != 0 then
         e = (a + b) / (c + d)
         eblank = 0
   end if

   if eblank = 0 then print( 1, e)
   end if


.... because that is how Euphoria and Pascal++ work.
Then what is:

        if (a > 1) or (b > 1) or (d > 1)...?

when d is not assigned?

Rom


14. Re: a.i

From: "David Cuny" <dcuny at LANSET.COM>

> I'm not following what you mean here. If you attempt to read from an
> uninstantiated (declared but value undefined) variable in Euphoria, you WILL
> get a runtime error.

You are right. What I said is true for C/C++/Delphi, but Euphoria will
at least force you to instantiate all vars to something to get the program
to run.

In a practical application the instantiation can be values from all kinds
of sources, and Euphoria has not defined "nil" (uninstantiated as a
flag), so you cannot simply test if assigned( <var>)....
If a var is not properly assigned during execution then a default value
will be used...

Is that proper logic (when the application actually does not know what the
value of a var is... except that the programmer had to put some value into it)?
And of course you cannot uninstantiate a value, meaning that the value
is rubbish/nil (you must declare a flag for each var and add a lot of
code).

Rom


15. Re: a.i

On Friday 22 November 2002 06:37 pm, you wrote:

> Well, congratulations! Welcome to the node-based architecture of a neural
> network. Words in context are memorized patterns stored as neural
> nets, or "thoughts". The only difference is that the data stored is
> not a "string"; it's a related series of "words" that all have an
> active meaning to the AI because they are part of a "language".

-------------------------------------------------------------------------------

"By the year ______ (1) computers will exhibit the intelligence of 
a ______________ (2) !"

Instructions:
1) Fill in any year between 1854 and 2600.
2) Enter the name of any animal, vegetable, or mineral.
3) Add your prediction to those made by AI researchers since the time of 
Charles Babbage.
4) Repeat as needed (that's what they have been doing).
---------------------------------------------------------------------------------


16. Re: a.i

>From: Irv Mullins <irv at take.maxleft.com>
>Subject: Re: a.i
>
>
>On Friday 22 November 2002 06:37 pm, you wrote:
>
> > Well, congratulations! Welcome to the node-based architecture of a neural
> > network. Words in context are memorized patterns stored as neural
> > nets, or "thoughts". The only difference is that the data stored is
> > not a "string"; it's a related series of "words" that all have an
> > active meaning to the AI because they are part of a "language".
>
>
>"By the year 1954 (1) computers will exhibit the intelligence of
>a _____IRV_________ (2) !" (couldn't resist)
>
>Instructions:
>1) Fill in any year between 1854 and 2600.
>2) Enter the name of any animal, vegetable, or mineral.
>3) Add your prediction to those made by AI researchers since the time of
>Charles Babbage.
>4) Repeat as needed (that's what they have been doing).



It's not that I don't care what other researchers have done and what
they are doing now; on the contrary, I try to take what worked from
a project and dispense with what didn't work. Eat the hay, spit out the
sticks.



                               YOURS TRULY
                                 JDUBE


17. Re: a.i

On 23 Nov 2002, at 9:49, Rom wrote:

> 
> From: "Kat" <kat at kogeijin.com>
> 
> > .....
> > Would the AI know who these nicks are, normally, given only the
> > intelligence in those lines? This is an example of why you have to build in
> > methods of determining data, what it means, and instantiating new methods to
> > manipulate the data.
> 
> > Some things no one can predict, and these events raise errors at the worst
> > times.
> 
> Very true.
> If you mean that uninstantiated vars will raise logical errors... very true.
> 
> You are approaching what is essentially a design flaw in modern computer
> languages, including Euphoria. While it may be too late to do anything about
> the design flaws in C, it is maybe not too late to correct the design flaw in
> Euphoria. I am afraid this design flaw has to be corrected before Euphoria can
> truly be usable for formulating logic.

Actually, I was talking about *code*, not *variables*. But Eu also can't
natively deal with Eu code in a variable, and evaluate it.
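
The usual workaround is to write the code string out to a scratch file and hand it to a fresh copy of the interpreter. A sketch, assuming an interpreter such as exw is on the PATH (the scratch file name is arbitrary):

-- run a string of Euphoria code by launching a new interpreter on it
procedure eval_string(sequence code)
    integer fn
    fn = open("evaltmp.ex", "w")
    if fn = -1 then
        puts(1, "could not create scratch file\n")
        return
    end if
    puts(fn, code)
    close(fn)
    system("exw evaltmp.ex", 2)   -- assumes "exw" (or ex/exu) launches Euphoria
end procedure

eval_string("? 2 + 3\n")          -- the child process prints 5

The catch is that the child process shares none of the caller's variables, so the evaluated code runs without any of the application's context.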

Kat


18. Re: a.i

On Saturday 23 November 2002 11:34 am, JDUBE wrote:

> It's not that I don't care what other researchers have done and what
> they are doing now; on the contrary, I try to take what worked from
> a project and dispense with what didn't work. Eat the hay, spit out the
> sticks.

That's always good advice (wish Rob would try that).
The problem with AI is that there is so little that actually works.

Irv


19. Re: a.i

On 23 Nov 2002, at 13:28, Irv Mullins wrote:

> 
> On Saturday 23 November 2002 11:34 am, JDUBE wrote:
> 
> > It's not that I don't care what other researchers have done and what
> > they are doing now; on the contrary, I try to take what worked from
> > a project and dispense with what didn't work. Eat the hay, spit out the
> > sticks.
> 
> That's always good advice (wish Rob would try that).
> The problem with AI is that there is so little that actually works.

There is so little that is publicised that works. If it works, it's
proprietary and secret. If it doesn't, it's only there to make money or
throw others off the real track.

Kat


20. Re: a.i

>
>
>Then what is:
>
>        if (a > 1) or (b > 1) or (d > 1)...?
>
>when d is not assigned?
>
>Rom
>  
>
an error: d has not been assigned a value


21. Re: a.i

"C. K. Lester" <cklester at yahoo.com> said...
Derek said...

Intelligence might have something to do with predicting the future based on what we know now.

To what extent, then? A worm can "predict" that it will die if it doesn't eat, so it scrounges for food. (I know it's acting on instinct (or is it?), but we need more concrete, less ambiguous evidence for intelligence.) I was going to suggest "survival" as an indicator of intelligence, but it's too general and easily emulated by non-intelligent entities... like plants. :)

Recently a worm's brain was dissected neuron by neuron, simulated in software, and applied to a robotic worm with mechanical bones, muscles, and skin. The robot woke up and began acting like a worm, with zero behavioral programming. See:

https://duckduckgo.com/?q=worm+neuron+robot

useless


22. Re: a.i

eukat said...

Recently a worm's brain was dissected neuron by neuron, simulated in software, and applied to a robotic worm, mechanical bones, muscles, skin. The robot woke up and began acting like a worm, with zero behavioral programming.

Yes, they've sure made a lot of progress recently. Just a few years ago all they had were some scripts for the muscle model (https://github.com/openworm/muscle_model )

But now, ...

http://radar.oreilly.com/2014/11/the-robotic-worm.html

http://www.i-programmer.info/news/105-artificial-intelligence/7985-a-worms-mind-in-a-lego-body.html

I wonder how the synaptic weights were determined for this latest robot. I seem to remember the following criticism of this earlier work (P. Frenger, Simple C. elegans Nervous System Emulator, Houston Conf Biomed Engr Research, 2005, p. 192).

Wikipedia said...

These early attempts of simulation have been criticized for not being biologically realistic. Although we have the complete structural connectome, we do not know the synaptic weights at each of the known synapses. We do not even know whether the synapses are inhibitory or excitatory. To compensate for this the Hiroshima group used machine learning to find the weights of the synapses which would generate the desired behaviour. It is therefore no surprise that the model displayed the behaviour, and it may not represent true understanding of the system.


23. Re: a.i

Suppose, as a hypothesis, a super-computer of the future COULD solve for the totality of all synapses and their relative weights.

What would the consequences be for a robot being provided with this info?
and
What would it mean for us humans IF such a robot were to come alive in fact?

Gruesome thoughts are flashing through my mind and I for one would fight as hard
as I could to prevent this from happening because:

What if the indestructible robot were to consider mankind inferior to it and eradicate US?


24. Re: a.i

Ekhnat0n said...

Suppose, as a hypothesis, a super-computer of the future COULD solve for the totality of all synapses and their relative weights.

Supposition is unnecessary. For C. elegans, the weighting issue appears to have been solved.

Ekhnat0n said...

What would the consequences be for a robot being provided with this info?

You mean, like, what happened in November?

Ekhnat0n said...

What would it mean for us humans IF such a robot were to come alive in fact?

Define alive. Define life.

Ekhnat0n said...

Gruesome thoughts are flashing through my mind and I for one would fight as hard
as I could to prevent this from happening because:

What if the indestructible robot were to consider mankind inferior to it and eradicate US?

Unless we've successfully uploaded an individual human being's connectome, I'd argue the resultant robot is more likely to be like Rain Man than Terminator.


25. Re: a.i

Come on Jim, don't be a nit-picker as you got my drift clearly.
I know nothing about November, as I was fully taken up by the realisation of my plan.
That is... helping people, as my wife asked in the last weeks of her life,
and
deciding on the way to do just that.

As for the definition of life, you mean conscious life, I suppose,
and as far as I am concerned, that will be found in a story I wrote years ago.
You can find it on my site buscadero.nl; it is called "Three Tubes of Paint".


26. Re: a.i

Ekhnat0n said...

Come on Jim, don't be a nit-picker as you got my drift clearly.

Right, I was picking on two points.

First, you seemed to be speculating on future technology and its consequences... but I pointed out that this technology is already here today.

Second, I thought the "machines take over the world" theme was a bit silly.

Ekhnat0n said...

As for the definition of life, you mean conscious life I suppose

No, I don't. Artificial life, artificial consciousness, and artificial intelligence are all separate things.


27. Re: a.i

jimcbrown said...

No, I don't. Artificial life, artificial consciousness, and artificial intelligence are all separate things.

To be fair, Colossus didn't seem alive, but did intelligently demonstrate one unforeseen way to stop war. It was obviously self-aware, which strongly implies consciousness.

I read the paperback ~1969, but didn't find the movie until it was on Ebay a couple years ago. Still too much killing. Time for a modern remake.

I'd say it's time to program a better Colossus in Euphoria, call it Euphorious <cough>, to be more intelligent and run on modern hardware, but since Euphoria cannot "learn" new procedures during runtime (string execution), that's not possible. Yes, I have had that on my mind for 45 years now. Every time I need that string eval function, I must formulate it in a different language and export it to a new environment, where it loses context. Even when still in a Euphoria context (an application's environment), it's sometimes difficult to know what it is that the Eu app doesn't know.

Until Jiri and DCuny enlightened me on associated lists ~1999, I couldn't "ask" the value of George.owns.cow.#12.color unless it was already defined and initialised, even before the program knew George possessed any cows (although I can ask that on a C64 without crashing). And then something so simple as knowing that varname existed could lead to knowing that cows can be owned by George, or that brown can be on the surface of cows.

The Eu sequence can be a very powerful thing, but Eu needs improvements of the sort it could not do in 1999, and still cannot do.
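
For the George.owns.cow example, here is one way to fake that kind of ask-before-it-exists lookup with plain sequences; everything here (the UNKNOWN marker, aget/aset, the path spelling) is invented for illustration, and it assumes the stored values are strings:

constant UNKNOWN = "<unknown>"   -- returned instead of crashing on a missing branch

-- the knowledge base is a tree of {key, subtree-or-value} pairs
function aget(sequence tree, sequence path)
    integer found
    for p = 1 to length(path) do
        found = 0
        for i = 1 to length(tree) do
            if equal(tree[i][1], path[p]) then
                tree = tree[i][2]
                found = 1
                exit
            end if
        end for
        if not found then
            return UNKNOWN
        end if
    end for
    return tree
end function

-- create the branch on demand, so the program can "learn" that George owns cows
function aset(sequence tree, sequence path, object value)
    if length(path) = 0 then
        return value
    end if
    for i = 1 to length(tree) do
        if equal(tree[i][1], path[1]) then
            tree[i][2] = aset(tree[i][2], path[2..length(path)], value)
            return tree
        end if
    end for
    return append(tree, {path[1], aset({}, path[2..length(path)], value)})
end function

sequence facts
facts = {}
puts(1, aget(facts, {"George","owns","cow","12","color"}) & '\n')   -- <unknown>, no crash
facts = aset(facts, {"George","owns","cow","12","color"}, "brown")
puts(1, aget(facts, {"George","owns","cow","12","color"}) & '\n')   -- brown

Asking never raises an error, and assigning creates whatever intermediate nodes are missing, which is the behaviour the associated-list trick was standing in for.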

useless


28. Re: a.i

hey useless,
it seems we are about the same age; considering you have had a thing
on your mind for 45 years, I guess you will be approaching 65
or even be older, like me. (I'm 69 since Dec 9th last year.)
Come on man, neither you nor anyone else is ever useless, so shed that name.
Every last little contribution can make the difference between failure and success,
and you have contributed quite a bit over the years, I guess.
So thanks for your modesty; like you, I like to work away from the limelight
because I hate to stand out.

Let us join forces and make the difference ONCE and for all


29. Re: a.i

Looks like my posts are still being removed. I didn't misrepresent anything, I didn't attack anyone. Of what use was responding to Ekhnat0n? None, it was

useless


30. Re: a.i

@useless
(might it be you were known on the IRC-channel of Euphoria by the name of Kat long ago???
And if so.... how is Tiggr??
And how is House now?? Is it ready??
Where for the sake of the Mother can I contact you?)


dear utterly USEFUL colleague Euphorian,
no one can hinder you from contacting me personally at nyellorion@live.nl or, for that matter, ekhnaton@live.nl.

Please do so if you like and just you and me will show what can be achieved by real cooperation.

Ekhnat0n

He who erases or blocks this message does great harm to the
development of Euphoria, imho.
And that damage will even be almost beyond repair if useless IS Gertie, because SHE is the very woman we really NEED.


31. Re: a.i

That is indeed me, Ekhnat0n. I sent you an email, so we may have a less-filtered conversation.

I think OE isn't ever going to be the platform we hoped Euphoria would be over 10 years ago. It's nice to see DCuny back with his gentle nagging requests, and to see Ryan has developed an app that is pushing at the needs for real core improvements. But there is a jump between what is and what could be, and pushing people across that jump isn't going to happen.

Kat


32. Re: a.i

katsmeow said...

That is indeed me, Ekhnat0n.

Kat



Dear Kat, I replied to your personal email already, as you might have found out by now.

