1. Consciousness [OT]

To all:
Regarding what I call "the mystery of consciousness", I've found the PDF
book available at this site enormously interesting:
www.InnerLightTheory.com
Regards.


2. Re: Consciousness [OT]

Ricardo,

Just my opinion, but some time back when I first read the beginning of that
book, I wasn't at all impressed.  It doesn't seem to be as "ground breaking"
as it claims, since it seems to just recapitulate Plato's "shadows on the cave
wall" idea (i.e., we don't really see the real things, we see "shadows" of
them, namely our perceptions of the real things), with a kind of addition of
the old idea of a "little man inside the brain" (the homunculus, from
Medieval times?).  It didn't seem worth wading through the whole book, so I
suppose it's possible that it could be more interesting than I thought.
Looking again at the beginning, it still seems ho-hum.

Dan Moyer

----- Original Message -----
From: <rforno at tutopia.com>
To: "EUforum" <EUforum at topica.com>
Sent: Sunday, September 07, 2003 7:22 PM
Subject: Consciousness [OT]




To all:
Regarding what I call "the mystery of consciousness", I've found the PDF
book available at this site enormously interesting:
www.InnerLightTheory.com
Regards.



TOPICA - Start your own email discussion group. FREE!


3. Re: Consciousness [OT]

Al,

----- Original Message -----
From: "Al Getz" <Xaxo at aol.com>
To: <EUforum at topica.com>
Subject: RE: Consciousness [OT]




Dan Moyer wrote:
>
>
> Ricardo,
>
> Just my opinion, but sometime back when I first read the beginning of
> that
> book, I wasn't at all impressed.  It doesn't seem to be as "ground
> breaking"
> as it says, since it seems to just recapitulate Plato's "shadows on the
> cave
> wall" idea (ie, we don't really see the real things, we see "shadows" of
> them, namely our perceptions of the real things), with a kind of
> addition of
> the old idea of a "little man inside the brain" (from Medieval times?).
> Didn't seem worth wading through the whole book, so I suppose it's
> possible
> that it could be more interesting than I thought.  Looking again at the
> beginning,  it still seems ho-hum.
>
> Dan Moyer
>
> ----- Original Message -----
> From: <rforno at tutopia.com>
> To: "EUforum" <EUforum at topica.com>
> Sent: Sunday, September 07, 2003 7:22 PM
> Subject: Conciousness [OT]
>
>
> > To all:
> > In relation to what I call "the mystery of conciousness", I've found
> > enormously
> > interesting the PDF book available at this site:
> > www.InnerLightTheory.com
> > Regards.
> >
> >
> >
> >

Dan, maybe this will be a little more interesting, then...?

Not more than two weeks ago, a friend and I were talking about
something kind of similar to this, I think.  I brought up a
question that I think is pretty interesting to think about.

Before I ask this question though, I have to bring up a little
background info.

[1]
First, I believe that robots could be invented that act on
their own.  With enough subroutines and a large database,
the robot could be doing anything, including learning.
It seems like it's all a matter of programming and proper
external sensor design.
Now sooner or later that robot is going to come across a mirror,
and discover its own self (if it wasn't already programmed in).
It will then make judgements about what it has found out.
This doesn't seem that extraordinary to me, really.
It might start to think about the ramifications
of being an individual, and at some point stop because it's using
too much energy for a problem that has perhaps a too-distant
event horizon.

[2]
Second, sooner or later there will probably be enough
known about a human brain that every transfer of energy
within a given brain will be understood.  Every memory could
be captured, just like a computer memory, only larger.


Now here are the interesting questions...

Given that [1] and [2] are completely true and have occurred, so
that one person's memories etc. were copied into a robot with the
needed hardware, would that robot suddenly 'know' itself
as once being a human and now being a robot?

Probably.


Even more interesting though...


Would that robot really "BE" that same person?



Absolutely, absolutely, ABSOLUTELY NO.  (I presume you mean something like: if
a "recording" of the person's total neuronal/synaptic network, including all
cellular metabolic functioning, were made and copied into a robot brain, and
then the person died, would that person somehow be "in" the robot?  No, of
course not; easy to see: what if the person did NOT die?  In one corner you
have the living person, in the other you have the robot that is cognitively
"identical", thinking it's that person, but that person is actually over in
the other corner.)

A slightly more difficult question might be:
remove a person's brain, put it in a robot body; make a copy of the brain's
neural network, copy that into a "synthetic" brain, put that into the human
body; where's the real person?  I'd say in the robot body, because the
physical brain is the only actual "container" of any original human mind, as
far as I can see.




Take care for now,
Al





4. Re: Consciousness [OT]

Al,


<snip>

> >
> > Dan, maybe this will be a little more interesting then...?
> >
> > Not more then two weeks ago a friend an i were talking about
> > something kind of similar to this i think.  I brought up a
> > question that i think is pretty interesting to think about.
> >
> > Before i ask this question though, i have to bring up a little
> > background info.
> >
> > [1]
> > First, i believe that robots could be invented that act on
> > their own.  With enough subroutines and a large data base,
> > the robot could be doing anything including learning.
> > It seems like it's all a matter of programming and proper
> > external sensor design.
> > Now sooner or later that robot is going to come across a mirror,
> > and discover it's own self (if it wasnt already programmed in).
> > It will then make judgements about what it has found out.
> > This doesnt seem that extraordinary to me really.
> > It might start to think about the ramifications
> > of being an individual and at some point stop because it's using
> > too much energy for a problem that has perhaps a too distant
> > event horizon.
> >
> > [2]
> > Second, sooner or later there will probably be enough
> > known about a human brain so that every transfer of energy
> > will be understood within a given brain.  Every memory will
> > be able to be captured just like a computer memory only larger.



Ok, and here's another idea based on that: after you have the recorded
data, EDIT IT, so as to remove "will" from it; then put it into a superior
(faster-functioning) "computer" and interface it with... yourself (your brain,
directly).  Give it cognitive tasks to perform (by thinking), and it would be
as if you were "super-charged".  Unfortunately, that would make you a
slave-holder, I would think, kind of like Jeffrey Dahmer.


> >
> >
> > Now here's the interesting questions...
> >
> > Given that [1] and [2] are completely true and have occurred so
> > that one persons memories etc were copied into a robot with the
> > needed hardware, would that robot suddenly 'know' itself
> > as once being a human and now is a robot?
>
> Probably.

I should probably have said it would think of "itself" as a human being, but
within a new, robotic body.  A similar idea would be "The Ship Who Sang",
i.e., an individual's body is damaged beyond any possible repair at the time,
but the brain is not damaged, and is placed in a "bottle" which provides
nourishment and waste removal; then the sensory AND motor nerves are
interfaced with, say, an ocean-going vessel (or space ship, or space
station, etc.); when the "brain in a bottle" opens its eyes, it receives radar
data, and video from within the ship; when it moves its legs, the propeller
spins; when s/he moves its arms, maybe an army of waldos (remote
manipulators) fixes meals for crew and passengers, etc.

>
> >
> > Even more interesting though...
> >
> >
> > Would that robot really "BE" that same person?
> >
> >
> Absolutely absolutely ABSOLUTELY NO.  (I presume you mean something
> like, if
> a "recording" of the person's total neuronal/synaptic network, including
> all
> cellular metobolic functioning were made and copied into a robot brain,
> &
> then the person died, would that person somehow be "in" the robot?  No,
> of
> course not; easy to see:  what if the person did NOT die?  In one corner
> you
> have the living person, in the other you have the robot that is
> cognitively
> "identical", thinking it's that person, but that person is actually over
> in
> the other corner.)
>
> A slightly more difficult question might be:
> remove a persons brain, put it in a robot body; make a copy of the
> brains
> neural network, copy that into a "synthetic" brain, put that into the
> human
> body; where's the real person?  I'd say in the robot body, because the
> physical brain is the only actual "container" of any original human
> mind, as
> far as I can see.
>
>
> > Take care for now,
> > Al
> >
> >
>
>

Yeah, that's interesting.  According to the guy who wrote that original
paper, there are two views on this.  One is from the person
looking 'out', and the other from people looking 'in', which he calls
the 'First person viewpoint' and the 'Third person viewpoint'.
I haven't gotten to the part where these might be unified in his
theory yet, and I'm hoping it turns out to be reasonable.

From what I can see so far, it looks as if what he is saying is that
the mind is 'Information' (whereas the brain itself is a physical
thing), so even though the brain can't be transmitted, the mind can
not only be duplicated, it can be sent over a transmission medium.

Ok, notice the word DUPLICATED, as in COPY.  A COPY of a mind.  A copy is a
copy is a copy is a... COPY.  A copy, no matter how perfectly identical, is
still a copy; it is NOT the original... so, if a COPY of YOUR brain/mind were
made and put into an android/robot, and you (the actual original) were told
that either the COPY of you or YOU were going to be destroyed, and you
could, if you wanted, CHOOSE WHICH, could you honestly say that you would
choose for YOU to be destroyed, thinking that somehow "you" would continue
to live "in" the copy???  I know *I* wouldn't, because it wouldn't be ME
that continued, it would be a *copy* of me that THOUGHT it was me!!!  But
*I* would be dead.
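Dan's distinction maps neatly onto a familiar programming fact: a deep copy can be equal to the original in every observable respect and still be a different object.  A minimal Python sketch (the dictionary is a loose, illustrative stand-in for a "recorded mind", not anything from the book):

```python
import copy

original = {"name": "Dan", "memories": ["childhood", "first job"]}
duplicate = copy.deepcopy(original)

# Indistinguishable by content...
assert duplicate == original          # equality: same information
# ...but not the same entity.
assert duplicate is not original      # identity: a distinct object

# From the moment of copying, the two can diverge independently.
duplicate["memories"].append("woke up in a robot body")
assert original["memories"] == ["childhood", "first job"]
```

Destroying `original` after the copy would not make `duplicate` somehow "become" it; the `is` operator draws the same line Dan is drawing about persons.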


This means the possibility of creating a second 'brain' that has every
possible feature of the target brain will most likely be a reality
sometime in the future.  As for answering which one is the real person,
that brings us back to the original question: "Will the second (or third,
etc.) brain (or robot) *BE* the same person?"
With the exception of the soul, I think it would be the same person,
because it would have every possible characteristic of the original,

Personally I'd forget about any mumbo-jumbo imaginary "soul", and just
consider the mind /brain question.


until the moment just 'after' it's brought to consciousness,
when it would begin to process different external information than
the original, unless of course both parties were kept in a total
sub-reality where all of their external input was carefully controlled
by artificial means.  If they were kept in the dream state, they
would even have the same exact dreams.

I disagree.  Consider sensory-deprivation tank experiments... with no
external stimuli, "hallucinations" occur; what are they?  Essentially, when
no stimuli are present, neurons will SPONTANEOUSLY fire, which means, I
would think, RANDOMLY, which would make the two entities, the real original
and the copy, have different hallucinations in that situation.  I would think
the same would apply to dreaming.
(And, I suspect, this spontaneous firing could be the origin of "will",
perhaps, too?)
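The spontaneous-firing objection can be sketched in a few lines: give two identical toy "brains" the same deterministic stimuli but independent random noise, and their histories drift apart.  (The update rule and the 10% firing rate below are arbitrary illustrative choices, not a claim about real neurons.)

```python
import random

def step(state, stimulus, rng):
    # Deterministic update plus a spontaneous (random) firing event.
    fired = rng.random() < 0.10
    return (state * 31 + stimulus + (1 if fired else 0)) % 1_000_003

rng_a, rng_b = random.Random(1), random.Random(2)  # independent noise sources
state_a = state_b = 42                             # identical at copy time
history_a, history_b = [], []

for t in range(1000):
    state_a = step(state_a, t, rng_a)
    state_b = step(state_b, t, rng_b)
    history_a.append(state_a)
    history_b.append(state_b)

# Identical inputs, independent noise: the two life histories differ.
assert history_a != history_b
```

Give both brains the *same* `Random` object (or the same seed) and the assertion flips: the histories stay identical, which is exactly Al's controlled "sub-reality" scenario.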

I guess if they woke up and saw each other somehow, they would begin
to process different information, unless they were fooled into thinking
they were looking into a mirror, like on so many comedy shows.
Once one of them made a different move, they would each process
different information and possibly realize that although they
have the same memories, same parents, etc., they aren't the same
person.

Also, I can see that my assumption that the brain doesn't take advantage
of any sub-atomic activity was a bit of a leap, because from what
I have read so far, within the scope of human understanding this isn't
certain yet.

Yes, I've seen somewhere the suggestion that neuronal synapses don't
actually work by a CHEMICAL action of the neurotransmitters, but by QUANTUM
effects of the neurotransmitters on the receptors.  Can't begin to
understand that. :)

So then, I wonder if there would be an approximation
that could be used that might be reasonably accurate enough
to conduct experiments with.  I guess this question will be answered
in the future at some point.


Take care for now,
Al



Dan Moyer, the original  :)

(no, *I'm* the original!)
<no, *I* am!>
[no, you're both wrong, *I'm* the real Dan]


5. Re: Consciousness [OT]

Al,

I think one of us is missing the point  :)

In relation to copying a human brain in order to get a copy of a human
mind, the fact that the copy might be indistinguishable from the original is
not as important as the fact that it is a COPY, i.e., not the original, at
least from the viewpoint of the actual original.

In other words, as I've already asked:
if a copy of your brain/mind were made and put into a robotic body, and one
or the other (you or the copy) was going to be destroyed, and YOU (the
actual original) were given the opportunity to say which would be destroyed,
would you actually say, "ok, kill me, I don't care, after all, my COPY will
continue to exist, and it thinks it's me anyway, so that's all that
matters"???

So, in that situation, would *you* honestly volunteer to be ended???

If you *wouldn't*, then I think my point that the copy is not the original
is made; if you would, then I think you need to think more carefully about
it :)


----- Original Message -----
From: "Al Getz" <Xaxo at aol.com>
To: <EUforum at topica.com>
Sent: Wednesday, September 10, 2003 7:08 AM
Subject: RE: Consciousness [OT]



Hey Dan,

You brought up some very interesting points.


Dan Moyer wrote:
>
>
> Ok, notice the word DUPLICATED, as in COPY.  A COPY of a mind.  A copy
> is a
> copy is a copy is a...COPY.  A copy, no matter how perfectly identical,
> is
> still a copy, it is NOT the original...so, if a COPY of YOUR brain/mind
> were
> made, and put into an android/robot, and you (the actual original) were
> told
> that either the COPY of you, or YOU were going to be destroyed, and you
> could, if you wanted, CHOOSE WHICH, could you honestly say that you
> would
> choose for YOU to be destroyed, thinking that somehow "you" would
> continue
> to live "in" the copy???  I know *I* wouldn't, because it wouldn't be ME
> that continued, it would be a *copy* of me that THOUGHT it was me!!!!
> But
> *I* would be dead.
>
For the purposes of what I was talking about, a 'copy' would
be an exact copy, such that it would not be distinguishable
from the original.  This is entirely possible in a deterministic
world.
For example:
      Copy     Copy     Copy
Which word above was typed first, and even if you knew, would
it matter?



Well, I think you've set up a straw man.  The question is about human minds,
not typed words, and to any reasonable human mind, the fact that a copy of
oneself exists *would* matter, at the very least in terms of family,
property, job, etc.


> Personally I'd forget about any mumbo-jumbo imaginary "soul", and just
> consider the mind /brain question.
>
>
Can you prove absolutely that a soul is really mumbo-jumbo, or
is it possible that something within the quantum world has
an influence over this entity?



If someone believes in a "soul", it's up to THEM to prove it actually
exists.  If I say that there is still in existence a Tasmanian Tiger, and you
say "no there isn't", I'd have no business asking *you* to prove there
isn't; I'd have to prove that there IS.


> > until the moment just 'after' it's being brought to consiousness,
> I disagree.  Consider sensory-deprivation tank experiments...with no
> external stimuli, "hallucinations" occur; what are they?  Essentially,
> when
> no stimuli are present, neurons will SPONTANEOUSLY fire, which means, I
> would think, RANDOMLY, which would make the two entities, the real
> original
> & the copy, have different hallucinations in that situation.  I would
> think
> the same would apply to dreaming.
> (And, I suspect, this spontaneous firing could be the origin of "will",
> perhaps, too?)
>
In a deterministic world there is no random function.
If it can be proved that the smallest contribution to
thought patterns is the atom, then the brain, in all its complexity,
is only a deterministic system, where an exact copy will
not only behave exactly as the original, it will do so at the
same time for the same exact input, keeping in exact step with
the original for every moment in time since its creation!
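This lockstep claim is easy to demonstrate for any deterministic system: two copies with identical state and identical update rules, fed the identical input stream, can never diverge.  A toy sketch (the arithmetic update rule is an arbitrary stand-in for "the physics of the brain"):

```python
def make_brain(seed):
    """A toy deterministic 'brain': its next state depends only on
    its current state and the stimulus it receives."""
    state = seed

    def perceive(stimulus):
        nonlocal state
        state = (state * 6364136223846793005 + stimulus) % 2**64
        return state

    return perceive

original = make_brain(12345)
exact_copy = make_brain(12345)  # same rule, same starting state

# Same input stream => same state at every single moment.
for stimulus in [7, 7, 3, 9, 1] * 200:
    assert original(stimulus) == exact_copy(stimulus)
```

The spontaneous-firing objection raised earlier amounts to denying the premise: introduce any genuine randomness into the update, and the lockstep guarantee evaporates.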


Ok, I *think* I understand that, BUT: while I don't disagree that someday a
copy of a human brain down to the cellular functioning level might be made,
I don't really have any reason to assume that there could be enough memory
storage in the universe to exactly define it down to the atomic and sub-atomic
(and quantum?) levels; and besides, to "look" at something closely enough to
make an absolutely exact copy inevitably *changes* that thing, so the data
you record will not be true anymore anyway, so you wouldn't get an actually
identical copy (Heisenberg Uncertainty Principle: you can't use real-world
actions to perceive real-world phenomena without affecting and thereby
changing the thing you're trying to perceive; you can know the position,
*or* the momentum, of a sub-atomic particle, but not both at the same time,
because ascertaining the position causes it to move differently than it
was).


> > I guess if they woke up and saw each other somehow, they would begin
> > to process different information unless they were fooled into thinking
> > they were looking into a mirror like on so many comedy shows.
> > Once one of them made a different move, they would each process
> > different information and possibly realize that although they
> > have the same memories, same parents, etc, they arent the same
> > person.
> >
> > Also, i can see that my assumption that the brain doesnt take advantage
> > of any sub-atomic activity was a bit of a leap, because from what
> > i have read so far within the scope of human understanding this isnt
> > certain yet.
>
> Yes, I've seen somewhere the suggestion that neuronal synapses don't
> actually work by a CHEMICAL action of the neurotransmitters, but by
> QUANTUM
> effects of the neurotransmitters on the receptors.  Can't begin to
> understand that. :)
>
Well, that's the only way I can see a random variable entering
the picture.
Even so, is there an approximation that would yield reasonable
results?  I don't know how to answer this question.


> Dan Moyer, the original  :)
>
> (no, *I'm* the original!)
> <no, *I* am!>
> [no, you're both wrong, *I'm* the real Dan]
>

Take care for now,
Al (at least I think I'm him) :)


Dan


6. Re: Consciousness [OT]

I think one of us is missing the point  :)

Yes, you both are. :)

In relation to copying a human brain in order to get a copy of a human
mind, the fact that the copy might be indistinguishable from the original is
not as important as the fact that it is a COPY, ie, not the original, at
least from the viewpoint of the actual original.

Originality is, of course, irrelevant at this point. Both entities are now
living beings. :)

if a copy of your brain/mind were made and put into a robotic body, and one
or the other (you or the copy) was going to be destroyed, and YOU (the
actual original) were given the opportunity to say which would be destroyed,
would you actually say, "ok, kill me, I don't care, after all, my COPY will
continue to exist, and it thinks it's me anyway, so that's all that
matters"????

Of course, there are two different entities, with two separate perceptions
or worldviews. Their memories are exactly the same up to the point where the
COPY came into existence. Of course you would still want to preserve
yourself because you are still you, while your clone is another separate
entity altogether.

So, in that situation, would *you* honestly volunteer to be ended???

Of course not. That's suicide, no matter how many clones you have running
around...

