AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

12 posts · Jul 17 1997 to Jul 19 1997

From: Joachim Heck - SunSoft <jheck@E...>

Date: Thu, 17 Jul 1997 10:17:09 -0400

Subject: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

> Samuel Penn writes:
@:) In message <33C433D7.FB8@earthlink.net>
@:) Peggy & Jeff Shoffner <pshoffner@earthlink.net> wrote:
@:)
@:) > Third, okay, humans build an AI. It gets lonely and builds
@:) > another AI, and another and another.
@:)
@:) > By the time you read this sentence, our company of AIs have
@:) > probably discussed the theory of life in the universe, and have
@:) > probably concluded that they should be the caretakers of the
@:) > frail humans.
@:)
@:) > My point being, AIs probably won't deal with humans on a day to
@:) > day basis; we're too slow. Why would they run our ships when
@:) > they can create their own society?
@:)
@:) > Fourth, why in the hell would an AI fight in a war?
@:)
@:) Because they're not perfect? They make mistakes? They're not
@:) entirely logical and emotionless? I see no reason why AIs
@:) shouldn't have the same emotions we do.

Actually they would fight in a war because we told them to. The thing to
remember about AIs is that they are DESIGNED creatures. However they get built
(grown, evolved, whatever), unless we (their builders) are dangerously
negligent, they will be designed to do what we want them to do and to not do
what we don't want them to do.

AIs would not get lonely. We wouldn't let them do that.

AIs would not take care of us or anyone else. We would make sure they were
incapable of developing an interest in such matters.

AIs would not create their own society. We would prevent them from wanting to.

AIs would fight where we told them to and would not value their own lives
because we built them, dammit, and they'll do what we tell them to.

One of the other messages in this thread mentioned Bishop, the android from
Aliens. I think he's a perfect example of what an AI will be like. He is
emotionless because he doesn't need emotions to do his job. He tries to
achieve the mission goals and protect the human mission participants because
it's what he's been told to do. He sacrifices himself to protect a human
because he understands that he is less valuable than a human. I wouldn't want
my AI any other way.

From: mechavar@a... (Miguel Echavarria)

Date: Thu, 17 Jul 1997 11:11:55 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

[snip]
> AIs would not create their own society. We would prevent them from wanting to.
[snip]
> -joachim
As a civilization, if AIs are used in war on a wide scale, our first mistake
regarding built-in safeguards will be every human being's last. And we're not
likely to resist the temptation to create these super-weapons. The gods have
made us proud enough to ensure the survival of the rest of creation. Just what
are our wars about anyway?

From: Chris McCurry <CMCCURR@v...>

Date: Thu, 17 Jul 1997 11:50:10 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

> Actually they would fight in a war because we told them to. The
> thing to remember about AIs is that they are DESIGNED creatures.

But then they would not be true AI's now would they?

From: Joachim Heck - SunSoft <jheck@E...>

Date: Thu, 17 Jul 1997 12:14:32 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

> Chris McCurry writes:

@:) > thing to remember about AIs is that they are DESIGNED creatures.
@:) > However they get built (grown, evolved, whatever), unless we
@:) > (their builders) are dangerously negligent, they will be designed
@:) > to do what we want them to do and to not do what we don't want
@:) > them to do.
@:)
@:) But then they would not be true AI's now would they?

As a matter of fact they would. All you need to be an AI is to be Artificial
and to be Intelligent. You don't have to have feelings, or if you want you can
have extra feelings that humans can't understand. You don't have to be
rational or comprehensible or any of the things that humans are, because you're
not human. Stanislaw Lem wrote my favorite AI story, in which, after billions
are spent building a gigantic machine, the thing goes into a deep funk when the
switch is thrown and refuses to speak. Eventually they build another one to
tell them what's wrong with the first, but all that happens is that the two of
them conspire to work out their mysterious plans together.

But if you ask me, those AI designers were idiots. It's foolish to build a
machine that you can't control. In this case control comes
from understanding - maybe not perfect understanding, just enough so
you can safely say that the machine will do the things you want it to do and
nothing else.

From: Mikko Kurki-Suonio <maxxon@s...>

Date: Thu, 17 Jul 1997 12:17:38 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

> On Thu, 17 Jul 1997, Joachim Heck - SunSoft wrote:

> Actually they would fight in a war because we told them to. The
> thing to remember about AIs is that they are DESIGNED creatures.

I think the big question here is CAN we do that with the complexity of
programming required for sentience? IMHO, it is entirely possible that AI
might be the result of, say, evolving genetic programming. The resultant
system is quite likely sufficiently complex that all possible input
combinations could not be tested. In short, it may well be a process that
works but is not fully understood.
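That "works but is not fully understood" point can be sketched in a few lines
of Python. Everything here is illustrative (the test set, the table size, the
mutation scheme are all made up): a lookup table is bred against a fitness test
until it passes, but the loop never explains *why* the winning table works, and
inputs outside the test set are never examined at all.

```python
import random

random.seed(1)
TESTS = [(x, x % 3 == 0) for x in range(30)]  # the only behaviour we ever check

def fitness(table):
    """Count how many test inputs the table classifies correctly."""
    return sum(table[x % len(table)] == want for x, want in TESTS)

def evolve(generations=2000, size=9):
    """Hill-climb a random boolean table toward the fitness tests."""
    table = [random.choice([True, False]) for _ in range(size)]
    for _ in range(generations):
        mutant = table[:]
        mutant[random.randrange(size)] = random.choice([True, False])
        if fitness(mutant) >= fitness(table):  # keep anything no worse
            table = mutant
    return table

table = evolve()
print(fitness(table), "of", len(TESTS))  # passes the tests it was bred on
```

The evolved table is tiny, and we can read every entry, yet nothing in the
process tells us *why* those entries are right, and nothing constrains what
the table does on inputs the fitness function never saw.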

Let me compare it to raising human children. We can't program them. We've
certainly tried, with methods more or less humane. But even the hardest,
most stringent upbringing/training cannot be 100% certain, because we don't
fully understand the process.

Now, producing AIs certainly has advantages. It's probably faster (if not
cheaper), and you can conduct very stringent testing and weed out the failures
without anyone complaining of cruelty.

Plus you can use top-level blocking a la Asimov's Laws -- but those
unbreakable codes can cover only so many situations. There are bound to be
gray areas (e.g. the AI is told to protect all humans. It sees two humans
trying to kill each other and, due to circumstances, the only way to stop
them is to risk killing one. What does the AI do?)
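The gray-area problem can be made concrete with a toy sketch (all names and
flags here are hypothetical) of that top-level blocking: hard rules are checked
before any action is allowed, and the trouble appears when every available
action violates some rule.

```python
# Hard rules checked before any action is permitted.
RULES = [
    lambda a: not a["harms_human"],        # never harm a human
    lambda a: not a["allows_human_harm"],  # never let a human come to harm
]

def permitted(action):
    """An action passes only if it violates no hard rule."""
    return all(rule(action) for rule in RULES)

# Two humans are trying to kill each other; the AI's options:
intervene   = {"harms_human": True,  "allows_human_harm": False}  # may kill one
stand_aside = {"harms_human": False, "allows_human_harm": True}   # one dies anyway

print(permitted(intervene), permitted(stand_aside))  # False False -- both blocked
```

Every option the AI has is forbidden by some unbreakable rule, so the rules
themselves give no answer; whatever the AI does next is exactly the behavior
the designers never specified.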

And if you're really unlucky, you end up with a devil-incarnate that
tries its utmost to bend the meaning of your rules while staying within their
letter.

But you can't ever be really sure.

So, if we're given a choice between not producing AIs and producing them
but not fully understanding the process -- which do you think will be
chosen? Especially given that the discoverer will most likely be a curious
scientist?

From: Joachim Heck - SunSoft <jheck@E...>

Date: Thu, 17 Jul 1997 12:22:05 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

> Mikko Kurki-Suonio writes:

@:) > [ AIs do what we tell them to]

@:) I think the big question here is CAN we do that with the
@:) complexity of programming required for sentience? IMHO, it is
@:) entirely possible that AI might be the result of, say, evolving
@:) genetic programming. The resultant system is quite likely
@:) sufficiently complex that all possible input combinations could
@:) not be tested. In short, it may well be a process that works but
@:) is not fully understood.

True enough. But before I hand a gun to my computer I'm going to try real hard
to be sure it won't shoot me. I don't have to understand everything about what
it will do but some things are critical.

@:) Let me compare to raising human children. We can't program
@:) them.

And we CAN program a machine, even one that's been evolved rather than
designed in the traditional manner. Moreover, we may well have
fewer moral compunctions about doing it.  Giving six-year-olds frontal
lobotomies is generally considered to be in poor taste, but reformatting your
hard drive is no big deal.

@:) Now, producing AIs certainly has advantages. It's probably faster
@:) (if not cheaper), you can conduct very stringent testing and weed
@:) out the failures without anyone complaining of cruelty.

Right.

@:) So, if we're given a choice between not producing AIs and
@:) producing them but not fully understanding the process -- which do
@:) you think will be chosen? Especially given that the discoverer
@:) will most likely be a curious scientist?

I have to agree with you here that we'll probably get dangerous AIs before we
get none at all. With luck, we will learn how they think just as fast as we
learn how to make them think. But if that's not the case then yes, some guy
will build a better mousetrap and it'll go off and start killing people. The
way I see it, though, either we'll learn and institute some draconian laws to
deal with these issues, or the AIs will wipe us out. So the only interesting
future is the one in which we build AIs that behave.

From: Samuel Penn <sam@b...>

Date: Thu, 17 Jul 1997 14:02:57 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

In message <199707171417.KAA26894@sparczilla.East.Sun.COM>
> Joachim Heck - SunSoft <jheck@East.Sun.COM> wrote:

> Samuel Penn writes:

All programs have bugs in them. Some take a long time to surface. Generally,
the more complex the system, the harder it is to find and remove all the bugs.

Designing an AI to do EXACTLY the task it was told to do, and no more, would
be difficult at best, even if you knew every stage of the design process (if
an AI has been 'built' by teaching it like a child, then you have only limited
control over the final product).

Of course, once you get a near-perfect AI, you scrap all
the others and copy that one. Assuming you can of course.

The problem with an AI that is very well behaved, and only does what it's been
told to, is that it will have limited imagination and flexibility.

If an AI ship has been programmed to destroy the ESU, and protect the NAC
(assuming an NAC AI), then it'll work fine until an NAC ship goes rogue and
starts attacking the other NAC ships. What does the AI do? To cope with such
problems, you could program it to destroy NAC ships which are a threat to the
fleet.

So C&C order the fleet into a suicidal situation. Is C&C to be treated as a
threat to the fleet? Any sufficiently advanced AI runs the risk of starting to
form opinions and strategies which conflict with those of its creators.
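The rogue-ship scenario above can be sketched as a toy targeting rule (all
names and fields are hypothetical, borrowing the thread's NAC/ESU factions).
The patch for rogue ships classifies targets purely by behavior, so anything
that endangers the fleet matches it -- including C&C issuing suicidal orders.

```python
def is_valid_target(contact):
    """Toy fleet-AI targeting rule: original programming plus the rogue patch."""
    if contact["faction"] == "ESU":
        return True                       # original programming: destroy the ESU
    if contact["faction"] == "NAC" and contact["threatens_fleet"]:
        return True                       # the patch for rogue NAC ships
    return False

rogue  = {"faction": "NAC", "threatens_fleet": True}
cnc    = {"faction": "NAC", "threatens_fleet": True}   # suicidal orders qualify
friend = {"faction": "NAC", "threatens_fleet": False}

print(is_valid_target(rogue))   # True, as intended
print(is_valid_target(cnc))     # True -- the rule can't tell C&C apart
print(is_valid_target(friend))  # False
```

The rule does exactly what it was told; the conflict comes from the fact that
"threat to the fleet" describes C&C's suicidal orders just as well as it
describes the rogue ship.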

> One of the other messages in this thread mentioned Bishop, the
> android from Aliens.

True, Bishop is a good example of a well-behaved artificial person. But what
would have happened if he'd had to choose between Burke (was that his name? The
Company guy) and, say, Ripley? There could easily have arisen a situation where
he could only protect one by killing the other.

From: Mikko Kurki-Suonio <maxxon@s...>

Date: Thu, 17 Jul 1997 14:06:35 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

> On Thu, 17 Jul 1997, Joachim Heck - SunSoft wrote:

> True enough. But before I hand a gun to my computer I'm going to
> try real hard to be sure it won't shoot me.

Still: Let me offer an analogy. I'm pretty sure you've seen software with
bugs. Software that is very linear and simple (relatively speaking). Software
that *could* have been tested to the hilt. Software that *could* have been
mathematically PROVEN correct.

But it wasn't. And it's driving the jet plane you're on (along with 300 other
passengers). Or it's driving a missile that's *supposed* to hit Moscow, not
Vienna.

If we can't/won't do it now, I'm *very* sceptical it will be done with the
much more complex systems true AIs will be.

> the AIs will wipe us out. So the only interesting future is the one
> in which we build AIs that behave.

That's an opinion I can't agree with. Humanity's first reaction to AIs
will likely be the age-old "destroy or enslave", but the other party may
be capable of more logical thought. I can very well imagine end results
ranging from peaceful symbiosis through uneasy truce to open war and
segregation.

From: Samuel Penn <sam@b...>

Date: Thu, 17 Jul 1997 14:17:47 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

In message <199707171614.MAA27079@sparczilla.East.Sun.COM>
> Joachim Heck - SunSoft <jheck@East.Sun.COM> wrote:

> But if you ask me, those AI designers were idiots. It's foolish to
> build a machine that you can't control.

But there's always the desire to be first, which can persuade people to cut a
few corners (as well as deadlines, budget cuts and all the other problems you
get in the real world).

From: Sprayform <sprayform.dev@n...>

Date: Fri, 18 Jul 1997 10:24:10 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

> At 19:17 17/07/97 +0300, Mikko K-S wrote:

> And if you're really unlucky, you end up with a devil-incarnate that
> tries its utmost to bend the meaning of your rules while staying within
> their letter.
Well IMHO you end up with children!
     1 - I'm a scientist
     2 - I've got 2 young & growing AIs
     3 - I'll be b*ggered if I know what sort of self-programming
is going on inside them!

;-/      8-P      :-}    :-@        etc.

Jon (top cat) Sprayforming Developments Ltd. [production tools]
                                           made in
				      [prototype  times]
'The future is now'

From: Rick Rutherford <rickr@s...>

Date: Fri, 18 Jul 1997 15:35:04 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

> On Thu, 17 Jul 1997, Samuel Penn wrote:

Heh heh... the rule, "Always remember: your equipment was made by the lowest
bidder!" can have disastrous consequences...

From: Alan and Carmel Brain <aebrain@w...>

Date: Fri, 18 Jul 1997 23:19:54 -0400

Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

> Mikko Kurki-Suonio wrote:

> I think the big question here is CAN we do that with the complexity of

Agree. Even a relatively simple system can be evolved using this method,
where you can look at the code, understand _What_ it does, but still
have no idea _Why_ it works so well. I speak from practical experience,
rather than theory.