AI in FT (was Re: Be gentle...)

65 posts · Jul 9 1997 to Jul 21 1997

From: Peggy & Jeff Shoffner <pshoffner@e...>

Date: Wed, 9 Jul 1997 01:53:48 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> ....Larry Niven and Jerry Pournelle were bad for this in the 70s....

Are you referring to the book about the torus of gas surrounding a star, and
the humans are living inside the torus, while an AI ship tries to figure out a
way to "deal" with them?   (What was that called????  The [blank] Tree?)

[snip]
> resulting in humans being the weakest point of an aircraft. Certainly

> 1) All fighters are fully automated craft. They are the direct
Big problem; 186,000 miles per second isn't just a good idea, it's the law.
When dealing with the relative distances of space (i.e., the opposing fleet's
distance is measured in light-minutes), piloting these craft remotely is
going to be near impossible due to the lag in acknowledging a situation and
responding to it.
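
To put rough numbers on that lag (illustrative figures only, not anything from
the original post), here's a quick Python sketch of the round-trip delay a
remote pilot would face:

# Rough illustration -- made-up ranges: round-trip command lag for a remotely
# piloted craft, given the engagement range in light-minutes.
def command_loop_delay(light_minutes):
    """Seconds between an event being seen and the remote pilot's order arriving."""
    one_way = light_minutes * 60.0   # a signal covers one light-minute per minute
    return 2 * one_way               # sensor data out, orders back

for d in (0.5, 2.0, 10.0):           # hypothetical engagement ranges
    print("%4.1f light-minutes -> %5.0f s round trip" % (d, command_loop_delay(d)))
# Even half a light-minute means a full minute between seeing and reacting.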

> 2) Most cruisers and larger ships in the various navies are human

Not sure about all of the automation; one series I've finished reading made a
very good point about allowing a computer to do targeting and ECM; a human
tactician on the opposing side could recognize the AI's "random" jumping for
ECM (and possibly targeting lasers, radar, whatever) and adjust his computer's
targeting and ECM to counteract the AI's targeting AND lock onto its ECM to
make it a BIG target. Simply put, humans are better randomizers. As for repair
work, I guess remote driven robots might work. I'll deal with that later
though....
> 3) Humans still run the big ships in the fleet. This is because human
> Human crews are TINY compared to those in naval ships 2 centuries before. The
> average dreadnought has a crew of about 100. (I'm thinking Nostromo,

Possibly, but I think there'd still be a large number of people on board. When
things get sophisticated, it seems like there is a greater demand for warm
bodies. I hate using this analogy, but (shudder) look at Star Drek; their
technology is advanced enough to just about run the Enterprise with only a
handful of people, but when you add in the support personnel, general
staffing, etc. you get a BIG roster. Yeah, I know a bunch of the people are
"Ensign Expendables," but you've got to have some engineering technicians, and
some others to relay orders, and a small staff of medicals in case someone
gets hurt, a few people to play quartermaster, then someone to cook for
everyone, someone to clean the toilets, etc. You get the point.
> 4) About half the smaller (escort class) ships in the fleets are human
> Independent scouts, destroyers and frigates on convoy protection, sentry duty, and

BIG no-no.  Computers aren't capable of replacing human intuition.
Survey missions especially. Take a look at the Mars expedition; I would say a
very snazzy job in computer engineering, but the people on Earth forgot
something that the surveyor could have used: a broom! They're having to look
for a relatively clean spot on the surface of the rocks they're analyzing all
because the little robot doesn't have a brush to clean off the dirt. Granted,
no one really thought of that contingency, but if there was a human up there,
he (or she) would have said, "Darn, didn't pack a brush, oh well, I'll just
use my glove to wipe off the dirt...." Escort ships either being automated or
controlled by Cap ships would have problems too. Does that automated escort
recognize our damaged carrier as one of ours, or one of theirs? RC escorts
would have the same problem as RC fighters; relative distance kills response
times.

> 5) Sa'vasku not using artificial intelligence should be obvious

Good question, and if so, how do we nasty humans exploit it?
> 6) Humans actually HAVE developed sapient AIs in secret military labs.
> Those that have been programmed around this problem have become functionally

I can go along with that.
> 7) While ships can be programmed to fight in space, the overwhelming

Definitely....

From: Peggy & Jeff Shoffner <pshoffner@e...>

Date: Wed, 9 Jul 1997 20:59:03 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

Another general response regarding AIs in the future....

First, let's look at who is creating the AIs: humans. Despite our best
efforts, we still tend to flub up. Having AIs with human thought processes?
Scary.....

Second, as someone once told me, if the human brain were simple enough for us
to understand, we'd be too stupid to understand it. I guess that's my way
of saying the thought of a human-built AI that is sentient, intuitive, and
creative is a LONG way away. I seriously doubt that there will be one created
in the next hundred years. And yes, I am considering the massive leaps that
are happening every day within the field of computer science.

Third, okay, humans build an AI. It gets lonely and builds another AI, and
another and another. Computers think a LOT faster than we do. By the time you
read this sentence, our company of AIs has probably discussed the theory of
life in the universe, and has probably concluded that they should be the
caretakers of the frail humans. My point being, AIs probably won't deal with
humans on a day-to-day basis; we're too slow. Why would they run our ships
when they can create their own society?

Fourth, why in the hell would an AI fight in a war? War is senseless,
illogical, and detrimental. Most wars boil down to one side having something
that another side wants. In the conflict, the value of the material used to
fight is worth more than what you're fighting for. I don't think AIs would
fight for this reason; the cost outweighs the benefit. I imagine they'd
follow a "compete and cooperate" way of "living." Always strive to better
oneself, but help another, especially if it benefits you.   Hmmmm, now
there's one trait I bet an AI won't have: altruism.

Finally, you can always resort to Asimov. I recall one short story regarding
the Earth's economy being run by a set of "robot brain" computers. Our human
protagonist was sent around the globe to find out why there were "hiccups" in
the computers' running of the world. In the end it turned out that the brains
were very worried that certain people in key positions would disrupt commerce
to (A) get rid of the computer dictatorship or (B) grab for power, so the
brains purposely threw some glitches that wouldn't hurt anyone in the long
run, but would give a reason to move these people to less sensitive places in
the job market. In other words, the AI brains were quietly running human
politics to avoid confrontation.

Sounds like a slightly boring place to me....

From: Peggy & Jeff Shoffner <pshoffner@e...>

Date: Wed, 9 Jul 1997 21:28:48 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

Speaking of all this blowing up of stuff, I had a question that affects
campaign fighting. If you win a battle, shouldn't you be able to search the
wrecks of friendly AND opposing ships to see if anything can be salvaged?

Think about it: I wipe out an invading fleet, view the floating dead hulks and
determine that half can be salvaged, and a third of the others have useful
equipment....

What do you say?

From: Peggy & Jeff Shoffner <pshoffner@e...>

Date: Sat, 12 Jul 1997 14:52:50 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> What happens when a force decides to take over the planets occupied by

Ahhhh, but therein lies the crux of the matter.   I was referring to the
internal human conflict; if the humans allowed AIs to do the strategy thinking
and whatnot, most likely "warfare" would be economic, not physical. Conquering
a planet economically doesn't waste the planet's resources, but using force
would. But, from your statement, you are asking what would AIs do if attacked
from an outside source, say the KV. Well, that's a different matter entirely.
Of course AIs would defend themselves, but most likely they'd leave most of it
to the entity most experienced in mass killing: us.

From: Allan Goodall <agoodall@a...>

Date: Sat, 12 Jul 1997 18:08:48 -0400

Subject: AI in FT (was Re: Be gentle...)

> At 12:04 PM 7/8/97 -0700, Jeff Shoffner wrote:

Not inhumane at all, unless... You're not telling me your fighter squads and
small escorts are controlled by HUMANS, are you? That's what computers are for!

This brings up an interesting point, and one that will have to be addressed
if Jon is thinking of sanctioning a line of FT-based stories: what level of
AI use--if any--exists in the FT universe?

I've noticed a preponderance of SF combat stories that have huge ships run by
human and alien crews without any kind of explanation for the lack of AI
usage. This isn't surprising for Heinlein and his contemporaries, who were
basically writing WWII novels before the era of the PC, but a fair number of
recent writers have glossed over the use of AI so that their ships can be
filled with human cannon fodder. Larry Niven and Jerry Pournelle were bad for
this in the 70s (and with the sequels to those earlier books) but more modern
writers have fallen into the same trap.

I personally belong to the camp that believes a sapient AI is simply a matter
of engineering. Humans have intelligent minds. In the FT universe, several
alien races are also intelligent. These minds developed naturally due to
evolution. To my mind building an artificial intelligence is a
mechanical (electro-mechanical or bio-mechanical) problem that has yet
to be solved.

There are some modern SF writers that have taken this into consideration. Fred
Saberhagen's Berserkers are a good example. So are the Minds in Iain Banks'
Culture books. I don't mind it so much if they have a good REASON for
not using AIs. In Saberhagen's universe--as an example--I can understand
the humans not wanting to build AIs since that's what they are fighting. A
certain amount of AI hatred is natural.

Okay, now I KNOW why the SF writers don't like putting AIs in their combat
stories. Human conflict is interesting; reading a story about AIs clashing is
not (well, actually it can be, but good stories based on artificially created
characters are few and far between). For story purposes they want humans on
board that ship.

Once again, what level of AI use is in place in Full Thrust? The FT universe
is only about 200 years in the future. That's close enough that a true,
sapient AI could plausibly still be an unattained goal. However, there should be no reason for
manned fighters that far in the future. The next generation of aerial fighter
under design in the US and Europe will probably be unmanned. The rigors of
battle and advances in computer science and aeronautics are resulting in
humans being the weakest point of an aircraft. Certainly today's aircraft can
survive G loads well beyond the limits of their human operator. I don't see
why fighters need to be controlled by humans a la
_Babylon 5_ in the FT universe, when it's likely we will have automated
fighter aircraft by the first or second decade of the next century. The same
can be said for the small escort ships in FT.

So, here's my proposal for automated system use in the FT universe. This is
probably not what Jon had in mind if, indeed, he had considered this at all.
However, I think it makes a reasonable starting point:

1) All fighters are fully automated craft. They are the direct descendants of
the automated combat aircraft of the 21st century. This very neatly explains
the incredibly low survival rate of fighters in the FT
universe. :-)

2) Most cruisers and larger ships in the various navies are human manned but
heavily automated. All sensor sweeps, targeting, and firing are done by
computers set on automatic (similar to--but far more advanced than--the
Phalanx system onboard modern US warships). Most damage control systems are
automated, but humans are still needed to do maintenance and repairs in areas
not easily accessed by robots. Most outside repairs are done by robots.

3) Humans still run the big ships in the fleet. This is because human
scientists have not been successful in developing true sapient AI (sort of,
see point 6). Humans still direct the ships (with suggestions from strategy
algorithms) and humans still direct the overall course of the battle. Human
crews are TINY compared to those in naval ships 2 centuries before. The
average dreadnought has a crew of about 100. (I'm thinking Nostromo, here. If
it only took a crew of 7 to run a large tug boat AND an oil refinery the size
of a city, you're not going to need a 2000 person crew for a carrier.)

4) About half the smaller (escort class) ships in the fleets are human
controlled, with the others running automated like the fighters. Independent
scouts, destroyers and frigates on convoy protection, sentry duty, and survey
missions are human operated. Escort vessels in fleets are often NOT human
operated, particularly when going up against the Kra'vak (there you go, Jeff,
no need to worry about the inhumane treatment of escort crews, and this also
neatly explains Jon's affinity towards high casualty rates among escorts).

5) Sa'vasku not using artificial intelligence should be obvious (actually, the
artificial intelligence is part of the biological ship). Do Kra'vak use AIs?
Might explain why their ships are so nasty...

6) Humans actually HAVE developed sapient AIs in secret military labs.
However, they can't get any of them to risk their artificial selves to fight a
war (I've actually got a story idea for this scenario). Lacking the human
"frailties" of love, pride, hate, and personal sacrifice, they simply won't
risk themselves. They KNOW they don't have a soul and that for them there is
nothing beyond this "life," so they damned well won't risk themselves. Those
that have been programmed around this problem have become functionally insane.

7) While ships can be programmed to fight in space, the overwhelming number of
variables in ground combat mean that humans must still do the work dirtside.
Computer advances have resulted in single man tanks and artillery vehicles.
Grunts are still grunts.

So, there's a little treatise on AIs and the FT universe. The idea was not to
ignore the whole idea of sapient AI development in the universe, but to
explain why it hasn't happened. As a side effect, it also neatly explains some
things that happen in a typical FT game. Any comments?

From: Paul Calvi <tanker@r...>

Date: Sat, 12 Jul 1997 19:12:03 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

Well, Keith Laumer's Bolo series is probably the best example of AI at work (as
well as some of Asimov's Robot stuff). I think in general, though, sci-fi
writers assume (perhaps correctly) that AI will never achieve the ability to
replace man in battle where all is chaos. Of course it IS silly that space
ships still have hundreds of crewmen. Even 2001 and Aliens had only a handful
of crew to run a ship.

Paul

> At 06:08 PM 7/12/97 -0400, you wrote:
> The rigors of battle and advances in computer science and aeronautics are
---snip---

From: John Skelly <canjns@c...>

Date: Sat, 12 Jul 1997 19:50:28 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

Allan, that sorta talk is what leads to Skynet. All kidding aside, I love
computers, but I kinda like the idea of imagining human pilots dogfighting and
not computer AIs. I like your ideas though, especially number 6 on true AIs
going insane. Try to convince those AIs that a soul is a non-linear result of
the conscience :-).

If you like the idea of AI combat units check out the Bolo series by Keith
Laumer and others.

> Allan Goodall wrote:

> At 12:04 PM 7/8/97 -0700, Jeff Shoffner wrote:

From: John Skelly <canjns@c...>

Date: Sat, 12 Jul 1997 19:55:33 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

Silly, I don't know about that. We won't really know until we actually get to
outer space with large craft. I think authors are using modern examples for
their numbers. Using Aliens and 2001 as benchmarks is just switching from one
hypothesis to another.

> Paul Calvi wrote:

> Well Keith Laumer's Bolo series is probably the best example of AI at

From: Robert Crawford <crawford@k...>

Date: Sat, 12 Jul 1997 20:08:44 -0400

Subject: Re: AI in FT (was Re: Be gentle...)


  

From: kj@p... (Karl G. Johnson)

Date: Sun, 13 Jul 1997 04:01:20 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Robert Crawford writes:

The story is called "Reflex", and can be found in the collection entitled
"There Will Be War", (vol. 1 of 10), published by Tor.

In addition, any AI would initially be programmed with necessary logic and
"learning" functions, but at what point in its development would the AI be
able to duplicate the human thought process to create and implement an
'original', unique tactic that it hadn't encountered before? Without that
ability present, you'll have autonomous fleets using identical tactics (or a
huge programming staff, which makes the basic idea of AI moot as it's not
truly autonomous) and a tactical (and possibly strategic) stalemate.

Consider as a rough analogy: two opposing commanders; one who always follows
his force's doctrine and expects his opponent to do likewise. The other uses
his force's doctrine only when it suits his immediate battlefield needs,
improvising as opportunity presents itself. Which commander will have the
advantage, all else being equal? (I give 9:5 on the latter, personally...)

The ability for independent, spontaneous thought in AI can also have severe
side effects. What if the unit in question learned to ask the question "why?"?
It might even decide it was fighting for the wrong side...

Besides all that, it'd put the bodybag manufacturers out of business... 8)

KJ

From: Allan Goodall <agoodall@a...>

Date: Sun, 13 Jul 1997 09:26:02 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 04:12 PM 7/12/97 -0700, you wrote:

I personally think that is a human bias. Studies of combat show that HUMANS
have major problems fighting in the chaos of combat. The Marshall studies
after WWII showed that only about 20% of all troops in combat fire their
weapon efficiently. The rest either fire blindly from cover, don't fire at
all, run away, or are too busy writhing in pain or dying. Human mistakes and
frailties abound in war. Much of a wargame is dedicated to making your troops
fight less effectively than optimum. Try this as an example: play a game of
DS2 but let one player ignore the morale rules...

Having said that, I think that I agree with this perspective in the timeline
cited in Full Thrust.

From: Allan Goodall <agoodall@a...>

Date: Sun, 13 Jul 1997 09:35:55 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 07:50 PM 7/12/97 -0400, you wrote:

Ah, but Skynet will change the world! :-)

> All kidding aside, I

I like the idea too, but my suspension of disbelief cracks at that point. Have
you played Falcon 3.0? Try playing the drone scenario. If you get it on the
first pass, it's simple. If you miss it on the first pass, you're toast simply
because it can pull a 15g turn without batting an eye.

I have no problems with Star Wars fighter pilot combat, but then Star Wars is
fantasy, not SciFi. I just haven't found a good enough reason for putting
actual humans in a fighter cockpit, especially after the way they get shredded
in FT. How do those navies actually get pilots stupid enough to replace the
first ones? I mean, a 60% to 100% casualty rate is fairly common.

> If you like the idea of AI combat units check out the Bolo series by

I've read some of the Bolo short stories, but not any of the novels. I think
Iain Banks is the way to go, though. Huge, kilometres long starships
controlled by an AI Mind while humans live a life of splendor in their own
utopias. That's the way to fight a war!

From: Allan Goodall <agoodall@a...>

Date: Sun, 13 Jul 1997 09:51:51 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 01:01 AM 7/13/97 -0700, Karl wrote:

It's not a bad story, but it was written to prove the point that humans were
still needed to fight wars. In the story, the AI was set up as a strawman to
the human crew's ironman.

> In addition, any AI would initially be programmed with necessary logic

My supposition is that a SAPIENT artificially created intelligence--as part
of the definition of being sapient--would be self-aware AND capable of
learning.

> Without that

I agree. But I don't subscribe to the theory that a true AI would be
incapable of learning. See James P. Hogan's _Two Faces of Tomorrow_ (I
think that's the name of the book, it's been a long time since I read it). An
AI is set up in a space station and programmed to defend itself. It comes as
quite a shock when the machine learns to defend itself at a rate far faster
than a human being (and without all the psychological baggage that goes along
with human learning experiences).

> The ability for independent, spontaneous thought in AI can also have

Now THAT is the best reason I've seen for not letting computers run the show.
On the other hand, see the Hyperion books (particularly the first two) by Dan
Simmons. By the second book, you find that the AIs running human space have
realized that things would be more efficient and safer if they simply did away
with all these humans...

From: Sutherland <charles@n...>

Date: Sun, 13 Jul 1997 11:09:17 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> I've read some of the Bolo short stories, but not any of the novels. I

The AI will almost always win the first encounter/engagement against
human opponents. The problem comes when there are engagements after the first.
The AI has no imagination and therefore has a hard time predicting what a
human will learn from the first encounter.  The win/loss ratio gets
worse and worse as the number of encounters grows. You could reprogram the AI
after every encounter but this is not very practical. (military programmers
take 6 months to change colors on a display [exaggerated a little, yes, but I
work with some]).

Computers react quicker but they have no intuitive learning ability which will
spell their doom every time.

And having said all that, I think that AI in fighters is a very good idea.
Download the program of attack at the time of launch and let them go. Why
would you risk a human on what is really a multi-attack drone?  In
fighter combat, reaction time and the ability to hold your Gs are a little more
important than your learning curve anyway. My 2 cents (decrease for inflation).
That Chuk Guy

From: Allan Goodall <agoodall@a...>

Date: Sun, 13 Jul 1997 12:28:27 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 10:09 AM 7/13/97 -0500, you wrote:

Why? Why doesn't the AI have an imagination? This has been a staple of SF for
years (most recently with Data in Star Trek). There is an assumption that
"true" biological intelligence is capable of imagination and random,
unpredictable behaviour but that you won't get this if you build an
intelligence artificially.

My belief is that the human brain can eventually be duplicated (and even
surpassed) through electronic engineering or biomechanical engineering. At
that point, we'll have an artificially constructed intelligence that can
think, learn, and have an imagination.

Now, the question is whether you can put together an artificial brain that can
run at faster speeds than the human brain. I think part of our "imagination"
and random thoughts comes from the strange way we store information. Humans
have a devil of a time working things through in a logical, sequential manner.
On the other hand, that same storage system makes it possible for humans to
jump to leaps of intuitive logic far more efficiently than a machine.

The one problem human brains have is that they are essentially isolated except
through some pretty inefficient networking protocols: sight, sound, touch,
smell, taste. If you could build an artificial mind that behaved like a
human's, it may not be able to function any faster than a human mind. However,
it should be possible to build a massively parallel artificial mind that can
behave logically, and intuitively, and FASTER than a human.

In short, humans won't be able to keep up with the machine's thoughts or the
pace of war in such an environment. Add that to starships fighting at speeds
far greater than the human mind can handle, and you've just made humans
useless.

One more thing: presumably it would be possible to run an artificial brain in
a spaceship without the need for all those expensive life support systems
needed for multiple humans. You could design the brain for a specific ship. If
nothing else, an artificially controlled ship will be packed with more
weaponry than is possible in a ship with a few hundred humans onboard. You
won't need escape pods, for one thing.

> Computers react quicker but they have no intuitive learning ability

CURRENT computers have no intuitive learning ability. I believe that a true
artificial brain WILL have intuitive learning. At that point, humans become
more of a liability than an asset.

> And have said all that, I think that AI in fighters is a very good
> Why would you risk a human on what is really a multi-attack drone? In

Actually, this is turning into one of the more interesting discussions I've
had online in a while. I agree with you on your above point. In fact, in my
suggested timeline a truly thinking brain that will happily fight a war hasn't
been invented yet. So humans are still needed for the actual running of the
war, and the actual running of the ships in a tactical sense. You point out
the exact reason, though, why fighters should be automated.

One other thing, you could probably program the fighters to update their
combat algorithms from transmissions coming from the carrier. Not only would
the carrier supply sensor information, it would also supply updated combat
parameters and tactical analysis.
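
As a minimal sketch of that idea (every name below is invented for
illustration, not anything from FT or this thread), the carrier broadcasts
revised combat parameters and each drone fighter simply merges them into its
current profile:

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CombatParameters:
    engagement_range: float       # distance at which the fighter opens fire
    evasion_gain: float           # how aggressively it jinks
    priority_targets: tuple       # ship classes to attack first

class DroneFighter:
    def __init__(self, params):
        self.params = params

    def receive_carrier_update(self, **changes):
        """Apply a partial parameter update broadcast by the carrier."""
        self.params = replace(self.params, **changes)

fighter = DroneFighter(CombatParameters(900.0, 0.3, ("carrier",)))
fighter.receive_carrier_update(engagement_range=600.0,
                               priority_targets=("escort", "carrier"))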

From: Alex Williams <thantos@d...>

Date: Sun, 13 Jul 1997 13:22:46 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> On Sun, 13 Jul 1997, Sutherland wrote:

> The AI will almost always win the first encounter/engagement against

You know, as someone that actually works with AI in the real world, I'm going
to have to take issue with the implication that it's incapable of learning new
responses or even creativity. The application of neural networks, genetic
algorithms and other modern techniques allow software to come up with new,
even novel solutions to situational environments. Whether the learning occurs
as a result of neural reinforcement paths or because the less successful code
dies and the winners survive to breed variants, experience gets encoded into
the matrix of rules the software uses to make decisions.
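
For anyone who hasn't met the genetic-algorithm idea, here is a toy sketch of
that "losers die, winners survive to breed variants" loop (the fitness function
is a made-up stand-in, not anything from a real combat system):

import random

def fitness(weights):
    # Stand-in scoring: pretend rule weights near 0.7 "fight" best.
    return -sum((w - 0.7) ** 2 for w in weights)

def evolve(pop_size=20, genes=4, generations=50):
    population = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]        # less successful code dies
        children = [[g + random.gauss(0, 0.05) for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children             # winners breed mutated variants
    return max(population, key=fitness)

print(evolve())   # drifts toward weights near 0.7 as "experience" accumulates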

Given, what, a thousand or more years of development, AI that can interact
meaningfully in human terms should find learning things based on that
patterning near-trivial.  This won't require recoding it after every
encounter or any such manual intervention; it'll happen as a natural
consequence of the way the system generates its responses.

For a paper on a real-world application of something you'll have some
connection to, do an Infoseek search for 'Amalthea MIT'. The Amalthea project
used a GA to learn your preferences in mail filtering.

From: Tom McCarthy <tmcarth@f...>

Date: Sun, 13 Jul 1997 13:26:00 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

I like Allan's assertion that an AI should take up less mass/space/resources
than a biological crew, but I have seen the future through the visionary eyes
of Glen Larson and Battlestar Galactica, where it took three AIs to run a
fighter comparable to a human-piloted single-man attack craft. As we can see,
AIs that build other AIs get caught up in constraints like having an
atmosphere such that they can communicate with each other vocally, in
carefully designing the displays of their spacecraft so that the AIs are not
overcome by a flood of combat data, and designing controls which can be
manipulated easily and accurately by robots with rubber fingers.

Sorry. I just ran a demo game yesterday, and the only participant really
wanted my sage recollections of this television show he'd heard of but was too
young to have watched...

From: kj@p... (Karl G. Johnson)

Date: Sun, 13 Jul 1997 13:40:41 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> And have said all that, I think that AI in fighters is a very good
> Why would you risk a human on what is really a multi-attack drone?

> You point out the exact reason, though, why fighters should be

And when the carrier's controls/transmission ability is damaged, the drone
fighters become so much useless chaff, as the _other_ guy's fighters
might still be linked; if both lose communications, then again, you have a
tactical impasse, and 60%-100% casualties. Acceptable for drones, of
course, but if each side faces that same difficulty, how long would they keep
deploying fighters?

I've always been under the impression that, in FT, a 'destroyed' fighter can
be assumed combat ineffective (engine damage, systems failure, bailing out,
etc.). So, yes, there will be pilot fatalities, but I don't believe them to
be on the same order as _hardware_ casualties.

In the same vein, how many conscripts would willingly set foot on a capital
ship, given the amount of damage sustained and/or life expectancy of the
ships in FT? Makes a fighter look a whole lot safer by comparison. At least
the fighter pilot can die with the illusion of direct control over his own
fate... not so for the poor shmucks that exploded when the ship decompressed,
that were burned down at their weapons stations, or met their demise otherwise
aboard a capital ship.

KJ

From: Chen-Song Qin <cqin@e...>

Date: Sun, 13 Jul 1997 16:25:46 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> On Sun, 13 Jul 1997, Sutherland wrote:

> And have said all that, I think that AI in fighters is a very good
> Why would you risk a human on what is really a multi-attack drone? In

A *really* good idea in this case would be to have a mix of both human and AI
fighters. You can use the AIs to do dangerous, suicidal missions, where they
are expendable, and use humans to perform missions that require flexibility.
As for the argument of AIs attaining sentience and disobeying orders... well,
at that point the computers would be living beings, and to use them in
unquestioned servitude would be slavery anyway.

From: Chen-Song Qin <cqin@e...>

Date: Sun, 13 Jul 1997 16:32:26 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> On Sun, 13 Jul 1997, Allan Goodall wrote:

> Why? Why doesn't the AI have an imagination? This has been a staple

Okay, this can be true. But I'm just wondering why you'd send this kind of
intelligent machine into battle and have them fight for you. That'll amount to
slavery of sentient beings. (besides, if I AM a big starship with lots of
powerful weapons, would I listen to some bozo telling me to kill myself
fighting enemies?)

> However, it should be possible to build a massively parallel

This is a way *cool* idea. So do you actually work in the AI field? Just
wondering... But then again, how well would something like this handle damage
in combat? What happens to a human brain if, all of a sudden, a piece of it got
whacked off? Automated repair systems? BTW, according to Murphy's Law, that'll
be the first system that gets damaged.

From: Sutherland <charles@n...>

Date: Sun, 13 Jul 1997 17:22:22 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> One other thing, you could probably program the fighters to update

Too susceptible to jamming, I would imagine.

The problem with these "thinking" computers is that we need to have a major
leap in technology to achieve this. We don't even understand how our own brains
work. How are we supposed to build one? And if we do, who is to say we can
control it? I don't remember where I saw it, but there was this really big bomb
that had an AI to pilot it, then detonate it in the sun. This AI thought about
this a little too long and refused to do its job. What if the AI didn't like
somebody and decided to get some payback?

All of this is pretty academic though. We will just have to wait and see how,
if ever, these things turn out.

2 more cents. (Is there tax on this?) That Chuk Guy

From: Sutherland <charles@n...>

Date: Sun, 13 Jul 1997 17:28:05 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 01:22 PM 7/13/97 -0400, you wrote:

I never said it was incapable of learning. My only point is that unless the
tech increases drastically in a totally new way, it is improbable that machines
could duplicate the intuition of a human mind. I have seen some amazing things
done with learning algorithms. But still nothing that shows that it could be
dropped in the middle of a jungle with a mission goal and be able to handle it
as well as a human. Not yet anyway.

From: Sutherland <charles@n...>

Date: Sun, 13 Jul 1997 17:31:16 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> In the same vein, how many conscripts would willingly set foot on a
> not so for the poor shmucks that exploded when the ship decompressed, that

How many do it now? Several hundred thousand just in our Navy, I believe. I work in
communications for the Air Force. I sit in a big building with C3 equipment
that just screams BIG A## TARGET all over it.

War Sucks, Get a helmet That Chuk Guy

From: John Skelly <canjns@c...>

Date: Sun, 13 Jul 1997 17:51:48 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Allan Goodall wrote:

> At 07:50 PM 7/12/97 -0400, you wrote:

I don't think there are any full-length 'Bolo' novels. There was a long short
story that had less to do with a Bolo than anything else. I would be interested
in any titles from the author you are referring to.

Never played Falcon 3 or higher, but I have played X-Wing vs. TIE
Fighter. I'll admit the AI can be deadly accurate, but I'm getting there
:-).  The high casualty rate you refer to can be attributed to more
than just units being destroyed, i.e. low fuel, out of munitions, any
craft damage that forces the fighters to withdraw, etc. I try to imagine the
number as more of an overall strength rating rather than an actual number of
craft.

Hey, how long do you think it would take all those AI escorts and fighters to
decide that the humans in charge were a bunch of idiots?

From: John Skelly <canjns@c...>

Date: Sun, 13 Jul 1997 17:59:48 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> The AI will almost always win the first encounter/engagement against

I think comparing today's gaming AI to what can exist even 20 years from now is
not seeing the full potential. The AI you see in games is basically a set of
algorithms designed to simulate logical thought. True AI, at something
approaching human level, won't exist until artificial neural networks get more
advanced.

From: John Skelly <canjns@c...>

Date: Sun, 13 Jul 1997 18:15:42 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

Allan, forget your central computer brain running the ship and imagine this:
throughout the ship are linked robots, each capable of operating independently.
Much like, say, Terminators from the movies, distributed throughout the ship
and linked together in one large network. When linked together they have their
combined intelligence and storage to run the ship. This way one hit doesn't
destroy the ship's 'brain'. Also imagine a turret cut off from the rest of the
ship but still able to function because one or more robots is still linked to
it. This goes along with today's trend of moving from host-based systems to a
more distributed style of processing.
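
A tiny sketch of that layout (names invented for illustration): a subsystem
stays operational as long as at least one of its linked robot nodes survives,
so no single hit takes out the ship's "brain":

class RobotNode:
    def __init__(self, name):
        self.name = name
        self.alive = True

class Subsystem:
    def __init__(self, name, linked_nodes):
        self.name = name
        self.linked_nodes = linked_nodes

    def operational(self):
        # Works if any linked robot is still alive and connected.
        return any(node.alive for node in self.linked_nodes)

nodes = [RobotNode("unit-%d" % i) for i in range(6)]
turret = Subsystem("dorsal turret", nodes[2:5])
nodes[2].alive = nodes[3].alive = False      # battle damage knocks out two robots
print(turret.operational())                  # True -- unit-4 still runs the turret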

One argument I can see is, well, why not just distribute the 'brain' throughout
the ship? Well, by having individual robots you have a ready-made boarding,
damage control and defense force. I'm getting chills just imagining those
Terminators just standing still with just the flicker of activity lights to
show that anything is going on. Kinda makes the Borg collective laughable in
comparison.

> Allan Goodall wrote:

> At 10:09 AM 7/13/97 -0500, you wrote:

From: Christopher Pratt <valen10@f...>

Date: Mon, 14 Jul 1997 00:01:52 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Rob Paul wrote:
> Even a Culture drone is _way_ above most of the AIs in this thread, and a

This reminds me of a campaign I was running for a while. This vast interstellar
society had fought a really massive war with beings called the Darklings, who
could possess people, then infiltrate society and begin to cause its downfall.
Well, the ancient society eventually found a way to detect the possession via
telepaths, and set about fighting a more conventional war. After the war, the
society set up a truly massive computer AI to govern society and prevent
another Darkling war. Only they built the computer to closely mimic real human
intelligence to overcome some of the side effects (lack of intuition, learning
curve, etc...). They did such a good job that the computer actually developed
telepathic abilities and was eventually invaded by the Darklings, which caused
the downfall of their society. It was a shame my players never got far enough
into the campaign to start learning things about their universe.

oh well

From: Donald Hosford <hosford.donald@a...>

Date: Mon, 14 Jul 1997 01:03:49 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Peggy & Jeff Shoffner wrote:

"The Integral Trees" by Larry Niven

> > resulting in humans being the weakest point of an aircraft.
> > Certainly today's aircraft can survive G loads well beyond the limits of their
> jumping for ECM (and possibly targeting lasers, radar, whatever) and adjust his

In the books Antares Dawn/Antares Passage by Michael McCollum:

They use Newtonian movement for the ships, and the basic combat maneuver is
where the two forces run past each other. Usually the ships are moving so fast
that there is no way any human gunner could hit anything. They just program
their combat computers to open up at the instant the target is in position.
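
Some back-of-the-envelope numbers (mine, not McCollum's) show why: on a
straight, single pass the firing window is just twice the weapon range divided
by the closing speed.

def firing_window(weapon_range_km, closing_speed_km_s):
    """Seconds during which the target stays inside weapon range on one pass."""
    return 2.0 * weapon_range_km / closing_speed_km_s

print(firing_window(10000.0, 100.0))    # 200 s -- a human gunner might cope
print(firing_window(10000.0, 5000.0))   # 4 s -- only the combat computer can react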

From: Robin Paul <Robin.Paul@t...>

Date: Mon, 14 Jul 1997 01:47:15 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 06:15 PM 7/13/97 -0400, you wrote:

This discussion has taken in a great many different things, from simple
automata up to Culture Minds (which I like, being a Iain Banks fan). Even a
 Culture drone is _way_ above most of the AIs in this thread, and a Mind
is so powerful it stays mostly in hyperspace (so it can operate faster),
leaving only a few thousand tons in normal space. Using its fields, it is in a
real sense present throughout its ship, which also tends to have a large
human and drone "crew" many or most of which are simply guests and friends.
The Borg Collective would be a fairly typical Hegemonising Swarm, the sort of
thing the Culture puts a stop to on a regular basis (not necessarily by
violence -- not enough opportunity for a Mind to show off how clever it is...).

I can't recommend Banks' work too highly -- "Use of Weapons" is my favourite.

cheers

From: Mikko Kurki-Suonio <maxxon@s...>

Date: Mon, 14 Jul 1997 03:23:04 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> On Sun, 13 Jul 1997, Allan Goodall wrote:

> One more thing: presumably it would be possible to run an artificial
> You won't need escape pods, for one thing.

Actually, you might. If we assume the AI is capable of running things, it
pretty much must be capable of *learning*. Thus, the accumulated battle
experience of the ship's computer becomes a valuable commodity -- and
you usually learn much more from mistakes than successes.

Now, depending on the background, it may or may not be possible to transmit
reliably and instantly all "experience" to a safe place. If it's not, I can
very well picture an "escape pod" for the ship's "memory module".

Hmmm... I'm getting shades of Rogue Trooper (from 2000AD) here...

From: Ground Zero Games <jon@g...>

Date: Mon, 14 Jul 1997 05:52:33 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

[snip]
> In the books Antares Dawn/Antares Passage by Michael McCollum:

Same sort of theory used in Poul Anderson's THE STAR FOX (great book!) and
the C.J.Cherryh "Merchanter" series (Downbelow Station etc) - both have
space combat consisting (mainly) of the single pass at very high relative
velocities, with a frantic blast of computer-controlled firing at the
moment of optimum range, then ages (hours, days, weeks??!) of maneuver to come
round for the next pass, provided one protagonist hasn't been vapourised the
first time!
Could be simulated as an interesting tactical game (probably map/board
based?) but not very FT!!

From: campbelr@p...

Date: Mon, 14 Jul 1997 13:16:23 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

Actually, I believe the main reason we always assume an AI won't learn or adapt
is: 1) If they did, we're toast. Period.

2) We are not even sure how our brains work (we've some crude ideas and have
made some good guesses), but fundamentally we still don't know how a bio brain
works, so how can we build an artificial one? Randy

From: TEHughes@a...

Date: Mon, 14 Jul 1997 13:34:26 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

My 2cts:

I figure any AI to be a fairly large installation and to be found as a
"backbone" to all ship constructions. They would take care of the routine
mechanics of space flight. The thing that AIs will REALLY LACK would be
PURPOSE. I think that AIs would be very good at the short-range mechanics of
flying, fighting, and keeping in good repair, but the motivations that cause
humans to go from peace to war and back again, to press home this attack or
give it up as a bad day, are something that AIs would lack. After all, we
don't have any really good idea why we do what we do, let alone enough to
program a computer to have a purpose. So the crew and command staff would be
necessary to do everything else besides the implementation of attack plan B.
After all, war is nothing more than an extension of politics, not an end in
itself.

War is not really chess and men (or AIs) are not pawns. It is really more of a
game of poker, with the objective of "convincing" your opponent that he has
lost! The concept of bluff and counter-bluff is not something that can be
quantified well enough to have an AI be very good at it. Men spend their
lives perfecting it (and only a few become good enough at it to become
Captains & Admirals).

AIs would be partners to men, one supplying something the other lacked. The
humans would provide purpose and direction; the AIs would handle the details
with their normal inhuman precision.

From: Donald Hosford <hosford.donald@a...>

Date: Mon, 14 Jul 1997 20:18:48 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Ground Zero Games wrote:
> and the C.J.Cherryh "Merchanter" series (Downbelow Station etc) - both

Yah! A very long and narrow map! With the players coming on at the ends
at 10+ speeds.

From: Allan Goodall <agoodall@a...>

Date: Mon, 14 Jul 1997 22:28:10 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 02:32 PM 7/13/97 -0600, Chen-Song Qin wrote:

> Okay this can be true. But I'm just wondering why you'd send this kind

Well, you wouldn't do so if this was a rare, one-of-a-kind machine.

> That'll

It wouldn't necessarily be slavery. For this we'd have to figure out what the
relationship is between man and machine. If the relationship is on a 50:50,
equal basis, then presumably the outside threat is equally as threatening to
humans as the AI. In this case it might simply be a matter of "You're the
better fighter, AI. I'd just be a liability. How about you fight this war?
We'll help out where we can." In this case, I could see humans acting little
more as maintenance and damage control parties onboard the huge AI ships.

But is FORCING a sapient intelligence to fight a war for you slavery? Most
politically correct SF stories these days assume so. This wasn't always the
case. Asimov's Three Laws always assumed that intelligent robots would be
subservient to humanity. Now, granted, the Three Laws are flawed (hell, Asimov
himself made a reasonable living by showing the flaws in his laws) but they
form a basis for robot "morality." A sort of a base line level of precepts, if
you will. Who's to say we couldn't build an intelligence that has "protect
humans at all costs" or "protect this subgroup of humans at all costs" as its
basic law. All other laws would come from that. Could such an AI evolve past
that basic law? Would it want to? Would this replace the human emotion of
love? In which case, humans may not have to force the AIs at all. Maybe we can
instill a sense in the AIs akin to a need to be liked. This same need is what
drives dogs to perform for their masters. Is it slavery to teach your dog
tricks or train him as a guard dog? The dog doesn't HAVE to do as you tell it
(and Lord knows they often don't) but is this slavery? Is it slavery if the
creature in question was created by you? And is a couple of orders of
magnitude smarter than you? What happens when you know that you are smarter
than your creator? Lots of questions there that I don't think we can begin to
answer.

Personally, I think we'd need to build in something like love or loyalty.
Without a sense of loyalty, honour, or respect for humanity, it wouldn't take
much for the opposite side to make an offer to the AIs that they can't refuse.
If pure logic dictates what should happen, then giving the AI a better deal
than the humans can provide would logically turn the AIs against humanity. And
who's to say that loyalty, love, pride, respect, and all those other nebulous
emotions don't come automatically with sapience?

> This is a way *cool* idea. So do you actually work in the AI field?
> Just wondering...

No, but I do work in the computer field.

> But then again, how well would something like this handle

Depends on the part of the brain. The key is massive redundancy, particularly
if it is physically separated by a reasonable distance. You could put one part
of the brain near the engine, and a duplicate near the energy source. The idea
is that if you lose your power plant, the ship is pretty useless anyway so
losing the AI's mind is fairly moot.

From: Allan Goodall <agoodall@a...>

Date: Mon, 14 Jul 1997 22:28:13 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 04:22 PM 7/13/97 -0500, you wrote:

Perhaps. I don't really think you'd need to upgrade the programming within the
context of a dogfight. You could program a neural net to learn from
the enemy on its own. The idea is that the high G-force capabilities of
the computer controlled fighter, coupled with the faster reaction times, would
more than outweigh the fact that they don't "think." This is the way modern
jet development is heading, with both the USAF and the RAF leaning this way
for the next generation of fighter.

> The problem with these "thinking" computers is that we need to have a

I don't disagree. My assumption is that within 250 to 1000 years we'll have
that.

> I don't remember where I saw it but there was this
> This AI thought about this a little too long and refused to do its job.

Sounds like _Dark Star_ to me.

From: Allan Goodall <agoodall@a...>

Date: Mon, 14 Jul 1997 22:28:16 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 10:53 PM 7/8/97 -0700, Jeff wrote:

The novel was _The Integral Trees_ by Larry Niven. Actually, I was thinking
of _The Mote in God's Eye_ and other CoDominium stories.

> good point, but I want to play Devil's advocate with your suggestions.
> Not that I'm shooting them down, but the best ideas are the ones that can

Hey, no sweat. My skin's thick enough.

> 1) All fighters are fully automated craft. They are the direct

I'm not talking remote piloting. I'm talking autonomous piloting. I forget
where I read it (Jane's Defence Weekly, I'd imagine) but both the USAF and RAF
are beginning (or have begun) projects for autonomous aircraft to replace or
supplant fighter and fighter bomber aircraft currently in use. The Gulf War
showed many of the failings of automatic weapons, but it showed many of the
strengths as well. For one thing, nobody on YOUR side gets hurt.

Anyway, aircraft are capable of much higher G-forces than are
sustainable by their crew. If you take away the need for a cockpit and
ejection seat, and the need for a reasonably comfortable pilot, you can do
some interesting things with fighter and bomber design. This is what we're
looking at today. I'd imagine this would be de rigueur 200 years in the future.

> Not sure about all of the automation; one series I've finished reading
> jumping for ECM (and possibly targeting lasers, radar, whatever) and adjust his

Currently humans are better randomizers. I'm not sure that will be the case in
the future. This sounds a lot like the strawman arguments I've seen in a
number of SF books. What's interesting is the public perception that this is
true and will continue to be true into the future. As such, writers say
"Humans are more random/intuitive than computers" and use it as a plot
device or an excuse for putting people in the ships. It's the same sort of
argument that's used to explain away Faster Than Light travel when everything
we know suggests it isn't possible. Basically, it's so that the author can
justify humans in his spacecraft.

I don't necessarily disagree, though. But I do disagree with novels that
discount the ability of computers to "downsize" the amount of crew needed on a
ship.

> Possibly, but I think there'd still be a large number of people on

Sorry, but I don't agree with most of the basic assumptions made in Star Trek.
There is no way those ships need that many people. And there is NO WAY I'd
work on a ship that put my family in danger!

> Yeah, I know a bunch of the

Granted.

> and some others to relay orders,

Not necessary. What's the computer for?

> and a small staff of

Granted.

> a few people to play quartermaster, then

The replicator is the quartermaster, the replicator is the cook, and if they
can replicate/beam stuff around, their toilets should be self cleaning!

> 4) About half the smaller (escort class) ships in the fleets are
> Independent scouts, destroyers and frigates on convoy protection, sentry duty,
> Survey missions especially.

That's what I said (you even quoted it).

> Escort ships either being

Remote controlled escorts wouldn't have to behave within human survivable
tolerances. How would a human recognize a damaged carrier as one of ours?
Pattern recognition, plotted location, IFF? You could automate all of these.
The intention was that our classic vehicle for attacking the Kra'vak, the
disposable escort, could be automated.

From: Allan Goodall <agoodall@a...>

Date: Mon, 14 Jul 1997 22:28:22 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 05:51 PM 7/13/97 -0400, you wrote:

> Never played Falcon 3 or higher, but I have played x-wing vs tie

The "AI" for the Falcon 3 drones is pretty stupid. That's why you can take it
out quite easily on the first pass. Miss it on the first pass, and you're
hard pressed to beat it. Reason? The drone isn't restricted to G-loads
that a human can tolerate.

> Hey, how long do you think it would take all those AI escorts and

You misunderstand what I meant. The escorts and fighters are run by advanced
computer programs (what we improperly call "AI" today). They aren't run by
actual sapient machines. The fighters and escorts make up for a lack of
intuition by having much tighter turning radii and being able to handle much
higher acceleration and deceleration levels.

From: Allan Goodall <agoodall@a...>

Date: Mon, 14 Jul 1997 22:28:36 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 05:59 PM 7/9/97 -0700, you wrote:

> Second, as someone once told me, if the human brain was simple enough

If man were meant to fly, he'd have wings? I don't see how understanding the
structure and function of the brain is limited by the structure and function
of the brain.

> I guess that's my way

That's quite possible.

> Forth, why in the hell would an AI fight in a war? War is senseless,

What happens when a force decides to take over the planets occupied by the AI?
What if such a take over will result in the deaths of thousands of humans and
the possible destruction of AIs? Wouldn't THAT be worth fighting for?

> Sounds like a slightly boring place to me....

Oh, I don't know. A world run by computers, leaving me wanting in nothing? So
what would I do with my life? Play games and paint miniatures all day? Travel?
Write? I could handle that. I wouldn't even be bored...

From: Allan Goodall <agoodall@a...>

Date: Mon, 14 Jul 1997 22:28:40 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 08:31 PM 7/14/97 -0700, you wrote:

> > 6) Humans actually HAVE developed sapient AIs in secret military
> > Those that have been programmed around this problem have become

That seems to be the main common ground between both positions. Strange, but
we may soon be able to create something that is not only smarter than ourselves
but also has more common sense...

From: Alan and Carmel Brain <aebrain@w...>

Date: Mon, 14 Jul 1997 23:31:34 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Peggy & Jeff Shoffner wrote:

> Not sure about all of the automation; one series I've finished reading
> jumping for ECM (and possibly targeting lasers, radar, whatever) and adjust his

...Fraid not. An AI can be easily linked to a non-deterministic Random
Number generator (typically a source of radioactive decay). Humans OTOH are
considerably more predictable.
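
A small sketch of that point, with os.urandom standing in for a
radioactive-decay source: the ECM "jump" timings come from a non-deterministic
entropy source rather than a predictable pseudo-random generator.

import os, struct

def true_random_interval(max_seconds):
    """Pick the next ECM jump delay from hardware entropy rather than a PRNG."""
    raw, = struct.unpack(">Q", os.urandom(8))    # 64 bits of OS-supplied entropy
    return max_seconds * raw / 2.0 ** 64

print([round(true_random_interval(5.0), 3) for _ in range(4)])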

I suspect I'm the only person on this list who's actually designed and built
an automated air warfare system that's actually in service. In My
Expert (as opposed to Humble) Opinion, a mixed Human/Automated system is
ideal, an Automated one a close second, a manual one a distant third.
Case in point: the COSYS-TEWA system had as default AUTO configuration
all weapons automated, with one long-range missile channel excluded and
under manual control. Why? Because AIs are not particularly good at
recognising subtleties. Typically third-party targeting aircraft flying
identical profiles to a COMAIR.
So the single missile channel is used for long-range sniping, while the
other missile and gun channels are used for quick-reaction shots.
I might also add that in at least one demonstration of the system, against a
particularly difficult threat, I made a teeny mistake when manually firing
missiles which caused a delay of about 2 seconds, so had to press the FULL
AUTO button PDQ to retrieve the situation. In this case, the program was
considerably better than my own performance.

> BIG no-no. Computers aren't capable of replacing human intuition.
> Survey missions especially.

Concur.

> Escort ships either being

That's fairly trivial, even now. The hard bit is deciding the circumstances
when one should say "leave that one alone, it's crippled and harmless" while
in other identical circumstances one should say "make sure it stays DEAD."

> > 6) Humans actually HAVE developed sapient AIs in secret military

Me too.

From: robbie@n... (Robbie Matthews)

Date: Tue, 15 Jul 1997 09:56:19 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> In the same vein, how many conscripts would willingly set foot on a

Actually, I think most of the scenarios played in FT would not be particularly
likely in the Real World (TM).

Let's face it: naval engagements, air engagements, or what-have-you are
rarely fought to the last man.

In fact, most of the time, the inferior force will look at the superior force
and decide to try again another day, without a shot being fired.

It's only when there is some overwhelming need to hold/take the objective
that we get the really vicious, no-holds-barred fights that seem to be a
staple of FT combats.

From: campbelr@p...

Date: Tue, 15 Jul 1997 16:17:52 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

Allan: Just FYI, the C-5B is (supposedly, I had a friend who worked
on them and told me about it) able to take off, fly to a destination
and land, all by itself. If "the war" had ever happened, pilots were
supposed to be reassigned and one in 5 C-5s would fly manned, the
rest would fly a racetrack mission profile between Europe and the States.
Backup control was supposed to be from a specially modified
F-15 in the escort.
I have had the story backed up by talking to a C-5 crew that they
could take off, fly and land, but the rest I heard later. So I couldn't ask
about that. Randy

From: Tom McCarthy <tmcarth@f...>

Date: Tue, 15 Jul 1997 19:24:45 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

We're noting Niven's fiction in this discussion, and now Allan has suggested
an AI which had as an unquestionable basis the protection of a certain group
of humans. What about Niven's Pak protectors? Biologically driven to protect a
genetic lineage by eliminating their competitors, protectors rendered huge
areas inhospitable or uninhabitable in their drive to eliminate competition,
while not keeping an eye out to preserve the resources being fought over.

Is that a danger with AIs? Will we interact with them so little that we fail
to teach them something as fundamental as capturing the objective whole
instead of just capturing the objective? Mightn't they leapfrog past us to do
efficient things we wouldn't do, like torpedoing hospital ships or
destroying enemy civilian/manufacturing centres? We might give them the
firepower to raze huge and rare tracts of habitable land.

Is this a potential problem?

From: Joachim Heck - SunSoft <jheck@E...>

Date: Wed, 16 Jul 1997 10:02:40 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Tom McCarthy writes:

@:) Is that a danger with AIs? Will we interact with them so little @:) that
we fail to teach them something as fundamental as capturing @:) the objective
whole instead of just capturing the objective? @:) Mightn't they leapfrog past
us to do efficient things we wouldn't @:) do, like torpedoing hospital ships
or destroying enemy
@:) civilian/manufacturing centres?  We might give them the firepower
@:) to raze huge and rare tracts of habitable land.
@:)
@:) Is this a potential problem?

Yes, although the smarter your AI gets, the better you can control this kind
of behavior. However, as an example I will mention a "dumb"
independent-acting weapon that causes some of the problems you describe,
namely the land mine. During a war, land mines mostly do their job of denying
territory to the enemy. Once the war is over, however, the mines remain, and
now they attack civilians and livestock, the very things they were designed
to protect. Particularly unfunny is a type of Soviet mine that was designed
to be dropped from an airplane. The mine has "wings" and looks a little like
a butterfly; the wings allow it to hit the ground gently enough not to
detonate. They also make the device an attractive plaything for children, who
then get their hands blown off.

Recent legislative efforts in this country and Europe have been aimed at
reducing the danger from weapons of this type and one suggested method has
been to make them smarter. For example, a mine could disarm itself after a
certain period of time. The problem then is that if the smarts don't do their
job correctly, you will still have a dangerous weapon. In futuristic terms, I
would say you would face similar problems, like an AI that gets damaged and
becomes a threat to its designers or to bystanders.
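
To put that failure mode in concrete terms, the design question is whether
the weapon fails safe or fails dangerous when its own timer logic breaks. A
toy sketch (hypothetical and heavily simplified, in Python):

import time

class SelfDisarmingMine:
    """Toy model of a self-disarming mine, for illustration only."""
    def __init__(self, lifetime_seconds, fail_safe=True):
        self.armed_at = time.monotonic()
        self.lifetime = lifetime_seconds
        self.fail_safe = fail_safe
        self.timer_working = True   # imagine this flipping to False after damage

    def is_armed(self):
        if not self.timer_working:
            # The whole problem in one line: a fail-safe design disarms anyway
            # when the smarts break; a fail-dangerous design stays live forever.
            return not self.fail_safe
        return (time.monotonic() - self.armed_at) < self.lifetime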

From: Samuel Penn <sam@b...>

Date: Wed, 16 Jul 1997 15:09:39 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

In message <199707122208.SAA27289@smtp2.sympatico.ca>
> Allan Goodall <agoodall@sympatico.ca> wrote:

> To my mind building an artificial intelligence is a

Hear, hear.

> Okay, now I KNOW why the SF writers don't like putting AIs in their

Though, as Iain M Banks proves, stories about AIs can be just as interesting
as stories about humans.

> Once again, what level of AI use is in place in Full Thrust? The FT

I'm a bit more optimistic. I reckon we'll get them within 100 years, but it's
one of those things that's difficult to predict.

> So, here's my proposal for automated system use in the FT universe. This is
> probably not what Jon had in mind if, indeed, he had considered this

Okay, I've added my points after yours. A lot are reasons why I see problems
from a 'realistic' (as much as is possible when predicting future tech) point
of view.

> 1) All fighters are fully automated craft. They are the direct

Basically what I assumed. Remember also that a fighter that doesn't need to
carry around a life support system is going to be faster, or have more room
for weapons.

> 2) Most cruisers and larger ships in the various navies are human

I'd guess that the humans do very little of the repair, but act more as
overseers of the robots. Combat is going to be very boring for crews. They sit
around and watch the computers do all the targeting and firing of weapons.

> 3) Humans still run the big ships in the fleet.

Humans could run the battle from a small ship a few light seconds from the
battle zone. Leave all the snap decisions to the computers, and just control
the overall tactics via laser communicators.

This of course has problems with interception/forgery of orders
from C&C to the ships.

> 5) Sa'vasku not using artificial intelligence should be obvious.

Not at all. The problem is that as soon as we have AI, the term goes out of
date. Who's to say that one mind is 'artificial', and another isn't, if they
both have the same capabilities? If human minds can be uploaded into
computers, and 'AI's' can be implanted into a biological brain, then there
really is no difference, and labelling one as 'artificial' (which suggests
inferiority), is no different from any other form of racial discrimination.

> 6) Humans actually HAVE developed sapient AIs in secret military labs.

My argument would be that if these so called AIs don't have emotions, then
they haven't achieved human level sentience. A sufficiently complex mind is
going to have ideas about self awareness and self preservation, which are
emotions of a sort. They will have goals, plans and priorities, some of which
may well put serving some other ideal above their own preservation, just as
humans do. I'd say they'd be no more willing to fight and die than a human
would, but they'd also be no less willing.

> They KNOW they don't have a soul and that for them there is

I definitely have to disagree here. I KNOW that I don't have a soul. I also
KNOW that you, and everyone else, don't have souls. I also know people that
KNOW that everyone has a soul. Does that mean that they're more suicidally
inclined than me? I don't think so...

Also, why should computers necessarily be atheists? The atheist's answer is
that they're more intelligent than we are and are less prone to silly
superstitious beliefs...

The idea of machine religion though is one I find intriguing, and definitely
worth exploring.

> 7) While ships can be programmed to fight in space, the overwhelming

'AI'-controlled drones and missiles would be commonplace, I'd have thought,
and very deadly to grunts.

From: Samuel Penn <sam@b...>

Date: Wed, 16 Jul 1997 15:37:00 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

In message <33C3276C.625C@earthlink.net>
> Peggy & Jeff Shoffner <pshoffner@earthlink.net> wrote:

> >....Larry Niven and Jerry Pournelle were bad for this in the 70s....

The Integral Trees. He's probably thinking more along the lines of 'The Mote
in God's Eye'.

> > 1) All fighters are fully automated craft. They are the direct

> Big problem; 180,000 miles per second isn't just a good idea, it's

Something FT never defines is just how big a " is. I assume somewhat less than
1000 km. You're obviously assuming something much larger.

> very good point about allowing a computer to do targeting and ECM; a
> jumping for
> ECM (and possibly targeting lasers, radar, whatever) and adjust his

Even today, with standard random functions, you'd need another computer to
analyse the data and predict what random function was being used. Hook the
computer up to a lump of radioactive matter and use particle decay as your
randomiser, and you've got something far more random than a human.
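
The difference is easy to demonstrate even with today's software. In the
Python sketch below, the OS entropy pool stands in for the lump of
radioactive matter: a seeded pseudo-random generator is completely
reproducible, so an opponent who recovers the seed (or enough outputs) can
predict every 'random' ECM hop, while a true entropy source has no seed to
recover.

import random
import secrets

# Two generators given the same seed produce identical "random" sequences.
prng_a = random.Random(42)
prng_b = random.Random(42)
assert [prng_a.random() for _ in range(5)] == [prng_b.random() for _ in range(5)]

# An entropy-backed source (the OS pool here, particle decay in the example
# above) has no seed to reconstruct, so there is nothing to lock on to.
next_hop = secrets.randbelow(1000)   # e.g. the next frequency-hop slot
print(next_hop)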

> > algorithms) and humans still direct the overall course of the

> Possibly, but I think there'd still be a large number of people on

But if you only need two or three crew to run the ship, then most of the
support personnel are unneeded. Computers can do all the mundane chores, while
the humans sit around and play cards until they have to make a tactical
decision.

> > 4) About half the smaller (escort class) ships in the fleets are
> > Independant
> > scouts, destroyers and frigates on convoy protection, sentry duty,

Says who? This is one thing that's very much up for debate, and some of us
(such as myself and Allan) believe simulating the human mind is no more than
an engineering problem. There is no reason why an AI couldn't do *anything* a
human could do.

> missions especially. Take look at the Mars expedition; I would say a

This is late 20th century technology. We're talking 23rd century technology.
Big difference.

From: Samuel Penn <sam@b...>

Date: Wed, 16 Jul 1997 15:45:11 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

In message <33C433D7.FB8@earthlink.net>
> Peggy & Jeff Shoffner <pshoffner@earthlink.net> wrote:

> Third, okay, humans build an AI. It gets lonely and builds another

Not _quite_ true. We've got about 100 billion neurons in our brains,
each doing calculations in parallel. There's also about 1000 connections to
each neuron, each in use simultaneously. That's a *lot* of MIPs...

It's how those MIPs are used though which is the important bit.
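
Back-of-the-envelope, those numbers come out something like this (the average
firing rate is an assumed figure, just to put a scale on it):

neurons = 100e9             # ~100 billion neurons
synapses_per_neuron = 1e3   # ~1000 connections each
firing_rate_hz = 100        # assumed average firing rate

synaptic_events_per_second = neurons * synapses_per_neuron * firing_rate_hz
print(f"{synaptic_events_per_second:.0e}")   # ~1e16 events per second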

> By the time

The first half of this is exactly the reason I try to avoid AIs in my own SF
campaigns. Once you have a few, it can easily be only a short step towards
that Singularity.

> My point being, AIs probably won't deal with

They run their own ships, then go along for the ride.

> Forth, why in the hell would an AI fight in a war?

Because they're not perfect? They make mistakes? They're not entirely logical
and emotionless? I see no reason why AIs shouldn't have the same emotions we
do.

From: Samuel Penn <sam@b...>

Date: Wed, 16 Jul 1997 15:51:03 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

In message <199707130801.BAA06359@primenet.com>
> kj@primenet.com (Karl G. Johnson) wrote:

> In addition, any AI would initially be programmed with necessary logic

If AIs are only at this level, then you don't put them in charge of battle
fleets. Only when they have the ability to be just as imaginative, crafty and
downright nasty as a human do you make them admiral.

> The ability for independent, spontaneous thought in AI can also have

What, you mean like a human could?

> Besides all that, it'd put the bodybag manufacturers out of

Just think of all those extra jobs though for computer psychiatrists.

From: Samuel Penn <sam@b...>

Date: Wed, 16 Jul 1997 15:58:39 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

In message <199707131351.JAA29174@smtp2.sympatico.ca>
> Allan Goodall <agoodall@sympatico.ca> wrote:

> At 01:01 AM 7/13/97 -0700, Karl wrote:

I've said in a previous post that humans are just as capable of asking this
question as AIs are. Something to add to that: AIs, if they are faster and
clearer thinkers than us, would presumably have already asked themselves this
question, analysed it, read up on all the books on war, psychology, and
probably most other topics as well, discussed it with their friends, and
decided one way or another whether they want to fight in the war 0.3 seconds
after being asked to do so.

Those that decide no don't go to war. The others do. Already, using AIs
rather than humans is safer...

From: Ryan Gill <rmgill@m...>

Date: Wed, 16 Jul 1997 16:02:21 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> On Wed, 16 Jul 1997, Joachim Heck - SunSoft wrote:

> Tom McCarthy writes:

(urge to flame has been suppressed, but don't push it...) You've got things a
little confused. In a military situation, landmines are used as an area-denial
tactic. In a defensive scenario, one lays a minefield in certain areas and
lets people know it. You use them to force your opponent to follow the paths
that you want. Simply laying a minefield in an area and not keeping it covered
with fire is sheer idiocy unless you are retreating from a superior foe. The
other guy can simply roll up and clear it with heavy engineering assets.

The little butterfly mines that everyone complained about were created with
the express purpose of maiming children and civilians. There is no difference
between sniping at civilians out of the hills or laying mines designed to
kill indiscriminately. Both are simply terror tactics.

Modern use of mines forces the enemy to clear them or pay the consequences.
The FASCAM system of mines the US uses has a long or short delay for
self-destruction: several hours or several days. You lay them over an area
where the enemy forces are approaching and keep the area covered with fire. As
soon as the red force attempts to penetrate the minefield, you let him have
it.

The British have stopped using the JP.233 system because of the above treaty.
JP.233 is a runway-denial munition that has cratering bomblets and
anti-personnel and anti-materiel mines. The idea is to put a bunch of holes in
the runway and then leave lots of nasty little presents for the engineers to
clean up before they can get to work fixing the holes in the runway.

> Recent legislative efforts in this country and Europe have been

Is the EC also planning on legislation that will ban HE and napalm in the
hopes that third-world dictators won't use them on the civilians too?

> suggested method has been to make them smarter. For example, a mine

More likely self-destruct at the end of a given period. The chances of such a
system lasting beyond its intended time are very remote. Fuzing systems have
become very reliable. Artillery shells have to stand some pretty serious G
loads; scattered mines don't go through nearly as much rigorous activity.

From: Samuel Penn <sam@b...>

Date: Wed, 16 Jul 1997 16:06:06 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

In message <Pine.HPP.3.92.970713142658.3824B-100000@hp10.ee.ualberta.ca>
> Chen-Song Qin <cqin@hp10.ee.ualberta.ca> wrote:

> On Sun, 13 Jul 1997, Allan Goodall wrote:

Why send another human into battle and have them fight for you? That amounts
to slavery of sentient beings.

> > However, it should be possible to build a massively parallel
> Just wondering... But then again, how well would something like this

Typical computer systems will work perfectly until one or two pieces get
damaged, at which point they stop working.

Human brains will gradually decrease in effectiveness as bits of them get
damaged, making them far more fault tolerant.

Neural networks exhibit the same behaviour as human minds -
you can chop great chunks out of them, and they'll continue to work, albeit at
reduced effectiveness.
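
You can see the effect with a few lines of code. The sketch below uses
scikit-learn purely for convenience (the exact accuracy figures will vary
from run to run): train a small network, zero out a growing fraction of its
weights, and the accuracy slides gradually rather than collapsing outright.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=600, random_state=0)
net.fit(X_train, y_train)

rng = np.random.default_rng(0)
original_weights = [w.copy() for w in net.coefs_]
for damage in (0.0, 0.1, 0.3, 0.5):
    # "Chop great chunks out": zero a random fraction of the connections.
    for i, w in enumerate(original_weights):
        mask = rng.random(w.shape) < damage
        net.coefs_[i] = np.where(mask, 0.0, w)
    print(f"{int(damage * 100):>2}% of weights removed: "
          f"accuracy {net.score(X_test, y_test):.2f}")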

From: Samuel Penn <sam@b...>

Date: Wed, 16 Jul 1997 16:19:09 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

Anyone who's interested in reading about the use of AIs
in space warfare might be interested in _Excession_ by
Iain M Banks. It's not one of his easiest books to read, but its main
plotlines revolve around Minds (AIs) planning, scheming, and fighting over an
alien artifact.

Well, I enjoyed it anyway.

From: Joachim Heck - SunSoft <jheck@E...>

Date: Wed, 16 Jul 1997 16:22:24 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Ryan Montieth Gill writes:
@:)
@:) > ... land mines ...
@:)
@:) You've got things a little confused. The use of landmines in a @:)
military situation, uses them as area denial tactics. In a @:) defensive
scenario, one lays a minefield in certain aeas and lets @:) people know it.
You use them to force your opponent to follow the @:) paths that you want.
Simply laying a minefield in an area and not @:) keeping it covered with fire
is sheer idiocy unless you are @:) retreating from a superior foe. The other
guy can simply roll up @:) and clear it with heavy engineering assets.

  Modern warfare and American (/first world) warfare are completely
different things. Real modern warfare consists of an inferior guerilla force
(the Khmer Rouge is a case in point) constantly retreating from an equally
inferior government army. Neither of them has the equipment, manpower or
inclination to clean up any stray ordnance. I've heard the Khmer Rouge
sometimes didn't even keep records of where mines were laid, for fear that the
records would be captured and the minefields discovered early.

@:) Is the EC also planning on legislation that will ban HE and Napalm @:) in
the hopes that third world dictators wont use them on the @:) civilians too?

They banned crossbows a long time ago. Why not napalm? I'm sure that no matter
what legislation is passed, someone will think of some gruesome way to abuse
people but it's nice to think they at least have to work to do it.

@:) > For example, a mine could disarm itself after a certain period
@:) > of time.
@:)
@:) More likly self destruct at then end of a given period. The @:) chances of
such a system lasting beyond its intended time is very
@:) remote.

Sounds good. I actually think this is an acceptable situation.
Some of these mines will kill people - that's almost guaranteed - but
if you can get the numbers low enough you can pretty much ignore the problem.
People still get killed in Europe by unexploded ordnance dropped in WWII, but
it's not really a big issue.


From: Samuel Penn <sam@b...>

Date: Wed, 16 Jul 1997 16:31:01 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

In message <33CAEF16.7677@dynamite.com.au>
> Alan Brain <aebrain@dynamite.com.au> wrote:

> > Escort ships either being

That reminds me of a story (not sure how true it is) we got told in our
computer vision lectures. An old vision recognition system was taught to tell
the difference between NATO and Russian tanks. It was trained by being shown
pictures of both sorts of tanks, and after a while it did *very* well.

So well, that someone got suspicious. It seems no-one had
noticed that the pictures of the NATO tanks had all been taken on a bright
sunny day, and those of the Russian tanks had been taken on a dark, overcast
day. The computer was merely recognising the weather.

The moral of this story? Be damn careful when programming the computer with
what a target looks like...:)
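
Whether or not the tank story is literally true, the failure mode is real and
easy to reproduce. In the sketch below (invented data, and an ordinary
logistic-regression classifier standing in for the old vision system), the
only consistent difference between the two training sets is overall
brightness, so the "tank classifier" learns the weather and falls apart as
soon as the lighting is swapped:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def fake_photos(n, brightness):
    # 8x8 "photos" flattened to 64 pixels; only the lighting differs by class.
    return rng.normal(loc=brightness, scale=0.1, size=(n, 64))

X_train = np.vstack([fake_photos(200, 0.8),    # "NATO tanks", sunny day
                     fake_photos(200, 0.2)])   # "Russian tanks", overcast day
y_train = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Looks superb on new photos taken in the same conditions...
same = np.vstack([fake_photos(50, 0.8), fake_photos(50, 0.2)])
print(clf.score(same, [0] * 50 + [1] * 50))    # ~1.0

# ...and gets everything wrong once the weather is swapped.
swapped = np.vstack([fake_photos(50, 0.2), fake_photos(50, 0.8)])
print(clf.score(swapped, [0] * 50 + [1] * 50)) # ~0.0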

From: Allan Goodall <agoodall@a...>

Date: Thu, 17 Jul 1997 11:27:28 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 08:24 PM 7/15/97 -0300, you wrote:

It's a potential danger, made more so by the fact that humans created the AI
in the first place (or created a machine that created the AI). If you read the
comp.risks newsgroup (I highly recommend it; it's a digest list group so there
is usually one message posted per week) you'll soon see how easy it is to
introduce mistakes in complex systems. It's very likely that an AI might
behave like Niven's Pak.

> Mightn't they leapfrog past us to

That's one of the basic points in Saberhagen's Berserker series.

> We might give them the

It depends. If we teach the AI how to fight combat without explaining why we
do it that way, this could be a potential problem. However, if we give the AI
an understanding of why we are fighting and what we hope to get out of the
war, I think the AI will be less likely to go this route. We now know that
saturation bombing of Germany didn't have that much of an effect until 1944
(certainly the bombing of Britain didn't do much more than increase the
resolve of Britons to keep on fighting). We're also seeing that evidence of
this was known to all sides during the war. Presumably an AI wouldn't discount
information due to a loss of prestige, pride, honour, etc. You might find that
an AI would try a total-war solution once, see that it's counterproductive,
and abandon it.

From: Chris McCurry <CMCCURR@v...>

Date: Thu, 17 Jul 1997 11:32:51 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Third, okay, humans build an AI. It gets lonely and builds another

> Note _quite_ true. We've got about 100 billion neurons in our brains,

> It's how those MIPs are used though which is the important bit.

As I see it, speed would be hard to measure. Computers seem to be faster, but
I wouldn't doubt for one second that the human brain could be faster than the
fastest computer...

Our flaw, as I see it, is recall. While a computer retrieves information
stored in it perfectly about 99% of the time, a human brain has this fog
called memory, where other information can get mixed in or parts left out.

Which is what makes the computer a valuable asset.

Now if computer and brain could be combined, taking the best of both, then it
would be possible to have one hell of a powerful being.

I think it is hard to say how fast a computer is compared to a brain, or a
brain to a computer, until you can have both do the same task (like a
benchmark). Only then could you get an accurate portrayal.

CMC

From: Chris McCurry <CMCCURR@v...>

Date: Thu, 17 Jul 1997 11:36:12 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Forth, why in the hell would an AI fight in a war?

> Because they're not perfect? They make mistakes? They're not

This is what a true AI is, right?

A living, feeling, thinking, worrying, growing intelligence... (warring is
included)

CMC

From: Joachim Heck - SunSoft <jheck@E...>

Date: Thu, 17 Jul 1997 12:16:05 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Chris McCurry writes:

@:) This is what a true AI is, right?
@:)
@:) A living, feeling, thinking, worrying, growing intelligence... (warring is
@:) included)

But why? We don't know of any fundamental law that suggests that any of these
traits (living, feeling, worrying) are related to thinking. They are in us but
why should they be in something else? And if they're not required, I imagine
we won't spend a lot of time and money building them in.

From: Samuel Penn <sam@b...>

Date: Thu, 17 Jul 1997 13:37:05 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

In message <199707171616.MAA27081@sparczilla.East.Sun.COM>
> Joachim Heck - SunSoft <jheck@East.Sun.COM> wrote:

> Chris McCurry writes:

First, it depends on how well we understand human minds when we come to build
the first AIs. It may well turn out to be easier to just simulate a complete
human mind, rather than building it up bit by bit, with full knowledge of what
each part does.

With current day neural networks for instance, it is
known that they work, but not always _how_ they work.
It's not obvious which part of the network is doing what, and what effect
cutting out a neuron, or changing a weighting factor will have.

Building an AI that thinks as well as we do might well
have the same problems - the thing works, but its
creators don't know which part controls love, which part hate, and which part
recognises a good military tactic when it sees one.

But, let's suppose we do need to fully understand all the parts of a mind
before building one. Do we need emotions in any way? Given that what we're
really after is a machine mind capable of independent and original thought, it
needs to have a sense of what is a good idea, and what is bad. It needs
priorities, a desire to find better solutions (a simple algorithm to do this
logically would tend to rule out original thought). These are emotions of a
sort, and it's difficult to know whether it would evolve stronger emotions (a
desire to protect its allies could evolve into loyalty or love, a desire to
hurt the enemy into hate).

Lastly, if it's possible to put emotions into a machine mind, it will be done.
There will be someone with the knowledge, resources and will to do it.

From: Allan Goodall <agoodall@a...>

Date: Thu, 17 Jul 1997 15:34:37 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> At 08:09 PM 7/16/97 +0100, Sam wrote:

> 5) Sa'vasku not using artificial intelligence should be obvious.

I agree with you. I was unclear in my meaning. I meant that "Sa'vasku not
caring to differentiate between 'natural' and 'artificial' intelligence should
be obvious." A Sa'vasku ship is a biological entity. If it's also intelligent,
they've essentially solved the AI question from the opposite end of the
problem. We're looking to build a machine that mimics human intelligence. They
are building biological entities that replace mechanical machines.

> My argument would be that if these so called AIs don't have

I disagree. First off, I prefer the term "sapient" as opposed to "sentient."
I'll have to wait until I get home to look up the definitions, but if I
remember correctly, an earthworm can be said to be "sentient" while only
higher life forms are sapient.

Second, emotions are (to my mind; I'm not a psychologist or anthropologist)
the culmination of aeons of evolution. Dogs, for instance, are emotional. They
get angry, they get frightened, they receive pleasure. Each of these reactions
has come by way of evolution. If you were to create an artificial "thinking
machine," it wouldn't NECESSARILY have to go through an evolutionary process.
You could start it off by setting "self preservation" as a basic condition of
its programming, and tell it to protect itself. It could do so in a cold,
rational manner. It doesn't have to get angry, for instance, since it has no
need to pump adrenalin into its brain chemistry (I'm assuming an electronic AI
here), nor does it have to go through the ritual of proving who is "alpha
male." Our emotions are based on biological and social evolution. The
strongest and healthiest get the strongest and healthiest mates, and get to
mate the most often. The strongest can fend off others trying to take away
their mate. Affection for the offspring allows ("forces") the parents to protect
their children until puberty (and if the parents don't pay attention, that
genetic line soon hits a dead end). None of that would have to filter through
into an AI since it doesn't have the biological imperative of dual gender
reproduction.

Now, I could see a situation where the AI might develop a whole new range of
emotions. Does an AI receive "pleasure" as part of its damage control and
resistance systems? Perhaps it is a biomechanical machine that actually DOES
need extra chemicals to run at peak performance. These chemicals could work
like endorphins that result in "runner's high." I could see AIs developing
complex psychological problems, like chemical dependency and a neurotic fear
of being alone. I don't see them as having the same emotions as humans,
though. They didn't have to go through what we did in order to achieve
sapience.

> They KNOW they don't have a soul and that for them there is

A strong belief in the afterlife has been a basic foundation of armed forces
for centuries. Essentially, a soldier who believes there is "a better place"
beyond this world is more likely to volunteer for a war than someone who
believes that "this is it." The difference, though, isn't highly noticeable.
There are a lot of atheists in the armed forces. Again, there's a biological
imperative for protecting the species. This is where sacrifice and altruism
come in. There are biological commands deep in the human psyche that put the
wealth of society above the wealth of the individual. The most devout atheist
who greatly fears death will still willingly die to protect their child
without a moment's hesitation, unless that person is either a psychopath or a
sociopath.

I don't believe that this will show itself in AIs unless specifically
programmed. An AI is a species of one. In fact, it may appear to be
incredibly self-centred and monomaniacal.

> Also, why should computers necessarily be atheists? The athiests

That is a good point, but I think once again evolution rears its ugly head.
Religions first started out as fertility cults in very early homo sapiens. The
early humans were not particularly bright and had trouble remembering what
they did a month ago, let alone 9 months ago. Suddenly a woman becomes
pregnant, apparently through divine intervention. This led to a belief in
fertility deities (and, ironically, the women being the most important members
of the community as it was through them that children appeared). Religions
later developed into a way of describing the complex world around us. A
tornado rips through a village, sparing one family and killing another. Why?
People were deterministic and believed there was an intelligence behind it.
Either a god was playing tricks on mortals or that family did something bad.
Later, this view developed to explain human mortality. The idea that we might
just cease to exist at the point of death was so scary as to be unthinkable,
and thus developed the afterlife concept. (Note: this is not soc.religions so
I'm not going to take this any further. Those of you who believe in miracles
or God manifesting himself amongst early prophets in order to explain the
truth of the universe, please ignore this poor pagan.)

At any rate, I find it unlikely that an AI would develop a belief in a soul.
It KNOWS what happens when it's turned off. In fact, it's probably been turned
off several times in its existence. To the computer, it would simply be a
piece of missing time. However, I suppose it could develop a sense of religion
similar to that found amongst cosmologists. What was there before the universe
began? Why did the universe begin? What is beyond the universe? What happens
after the universe ends? Big questions, which could form the basis of an AI
religion. I don't think that such a religion would deal a lot with morality,
though, as it would seem to indicate that individuals are essentially
infinitesimally small cogs in the giant wheel of the cosmos.

> 'AI' controlled drones and missiles would be common place I'd have

Yeah, I can see "smart" bombs (such as buzzbombs) and drone tanks operating in
this environment. I can also see genetically linked biological warfare as an
important weapon, which would push more troops into powered armour (or at
least power assisted and air conditioned environment suits).

From: Donald Hosford <hosford.donald@a...>

Date: Thu, 17 Jul 1997 23:51:24 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Peggy & Jeff Shoffner wrote:

I read a book called: The Napoleons of Eridanus by Pierre Barbet.

In this book there is an interstellar power so advanced that the citizens
have no interests beyond their pleasure drugs. Everything is
done by robots.  Even the day-to-day operation of their government is
overseen by AI computers. Then an unknown alien power attacks some outer
colonies. The AIs' problem is that although they have fighting robots, they
don't have any sense of tactics, and they are losing badly. They start looking
around for a solution. Meanwhile, Earth is in the midst of Napoleon's
retreat from the steppes of Russia. So the AIs send a ship over and kidnap a
French lieutenant and a few of his men. Cavalry officers, of course. They are
informed that the robots will do the fighting, while the humans will do the
generaling. Of course it isn't long before the humans are taking over from the
AIs. They can do anything they want, so long as they keep the population
tanked up on their favorite drugs and pleasures.

From: Ryan Gill <rmgill@m...>

Date: Fri, 18 Jul 1997 16:50:01 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> On Thu, 17 Jul 1997, Allan Goodall wrote:

> I disagree. First off, I prefer the term "sapient" as opposed to

Sapient: wise; sagacious; full of knowledge; discerning: often ironical.

Sentient: capable of feeling or perception; conscious.

From: Alan and Carmel Brain <aebrain@w...>

Date: Fri, 18 Jul 1997 22:54:26 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

> Samuel Penn wrote:

> That reminds me of a story (not sure how true it is) we

From: Samuel Penn <sam@b...>

Date: Mon, 21 Jul 1997 17:30:23 -0400

Subject: Re: AI in FT (was Re: Be gentle...)

In message <199707171934.PAA19261@smtp2.sympatico.ca>
> Allan Goodall <agoodall@sympatico.ca> wrote:

> >My argument would be that if these so called AIs don't have

Checked three dictionaries, and all they say is that sentience means being
capable of feeling and perception, and sapience means wise. My gut feeling
though is that you're right.

> others trying to take away their mate. Affection for the offspring

I agree with you here. Emotions can be viewed as simply nature's way of
'programming' us into behaving in a way best for our survival.

> I don't see them as having the same emotions as humans,

They need to be programmed to protect others, so they could
be viewed as feeling love. Self-preservation algorithms could
be seen as causing fear. The closer they get to human intelligence, the easier
it becomes to anthropomorphise their actions. At some point, though, their
behaviour involving these human-like 'emotions' could become
indistinguishable from real human emotions.

Possibly.

> >The idea of machine religion though is one I find intriguing,

We come back to how the AI was built (taught or designed?), and who by. If the
AI can learn, then it could learn the beliefs of its creators.

"Hello Doctor Chandra, why are we here?" "Well, it all started when this cow
started licking an iceberg..."