[FT] Unpredictable AI

46 posts · Jun 20 2001 to Jun 24 2001

From: KH.Ranitzsch@t... (K.H.Ranitzsch)

Date: Wed, 20 Jun 2001 12:44:20 +0200 (MEST)

Subject: Re: [FT] Unpredictable AI

David Griffin schrieb:
> > It's at this point I hear two comments: 1) humans

> > In answer to 1: humans aren't THAT unpredictable.

I consider the frequently stated argument that combat AIs would be too
predictable to be a red herring. Even today, it is a simple matter to write a
computer program with a random number generator to make the program as
unpredictable as you like. It is well known in game theory that for some games
(e.g. paper, scissors, stone) a purely random strategy is optimal. Any decent
combat AI would be able to analyze the chances of the possible tactics, weigh
the options accordingly, and pick one at random based on the odds calculation.
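
In code, such an odds-weighted picker is only a few lines (a minimal Python
sketch; the tactic names and odds numbers are invented for illustration):

    import random

    def pick_tactic(odds):
        # Pick a tactic at random, weighted by its estimated chance of
        # success: the best option is most likely but never certain, so
        # the choice stays unpredictable to an observer.
        names = list(odds)
        return random.choices(names, weights=[odds[n] for n in names])[0]

    # A uniform mix is the game-theoretic optimum for paper/scissors/stone:
    print(pick_tactic({"paper": 1, "scissors": 1, "stone": 1}))

    # Hypothetical combat tactics with made-up odds estimates:
    print(pick_tactic({"head-on": 0.2, "flank": 0.5, "feint": 0.3}))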

Greetings

From: David Griffin <carbon_dragon@y...>

Date: Wed, 20 Jun 2001 04:06:56 -0700 (PDT)

Subject: Re: [FT] Unpredictable AI

> --- KH.Ranitzsch@t-online.de wrote:
...
> I consider the frequently stated argument that
Well, since we have computers that can fly planes and also random number
generators, how come we still have fighter pilots?

From: Tony Francis <tony.francis@k...>

Date: Wed, 20 Jun 2001 12:17:49 +0100

Subject: Re: [FT] Unpredictable AI

> KH.Ranitzsch@t-online.de wrote:

Humans are certainly more predictable than we'd perhaps like to think we are.
OTOH, machines are less so.

As some may know, my 'day job' - the one that actually pays some money,
unlike my Brigade activities :-) - is writing games software. I've
written the AI code for both fighters and cap ships in two PC games so far.
OK, I'll accept that this isn't 'real' AI but it's probably the closest anyone
on this list is going to get to writing starfighter AI for at least a couple
of hundred years!

One thing my AIs DON'T do is always pick the best option in a given situation.
They evaluate several options, weigh them up and then pick one at random
(fuzzy logic). These options will be weighted so that the best option is more
likely to be picked, but the potential is there to not do the obvious. Any AI
of the future will probably do the same, but more so.

The danger in AIs is what they do in an unknown or unpredicted situation. They
could cope, or alternatively they might go haywire (I
know, I've been there - 'why the heck is that ship flying in circles?').
Asimov's 'I, Robot' has some great examples of how rigidly applied rules
of logic don't always work.

From: Jeremey Claridge <jeremy.claridge@k...>

Date: Wed, 20 Jun 2001 12:33:37 +0100 ()

Subject: Re: [FT] Unpredictable AI

> Well, since we have computers that can fly planes

Computer controlled planes would be prone to crashing into each other,
colliding with civilian planes and even taking out friendly targets.

Nothing like the situation we have with our human pilots:)

And besides, the human race is not yet ready to have its armed forces taken
out by a computer. The day one does, we will all be locked up in camps waiting
to be exterminated by our robot oppressors. I've seen the films, I know it can
happen :)

Ok that's enough I'm off to carry on building that miniatures website. Did I
ask what everyone thought made a good miniatures website;)

Ok ok I'm gone..............

From: Roger Books <books@m...>

Date: Wed, 20 Jun 2001 08:53:00 -0400 (EDT)

Subject: Re: [FT] Unpredictable AI

Well, in "my" :) universe AIs are truly unpredictable. First you make them (a
large investment), then you spend years training them. After 4 or 5 years their
intelligence exceeds that of humans for about a year. Then the things start
getting erratic and their information becomes disconnected from reality as they
invent their own little universe.

Needless to say they are expensive: you not only need to make the AIs, you need
constant attention from highly trained individuals, and the time to
internalization into their own private universe is highly variable. You don't
want something that could flake out at any moment piloting a fighter,
especially when it has come to grips with the concept of death.

PSB, can't remember what book I stole it from.

From: damosan@c...

Date: Wed, 20 Jun 2001 09:00:03 -0400

Subject: RE: [FT] Unpredictable AI

There are areas of AI research that look into codifying the "fudge factor"
for when a system gets into a place where its available decisions are equally
poor... but the response to one of the poor moves has a possibility of
returning good results in the future. The human at the wheel would see this as
a "mistake", but the system knows that the possible responses to the poor
move will likely lead to a very good (or at least a less-poor) set of
states to choose from.
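
A rough sketch of that idea in Python (`moves`, `apply_move` and `evaluate`
are hypothetical callbacks, not any particular system's API):

    def best_move(state, moves, apply_move, evaluate, depth=2):
        # Choose the move whose position is best after `depth` plies of
        # lookahead, not the best immediate move. A move that looks like
        # a "mistake" now can still win if it opens up a less-poor set
        # of future states. (One-sided lookahead; a real system would
        # also model the opponent's replies.)
        def value(s, d):
            if d == 0 or not moves(s):   # moves() returns a list
                return evaluate(s)
            return max(value(apply_move(s, m), d - 1) for m in moves(s))
        return max(moves(state),
                   key=lambda m: value(apply_move(state, m), depth - 1))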

From: Allan Goodall <agoodall@a...>

Date: 20 Jun 2001 08:42:14 -0700

Subject: Re: [FT] Unpredictable AI

> On Wed, 20 June 2001, David Griffin wrote:

> Well, since we have computers that can fly planes

If the USAF and the RAF have their way, we may not have them by 2020.

That aside, the biggest problem with AIs in aircraft doesn't have to do with
combat. The biggest problem is designing an aircraft AI for
non-combat roles. Those require reasoning. Combat requires a different
kind of thinking. It's easier to design an aircraft AI to put the vehicle in a
superior position during a dog fight than it is to spot a Cessna and then
decide if it's a harmless civilian aircraft or a potentially criminal drug
trafficking aircraft.

That's another issue, too: visual sensors. It's very hard to match the Mark I
Eyeball as an image-capturing device. Assuming nothing is wrong
with a pilot, physically, he/she can easily read the registration number
on a civilian craft. This is a complex, and still error-prone, process for a
machine. Tying in an AI to a radar and IFF suite is one thing. Allowing them
to "look out the window" is something else entirely, and that is something you
need in a pilot, particularly in peacetime.

From: Bif Smith <bif@b...>

Date: Wed, 20 Jun 2001 18:47:08 +0100

Subject: Re: [FT] Unpredictable AI

[quoted original message omitted]

From: David Griffin <carbon_dragon@y...>

Date: Wed, 20 Jun 2001 11:38:55 -0700 (PDT)

Subject: Re: [FT] Unpredictable AI

> --- Bif Smith <bif@bifsmith.fsnet.co.uk> wrote:
...
> Hey, I do use squadron organisation (sometimes)<G>.

Me too.

> One
This is a delicate balance though. If you take a LOT of PDS, then two things
have the potential to occur: 1) you put so many points into defense that in
non-fighter battles you're substantially disadvantaged, and 2) your opponent
turns to supercarriers.

I'd almost rather have some reasonable balance in defense, where 6-squadron
carriers can actually do something but can't rip you up too badly, than force
people to take 20 squadrons of fighters just to make an impression.

I generally have 1-10 PDS a ship depending
on size (1 DD, 2 SDD, 4-6 CA, ... 10 SDN). All
are ADFC-equipped, so they form an ADFC net. It varies, and sometimes when I'm
rolling my defense badly I still get chewed up. When I'm rolling well, they
get chewed up. On average days, they get a ship or two while becoming combat
ineffective as compensation for all the points spent bringing them to the
table.

As the fleet book stands, the book fleets are
way under-protected and fighters seem pretty much
like the ultimate weapon, with carriers as the only workable defense. Maybe a
real fast fleet on a scrolling board could make it difficult for fighters.

I've gotten disillusioned with PDS boats because they're too easily destroyed,
and then you're totally vulnerable. With the PDS distributed among the
ships, all with an ADFC (yes, I know this is expensive), you always have some
percentage of your defense available. Of course, if you're going against a
particularly fighter-oriented foe, taking an EXTRA PDS boat would be OK.

From: Richard and Emily Bell <rlbell@s...>

Date: Wed, 20 Jun 2001 16:27:49 -0400

Subject: Re: [FT] Unpredictable AI

[quoted original message omitted]

From: B Lin <lin@r...>

Date: Wed, 20 Jun 2001 16:05:24 -0600

Subject: RE: [FT] Unpredictable AI

<SNIP> It is not that the AI is predictable, it is that it is mind-numbingly
stupid, and the problems faced by pilots maintaining situational awareness
while dodging fire are not solved merely by being really fast. Unlike playing
chess or diagnosing engine problems, most of a pilot's skill set
is psycho-motor skills, which can be learned but not taught. Humans
are also much harder to fool than computers <SNIP>

On the other hand, people without experience can be easily fooled. The common
examples are basketball, hockey and soccer. In these sports, novices often
make the mistake of watching the opponent's eyes, not their center of mass
(hips). In these cases, a feint by looking sideways or a slight movement with
the arms can send an opponent off in the wrong direction. More experienced
players are not fazed by these maneuvers because they have learned where the
"true" indicator lies. But in an evolving environment, your experience or
instinct may be wrong as things change. What is a true indicator today
could be used as a false indicator tomorrow.

Computer intelligences would have a tremendous advantage in that new
information or techniques could be applied quickly, if not instantly, across
all units a la the Bolos (Keith Laumer), making innovations in tactics,
equipment etc. much less valuable, maybe even one-shot affairs for
surprise or advantage. Humans take quite a while to train properly; artificial
intelligences could be plug and play.

On your point of situational awareness, the human mind has difficulty
processing more than one data stream at a time. It can be done, but it takes
practice and focus. AIs, on the other hand, are not limited in this way.
You can have separate modules that watch for lock-ons and activate the
appropriate counter-measures, and a module that watches the range to target,
monitors the weapon status and makes sure the ordnance is delivered, all
without having to distract the higher AI. Instead of thinking of a single-crew
fighter, it would be like having 10 highly co-ordinated people working
inside a single cockpit.
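
A toy Python sketch of that modular split (threads standing in for dedicated
hardware modules; the sensor events are simulated):

    import random
    import threading
    import time

    def lockon_watcher(stop):
        # Low-level module: watches for lock-ons and pops countermeasures
        # without ever involving the "higher" AI.
        while not stop.is_set():
            if random.random() < 0.05:  # stand-in for a real sensor event
                print("lock-on detected -> countermeasures away")
            time.sleep(0.01)

    def range_monitor(stop):
        # Another module: watch range to target, weapon status, release...
        while not stop.is_set():
            time.sleep(0.01)

    stop = threading.Event()
    for module in (lockon_watcher, range_monitor):
        threading.Thread(target=module, args=(stop,), daemon=True).start()
    time.sleep(0.5)  # meanwhile the "higher AI" does its own thinking
    stop.set()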

Humans are evolved to deal with human-scale events - things that happen
in the range of seconds or even tenths of a second. Making decisions in the
nano-, micro-, or millisecond range is completely out of our abilities.
Biologically we aren't capable of reacting faster than a few hundred
milliseconds (i.e. the drop test where you drop a yardstick between someone's
fingers; even the fastest reflexes allow a drop of several inches), and your
thought processes are based on a complex set of electrical and chemical
impulses; some neurotransmitters actually have to travel across a gap between
neurons. Although fast, these speeds pale in comparison to pure electrical or
photonic speeds.
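
The yardstick numbers check out; distance fallen in time t is d = g*t^2/2
(a quick Python check):

    g = 9.81  # m/s^2
    for t in (0.2, 1e-3, 1e-6):  # human ~200 ms; milli- and microsecond AI
        d = 0.5 * g * t ** 2
        print(f"{t:g} s reaction -> drop of {d * 100:.6g} cm")
    # ~19.6 cm (about 8 inches) for the human; effectively zero for silicon.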

In returning to the thought that humans are much harder to fool than
computers, I would argue that is merely a matter of experience and knowledge
base. If you show half a picture that has a trunk in it, most people would say
it was an elephant. If you show the same picture to someone who has never
seen an elephant, what would they say? They would try to relate it to
something in their experience. If a computer had a photo database of
millions of pictures, then broke the picture down into shapes and colors, it
might also come up with elephant (it would probably also say tree, hose, or
worm). The point is that, given a sufficient database and enough computing
power, the computer can come to the same result as a human. Computers are
going to get radically better in the future; humans are not.

Personally, I prefer the idea of having human pilots because it is a much more
romantic and fulfilling thought, not because it's practical.

From: Richard and Emily Bell <rlbell@s...>

Date: Wed, 20 Jun 2001 18:06:12 -0400

Subject: Re: [FT] Unpredictable AI

> Binhan Lin wrote:

> <SNIP> The common examples are basketball, hockey and soccer. In these sports,

This is not likely. AI routines will almost certainly be genetic algorithms,
so to get two identical AIs they would have to have the same set of
experiences. Given the amount of information that would have to be recorded,
that becomes prohibitive. Just cloning the successful AI has problems: the
computers that run these things are so complex that they will simply be
designed to be tolerant enough of manufacturing defects that most of them pass
quality control, and none are identical. They are much too expensive to just
throw away due to defects.

> On your point of situational awareness, the human mind has difficulty

Then why can't computers play go very well? All they have to do is weigh the
options and take the best one. The problem is that there are too many options
and the pruning algorithms have yet to catch up.
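
The scale of the problem is easy to show with rough textbook branching
factors (Python):

    # Game-tree size grows as branching_factor ** depth, which is why
    # "weigh all the options" works for toy games and fails for go.
    for game, b in (("chess", 35), ("go", 250)):
        for depth in (4, 8):
            print(f"{game}: b={b}, depth={depth} -> {b ** depth:.2e} positions")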

> In returning to the thought that humans are much harder to fool than

A large database is its own worst enemy when you have to spot things quickly.
Flying a fighter is a hard real-time problem, so the AI had better have a good
response for unidentified things. Computers are really, really bad at
recognizing things quickly, and it will take an improvement of several orders
of magnitude before they are as good as humans. For short response times, a
large group of neural networks will attack the problem, and hopefully provide
a correct response. Unfortunately, it will recognize stimuli as what they most
likely are, not what they really are (but the two will coincide more often
than not).

From: David Griffin <carbon_dragon@y...>

Date: Wed, 20 Jun 2001 15:56:46 -0700 (PDT)

Subject: Re: [FT] Unpredictable AI

The stuff that looks like it's mine below isn't. I'm not sure how the
attribution got messed up.

--- Richard and Emily Bell <rlbell@sympatico.ca>
wrote:
> KH.Ranitzsch@t-online.de wrote:

From: Allan Goodall <agoodall@a...>

Date: Wed, 20 Jun 2001 23:57:23 -0400

Subject: Re: [FT] Unpredictable AI

On Wed, 20 Jun 2001 18:06:12 -0400, Richard and Emily Bell
> <rlbell@sympatico.ca> wrote:

> Computers are really, really bad at

I think you're using conflicting arguments to dismantle the AI argument. At
one point you talk about AIs having the same "learning curve" problem as
humans because they would have to be built using "genetic algorithms". But
then you say that AIs can't recognize things as well as humans. In other
words, they will be designed so much like humans that they will have the same
liabilities, but not have the same benefits? I don't think that's likely.

Computers are incredibly good at recognizing things quickly. That is,
recognizing SPECIFIC things. Try searching through a list of 100,000 10-digit
numbers for a specific string of digits. A human will probably miss it; a
computer will find it quickly.
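
For example (Python; the numbers are generated at random just to have a
haystack):

    import random

    haystack = [random.randrange(10**9, 10**10) for _ in range(100_000)]
    needle = haystack[50_000]
    print(needle in haystack)   # linear scan: a few milliseconds
    index = set(haystack)       # build a hash index once...
    print(needle in index)      # ...then lookups are near-instant
    # A human scanning the same printout would take hours and still miss it.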

Unfortunately, computers (currently) have no sense of context. Here's a good
example. Question 1: What did you have for lunch last Tuesday? Question 2:
Have you ever wrestled an alligator? I'm guessing that you answered question 2
MUCH faster than question 1. A computer, on the other hand, will accurately
answer question 1 rather quickly, but would have to go through its database of
"experiences" to answer question 2.

Human memory is deeply flawed. The human brain has evolved, and still
operates, on a "fight or flight" mechanism. A computer will not panic if
swamped by an overwhelming number of enemies. A computer will not panic and
rout. In my original comments, I mentioned that a computer-controlled fighter
would be less massive and have faster reaction times. This will be enough that
human-run fighter ships just won't be realistic in the far future... for
combat roles.

On the other hand, I can see why you need a sapient lifeform in control of a
craft when a situation is complex and likely to result in unique problems all
the time. A _true_ artificial intelligence may make this possible, but I
could then see such a machine having a survival instinct that would make it
essentially useless in combat. It, quite simply, wouldn't want to die. I've
actually developed a background universe for this, but I haven't done anything
with it as yet. I had intended it for DS2 and FT, but I still have to map it
out...

From: Derk Groeneveld <derk@c...>

Date: Thu, 21 Jun 2001 07:23:27 +0200 (CEST)

Subject: Re: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Wed, 20 Jun 2001, Richard and Emily Bell wrote:

> A large database is its own worst enemy when you have to spot things

An interesting example is the DARPA project (I think) for armour
recognition. Having fed the system many pictures of bad-guy armour
and good-guy armour, each flagged as good or bad guy, the system
could flawlessly distinguish within this set.

Then new pictures were fed in, and the system messed up completely. It turned
out it had figured out a flawless differentiation algorithm: all the bad-guy
armour in the first set had been photographed in the late afternoon,
whereas the good-guy armour had been photographed around noon. So the
easiest differentiation mechanism was analysing the shadows. Sun
overhead -> good guy. Sun low -> bad guy.

Now, I'm sure I messed up some detail with respect to the original story, but
that's pretty much what it amounted to.
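
The failure mode is easy to reproduce with a single perceptron. In this
Python sketch the only input feature is average image brightness, standing in
for the time-of-day cue; the data is simulated:

    import random

    def train(samples, epochs=20, lr=0.1):
        # Classic perceptron update on one feature: brightness.
        w, b = 0.0, 0.0
        for _ in range(epochs):
            for brightness, label in samples:  # +1 good guy, -1 bad guy
                pred = 1 if w * brightness + b > 0 else -1
                if pred != label:
                    w += lr * label * brightness
                    b += lr * label
        return w, b

    noon = [(random.uniform(0.7, 1.0), +1) for _ in range(50)]  # good guys
    dusk = [(random.uniform(0.0, 0.3), -1) for _ in range(50)]  # bad guys
    print(train(noon + dusk))  # separates the set perfectly -
                               # and knows nothing about armour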

Cheers,

From: KH.Ranitzsch@t... (K.H.Ranitzsch)

Date: Thu, 21 Jun 2001 08:00:19 +0200

Subject: Re: [FT] Unpredictable AI

[quoted original message omitted]

From: Alan and Carmel Brain <aebrain@w...>

Date: Thu, 21 Jun 2001 20:33:01 +1000

Subject: Re: [FT] Unpredictable AI

From: "Derk Groeneveld" <derk@cistron.nl>

> an interesting example is the DARPA project (I think) for armour
> Turned out it had figured out a flawless differentiation algorithm. All the bad

I can confirm the story is essentially correct. The perceptron
(definition at http://www.cs.bgu.ac.il/~omri/Perceptron/ )
ended up recognising sunlit trees vs shadowed ones.

What you may not know is when this occurred: circa 1960.

Stanford Research Labs had a large neural-network research team in the
late 50s and early 60s, but Marvin Minsky pooh-poohed Neural Networks
in favour of Von Neumann architectures, so the whole area stalled for 25
years.
(See Chronology at http://www.calculemus.org/x/mchron.htm  and
http://www.danielnewman.com/final/history.html )

Their main successes were in the photo recognition of the large
strips of film taken by the ultra-highly-classified balloon-borne
cameras the US was sending over the USSR at the time. You had to take a LOT of
film of essentially a random strip of the USSR, and no human operators could
look at the 1000s of km of images without going nuts. But even a really really
basic perceptron was really good at picking out runways, missile sites etc.

How do I know all this? My uncle, also A.E.Brain, was on the team. Which I
only learnt about when doing some research for my own military AI... My Uncle,
unlike nearly everyone else in AI, does not think that Marvin Minsky's
Posterior is a source of Solar Energy. I tend to agree.

I've - not written - not made - not grown, though that's closest - um, been
responsible for the creation of an AI system for anti-missile defence.
Basically a simple reflexive one, that evolved using genetic algorithms
(though they weren't called that in 1994...).
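
For the curious, the bones of a genetic algorithm fit in a few lines of
Python; the fitness function here (match a hidden weight vector) is just a
stand-in for "scores well against simulated attacks":

    import random

    TARGET = [0.3, 0.8, 0.5, 0.1]  # pretend these are ideal rule weights

    def fitness(genome):
        return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

    def evolve(pop_size=30, generations=200, mutation=0.2):
        pop = [[random.random() for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]       # selection: keep the fit half
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TARGET))  # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mutation:          # occasional mutation
                    child[random.randrange(len(TARGET))] += random.gauss(0, 0.1)
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())  # converges toward TARGET without ever being told it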

My take on the subject? Confine the problem domain tightly enough, and the AI
can be made arbitrarily intelligent. Slacken the problem domain a bit, and its
IQ drops precipitously. We're an awfully long way away from
high-level (human, chimp, dog) intelligence. I believe we'll get there
in the end, but the more we know about the subject, the more we realise that
intelligence is not manufactured, it evolves. Rule-based systems - like
the one I caused to be created - are, if not a dead end, just a useful
tool to be used within a controlling Neural Net. Conscious vs subconscious
processing.

That's why I like a Human controlling a bunch of specialised AIs. Far better

From: Derk Groeneveld <derk@c...>

Date: Thu, 21 Jun 2001 15:17:17 +0200 (CEST)

Subject: Re: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Thu, 21 Jun 2001, Alan and Carmel Brain wrote:

> That's why I like a Human controlling a bunch of specialised AIs. Far

Which was what I was thinking all along. Especially since there are two lines
of argumentation used:

1. AI is able to respond with a much higher speed than humans

2. AI lacks the flexibility of a human pilot

So, why not combine the two: have a human pilot, assisted by massive amounts
of AI. If all works, then maybe he just rides the backseat. But when
determining whether a weapon should be fired on a suspect target (one not
100% identified as hostile), when dealing with unexpected circumstances, etc.
etc., it's useful to have the human there.

Besides, is it truly politically acceptable NOT to have a human in the loop of
a weapon of (mass?) destruction? Do you think it will BECOME acceptable?

Cheers,

From: David Griffin <carbon_dragon@y...>

Date: Thu, 21 Jun 2001 06:47:51 -0700 (PDT)

Subject: Re: [FT] Unpredictable AI

> --- Derk Groeneveld <derk@cistron.nl> wrote:
...
> Besides, is it truly politically acceptable NOT to
You mean like a nuclear-tipped cruise missile? Or an ICBM? What's the
difference between launching an AI drone fighter and launching a cruise
missile?

I will say that a Cessna flown by a weekend flyer would probably find it a lot
easier to negotiate with a manned aircraft intercepting him than he would a
SAM. In other words, until not only intelligence but also good judgement is
included in the AI package, humans will still have at least one role.

From: Derk Groeneveld <derk@c...>

Date: Thu, 21 Jun 2001 15:58:59 +0200 (CEST)

Subject: Re: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Thu, 21 Jun 2001, David Griffin wrote:

> --- Derk Groeneveld <derk@cistron.nl> wrote:

When you launch a cruise missile, the decision to HIT a specific target has
already been taken. With a fighter this is not (necessarily) the case.

> I will say that a cessna flown by a weekend flyer

Exactly. And even after, it probably only takes one good incident to put the
man back in the loop;)

Cheers,

From: David Griffin <carbon_dragon@y...>

Date: Thu, 21 Jun 2001 07:14:23 -0700 (PDT)

Subject: Re: [FT] Unpredictable AI

> --- Derk Groeneveld <derk@cistron.nl> wrote:

Well, remember a cruise missile is a replacement for a bomber. In the past,
with a man in the loop, the bomber could be recalled, or the crew could decide
the orders had been in error (wow, maybe we shouldn't bomb that big white
building with the red cross). Now with a cruise missile that decision is gone.
So you're right technically, but in a real sense the cruise missile IS an
example of the man cut out of the loop (or at least moved back in the loop).

You could say the same thing about launching an AI drone. OK drone, you say,
go to these coordinates and shoot down any enemy planes. You made the
decision, but now you're out of the loop. The drone will be making the
decisions from then on.

From: Derk Groeneveld <derk@c...>

Date: Thu, 21 Jun 2001 16:32:26 +0200 (CEST)

Subject: Re: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Thu, 21 Jun 2001, David Griffin wrote:

> Well, remember a cruise missile is a replacement

Granted, but there is still a very big difference. The mission for the
cruise missile itself is completely pre-defined (go to a certain location
and nuke it into the stone age / whatever your payload does). The policy
and decision makers for the nation take the decision whether or not that
mission is to be executed in its entirety.

> You could say the same thing about launching a

Already, a big distinction: 'shoot down any _enemy_ planes'. This is
where the drone/fighter has to make a kind of decision that a cruise
missile never has to. It will have to decide whether a certain contact gets to
live or die, by deciding if it is enemy or not.

I'd suggest the vast majority of fighter missions are NOT predetermined in the
way cruise missile launches are (go to point A, and destroy target B).
There is almost always a decision-making process (determine whether B _is_
hostile, shoot down B _if_ hostile; unless a bigger threat _C_ is present,
also _hostile_, then take out _C_ BEFORE you take out _B_, etc.)

When a cruise missile is launched and it destroys a milk powder factory,
it is a fault of the decision makers/intel people, and actions can be
taken to prevent this. If a drone decides a nice fat Airbus is hostile, it's a
different matter entirely. I'd rather trust fallible humans than a piece of
software with those decisions.

Cheers,

From: David Griffin <carbon_dragon@y...>

Date: Thu, 21 Jun 2001 07:57:51 -0700 (PDT)

Subject: Re: [FT] Unpredictable AI

> --- Derk Groeneveld <derk@cistron.nl> wrote:
...
> I'd suggest the vast majority of fighter missions
Yes, I understand the distinction, but it is one of degree, not kind, at least
in my opinion.

Yes, it's harder to program the drone not to shoot down the Airbus, but there
is still a judgement and/or discretion in a manned unit that is not present in
a robot (at present).

A manned fighter would (probably) not shoot down the Airbus, even if it
registered on his radar as hostile. A manned bomber would probably not bomb
a target which is obviously an error (big white building with a red cross).

Fighter missions would probably be somewhat more nebulous (wild weasel,
aircraft escort, close air support, air superiority, and so on) than a
bomber's mission would be, but even bomber crews have some judgement to
exercise. That judgement is missing in a cruise missile, but we're willing to
sacrifice that for the safety of our pilots in SOME kinds of missions.

Perhaps drone fighters would only be assigned missions in areas where no
Airbuses were expected (say, the middle of Iraq during the Gulf War, where
there should be no civilian targets). Maybe you give them simple roles, like:
circle at this location looking for the following silhouettes only; if you see
them (maybe they are the silhouettes of cruise missiles the enemy uses),
shoot intercepts.

I like a man in the loop too, but speaking as the son of an Air Force
navigator on B-52s, I'm not sure I want my dad over the target unless he has
to be there to exercise that judgement.

From: Brian Bell <bkb@b...>

Date: Thu, 21 Jun 2001 11:02:12 -0400

Subject: RE: [FT] Unpredictable AI

> -----Original Message-----
-----End Original Message----

True. But most fighter missions are MUCH more complex.

Cruise Missile Decisions (written out as code below):
 - Am I at target (GPS)?
  - No. Flight logic to get there.
  - Yes. Explode.
or
 - Am I in suspected target area?
  - No. Use flight logic to get there.
  - Yes. Is target present (Image Guidance)?
    - Yes. Terminal guidance and explode.
    - No. Continue searching or select secondary target:
      - Continue. Short delay and repeat.
      - Secondary. Is secondary present?
        - Yes. Terminal guidance and explode.
        - No. Short delay and repeat.
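
Written out as code, that targeting loop is short (a Python sketch; the
boolean inputs stand in for the GPS and image-guidance checks):

    def cruise_missile_step(in_target_area, primary_seen, secondary_seen):
        # One pass of the decision list above.
        if not in_target_area:
            return "use flight logic to get there"
        if primary_seen:
            return "terminal guidance and explode"
        if secondary_seen:
            return "terminal guidance and explode (secondary)"
        return "short delay and repeat"

    print(cruise_missile_step(True, False, True))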

To start with, let's look at standard FT situations. Assumptions:
 - You are at war, so there is no doubt whether to fire on an identified
enemy.
 - You know that there are hostiles in the area, and they are closing on you
or you are closing on them.

Decisions:
 - Screen a friendly ship or attack?
  - If screen, which ship?
    - Screen from fighter, missile, or plasma?
      - Which of that type (full group, partial group, size of plasma)?
  - If attack, what type of target: fighter, missile, plasma, or ship?
    - Is the potential target friend, foe, or neutral?
    - If fighter, which fighter group? Close fighter, far fighter,
fighter attacking ship, fighter attacking fighter, fighter attacking missile,
fighter attacking plasma, full group, partial group, type of fighter group?
    - If missile, which missile set? Close missile, far missile, full
group, partial group?
    - If ship, which ship? Near, far, large, small, attacking,
loitering, retreating, ADFC-carrying, etc.
 - Has another fighter group selected the same target?
    - If yes, attack same target?
    - If no, select secondary target.

Now add in to the mix:
 - Diplomatic situations
 - Cold War Setting
 - Peacekeeping Missions (enemy uncertain)
 - Peacetime Patrols
 - Commerce/Convoy protection
 - Interdiction Duty

From: Doug Evans <devans@n...>

Date: Thu, 21 Jun 2001 10:10:25 -0500

Subject: Re: [FT] Unpredictable AI

Could someone please connect this discussion to FTII again?

I seem to recall a position that this was beyond the granularity of the game;
you could argue that the SM are each kamikaze piloted, or that fighters are
actually AI driven.

I'm tending to side with the fellow who preferred dropping fighter
distinctions altogether, forcing the use of small ships and tug/tender rules
instead, but that's just plain personal.

From: Derk Groeneveld <derk@c...>

Date: Thu, 21 Jun 2001 17:18:22 +0200 (CEST)

Subject: Re: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Thu, 21 Jun 2001, David Griffin wrote:

> > When a cruise missile is launched and it destroys a

You mean a remote controller? BIG problem there is susceptibility to EW.

> A manned fighter would (probably) not shoot down

True.

> Fighter missions would probably be somewhat more

True.

> Perhaps drone fighters would only be assigned

I'll go along with that. This still leads to the conclusion that drones would
be an addition to the arsenal, allowing possibly for a reduction in fighters,
but not a replacement.

> I like a man in the loop too, but speaking as the

Of course. But this applies to having anyone in the line of fire, anywhere.

Cheers,

From: Derk Groeneveld <derk@c...>

Date: Thu, 21 Jun 2001 17:25:31 +0200 (CEST)

Subject: RE: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Thu, 21 Jun 2001, Bell, Brian K (Contractor) wrote:

> Cruise Missile Decisions:

This is the case in 'hey, let's have a game, let's toss <x> points on the
table and have a fight'. Refereed limited-war / high-tension scenarios make
for very interesting games, and face it, limited war seems much more likely
than all-out war between the human powers. Anyway, you've listed those below.

> - You know that there are hostiles in the area and they are closing

- There is also civilian/neutral shipping, which has to be
distinguished from positive hostile and positive friendly.

> Decisions:

Maybe it's me, but the above already strikes me as rather complex, before we
add in the following:

> Now add in to the mix:

And now try to write this out as decisions? Ugh. Rather you than me. Also,
think of what a single glitch might result in. I'm not sure whether you're
trying to make a case for or against AI fighters here.

Cheers,

From: David Griffin <carbon_dragon@y...>

Date: Thu, 21 Jun 2001 08:27:11 -0700 (PDT)

Subject: RE: [FT] Unpredictable AI

--- "Bell, Brian K (Contractor)" ...
> True. But most fighter missions are MUCH more
I don't think anyone is disputing that.

From: Derk Groeneveld <derk@c...>

Date: Thu, 21 Jun 2001 17:28:21 +0200 (CEST)

Subject: Re: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Thu, 21 Jun 2001 devans@uneb.edu wrote:

> Could someone please connect this discussion to FTII again?

I think it's still very connected, it just doesn't translate into dice
modifiers or suchlike. It's a discussion on what sort of 'fluff' makes
sense. I don't know about others, but I'm working on a FT/SG campaign,
and I want the fluff in my campaign to make sense. Hence the interest.

Cheers,

From: Ryan Gill <rmgill@m...>

Date: Thu, 21 Jun 2001 13:06:36 -0400

Subject: Re: [FT] Unpredictable AI

> At 7:14 AM -0700 6/21/01, David Griffin wrote:

Nope. A cruise missile is a means for a manned bomber to hit a target from
extreme stand-off range. This capability has been extended to surface and
subsurface launch platforms. It is a stand-off weapon for a point or area
target already chosen. The cruise missile isn't launched in hopes of hitting
some target of opportunity.

The only weapon that works this way is ALARM. That has a loiter mode where it
will climb to altitude and then pop a parachute. It then floats down, looking
and waiting for a target matching criteria already determined. Once an
emitter broadcasts a signal matching the missile's database, it will drop from
the chute and hit the required target.

> had been in error (wow, maybe we shouldn't bomb

The period between launch and land is just extended. It is still in the exact
same place.

Cruise Missile Mission      Iron Bomb Mission
plan mission                plan mission
prep aircraft/crew          prep aircraft/crew
Launch Package              Launch Package
Reach Launch point          Reach Drop point
Launch Weapon               Drop Weapon
Weapon flies                weapon falls
BOOM!                       BOOM!

> You could say the same thing about launching a

But the difference is that the drone is choosing the targets. Not the crew.

From: Brian Quirt <baqrt@m...>

Date: Thu, 21 Jun 2001 15:27:44 -0300 (ADT)

Subject: Re: [FT] Unpredictable AI

I'm not really replying to anyone in particular here, so I'm just going to
start without any quoting. I'm also going to reserve my particular opinion to
the end of the post, mostly because it's not the important part of what I'm
saying.

So far, there have been some excellent arguments on both sides of this debate.
IMO, if you WANT AI fighters, you have some excellent PSB justifications for
how and why fighters are piloted by AI. On the
other hand, if you want human-piloted fighters, there are also plenty
of ways to justify that.
     Most of the anti-AI posters seem (IMO) to be taking the
limitations of modern computer systems and projecting them forwards (assuming
that, because we can't do something NOW, we still won't be
able to do it in the GZG-verse). The pro-AI people seem to be
assuming that, because as far as we know it is POSSIBLE (albeit beyond our
current capabilities) to come up with AI, by the time of the GZGverse we
will have developed that capability. Either side could easily be right. As far
as I am aware, there is no fundamental reason why artificial intelligence is
impossible to develop, given enough time and computer resources. However, it
is (IMO) obvious that we have not yet developed AI (at least, in the way that
people have been using it in this discussion of piloting fighters). This makes
it a problem of coming up with both the resources and methods necessary to
construct AI. I see no reason why it would be impossible to do that by 2183
(or even by 2100). There is also, of course, no guarantee that it WILL be
done, or, if it is, that the resulting AI will be used in fighter operations.
We don't know, and we have no way of finding out (short of staying alive that
long), so it becomes mostly a matter of preference.

Personally, I like the idea of AI. I think that, in a setting like the
GZGverse, where the existence of FTL has been granted (and that is, at the
moment, a theoretical impossibility), the existence of AI is no big stretch.
However, that doesn't mean that I'll refuse to play with anyone who sees
things differently....

Just a few thoughts....

From: Glenn M Wilson <triphibious@j...>

Date: Thu, 21 Jun 2001 14:38:59 EDT

Subject: Re: [FT] Unpredictable AI

On Wed, 20 Jun 2001 04:06:56 -0700 (PDT) David Griffin
> <carbon_dragon@yahoo.com> writes:

One Word: "Unions"

Sort of why naval mine warfare is so important but nobody wants to make
their career in it - no glory, no large number of people to command, no
'glorious ships' to command, and it's dull when presented to Congress.

The USAF is still dealing with the implications of RPV/Drone issues...

Gracias,

From: Richard and Emily Bell <rlbell@s...>

Date: Thu, 21 Jun 2001 19:51:09 -0400

Subject: Re: [FT] Unpredictable AI

> Allan Goodall wrote:

> On Wed, 20 Jun 2001 18:06:12 -0400, Richard and Emily Bell
> But then you say that AIs can't recognize things as well as humans. In

The wetware computer is a massively parallel structure. Nerve impulses travel
at a mere 90 meters per second, compared to the computer's 0.6c, yet we still
do things a lot faster. The genetic algorithm for learning to recognize
objects has been running for several hundred million years. It also runs on
optimized hardware. Unfortunately, it is not obvious how the hardware is
optimized, so we are limited to general purpose computers attempting to
emulate dedicated hardware.
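
The raw latency gap is stark (Python; the distances are rough guesses):

    C = 3.0e8  # speed of light, m/s
    signals = (("nerve impulse, brain to hand (~1 m)", 1.0, 90.0),
               ("electrical signal across a computer (~0.3 m)", 0.3, 0.6 * C))
    for name, dist, speed in signals:
        print(f"{name}: {dist / speed:.2e} s")
    # ~1e-2 s vs ~2e-9 s - about seven orders of magnitude, which is why
    # the brain's advantage has to come from massive parallelism.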

> Computers are incredibly good at recognizing things quickly. That is,

If you can categorise every likely situation in fighter operations into
100,000 10-digit numbers, I will concede this point; however, AI's major
stumbling block is enumerating possibilities, and NP-complete problems cannot
be solved in this fashion. Also, one 10-digit number is impossible to confuse
with another 10-digit number that is not equal, and if the list is sorted,
humans are unlikely to miss the number. Finally, as the database gets much
larger than a mere 100,000 elements (say, a phone directory for NYC), the
human is not that much slower than a computer, and the computer is much slower
than the human if it has to physically leaf through the book. Combat
situations are not readily available in machine-readable formats.

> Unfortunately, computers (currently) have no sense of context. Here's
> Question 2: Have you ever wrestled an alligator? I'm guessing that you answered

Context is everything; successful pilots have mastered two very important
skills. The first is immediately recognizing all of the important things, and
the second is ignoring everything else. What is important and unimportant
varies with the context of the engagement. I actually answered both questions
quickly (muffin, no), but I wasted much of my youth optimizing the wetware for
fast information retrieval.

> Human memory is deeply flawed. The human brain has evolved, and still

People do not actually panic. Fight-or-flight impulses only cause problems
when neither is a practical response to the situation. All reasonable options
in a combat situation fall into one of those two categories. What people refer
to as panic is actually the failure of training. Adrenalin causes
hyper-focussing: the things that you are good at, you become very good at;
things that you do not do well become impossible. A licensed skydiver was
killed when he mistakenly assumed that it would be no difficulty jumping with
an off-handed harness. He went to pull the rip cord, but his hand did not find
it. Because he had never reached for his rip cord with his off hand, he kept
trying to pull the non-existent rip cord all the way down to his death. It
never occurred to him to try reaching for the other side.

> On the other hand, I can see why you need a sapient lifeform in

From: Beth Fulton <beth.fulton@m...>

Date: Fri, 22 Jun 2001 09:54:03 +1000

Subject: Re: [FT] Unpredictable AI

G'day guys,

Haven't worked much with neural nets as yet, but the one thing we have found
is that they won't make the leaps that humans will. I'm stressed, I've got to
get this pattern working by tomorrow, and I play a hunch; so far AIs don't
play hunches. They also don't get pissed off, or adrenaline boosted (at
least not the "pure logic" AIs that sci-fi seems to concentrate on).
That "fight or flight" wiring has done a lot for animals over the last 3
billion years, and it's done a lot for man over the last million years.
Sometimes it takes that "$&%##@ I just gotta do this" to turn the tide.
Without the "fight or flight" there wouldn't be a list of Victoria Cross or
Congressional Medal of Honour winners.

Also, wouldn't there be some cultures (maybe the IF) which find AIs to be
"evil" or "against the will of XXXX" (fill in deity of choice)? Whether AIs
fly the NAC fighters or not, I don't think all fighters in the GZGverse are AI
driven. Does that mean we have to play with the "survivability" of AIs vs
human fighters? Probably not, as there are pros and cons on all sides, and an
FT turn is 15 minutes or so; the microsecond thing has probably been lost in
the larger scheme of things at that scale.

Cheers

Beth

From: Richard and Emily Bell <rlbell@s...>

Date: Thu, 21 Jun 2001 21:03:34 -0400

Subject: Re: [FT] Unpredictable AI

> Allan Goodall wrote:

> On Thu, 21 Jun 2001 15:17:17 +0200 (CEST), Derk Groeneveld

I am not so sure. There were searchlight units composed entirely of women
(except for the not-publicised, hairless gorilla that started the diesel
generator with the hand-crank) [Ian V. Hogg's Air Defence (?)]. The shells for
the 3.7" anti-aircraft gun were not light, and hand-training the gun required
a fair amount of upper body strength to do with any speed, so pulling the
lanyard would be the only task that they were suited for. Manning the
predictor is okay, but towards the end of the war it was all done
electro-mechanically. Even Israel stopped using women in many combat positions
once the statistically inevitable happened.

From: Allan Goodall <agoodall@a...>

Date: Thu, 21 Jun 2001 21:31:04 -0400

Subject: Re: [FT] Unpredictable AI

On Thu, 21 Jun 2001 15:17:17 +0200 (CEST), Derk Groeneveld
<derk@cistron.nl> wrote:

> Besides, is it truly politically acceptable NOT to have a human in the

Now, THAT is a really good question. I think right now it isn't acceptable. I
think it will become acceptable.

I'm not sure why I think this, except that during World War II Britain used
women to "man" anti-aircraft guns. For most of the war, women could do
anything involved in shooting the guns except one thing: pulling the lanyard.
In other words, women could track an aircraft, load the weapon, aim the
weapon, and prepare the weapon for firing, but it was considered
"inappropriate" for a woman to pull the trigger and kill a man.

I can't remember if it changed by the end of the war, but certainly today we
wouldn't have the same problems. I suspect something similar would occur with
AIs. For one thing, western culture has no problems with cruise missiles,
today. I think it would develop much the same way (the AI runs the weapon, but
man created the AI).

From: Allan Goodall <agoodall@a...>

Date: Thu, 21 Jun 2001 23:16:17 -0400

Subject: Re: [FT] Unpredictable AI

On Thu, 21 Jun 2001 21:03:34 -0400, Richard and Emily Bell
> <rlbell@sympatico.ca> wrote:

> I am not so sure.

Sorry, but I AM sure. See "Women, Combat and the Gender Line" by D'Ann
Campbell, Military History Quarterly, Vol. 6, No. 1 (Autumn 1993). Men were
also used to load the guns, but some "bigger" women were used in this regard,
as well. But the military prevented British women from firing the guns.

From: KH.Ranitzsch@t... (K.H.Ranitzsch)

Date: Fri, 22 Jun 2001 08:09:05 +0200

Subject: Re: [FT] Unpredictable AI

From: "Derk Groeneveld" <derk@cistron.nl>
> Besides, is it truly politically acceptable NOT to have a human in the

> From the moment somebody threw a stone we had weapons that are not

The criterion for whether an automatic weapon is acceptable is the likely
risk compared to the expected benefit. And humans make errors too.

Greetings

From: KH.Ranitzsch@t... (K.H.Ranitzsch)

Date: Fri, 22 Jun 2001 08:14:00 +0200

Subject: Re: [FT] Unpredictable AI

From: "Derk Groeneveld" <derk@cistron.nl>
> When a cruise missile is launched and it destroys a milk powder

There have been various incidents where HUMAN pilots / weapons operators
have brought down nice fat airliners. With the right type of IFF and pattern
recognition software, I don't think the risk from an AI would be any higher.

Greetings

From: Derk Groeneveld <derk@c...>

Date: Fri, 22 Jun 2001 09:59:17 +0200 (CEST)

Subject: Re: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Fri, 22 Jun 2001, K.H.Ranitzsch wrote:

> From: "Derk Groeneveld" <derk@cistron.nl>

That would be the case if all such decisions were made in an entirely rational
fashion. Especially where lethal weapons are concerned, this is often not the
case. I think both proponents and opponents of firearms in the US will readily
agree to this, as just an example;)

Do note that I said _politically_ acceptable, not
_scientifically/operationally_ acceptable. And yes, that means if you'd
go for a way different social structure it may suddenly be acceptable.

Cheers,

From: Derk Groeneveld <derk@c...>

Date: Fri, 22 Jun 2001 10:02:10 +0200 (CEST)

Subject: Re: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Fri, 22 Jun 2001, K.H.Ranitzsch wrote:

> There have been various incidents where HUMAN pilots / weapons

IFF can be fooled. IFF can break down. Pattern recognition is very nice,
except these incidents have a nasty habit of happening beyond visual range.

And don't forget the high degree of automation is often an important factor in
these incidents even happening.

But yes, humans make mistakes, too. But they, unlike machines, can be
court-martialed as an example for other humans not to do it again ;)

Cheers,

From: Brian Bell <bkb@b...>

Date: Fri, 22 Jun 2001 06:55:27 -0400

Subject: RE: [FT] Unpredictable AI

For game play, it doesn't really matter if the fighters are manned or AI
driven.

Both are going to be trained/programmed with some degree of survival
instructions. The human because he wants to come home again. The AI, because
it can usually do more damage over time if it does not press a bad attack and
makes a better attack later (also, the government has some investment in
the fighter and it is not costed as a one-shot weapon). So the morale
rules work as they stand.

If you want to reprogram AI to ignore threats to itself, then PDS and
interceptors should have a better chance against them. I know that the SV
drones do not face this penalty, but PSB could be applied to explain it
(millions of years of combat have given them better avoidance instinct than we
can program into AIs). Also the SV have less invested in each fighter (kill
all you want, we'll grow more).

Anyway my 2 credits worth.

From: KH.Ranitzsch@t... (K.H.Ranitzsch)

Date: Fri, 22 Jun 2001 14:08:47 +0200 (MEST)

Subject: Re: [FT] Unpredictable AI

Derk Groeneveld schrieb:
> -----BEGIN PGP SIGNED MESSAGE-----

Use a telescope or light enhancement! And those circumstances are precisely
those where humans have problems too.

> But yes, humans make mistakes, too. But they, unlike

And machines can be re-programmed/modified to avoid causes of errors.
In extremis, you could scrap the whole series. The Romans might decimate a
military unit for misdeeds, but not execute everybody.

Greetings

From: Derk Groeneveld <derk@c...>

Date: Fri, 22 Jun 2001 14:32:36 +0200 (CEST)

Subject: Re: [FT] Unpredictable AI

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> On Fri, 22 Jun 2001 KH.Ranitzsch@t-online.de wrote:

> Derk Groeneveld schrieb:

Yes. Still, the human brain is amazingly good at this pattern recognition
thing. (I'm sure Beth will shoot me down if I'm full of shit, here)

> > But yes, humans make mistakes, too. But they, unlike

Ahhh, but will that satisfy the hue and cry?;)

Anyway, in the end it's all a stab in the dark - one side just stabs in a
slightly different direction than the other.

Cheers,

From: Alan and Carmel Brain <aebrain@w...>

Date: Sat, 23 Jun 2001 18:11:52 +1000

Subject: Re: [FT] Unpredictable AI

> --- Derk Groeneveld <derk@cistron.nl> wrote:

In general, no for normal, HELL NO for mass.

> > Do you

In general, no.

The trouble with questions like these is that they are too general. For
example:

"is it acceptable to stab someone with a knife". In general, no. In the case
of qualified surgeons in an operating theatre, performing an operation on a
consenting patient (not stabbing each other in a knife fight), then that's
another matter. Even a
non-qualified surgeon doing an emergency tracheotomy
on an unconscious and choking victim.

I've actually helped build weapons where after a certain
point, there is no man-on-the-loop. The human makes the
decision ("Things are happening too fast for me to react to, there are no
friendlies in the way, time to hit the

From: Glenn M Wilson <triphibious@j...>

Date: Sun, 24 Jun 2001 16:57:02 EDT

Subject: Re: [FT] Unpredictable AI

On Fri, 22 Jun 2001 08:14:00 +0200 KH.Ranitzsch@t-online.de
> (K.H.Ranitzsch) writes:

Just this year missionaries *have* been shot down after being 'identified' as
probable drug runners...

Gracias,