> Space is REALLY, REALLY BIG. And really empty. In fact, it is even
Jeez, that would make it as empty as Slough...
> is 25 hours; most humans work better on a 25 hour clock. On a
While ergonomically you are correct, I don't think we can overlook the value
of keeping a consistent clock throughout the fleet (presuming you
have trans-light comms). That being a given you would probably have the
clock of your centre of operations, which is 99% likely to be on Earth...
> more nimble, with a longer cruising distance. The USAF believes that it
> atmospheric fighter (it should be harder to design an autonomous
Regardless of this, it doesn't matter a damn to FT games whether the fighters
are drones or not. It's possible some fleets have drones and some have real
pilots. Human pilots can be very predictable and drones can be sneaky if
programmed correctly.
> area. The extra weight to shield a fighter actually pushes it back so
Then of course there is the problem of shielding your drones' AI computers from
external interference. I'd love to know what a close-detonation EMP warhead
would do to a pack of drone fighters. :)
> yesterday you were a navigator. This allows for a cheap supply of
Again the problem of course is predictability. If every pilot has access to
exactly the same knowledge, every pilot will react to a given situation in
more or less the same way. In combat, of course, this is A BAD THING. Part of
the problem with these mass-produced drones/pilots is getting them to
react in an individual manner.
> Of course, how is this cyberware affected by those gravitic
PFFT. If you ask me, organic/nano 'cyberware' is much more likely to be
around than silicon by the 24th century. Does raise a question about EMP
weapons in general though. If your dreadnought's main 'core' is actually a
walloping great organic brain sort of thing, there's no reason for an EMP pulse to
affect it. Although I see no reason why parts of a ship shouldn't be
electronic.
> Despite
Depends on your story. Didn't train them for very long during WWI. And early
WWII/Battle of Britain there were some VERY inexperienced people flying
for the RAF. Depends how far your 'organic technology' goes. If current
theory of how we memorise things is true, nanomachines could actually re-wire
your memory to give you a new set of cognitive skills. Actually learning the
physical processes involved, though, I don't think you could rush.
> SHIP ALERT STATES:
All seems fair - but why not make 'yellow two' and 'orange' the same
state? Otherwise yellow two seems pretty superfluous.
> Question: how do interstellar communications work? Is there some form
Hmm.. Current physics research implies translight communication is actually
feasible, much more than translight travel anyway, what with electron
tunnelling, particle spin reversion and things like that. I would say that
translight comms was actually a requirement of any large interstellar
society - could the US manage its fleets without radio?
TTFN
Jon
> On Tue, 10 Feb 1998, Jonathan white wrote:
> >is 25 hours; most humans work better on a 25 hour clock. On a
Moot point where the headquarters is. US nuclear submarines are obviously
based on Earth and their daily schedule for the crews while underway is 18
hours, 6 hours on and 12 off with 3 shifts. When your environment is isolated
you can run any schedule you'd like. Plus in the military as with any large
organization that requires rapid response you are more likely to have people
on duty all the time regardless of whether there is daylight outside or not.
All the key stations would be manned round the clock and for front end units
there might even be three full crews on
board so that the ship can be run under combat conditions 24/7 barring
mechanical failures.
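The 18-hour rotation described above works out neatly; as a toy sketch (the section names are invented for illustration):

```python
# Three watch sections, each standing 6 hours on and 12 hours off,
# as in the 18-hour submarine schedule described above.

def on_watch(hour, sections=("Blue", "Gold", "Red")):
    """Which section has the watch at a given hour since the cycle start."""
    return sections[(hour // 6) % len(sections)]

schedule = [on_watch(h) for h in range(0, 36, 6)]
print(schedule)  # -> ['Blue', 'Gold', 'Red', 'Blue', 'Gold', 'Red']
```

Every station stays manned continuously while each section still gets 12 hours off per cycle.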
--Binhan
Ah a very interesting thread....
> Jonathan white wrote:
> Then of course there is the problem of shielding your drones AI
Not that I'm a proponent of AI fighters, (Some pundit always comes along
predicting the demise of human involvement in fighting wars), but an EMP burst
against a human fighter would be just as bad. Sure he might be OK, but what
about all the circuits in his ride? They'd suffer as bad as the AI core of a
non-human fighter (assuming similar shielding).
> >yesterday you were a navigator. This allows for a cheap supply of
> Part of
> the problem with these mass-produced drones/pilots is getting them to
Exactly, there's a hell of a lot more to any of these jobs besides knowledge.
There is experience and intuition. Not just collective experience, but
personal experience.
> >Except that it still takes a good couple of years to create a fighter
Maybe fighters will get easier to fly in 200 years, what with computer
assistance, better spatial sensors, and neural interfaces. Sort of like cars.
Right now they're very complicated because there's still a difficulty in
getting all that amassed information assimilated by the pilot in the few
senses he can use. This way training can actually focus on combat operations,
SCM, and tactics.
> Hmm.. Current physics research implies translight communication is
Today, no. But in the age of sail, huge empires managed without instantaneous
communications. There are plenty of scifi stories written along these lines.
Date: Tue, 10 Feb 1998 13:02:17 -0500
From: Los <los@cris.com>
> Ah a very interesting thread....
> > Jonathan white wrote:
> > Then of course there is the problem of shielding your drones AI
> Not that I'm a proponent of AI fighters, (Some pundit always comes along
> predicting the demise of human involvement in fighting wars), but an EMP burst
> against a human fighter would be just as bad. Sure he might be OK, but what
> about all the circuits in his ride? They'd suffer as bad as the AI core of a
> non-human fighter (assuming similar shielding).
Not that this has much to do with FT but...
EMP can be pretty nasty to people as well. Especially if you want enough EMP
to hurt a shielded piece of electronics: basically you are going to generate a
lot of high-energy electrons. People don't deal well with lots of ionizing
radiation (basically it's just like giving someone a big dose of
radioactivity). They are gonna get sick and die. This should be true for big
organic brains too. Heck, it's probably easier to build a fiber-optic
computer that is EMP-proof than it is to make a person EMP-proof.
cheers
> >>Space is REALLY, REALLY BIG. And really empty. In fact, it is even
> >But not by much...
> Jeez, that would make it as empty as Slough...
Space is NOT emptier than southern Saskatchewan. I've been there. If you look
in the dictionary under 'empty', that is what you see.
> That being a given you would probably have the
> clock of your centre of operations, which is 99% likely to be on
Why? We have a number of time schemes, which can be converted to GMT. Why not
just have another "Universal Biological Time" (UBT) based on a 25 hour clock.
Computers don't reckon in human time, and they wouldn't reckon in this new
human time either, but they could easily convert to either.
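As a toy illustration of how easily a computer could convert between such clocks, here's a sketch; the "UBT" name comes from the post above, but the shared epoch is my assumption:

```python
# Hypothetical sketch: converting a 25-hour "Universal Biological Time"
# day to ordinary 24-hour GMT. Assumes both clocks count hours from a
# shared epoch; the epoch alignment is invented for illustration.

UBT_DAY = 25.0   # hours per biological day
GMT_DAY = 24.0   # hours per Earth day

def ubt_to_gmt(ubt_days, ubt_hours):
    """Convert a UBT timestamp (whole days + hours) to GMT days and
    time-of-day."""
    total_hours = ubt_days * UBT_DAY + ubt_hours
    gmt_days, gmt_hours = divmod(total_hours, GMT_DAY)
    return int(gmt_days), gmt_hours

# One full UBT day is one GMT day plus an hour:
print(ubt_to_gmt(1, 0))   # -> (1, 1.0)
```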
> Ask any AI expert and they'll tell you we're quite a way
> away from a working AI system for something as complex as ACM. I think
Yep, and in 1949, I think it was Turing said they'd only ever need four or
five computers in the entire world. Oops. We totally fail to estimate progress
correctly especially where tech is concerned. I have a classmate doing PhD AI
work. They are a long way from having
AI fighters, but not 300 years necessarily. And BTW, computerphobes -
computers are here to stay, they will continue to grow in utility, ubiquity
and economy. And I can see a future AI that can beat a human pilot due to the
fact it can follow option trees (like chess) down 300 moves, with 1000
permutations, within microseconds. The more we make microprocessors capable
of, the more practical even brute force solutions are, and there is no reason
that in the future one cannot conceive of a fighter capable of outflying a
human ace (the human with all his intuition etc. still follows subconscious
patterns which can be evaluated and acted on) and a good AI with a good
'brain' could easily best the human in thought, and definitely in reflex or
capability to take Gs. (Humans can take about 9 in suits.... a computer can
probably take 40Gs.... that makes up for a lot of skill).
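The brute-force "option tree" idea above is essentially minimax; as a toy sketch (the hand-built payoff tree is invented, and a real combat AI would search far deeper and prune):

```python
# Plain minimax over a tiny hand-built game tree: evaluate every branch,
# assume the opponent picks the worst outcome for us, choose the best.

def minimax(node, maximizing=True):
    """node is either a numeric payoff (leaf) or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    results = [minimax(child, not maximizing) for child in node]
    return max(results) if maximizing else min(results)

# A two-ply tree: we move (max), the opponent replies (min).
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # -> 3
```

The point is that the computer's edge comes purely from depth and speed, not from anything resembling intuition.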
HOWEVER, that might not be interesting for our gaming purposes, and we're here
(or wherever) to have fun (I'd guess) and not to predict the future and live
it.
> PFFT. If you ask me, organic/nano 'cyberware' is much more likely to
Can you say 'brain tumor'? Look at how low-power they are trying to
make cell phones, CRTs, TVs, and radio and x-ray equipment. Why?
Humans and EM do not mix too well. Only in super vast quantity will EM kill you
outright, but a small amount can cause long-term problems like cancer and
other diseases.
> >Except that it still takes a good couple of years to create a fighter
> Depends on your story. Didn't train them for very long during WWI. And
And in the parts of the war where one side or the other did not have technical
superiority to such an extent that pilot quality was irrelevant, the better pilots
still won. And part of piloting is 'selection'. Some people innately have
flying skills, others not so.
> Depends how far your 'organic technology' goes. If current
> theory of how we memorise things is true, nanomachines could actually re-wire
Hmm. I can have machines operate my muscles (I have had this in physio), so it
seems you could build a 'muscle memory conditioner' to teach (maybe while
sleeping) the muscles the needed moves.
Tom:)
/************************************************
> > Ask any AI expert and they'll tell you we're quite a way
Actually it was Thomas Watson, chairman of IBM who said that. As you said
Oops.
(I know this because I set it as a bar quiz question once...)
> > > Ask any AI expert and they'll tell you we're quite a way
Okay, wrong guy, right idea. I knew it was someone important in computing
before it was what it is today, and Turing was my
shot-in-the-dark (that was why it was an "I think"). Thanks!
> (I know this because I set it as a bar quiz question once...)
Heh. I too have indulged in such vices. Just shows you how much truly trivial
knowledge we possess....
Tom
/************************************************
> tunnelling, particle spin reversion and things like that. I would say
What it amounts to in concrete terms is that the central government must grant
a *lot* more power to its commanders (and governors) in the field, giving them
much more leeway to conduct or refuse to conduct actions, set policies,
enforce laws, etc. At a tactical level, it wouldn't affect the game, but it
would have a drastic effect on the political structure of the game background,
and would help to define scenario ideas. Personally, I kind of like the lack
of FTL communications, a la Traveller. Makes holding systems important just
for the sake of communications links, supply and information conduits, and the
like.
Plus, gives that good, old-fashioned 'High Colonial' feel to the game.
:)
> Why? We have a number of time schemes, which can be converted to GMT.
Hmm.. fair enough.
> capability to take Gs. (Humans can take about 9 in suits.... a
That's where the 'gravitic compensators' come in. If you can overcome gravity
I would presume you would have some system to reduce the effect of
G-forces on the human system.
> HOWEVER, that might not be interesting for our gaming purposes, and
I think (in game terms) the problem we have is that 'selection of the fittest'
works VERY fast. If drone fighters turned out to be demonstrably
better than Human/Kra'Vak flown fighters, you could bet within a couple
of weeks the majority on this list would be flying drone fighters with their
fleets. Having said that, I have probably had more fun playing games where one
side is significantly disadvantaged than with 'evens' battles. You just have
to set your victory conditions appropriately.
> PFFT. If you ask me, organic/nano 'cyberware' is much more likely to
One would presume that if your ship computing core was something analogous to
an organic brain, you would ensure regular scans to detect such anomalies.
After all, your PC does basically the same thing every time you switch it on.
As to the effect of EM, a callous point to make might be that while a large
dose of EMR would kill a human pilot, it's unlikely to do so before
his/her mission is over. With a drone craft any damage you do becomes evident
fairly immediately.
> >Except that it still takes a good couple of years to create a
Not sure there was that much selection going on, and technological differences
weren't apparent across the board, at least in WWII. Presumably they had some
sort of aptitude test but it may only have been very simplistic. 'Flying
skills' are actually a large subset of skills, some of which are pretty hard to
test for accurately (in fact, I could go on about how inaccurate most
aptitude tests are even today, but I can tell you're all bored already).
> Deepnds how far your 'organic technology' goes. If current
This is something that as of today we aren't sure about. We're fairly sure
memory is encoded chemically within brain neurons, but physical skills we
aren't so sure about. When you get to the point where a function of a
skill is unconscious - for example when playing a musical instrument you
don't have to /look/ where your hands are as you know when they /feel/
in the right place - we aren't sure how (or indeed where) that's encoded.
Mind you, 300 years on we might.
TTFN
Jon
> >capability to take Gs. (Humans can take about 9 in suits.... a
Hmm. I *suspect* that the reason unified field theory is escaping us,
and the reason we've never seen evidence of an anti-graviton or
something like that, is that it simply does not exist. It appeals to us to
think that such a thing exists as a balance for gravity (just like
positron/electron and other matched particles produce matched or
counterbalancing forces in EM and other radiation domains), but that might
just be us wishing.
But it does make high G sci-fi and interstellar colonization more
interesting.
> I think (in game terms) the problem we have is that 'selection of the
Nah.... what do you think we are - a bunch of min-maxer tacticians?
(Sarcasm meter on overload....)
> Having said that I have probably had more fun playing games where
> one side is significantly disadvantaged than with 'evens' battles. You
Yep. Mind you, it's nice to have a chance to win, even if it isn't a good one.
I like games where good clear thought and a bit of luck can compensate for at
least a 1.5 to 1 advantage.
> One would presume that if your ship computing core was something
True. But such an organic technology might have expensive support requirements,
and hence such tumors translate into high costs. PLUS the brain is complex,
and do you really want to risk your core system going wonky while you are 100
LY from home?
> After all, your PC does basically the same thing every time you
> switch it on.
Speak for yourself. Mine does POST tests, which rarely diagnose serious
problems, and even high tech diagnostics often fail. The only real diagnostic
tool you can depend on is your own human brain.
> As to the effect of EM, a callous point to make might be that while a
Campaign considerations for any lengthy term would make this pilot loss a
problem.
> Not sure there was that much selection going on and technological
Arguable back and forth.
> Presumably
> they had some sort of aptitude test but it may only have been very
Hmm. I think most militaries do a reasonable job of pilot selection and the
testing is long and intensive. I'm not sure how you'd *rate* it though...
> >Hmm. I can have machines operate my muscles. (I have had in physio),
> THis is something that as of today we aren't sure about. We're fairly
> Mind
> you, 300 years on we might.
We know there IS such a thing as kinesthetic memory. We know that this
underlies much military training (to reduce actions to the
physical/instinctual level). We know that these tasks eventually
operate at a level you aren't aware of. (Ever walked away from your house,
then gone back to check if you locked it? It was locked because you had such a
habit of this that you do it automatically, and you
actually have to think to observe it - so sometimes you do it and
then go "Did I do it? I don't remember..." Why? Because it happened at such a
low level that conscious-mind memory wasn't involved.) Now, whether we can use
somnambulist teaching to condition muscles faster and more effectively than
current 'waking' repetitions, well that is the question of the day....
Tom.
/************************************************
> On Wed, 11 Feb 1998, Thomas Barclay wrote:
> We know there IS such a thing as kinesthetic memory. We know that
Well, if people are willing to accept that insect-like aliens are "born"
to fight and even have the capability to use energy weapons from
hard-wired reflexes there shouldn't be any stretch of the imagination to
apply that to humans. I would suspect that as systems get more complex, human
pilots would in fact become physically redundant as a part of the ship system
since there would be some type of "neurosensor" a la Firefox that would
translate the pilot's thought into a maneuver or action and that in reality
the pilot would be encased in some nasty highly oxygenated gel with tubes
going every which way. So there would actually be no need for the pilot to
move or even wiggle an eyeball to control the ship. Obviously it would be much
easier to train just the human brain rather than having to build up all the
required physical reflexes. In fact a computer combat simulator would perform
all the necessary mental exercises. The down side would be that pilots would
mentally fatigue much quicker since every thought could be interpreted by the
computer to be meaningful and thus produce some odd results and so a pilot
would have to be at a high level of awareness the whole time he was plugged
in.
But while encased in a gel it might be possible to pull dozens of G's (a la
Forever War) if properly encased. And I would suspect that the
electronics/neural nets would be heavily shielded/redundant such that
EMP/radiation effects are minimal except for a direct hit. The only
reason for there to be a pilot within the craft would be the ability of the
enemy to block communications with the craft or a time delay.
--Binhan
On Tue, 10 Feb 1998 09:02:47 +0000, Jonathan white <jw4@bolton.ac.uk>
wrote:
> While ergonomically you are correct, I don't think we can overlook the
It was just a neat SF-ism. But don't US nuke subs have some weird time
periods? I seem to remember it was 15 or 18 hours to the "day". Made for a
more alert crew.
> The USAF also believed dogfighting was an obsolete concept as soon as
But they changed their tune in Vietnam. What they've seen since the Gulf War
seems to justify autonomous fighters.
> Maybe by the FT period you
I guess we'll have to agree to disagree. But since this is the era of Kasparov
losing to a big, powerful (albeit dumb) computer, I reserve the right to gloat
"I told you so!" in 2020 (or sooner) when they become a reality. You, of
course, will have the same right if they don't show up.
> Regardless of this, it doesn't matter a damn to FT games whether the
Agreed. However, the discussion was in regard to Jon's FT background
guidelines for fiction writers, in which case it does matter.
> Then of course there is the problem of shielding your drones AI
Hmmm. You've got a good point there. Still, space is a pretty harsh
environment. Operations near a star will make an EMP burst look like a sneeze.
I think they'll already be fairly well shielded. Besides, you just need an
electrical shield, not an armoured one. A computer in an electrified metal
sphere will survive most EMP problems. Oh, and anything that will take out the
computer will also take out the avionics package of a manned fighter. Weight
advantage goes back to the unmanned fighter...
> Again the problem of course is predictability. If every pilot has access
Oh, I STRONGLY disagree. Here's an example. We all have access to exactly the
same knowledge: the Full Thrust rulebook. Does everyone react the same way
with the same ship designs? No! The chip gives the human the knowledge needed
to fly a ship. How he applies that knowledge is his business.
I've been rethinking this, though. It probably wouldn't work for another
reason. Manual dexterity (such as touch typing) requires the formation of
neural pathways in the brain. So does fighter piloting. Even if you knew
exactly how to fly the fighter, the rest of the brain hasn't developed the
neural pathways to guide your hands. Slotting in a chip would only work if you
overrode most of the brain. We're talking cyborgs here, which is way off what
Jon is suggesting.
> Depends on your story. Didn't train them for very long during WWI. And
Other way around. WWI they got something like 25 hours of training, often less
than 10 in an actual airplane. Result: staggeringly high death rates. Canada
had one of the lowest death rates for new pilots of any nation. The
reason was that Canada looked for recruits who had civilian flight experience.
The British took people based on class. As fighters have become more
sophisticated, pilot training times have increased. Jon's universe would
probably see training times closer to that around the Vietnam era, which was
still much longer than WW2.
> If current
Now THAT is a neat concept. That could cut the learning time drastically. But
wouldn't the nanomachines rewire the neural pathways the same way, thus giving
you the "everyone thinks alike" problem you mentioned above?
> All seems fair- but why not make 'yellow two' and 'orange' the same
One reason: weapons lock. Yellow two is everyone on edge but with weapons on
command lock. Orange actually has the weapons turned off. It's safer to have a
different alert level. That's why I called it ORANGE. Kind of like yellow or
red, but not quite. Could have been something like Purple or Blue, I guess.
> Hmm.. Current physics research implies translight communication is
I think so. Nations managed their fleets without radio for many, many years,
even after the invention of steamships.
On Tue, 10 Feb 1998 15:59:50 -0500, Thomas Barclay
> <Thomas.Barclay@sofkin.ca> wrote:
> Space is NOT emptier than southern Saskatchewan. I've been there. If
Yes, but space is curved while Saskatchewan is flat. Isn't Saskatchewan's
motto: Can't Die from Falling?
> The more we
Thank you!
> HOWEVER, that might not be interesting for our gaming purposes, and
Which is the reason I came up with the interference reason for not putting
computers on a fighter. I've been working on an SF background of my own which
has entirely DIFFERENT reasons for putting
humans--instead of AI--in combat situations.
But if Jon comes back with, "I like humans in fighters. No reason, I just like
it. Live with it," I'll mutter under my breath and just skirt the issue if it
comes up.
On Tue, 10 Feb 1998 23:26:31 -0600, jfoster@kansas.net (Jim 'Jiji'
> Foster) wrote:
> At a tactical level, it wouldn't affect the
Well put. Conflict is interesting. It creates scenario and story ideas. I like
the inherent conflict of nations using 23rd century technology with 19th
century communications. It allows for things like the Battle of New Orleans
(the War of 1812 was over before the battle was even fought).
> >Depends on yor story. Didn't train them for very long during WWI. And
A number of VERY high scoring German pilots can support this statement.
Bye for now,
Would one of you folks be so kind as to send me the original background, or
repost it to the list? I seem to have overlooked it, or not received it
somehow.
Cheers!
> tunnelling, particle spin reversion and things like that. I would say
> society - could the US manage it's fleets without radio?
[cut]
> Plus, gives that good, old-fashioned 'High Colonial' feel to the game.
:)
One of my favorite fiction backgrounds is the Polesotechnic League (early
fiction by Poul Anderson). This background has multiple sentient species and
a galactic range. But no FTL communications. They use small courier torpedoes
(micro FTL engine, guidance system, and a recording device) to send messages
over FTL distances.
More rambles....
> Allan Goodall wrote:
There's a joke about Saskatchewan I once heard: "A farmer living in
Saskatchewan watched his dog run away for three days."
(8-)
> >The more we
I just thought of something; brute force solutions are only practical when the
algorithm has access to all the data required for processing. (Kasparov vs.
Deep Blue II comes to mind, where DB II was optimized to match and beat
Kasparov at his own game.)
Now, what happens to computer pilots when their radar is jammed? Or fails? Or
returns ghost signals?
I guess my point is: in a 1 v 1 scenario, head-on, no positional
advantage, the computer pilot does have the edge. But how many times does this
happen in warfare?
I don't know; if there was a radical breakthrough in AI perhaps (and, once
again, since this is all future speculation, we can speculate
whatever the hell we want (8-) ), but brute force computations aren't
the answer for a combat AI. Perhaps some kind of optimized neural
net...
The nightshift is taking its toll... (8-)
J.
> I guess my point is: in a 1 v 1 scenario, head-on, no positional
Well in B5 it happens all the time. <g> but then again max effective range of
TV sciFi weapons is only about 400 meters.
> Again the problem of course is predicability. If every pilot has
Your example isn't extreme enough. For the idea of 'chip memory' it's more a
case of 'having read the same rulebook in the same room at the same time and
all played the same games rolling the same dice for the same result'.
This sort of plug-in memory is at that level - if you and I plug in the
same chip, we not only have the same knowledge, but we also have the same
experiences and (possibly) the same thought patterns. Small variations in the
processing of that mean I might veer 25 degrees left when you veer 30, but we
both veer left. Which is pretty bad when facing a scattergun equipped Kra'Vak.
> I've been rethinking this, though. It probably wouldn't work for
This is the point I have always made about this - skills which have a
non-cognitive component (say ballet rather than French vocab) require
more than just knowledge. You could give me the 'knowledge' of a world-class
javelin thrower but I couldn't compete at an Olympic level because I don't
have the muscle mass for it. These 'implant' devices can speed training, and as
I said nanotechnology could possibly even build me the muscle mass (following
the example) in short time, but I don't think we will get to the point where
Joe Bloggs can plug in a chip and be a champion fencer all of a sudden.
> Now THAT is a neat concept. That could cut the learning time
Yes. But then what's the difference between nano-rewired memory and
silicon on a plug-in? Hmm.
> All seems fair- but why not make 'yellow two' and 'orange' the same
But wouldn't that mean changing the bulb?:)
> I think so. Nations managed their fleets without radio for many, many
Gaaahck. Someone pointed out to me that the two largest 'empires' this planet
has seen both happened before the combustion engine was invented, never mind
the radio. D'oh.
TTFN
Jon
> At 17:17 11/02/98 -0500, you wrote:
SNIP
> One would presume that if your ship computing core was something
Then again, an organic computer is only going to get cancer if its cells have
a replication system; most brain tumours involve the support system (glial
cells etc.) rather than the neurons themselves, and these
could be replaced with non-cellular, essentially machine, components.
Personally, I prefer human/whatever fighter pilots for the same
reason I prefer cinematic ship movement- fun. Drone fighters don't have
the same kind of involvement (unless they're fully sentient types such as
those of the Culture).
Rob
> On Thu, 12 Feb 1998, Jerry Han wrote:
> Now, what happens to computer pilots when their radar is jammed? Or
Well, if it's one of the reactive/subsumptive networks I spend a lot of
time hacking on and developing at work and for my own amusement, it reacts a
whole /heck/ of a lot faster than a human operator to the change in its
input quality (after all, you'd be an idiot not to put in a detection module
that does nothing but recognize when a sensor input is of diminished quality,
which would then remove some of the activation energy that modules depending
on that data receive...) and proceeds to 'open a
keg of whup-ass' on its opponents.
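The sensor-quality watchdog idea described above could be sketched roughly like this (all names and the quality metric are invented for illustration, not any particular robotics framework):

```python
# A watchdog rates sensor input quality and scales down the "activation"
# of behaviours that depend on that sensor, so jammed or failing sensors
# automatically lose influence over the network's choices.

def input_quality(readings):
    """Crude quality metric: fraction of readings that are not None."""
    good = sum(1 for r in readings if r is not None)
    return good / len(readings) if readings else 0.0

def weighted_activation(base_activation, readings):
    """Scale a behaviour's activation by the quality of its sensor feed."""
    return base_activation * input_quality(readings)

clean  = [1.2, 0.9, 1.1, 1.0]
jammed = [1.2, None, None, 1.0]
print(weighted_activation(1.0, clean))   # -> 1.0
print(weighted_activation(1.0, jammed))  # -> 0.5
```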
> I guess my point is: in a 1 v 1 scenario, head-on, no positional
Never, which is why reactive/subsumptive/spreading-activation networks
are becoming such a hot topic in robotics these days along with hybrid designs
which incorporate traditional AI techniques alongside the newer
less-hierarchical styles.
A human is just a really big parallel neural network, after all. There's no
reason to believe that there's some mystical falderal which will
forever separate the performance of humans and machines, /especially/ in
high-stress, limited-field areas like combat via machines themselves,
which depend heavily on computational power in the first place.
> whatever the hell we want (8-) ), but brute force computations aren't
The AI community hasn't done simple brute-force computation for over a
decade now. Wake up and smell the 90's. :)
> > >Space is NOT emptier than southern Saskatchewan. I've been there. If
> > >you look in the dictionary under 'empty', that is what you see.
... but you can die from Boredom. (heh. I lived in Southern AB so I can speak
with some authority....):)
> Now, what happens to computer pilots when their radar is jammed? Or
Assuming something like modern tech, where you either use A) structured
programming or B) neural nets: with the "radar is jammed" scenario played
out in A), the routine for "radar jammed" would be invoked and the fighter
would carry out the appropriate code in there, which involves using other sensors
or being just as blind and random as a human. It would include a true
white-noise random number generator to be used in helping to make
decisions (to simulate human randomness). It would include a lot of options
with weightings. If it had the compute cycles, it could evaluate each and
every one of these options and follow the best course, or choose randomly
between half a dozen that give near equal results.
In case B), the neural net would have been trained with some arbitrarily large
sample of these types of events, and
would do its neural net magic and pick an appropriate response - note
that with neural nets you do not program in responses, you train the net and
it responds as it thinks it should. You then evaluate the results until you
think you have a good neural net. Then you implement that net. Same result
with the other options. Basically, what you need is a lot of development
money, and a bunch of smart fighter jocks and engineers to figure out every
scenario that could happen (or at least 95% of them). The interesting thing
about a neural net is that if you have an unknown scenario, based on ones it
has seen it can still come up with a response.
Unspoken issue: How fast can you train a neural net in 2300?
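Case A above, with weighted options and a random choice among near-equal results, might look something like this sketch (the option names, scores, and tolerance are invented for illustration):

```python
import random

# Score every option, then choose at random among the options that come
# within a tolerance of the best score -- simulating human-like
# unpredictability without sacrificing quality of decision.

def choose_response(options, tolerance=0.1, rng=random):
    """options: dict of name -> score. Returns one of the near-best names."""
    best = max(options.values())
    candidates = [name for name, score in options.items()
                  if best - score <= tolerance]
    return rng.choice(candidates)

options = {"climb": 0.90, "break_left": 0.85, "extend": 0.40}
pick = choose_response(options)
print(pick in ("climb", "break_left"))  # -> True
```

Swapping `rng` for a true white-noise source, as suggested above, would make the tie-break genuinely unpredictable rather than pseudo-random.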
> I guess my point is: in a 1 v 1 scenario, head-on, no positional
In gaming, all the time. (heh)
> [quoted text omitted]
/************************************************
Oooo, a debate! (8-)
> Alexander Williams wrote:
I guess my point above was that computer AI (granted, this can be 'done away
with' in the Universe, as much as anything else) will probably never achieve
the same amount of flexibility as a human mind. (This is not the same as
'intuition' or 'creativity', but closely related.)
> From what work I've done in AIs (granted, more theoretical than
Once you go out of parameters, though, they go boom. Even 'reactive'
networks (what's your design model, if you don't mind me asking (8-) )
have points where the system fails, due to stress, programmer error, invalid
inputs, what have you.
Don't get me wrong; I'm a big fan of automation. Personally, I prefer
automated combat systems when possible. However, I also believe in the
'friction' of war, which means you stick in redundancies wherever possible.
That means you're always probably going to have human pilots, until you can
get AIs that think like humans.
At that point, well... Dahak (from Weber's 'Fifth Imperium' trilogy)
can command my fighters any day! (8-)
> A human is just a really big parallel neural network, after all. There's
> no reason to believe that there's some mystical falderall which will
Agreed. But, given the high hopes AI had in the 50s, and the rather bitter
reality it faces now, it's going to be a while before science fact matches
science fiction. (Of course, since it appears current research in AI is
proceeding along specialized as opposed to generalized lines, I may have
just put egg on my face, especially when it comes to combat AI, something
that has been an active field of research for the last fifteen years or so.)
> > whatever the hell we want (8-) ), but brute force computations
Oh, hell, I know, I was just answering somebody else's comment.
I haven't kept current, but, at least the time I did my education/
research, Neural Nets seemed to be the most promising, if you could get around
their long training time. Of course, since this was about three or four years
ago, I'm already hopelessly out of date.
(8-)
> "Here at Ortillery Command we have at our disposal hundred megawatt
I like. (8-)
J.
> A human is just a really big parallel neural network, after all. There's
> no reason to believe that there's some mystical falderall which will
Hmm. I'm in favor of your POV, but it may be the human is a tad more complex
than the model suggested. For us to assume we know what the brain is or what
a human is remains (as of yet) inherently questionable.
But in the long run machines COULD be made smarter than man. People seem
bent on making machines like SMARTER men. What isn't being asked enough is
WHY? Or whether it is a good idea or a desirable thing? Or
if we make self-aware machines, other than to say we've done it,
don't we have to treat them as sentients, give them rights, etc. and thus
remove their utility as slaves? I don't want to see us build robots and AI
systems to REPLACE men (or more importantly, women!) or make
BETTER-than-human creatures (although some seem set on this). It would seem
more to the point to
make smart windows that won't slam on kids heads, smart stoves that cook you
dinner but don't burn stuff, medical expert systems which help diagnose and
treat people so they live a long healthy life, etc. which collectively improve
our quality of life. If we're stuck on the morality of cloning and
regeneration, we're equally far behind on the ethics and morality of AI and
self aware machines. (Yes this is tangential, but relevant if you are
discussing AI in the future.... maybe universal sentient rights make it
immoral to involuntarily involve another intelligence in conflict without its
permission, hence the only AI fighters could be voluntary ones... rather than
just ones you built....). Remember, the smarter you make a machine, the closer
it will be to going "Who the heck are you to be giving me orders?" or "Why
should I do this for you who just created me to die or to do labour for you?".
> The AI community hasn't done simple brute-force computation of over a
And yet a number of large scale computing problems (Deep Blue, some math
proofs) have recently taken advantage of massively parallel supercomputers
to solve by exhaustive cases things that could not be done any other way, or
to beat a human opponent. Don't rule out brute force if you have the compute
cycles. The phrase KISS still has meaning, even in 2300.
Tom.
> On Thu, 12 Feb 1998, Jerry 'Ghoti' Han wrote:
> Oooo, a debate! (8-)
Nah, just an exchange of broadsided opinions at long range.
> I guess my point above was that computer AI (granted, this can be
I just don't see that; in the big picture? Certainly correct. In a limited
domain, in which there are a limited number of responses? (And, let's be
honest, there are only so many things you can do in command of a fightercraft
in 0g.) Certainly incorrect.
This does imply I think fightercraft combat is an essentially 'solvable'
problem, yes. That is to say, I believe that for any set of initial
conditions, a set of actions can lead to consistent victories within minor
variances between initial conditions.
> >From what work I've done in AIs (granted, more theoretical than
Currently, I'm putting together (for fun; I can't tell you about the at-work
stuff, DEC'd have my intestines :) some modular spreading-activation-based
networks with functional sub-modules for a sort of Crobots-inspired
combat/tank game. The idea being that one implements a set of sensor-units
(functions which put/remove state info to an inter-tank blackboard), a set
of module/expertise-units (functions which trigger based on activation
energy, which carry out some function, and which may be an activation
network themselves) and a set of goal-units (states which feed energy
backwards through the network just as sensed-states send energy forwards).
Choosing a sufficiently general set of expertise modules and goals allows
such a network to 'generalize' about its situation and respond
appropriately. They don't end up having 'parameters' as such for a scenario
to /match/; the idea is to give them the tools to work out locally best
solutions and have an emergently /best/ solution come up out of that. It's
bottom-up logical construction rather than traditional top-down, and I find
that, if it's pursued vigorously, it leads, as a logical framework, to good
solutions to lots of typically 'hard AI' problems, generalized 0g fighter
combat being one of those.
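A highly simplified single-cycle sketch of that scheme. All names (the
sensor, the modules, the goal, the threshold values) are invented for
illustration; a real design would spread energy over many cycles and many
units:

```python
blackboard = {}   # shared state the sensor-units write into
activation = {}   # energy accumulated by each expertise module

def sensor_enemy_near(world):
    # Sensor-unit: posts state info and feeds energy *forward* to modules.
    if world.get("enemy_range", 999) < 10:
        blackboard["enemy_near"] = True
        activation["evade"] = activation.get("evade", 0) + 0.6
        activation["fire"] = activation.get("fire", 0) + 0.4

def goal_survive():
    # Goal-unit: feeds energy *backward* toward modules that serve it.
    activation["evade"] = activation.get("evade", 0) + 0.3

def step(world):
    sensor_enemy_near(world)
    goal_survive()
    # Modules over threshold may fire; highest activation wins this cycle.
    hot = {m: e for m, e in activation.items() if e >= 0.5}
    return max(hot, key=hot.get) if hot else None

chosen = step({"enemy_range": 5})
```

Here "evade" emerges (0.6 forward + 0.3 backward) without any scenario
having been programmed in as a routine, which is the bottom-up point.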
> redundancies wherever possible. That means you're always probably
Why think /like/ humans? Given that humans are just big AIs themselves,
shouldn't you strive to go 'outside their parameters' and turn up alien
solutions to the current situation, so that humans are at a disadvantage?
We have millions of years of evolution to /unlearn/ about space combat;
drone minds do not.
> Agreed. But, given the high hopes AI had in the 50s, and the rather
GA Tech has some almost frightening research on automated combat AIs; they
currently have a HUMMV which drives itself and can follow general directives
from command and pursue their accomplishment. More, they have
a simulator that /you can run on your Linux box at home/. That's fairly
low-horsepower compared to systems the military /might/ be putting in
drone craft.
Someone once said 'specialization is for insects, humans are meant for
better.' They neglected to notice that insects make up more biomass on
this planet than any other phylum. Specialization /works/.
> Oh, hell, I know, I was just answering somebody else's comment.
NNets are a right bugger unless your problem is almost purely a reactive
one, they don't do changing goal-seeking well at /all/. On the other
hand, they make great expertise modules for activation networks since they're
quite good at 'making things happen' which can then be handed over to another
module who does a different thing that leads, eventually, toward the goal.
> On Thu, 12 Feb 1998, Thomas Barclay wrote:
> Hmm. I'm in favor of your POV, but it may be the human is a tad more
You're appealing to a line of reasoning that is inherently self-defeating.
Saying 'I don't think we can know what this is' says nothing about our
ability to build something that does a function better. We
still don't know everything about the wing-operation of birds and yet I
can hop a plane to Colorado Springs in an hour and be there soon after, faster
than Canadian Geese can wing their way.
> But in the long run machines COULD be made smarter than man. People
See my previous post; I don't advocate making a computing center that's a
'better man,' but a 'better pilot,' which is a far more specialized thing.
I'll be happy if I can write an AI which uses laser-beaconing to
communicate short-range with its flock-mates who swarm upon the enemy
and use utterly alien tactics to destroy them.
> if we make self-aware machines, other than to say we've done it,
There is a line of philosophical thought (and one I largely share) that
suggests that self-awareness is a dead-end evolutionary track, that it is
not a sufficiently powerful tool, in and of itself, to continue indefinitely
in the human genome save as a vestigial reminder of what we once were (are,
temporally).
Why build a self-aware fighter pilot who can say 'no' when I say 'blow up
that hospital'? It's a straw-man argument, since self-awareness is /not/
a necessity for an entity to be better at a task than a human.
> the smarter you make a machine, the closer it will be to going "Who
Depends on how that machine-mind's constructed. Consider that it's
perfectly possible to brainwash a human into /not/ asking the above
questions. How much easier a mind that never had an opportunity to think
anything else?
> And yet a number of large scale computing problems (Deep Blue, some
If you have the compute cycles, being smart /and/ fast beats just being
smart.:)
> Alexander Williams wrote:
Mr. London, fly "Engage Enemy More Closely" (8-)
> > I guess my point above was that computer AI (granted, this can be
> (And, let's be honest, there are only so many things you can do in command
I'm not so sure. We have a fighter, with vectors indicating thrust, velocity,
and direction (assuming thrust and direction can be generated
off-axis), with six axes of freedom. We have a selection of weapons and
defences. We may be flying by ourselves, or we may be flying in teams.
We may engage multiple targets, or single targets. These multiple targets
may be escorting a must-kill target, or it might be a fighter sweep, or
it might be a full up alpha strike.
While you can probably reduce the options a fighter has at any given moment
to a given set of maneuvers (which strikes me as being limiting), thus
simplifying the AI problem there, you then face the thousands of different
possible combat scenarios, many of them unforeseen. This is where the human
flexibility comes in.
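The state space described above can be sketched as a pair of data
structures; every field name here is invented for illustration, but even
this cut-down version shows how fast the scenario space multiplies:

```python
from dataclasses import dataclass, field

@dataclass
class Fighter:
    position: tuple        # (x, y, z)
    velocity: tuple        # (vx, vy, vz)
    facing: tuple          # orientation; thrust may be applied off-axis
    weapons: list = field(default_factory=list)
    defences: list = field(default_factory=list)
    wingmen: list = field(default_factory=list)   # empty if flying solo

@dataclass
class Engagement:
    ours: list             # one fighter or a team
    targets: list          # single target or many
    mission: str           # e.g. "escort-kill", "fighter-sweep", "alpha-strike"

# A head-on 1 v 1 fighter sweep as one point in that space:
duel = Engagement(
    ours=[Fighter((0, 0, 0), (0, 0, 1), (0, 0, 1))],
    targets=[Fighter((0, 0, 50), (0, 0, -1), (0, 0, -1))],
    mission="fighter-sweep",
)
```

Each continuous vector, each weapon load, and each mission type multiplies
the combinations, which is the combinatorial problem the AI faces.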
A possible compromise here would be the use of the AWACS or Ground Control
model, where the actual fighters are drones, but they are under the close
supervision of a human aboard some sort of control ship. This raises
problems of C-cubed, but it can be dealt with, and offers the best
advantages of both. (This reminds me of another short story I read in another
anthology, about a border outpost commander who had at his
command 40 'doggies' - autonomous combat drones that he could direct,
but the actual engagement details were left to the drones' AI.)
> This does imply I think fightercraft combat is an essentially
Essentially then, you're assuming that from a given engagement scenario, each
side will use the same actions, because those actions are, in some sense
'optimal.' I'm worried about divergence though; I believe that combat, by
nature, is a chaotic system and extremely sensitive to perturbations in the
initial conditions.
Damn, this is starting to remind me of the USSR's 'Battle Calculus' studies
back in the 60s. (8-)
> > redundancies wherever possible. That means you're always probably
Point taken here. I should have been clearer; AIs that can emulate human
intuition, creativity and flexibility, even if the solutions developed by the
system are considered 'alien' to human mindsets. For example, I fully expect
AIs to be capable of 3D analysis very easily (especially in
terms of situational awareness) whereas, in humans, 3D (or even n-D)
spatial analysis is a particularly tough skill to develop.
> Someone once said 'specialization is for insects, humans are meant for
Agreed. If my AI is supposed to design starships, I don't give a damn
if it can appreciate Mozart. (8-)
J.
> On Fri, 13 Feb 1998, Jerry Han wrote:
> Mr. London, fly "Engage Enemy More Closely" (8-)
As a dedicated TOGgie, I'm smart enough to keep my firing line looping back
further away so I can stay at range and apply proper broadsides.
:)
> While you can probably reduce the options a fighter has at any given
Which is why you build the reactive system bottom up, so that larger
'maneuver clusters' arise spontaneously from reactive acts. If you design
the system to recognize successful results and weight them a bit more
heavily in the future, then you begin to cut down the space of possibilities
to a manageable level. Remember, we're not talking about old-style
traditional AI here but reactive networks; you don't /have/ 'routines' of
reactions, you react to environmental conditions and work toward goals.
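The weight-successful-results-more-heavily idea can be sketched like so;
the maneuver names and the win/lose signal are invented, and a real
reactive net would adjust many low-level weights rather than whole
maneuver clusters:

```python
import random

# Each emergent maneuver cluster starts with equal weight.
weights = {"drift_and_snipe": 1.0, "corkscrew_close": 1.0, "split_pair": 1.0}

def pick_maneuver():
    # Weighted random choice: heavier clusters come up more often.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name  # floating-point edge case: fall through to the last one

def reinforce(name, success):
    # Successful outcomes raise a cluster's weight; failures lower it,
    # shrinking the effective search space over many engagements.
    weights[name] *= 1.2 if success else 0.8

reinforce("corkscrew_close", success=True)
```

Over many engagements the distribution concentrates on what works, without
any 'routine' ever being written down explicitly.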
> A possible compromise here would be the use of the AWACS or Ground
At some point, of course, humans will be involved, but it's far more likely to
be at the strategic level than the tactical once the amount of data that has
to be juggled surpasses human capacity. In many cases, we're
getting awfully close to that /now/, leaving more and more in the hands
of automated systems given overall directives instead of micromanaged orders.
> Essentially then, you're assuming that from a given engagement
Not necessarily; a given set of tactical choices lead to a retaliatory set of
actions. These very well may be very different from the attacker's
set, in fact, odds are they /will/ be.
> Agreed. If my AI is supposed to design starships, I don't give a damn
And if my AI is supposed to fly a fighter, I don't care if it writes haiku in
its downtime.
Of course, in an obFT sense, this sidesteps the issue: what's flying a
fightercraft doesn't matter. FT is a descriptive, not a prescriptive,
system. Whether every fighter is piloted by a single human tucked into a
command vessel safely off the map (represented by the player) with all ships
following his dictates, or whether the fiction is that each is separately
piloted and commanded, it makes no difference to FT.
(Personally, if FT is really about humans piloting, it needs, first and
foremost, /morale rules/ like SGII and DSII, to represent human foible.)