Alan's Satellite Facts

2 posts · Dec 23 2002 to Dec 24 2002

From: Thomas Barclay <Thomas.Barclay@s...>

Date: Sun, 22 Dec 2002 21:42:17 -0500

Subject: Alan's Satellite Facts

Killer Electrons? Sounds like the name of a Punk Band.

Though I totally agree with you about how tough stuff will *have* to be
for space travel not to be a crapshoot or a death wish, this also tells
me that the weapons of the day (in order to disable or destroy this
level of system redundancy and capability) will have to be fairly
potent. They will have to be able to at least partially overcome this
superlative recovery capability.

Tomb

PS - Triple redundancy is good, if implemented
correctly. I recall hearing about redundant hydraulic control lines on
some planes where all three sets route through the same area of the
plane at one point (and guess what happens when that area of the plane
is damaged?). Good to see someone who understands distributed systems
and the dangers of homogeneity (one method of doing things used all
over becomes a real nasty situation if that one method has a flaw...
reference: MS IE, MS OSs, etc).
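
A minimal sketch of that common-mode trap, with everything invented
for illustration (channel names, zones, values - nothing from a real
aircraft system): triple redundancy with a majority vote survives one
hit only if the three channels don't share a routing zone.

# Hypothetical sketch: triple modular redundancy with a majority vote.
# The "zone" field models physical routing; if all three channels
# share a zone, one hit defeats the redundancy (the hydraulics case).

from collections import Counter

class Channel:
    def __init__(self, name, zone):
        self.name = name
        self.zone = zone      # physical area the line routes through
        self.failed = False

    def read(self, true_value):
        return None if self.failed else true_value

def vote(channels, true_value):
    readings = [c.read(true_value) for c in channels if not c.failed]
    if not readings:
        return None           # total loss: no surviving channel
    return Counter(readings).most_common(1)[0][0]

def damage_zone(channels, zone):
    for c in channels:
        if c.zone == zone:
            c.failed = True

# Badly routed: all three lines pass through zone "A".
bad = [Channel(f"hyd{i}", "A") for i in range(3)]
damage_zone(bad, "A")
print(vote(bad, 42))          # None -- redundancy defeated by one hit

# Properly separated routing survives the same hit.
good = [Channel(f"hyd{i}", z) for i, z in enumerate("ABC")]
damage_zone(good, "A")
print(vote(good, 42))         # 42 -- two of three still agree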

From: Alan and Carmel Brain <aebrain@w...>

Date: Tue, 24 Dec 2002 13:02:21 +1100

Subject: Re: Alan's Satellite Facts

From: "Thomas Barclay" <kaladorn@magma.ca>

> PS - Triple redundancy is good, if implemented correctly.

...and such a mistake is really easy to make. One way of avoiding it is
to come up with "threat scenarios": in the case above, "what happens if
*this* area gets damaged?" But even there, there are pitfalls.
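
One way to mechanise that question, as a sketch (the functions,
channels, and zones below are made up): enumerate every zone and flag
any function whose redundant channels all route through it.

# Hypothetical "threat scenario" check: for each zone, ask which
# functions lose *all* their redundant channels if that zone is hit.

channels = {
    # function: list of (channel, zone it routes through)
    "hydraulics": [("hyd1", "A"), ("hyd2", "A"), ("hyd3", "A")],
    "telemetry":  [("tlm1", "B"), ("tlm2", "C")],
}

zones = {z for chans in channels.values() for _, z in chans}

for zone in sorted(zones):
    for function, chans in channels.items():
        surviving = [c for c, z in chans if z != zone]
        if not surviving:
            print(f"hit to zone {zone} kills ALL channels of {function}")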

An interesting Operational Research analysis of planes downed in WW2
led to a dramatic redistribution of armour. At first, seeing so many
planes coming home with damage to area X, they beefed up the protection
there. But this didn't help, because: a) planes hit in area X usually
got home safely, even if damaged, and b) planes hit even slightly in
area Y were lost without trace - so the returning planes only ever
showed the survivable damage. Eventually they saw the light, actually
took armour off area X and put it on Y, and losses dropped markedly.
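
A toy simulation of that trap, with made-up hit and survival
probabilities: counting damage only on the planes that come back points
at exactly the wrong area to armour.

# Assumed numbers for illustration only: hits to X are survivable,
# hits to Y are usually fatal.

import random
random.seed(1)

returned = {"X": 0, "Y": 0}
lost     = {"X": 0, "Y": 0}

for _ in range(10_000):
    area = random.choice("XY")             # where the plane is hit
    survives = random.random() < (0.9 if area == "X" else 0.1)
    (returned if survives else lost)[area] += 1

print("damage seen on returning planes:", returned)  # mostly X
print("damage on planes that were lost:", lost)      # mostly Y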

> Good to see someone who understands

Actually, homogeneity is something I use a lot:
get something working, then re-use it everywhere.
The idea is that any error will quickly be found in testing; it will be
totally obvious. Too much heterogeneity means there are just so many
different things to test that you don't get good test coverage. Do
something unusual, and you're in uncharted (i.e. untested) territory.
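
As a sketch of the pay-off (an invented example, not FedSat code): if
every store in the system is the same implementation, every test of any
one store exercises the code path all the others will use.

class Store:
    """One implementation, reused for telemetry, logs, LDT, etc."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def put(self, item):
        if len(self.items) >= self.capacity:
            self.items.pop(0)              # drop oldest on overflow
        self.items.append(item)

    def get_all(self):
        return list(self.items)

# Every store is just another instance: testing the telemetry store
# also tests the code the LDT store runs.
telemetry = Store(capacity=1000)
ldt       = Store(capacity=4000)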

Of course, you then add the redundancy. e.g. on FedSat we can get
stored telemetry data off the satellite by:

a) (Preferred Method) DMA access of message blocks.
b) (Backup when transmission is spotty) Individual extraction of
individual messages from the telemetry data stores.
(These two are the usual methods.)
c) (Tertiary #1) Generalised data dump of a section of memory, using
the diagnostics routines. Needs to be done 3 times for the triplicated
memory.
d) (Tertiary #2) Hardware access to individual bytes, bypassing the
software. Slow. Difficult.
e) (Tertiary #3) Re-map the telemetry stores as large data transfer
stores, then use the large data transfer service code. This can be done
by just poking 3 values into a table, using the diagnostics - though
you'd temporarily lose the use of the LDT store you're misusing. Needs
to be done 3 times too. (See the sketch after this list.)
f) (Tertiary #4) Upload a new program, depending on the circumstances.
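
Not FedSat's actual tables, obviously, but a sketch of how (e) might
look, with invented names and addresses: assume the LDT service reads
its stores through a descriptor table of (base, length, flags) entries,
so "re-mapping" is poking the telemetry store's 3 values into one slot.

TELEM_BASE, TELEM_LEN = 0x1000, 256     # where telemetry really lives

ldt_table = {
    # slot: [base_address, length, flags] -- all values invented
    0: [0x8000, 4096, 0x1],
    1: [0x9000, 4096, 0x1],
}

def poke(slot, index, value):
    """Diagnostic-style single write into the descriptor table."""
    ldt_table[slot][index] = value

# Misuse LDT slot 1: three pokes, and the LDT service code will now
# transfer the telemetry store's bytes instead. Slot 1's real store is
# unavailable until the entry is restored. On the real triplicated
# memory this would be done 3 times, once per copy.
poke(1, 0, TELEM_BASE)
poke(1, 1, TELEM_LEN)
poke(1, 2, 0x1)

print(ldt_table[1])   # [4096, 256, 1] -> now points at telemetry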

Essentially, make things homogeneous: then have every
part capable of doing 2-3 different jobs at a pinch.