Computers in Spaceflight: The NASA
Experience
Chapter Four
Computers in the Space Shuttle Avionics System

Evolution of the shuttle computer system

Planning for the
STS began in the late 1960s, before the first moon landing. Yet,
the concept of a winged, reusable spacecraft went back at least to
World War II, when the Germans designed a suborbital bomber that
would "skip" along the upper atmosphere, dropping bombs at low
points in its flight path. In America in the late 1940's, Wernher
von Braun, who transported Germany's rocket knowhow to the U.S.
Army, proposed a new design that became familiar to millions in
the pre-Sputnik era because Walt Disney Studios popularized it in
a series of animated television programs about spaceflight. It
consisted of a huge booster with dozens of upgraded V-2 engines in
the first stage, many more in the second, and a single-engine
third stage, topped with a Shuttle-like, delta-winged manned
spacecraft.

Because the only reusable part of the von
Braun rocket was the final stage, other designers proposed in its
place a one-piece shuttle consisting of a very large
aerospacecraft that was intended to fly on turbojets or ramjets in
the atmosphere before shifting to rocket power when the
atmospheric oxygen ran out. Once it returned from orbit, it would
fly again under jet power. However, the first version of the
reusable spacecraft to actually begin development was the Air
Force Dyna-Soar, which had a lifting body orbital vehicle atop a
Titan III booster. That project died in the mid-1960s, just before
NASA announced a compromise design of desirable features: the
expensive components (engines, solid rocket shells, the orbiter)
to be reusable; the relatively inexpensive component, the external
fuel tank, to be expendable; the orbiter to glide to an unpowered
landing.[3]

The computer system inside the Shuttle
vehicle underwent an evolution as well. NASA gained enough
experience with on-board computers during the Gemini and Apollo
programs to have a fair idea of what it wanted in the Shuttle.
Drawing on this experience, a group of experts on spaceborne
computer systems from the Jet Propulsion Laboratory, the Draper
Laboratory (renamed during its Apollo efforts) at MIT, and
elsewhere collaborated on an internal NASA publication that was a
guide to help the designer of embedded spacecraft
computers.[4] Individuals contributed additional papers and
memos. Preliminary design proposals by potential contractors also
influenced the eventual computer system. In one, Rockwell
International teamed up with IBM to submit a
system.[5] Previously, in 1967, the Manned Spacecraft Center
contracted with IBM for a conceptual study of spaceborne
computers,[6] and two Huntsville IBM engineers did a
shuttle-specific study in 1970.[7] Coupled with IBM's Gemini and Saturn experience, the
Rockwell/IBM team was hard to beat for technical expertise. NASA
also sought further advice from Draper, as it was still heavily
involved in Apollo.[8] These varied contributions shaped the final form
of the Shuttle's computer system.

There were two aspects of the computer
design problem: functions and components. Previous manned programs
used computers only for guidance, navigation, and attitude
control, but a number of factors in spacecraft design caused the
list of computable functions to increase. A 1967 study projected
that post-Apollo computing needs would be shaped by more complex
spacecraft equipment, longer operational periods, and increased
crew sizes9. The study suggested three approaches to handling
the increased computer requirements. The first assigned a small,
special-purpose computer to each task, distributing the processes
so that the failure of one computer would not threaten other
spacecraft systems. The second approach proposed a central
computer with time-sharing capability, thus extending the concepts
implemented in Gemini and Apollo. Finally, the study recommended
several processors with a common memory (a combination of the
features of the first two ideas). This last concept was so popular
that by 1971 at least four multiprocessor systems were being
developed for NASA's use.[10]* The greater appeal of multiprocessors, and the
production of the dual computer system for Skylab, displaced the idea
of using simplex computer systems, which could not be counted on to
be 100% reliable on long-duration flights.

On a more detailed level than the overall
configuration, experts also realized that increased speed and
capacity were needed to effectively handle the greater number of
assigned tasks.[11] One engineer suggested that a processor 50% to
100% more powerful than first indicated be
procured.[12] This would provide insurance against the capacity
problems encountered in Gemini and Apollo and be cheaper than
software modifications later. A further requirement for a new
manned spacecraft computer was that it be capable of
floating-point arithmetic. Previous computers were fixed-point
designs, so scaling of the calculations had to be written into
the software. Thirty percent of the Apollo software development
effort was spent on scaling.[13]
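
The scaling burden described above can be sketched in a toy example (Python is used purely for illustration; the actual flight software used hand-scaled binary arithmetic, and the scale factor here is invented). On a fixed-point machine, the programmer, not the hardware, must track where the binary point sits in every quantity:

```python
# Toy fixed-point scheme: values are stored as integers with an
# implied scale factor that the programmer must track by hand.
SCALE = 2 ** 14  # hypothetical choice: value stored as integer * 2^-14

def to_fixed(x):
    """Encode a real number as a scaled integer."""
    return int(round(x * SCALE))

def fixed_mul(a, b):
    """Multiply two scaled integers. The raw product carries SCALE
    twice, so one factor of SCALE must be divided back out manually;
    forgetting this step is exactly the kind of error scaling work
    on Apollo had to prevent."""
    return (a * b) // SCALE

def to_float(a):
    """Decode a scaled integer back to a real number."""
    return a / SCALE

v = to_fixed(3.25)
w = to_fixed(0.5)
print(to_float(fixed_mul(v, w)))  # 1.625
```

With floating-point hardware, the multiply would simply be `3.25 * 0.5`; the bookkeeping above is the overhead the Shuttle requirement eliminated.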

One holdover
component from the Gemini, Apollo, and Skylab computers remained:
core memory. Though later mostly replaced by semiconductor memories on IC
chips, core memory was made up of doughnut-shaped ferrite rings.
In the mid-1960s, core memories were determined to be the best
choice for manned flight for the indefinite future, because of
their reliability and nonvolatility.[14] Over 2,000 core memories flew in aircraft or
spacecraft by 1978.[15] The NASA design guide for spacecraft computers
recommended use of core memory and that it be large enough to hold
all programs necessary for a mission.[16] That way, in emergencies, there would be no delay
waiting for programs to be loaded, as in Gemini 8, and the memory
could be powered down when unneeded without losing data.

By 1970, several concepts to be used in
the Shuttle were chosen. One of these was the use of buses, which
Johnson Space Center's Robert Gardiner considered for moving large
amounts of data.[17] Instead of having a separate discrete wire for
every electronic connection, components would send messages on a
small number of buses on a time-shared basis. Such buses were
already in use in cabling from the launch center to rockets on the
launch pads. Buses were also being considered for military and
commercial aircraft, which were becoming quite dependent on
electronics. Additionally, there would be two redundant computer
systems, though no decision had been made as to how the systems
would communicate. In the LEM, the PGNCS had an active backup in
the Abort Guidance System (AGS). This was not true redundancy in
that the AGS contained a computer with less capacity than the AGC,
and so could not complete a mission, just safely abort one. True
redundancy, however, meant that each computer system would be
capable of doing all mission functions.
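
The time-shared bus idea can be sketched as follows (a minimal Python sketch with a hypothetical message format and made-up addresses, not the actual Shuttle bus protocol). Many components share one line by tagging each message with a destination address, instead of running a discrete wire per signal:

```python
# Minimal sketch of addressed, time-shared bus traffic. Addresses
# and the subscriber model here are invented for illustration.
from collections import defaultdict

class Bus:
    def __init__(self):
        # map bus address -> list of attached component handlers
        self.listeners = defaultdict(list)

    def attach(self, address, handler):
        """Attach a component that listens at a given address."""
        self.listeners[address].append(handler)

    def transmit(self, address, word):
        """Place one message on the shared line; only components
        attached at that address act on it."""
        for handler in self.listeners[address]:
            handler(word)

bus = Bus()
received = []
bus.attach(0x1A, received.append)   # e.g. a hypothetical actuator box
bus.transmit(0x1A, 0x3FF)           # one command word on the shared line
bus.transmit(0x2B, 0x000)           # addressed elsewhere; 0x1A ignores it
print(received)  # [1023]
```

The design payoff is in the wiring: adding a component means attaching one more listener to the shared line, not pulling a new discrete wire for every signal.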

Redundancy grew out of NASA's desire to be
able to complete a mission even after a failure. In fact, early
studies for the Shuttle posited the concept of "fail
operational/fail operational/fail-safe." One failure and the
flight can continue, but after two failures the flight must be
aborted, because a further failure would reduce the redundancy below three
machines, the minimum necessary for voting. In the 1970 computer
arrangement, one special-purpose computer handled flight control
functions (the fly-by-wire system), and another general-purpose
computer performed guidance, navigation, and data management
functions. These two computers had twins and the entire group of
four was duplicated to provide the desired layers of
redundancy.[18]
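
Majority voting among redundant machines can be sketched as follows (a simplified Python illustration; the actual Shuttle voting logic compared outputs in hardware and software in ways not detailed here). With three or more healthy computers, the value the majority reports is taken as correct and a dissenting machine is outvoted:

```python
# Simplified majority voter over the outputs of redundant computers.
from collections import Counter

def vote(outputs):
    """Return the strict-majority value among redundant outputs,
    or None if no strict majority exists (e.g. only two machines
    left and they disagree -- voting can no longer resolve a fault)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

print(vote([42, 42, 42, 41]))  # 42: the dissenting machine is outvoted
print(vote([7, 8]))            # None: two machines cannot break a tie
```

This is why three machines is the floor for voting: with only two, a disagreement identifies that a fault exists but not which machine is at fault.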

More concrete proposals came in 1971.
Draper presented a couple of plans, one fairly conservative, the
other more ambitious. The less expensive version used two sets of
two AGCs. These models of the AGC would contain 32K of erasable
memory and magnetic tape mass memory instead of the core rope in
the original.[19] Redundancy would be provided by a full backup that
would be automatically switched into action upon failure of the
primary (an idea later abandoned since a software fault
could cause a premature switch-over).[20] Draper's more expensive, but more robust, plan
proposed a "layered collaborative computer system," to provide
"significant total, modest individual computing
power."[21] A relatively large multiprocessor was at the heart
of this system, with local processors at the subsystem level. This
had the potential effect of insulating the central computer from
subsystem changes.

Unlike Gemini and Apollo, NASA wanted an
off-the-shelf computer system for the Shuttle. Since "space rating" a
system involved a stricter set of requirements than a military
standard,[22] starting with a military-rated computer made the
next step in the certification process much cheaper. Five systems
were actively considered in the early 1970s: The IBM 4Pi AP-1, the
Autonetics D232, the Control Data Corporation Alpha, the Raytheon
RAC-251, and the Honeywell HDC-701.[23] The basic profile of the computer system evolved
to the point where expectations included 32-bit word size for
accurate calculations, at least 64K of memory, and
microprogramming capability.[24] Microprograms, collectively called firmware, contain
control programs otherwise realized as hardware. Firmware can be
changed to match evolving requirements or circumstances. Thus, a
computer could be adapted to a number of functions by revising its
instruction set through microcoding.

Despite the fact that Draper Laboratory
favored the Autonetics machine, and a NASA engineer estimated that
the load on the Shuttle computers would "be heavier than the 4Pi
[could] support," the IBM machine was still
chosen.[25] The 4Pi AP-1's advantages lay in its history and
architecture. Already used in aircraft applications, it was also
related to the 4Pi computers on Skylab, which were members of the
same architectural family as the IBM System 360 mainframe series.
Since the instruction sets of the AP-1 and the 360 were very similar,
experienced 360 programmers would need little retraining.
Additionally, a number of software development tools existed for
the AP-1 on the 360. As in the other spacecraft computers, no
compilers or other program development tools would be carried
on-board. All flight programs were developed and tested in
ground-based systems, with the binary object code of the programs
loaded into the flight computer. Simulators and assemblers for the
AP-1 ran on the 360, which could be used to produce code for the
target machine. In both the Gemini and Apollo programs, such tools
had to be developed from scratch and were expensive.

One further aspect of the evolution of the
Shuttle computer systems is that previous manned spacecraft
computers were programmed using assembly language or something
close to that level. Assembly language is very powerful because
use of memory and registers can be strictly controlled. But it is
expensive to develop assembly language programs since doing the
original coding and verifying that the programs work
properly involve extra care. These programs are neither as
readable nor as easily tested as programs written in FORTRAN or
other higher-level computer languages. The delays and expense of
the Apollo software development, along with the realization that
the Shuttle software would be many times as complex, led NASA to
encourage development of a language that would be optimal for
real-time computing. Estimates were that the software development
cycle time for the Shuttle could be reduced 10% to 15% by using
such a language.[26]

The result was HAL/S, a high-level
language that supports vector arithmetic and schedules tasks
according to programmer-defined priority
levels.** No other early 1970s language adequately provided
either capability. Intermetrics, Inc., a Cambridge firm, wrote the
compiler for HAL. Ex-Draper Lab people who worked on the Apollo
software formed the company in 1969.[27]
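
Priority-driven task scheduling of the kind HAL/S exposed to the programmer can be sketched as follows (a Python illustration with invented task names; this is not HAL/S syntax, and the real executive dispatched cyclic real-time tasks rather than a one-shot queue). The ready task with the highest declared priority runs first:

```python
# Sketch of highest-priority-first task dispatch.
import heapq

class Scheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker: preserves scheduling order at equal priority

    def schedule(self, priority, task):
        """Register a task with a programmer-defined priority."""
        # heapq is a min-heap, so negate the priority for highest-first
        heapq.heappush(self._queue, (-priority, self._seq, task))
        self._seq += 1

    def run(self):
        """Dispatch all ready tasks in priority order."""
        order = []
        while self._queue:
            _, _, task = heapq.heappop(self._queue)
            order.append(task())
        return order

sched = Scheduler()
sched.schedule(5, lambda: "guidance")             # hypothetical tasks
sched.schedule(9, lambda: "flight control")
sched.schedule(1, lambda: "telemetry formatting")
print(sched.run())  # ['flight control', 'guidance', 'telemetry formatting']
```

Building this dispatch discipline into the language itself, rather than hand-coding it per mission, was part of HAL/S's appeal for real-time work.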

The proposal to use HAL met vigorous
opposition from managers used to assembly language systems. Many
employed the same argument mounted against FORTRAN a decade
earlier: the compiler would produce code significantly slower or
less efficient than hand-coded assembly. High-level
languages are strictly for the convenience of programmers.
Machines still need their instructions delivered at the binary
level. Thus, high-level languages use compilers that translate the
language to the point where the machine receives instructions in
its own instruction set (excepting certain recently developed LISP
machines, in which LISP is the native code). Compilers generally
do not produce code as well as humans. They simply do it faster
and more accurately. However, many engineers felt that
optimization of flight code was more important than the gains of
using a high-level language. To forestall possible criticism,
Richard Parten, the first chief of Johnson's Spacecraft Software
Division, ordered a series of benchmark tests. Parten had IBM pick
its best assembly language programmers to code a set of test
programs. The same functions were also written in HAL and then
raced against each other. The running times were sufficiently
close to quiet objectors to high-level languages on spacecraft
(roughly a 10% to 15% performance difference).[28]

By 1973, work
could begin on the software necessary for the shuttle, as hardware
decisions were complete. Conceptually, the shuttle software and
hardware came to be known as the Data Processing System
(DPS).
* These were: EXAM (Experimental Aerospace
Multiprocessor) at Johnson Space Center, the Advanced Control,
Guidance, and Navigation Computer at MIT, SUMC (Space
Ultrareliable Modular Computer) at Marshall Space Flight Center,
and PULPP (Parallel Ultra Low Power Processor) at the Goddard
Space Flight Center.

** The origins of the name of the language are
unclear. Stanley Kubrick's classic film 2001: A Space Odyssey
(1968) was playing in theaters at about the time the language was
being defined. A chief "character" in the film was a murderous
computer named HAL. NASA officials deny any relationship between
the names. John R. Garman of Johnson Space Center, one of the
principals in Shuttle onboard software development, said it may
have come from a fellow involved in the early development whose
name was Hal. Others suggest it is an acronym for Higher Avionics
Language. For a full description of the language and sample
programs, see Appendix II.

