Computers in Spaceflight: The NASA Experience

- Chapter Seven -
- The Evolution of Automated Launch Processing -
The Shuttle Launch Processing System
[223] When NASA began planning for the Space Transportation System (STS), it espoused ambitious requirements, such as an eventual launch rate of 75 per year. A projected fleet of three orbiters would be limited to a maximum 2-week turnaround between flights and a 2-hour countdown in order to achieve that many firings39. Compared to the 5-month checkout of a Saturn V and its 3-day countdowns, this seemed outrageous, especially since the Shuttle would be no simpler than an Apollo/Saturn. NASA put considerable effort into examining commercial aircraft maintenance techniques to see what could be adopted for Shuttle use. One study indicated that only 53% of the tests done on a Saturn V would need to be repeated if the spacecraft were reusable40. Even with this reduction, nearly 46,000 measurements have to be made and monitored in real time in the process of preparing a Shuttle for launch41. Clearly, there was no way NASA could do the Shuttle checkout with Apollo concepts42. As Henry Paul, who headed the Launch Processing System development for NASA, said, "Automation...becomes a requirement for operations, not an elective"43. Still, some engineers needed to be convinced that hardwired testing could be successfully eliminated, even though the last 20 hours of a Saturn countdown were 85% automated44. Building the present system, in which almost all preflight testing and preparation is done under software control, and in which much of the countdown, sometimes even including the calling of "holds," is handled by computers, was a remarkable effort45. One of the biggest changes from the Apollo/Saturn preflight checkout systems is that Kennedy Space Center became responsible for the development of the Launch Processing System. Given the organization of NASA at the time, this was one of the biggest surprises as well.
Kennedy Space Center Gets the Job
During the late 1960s NASA began studies of the configuration of the eventual STS. Most designs were predicated on a winged booster, which would return to the launch site immediately after separation from an orbiter with internal fuel tanks. Such a design could theoretically be launched from anywhere in the United States isolated enough to handle aborts safely. Project staff examined a number of sites and made projections of the cost of an "ideal launch [224] site" that would have all the facilities necessary for handling the Shuttle. Chief among these were a hypergolic and cryogenic fuels facility, a hangar for the orbiters and boosters, a mating building, a control center, the launch pads, and a runway with a safing bay for emptying residual fuels after landing. One study placed the cost of a new facility with these characteristics at $1.9 billion. On the other hand, modifying existing Apollo/Saturn facilities at Kennedy and adding new equipment where needed would cost $355 million, a significant savings46. In March 1972, NASA selected the solid rocket booster/external tank configuration for the Shuttle. All inland launch sites were thus eliminated, and just Kennedy Space Center, Vandenberg Air Force Base, and a site in Texas remained under consideration47. Since existing facilities could be modified at both Vandenberg and Kennedy, the cost-conscious administrators settled on those two launch sites. Vandenberg was expected to handle polar orbit launches and most military payloads. Kennedy would launch eastward, continuing the established situation and giving Kennedy the opportunity to try for the development of the checkout system.
Phase B Shuttle studies conducted by a number of contractors included concepts of the checkout system48. Some hinted at the direction the eventual system would take. One pointed out the need for efficient and simple man-machine interfaces, and called for having ATOLL, FORTRAN IV, and COBOL compilers available to the engineers49. Kennedy's own early study, based on Rockwell and McDonnell-Douglas Shuttle configurations, called for a central data processing facility connected to every part of the Shuttle handling equipment, including a mission simulator on site and by communications link to Mission Control in Houston50. Meanwhile, remnants of the old ACE group at Johnson had started work on a Shuttle checkout system.
A "Checkout Systems Development Lab" at the Johnson Space Center did research on new concepts of preparing manned spacecraft for flight51. Individual BIC, for "built-in checkout," cells would be located at test points throughout a spacecraft, each cell with I/O registers. Automated tests would read and write to these cells. Johnson's development team wanted a single central computer to be connected to several sets of Universal Test Equipment consoles and thence to the Shuttle52. General Electric built a prototype of a "Universal Control and Display Console" for the Laboratory. Each control console would have two color display tubes and be capable of supporting tests on any specified parts of the spacecraft53. The system was similar to earlier Apollo/Saturn concepts, with a big computer in the middle doing all the testing and displays, communicating with the spacecraft, and so on. One improvement was that the Universal Test Equipment meant that units could be mass produced and assigned to different checkout tasks without significant hardware changes. When [225] the time came for the Shuttle project office at Johnson to make a decision about preflight checkout, the hometown lab made a proposal that was "underdeveloped" and vague54. Most likely, the engineers in the Development Lab thought the job was theirs because in all previous programs the center responsible for the spacecraft was responsible for checkout. When Kennedy had tried to do some ACE development, the work was moved to the center responsible for the Apollo spacecraft. Therefore, a full-blown proposal did not seem necessary. They were in for a surprise.
Impetus to make Kennedy the development center for the Launch Processing System came from many levels. The center's director, Kurt Debus, made his support clear to his engineers in 197255. Walton saw a chance to do another ACE, but this time as a fully integrated system for all parts of the spacecraft. The consensus was that by having Kennedy Space Center do the development, much money would be saved and civil servants would be more actively involved56. Even though the work originated with Walton's Design Engineering Directorate, talent for developing the Launch Processing System came from across the Center57. A study group of about half a dozen engineers, led by Theodore Sasseen and including Henry Paul, Frank Byrne, George Matthews, and others who had key roles in the later implementation of the system, met and began work on a prototype58.
Making the prototype turned out to be one of the key factors in landing the Launch Processing System development job for Kennedy. The engineers made a small model of a liquid hydrogen loading facility, with real valves and tanks. Using a Digital Equipment Corporation PDP 11/45, they devised software that graphically displayed a skeletal view of the piping and valves, with actual pressures printed next to the appropriate valve. The prototype could transfer fuel to the model spacecraft under software control, with the user able to monitor flows and pressures at the console. Confidence in their ability to create automated procedures encouraged the engineers, and they also now had a physical version of their system to help in selling it. Johnson's Universal Test Equipment had no working counterpart to demonstrate such functionality.
The prototype represented a single, and complete, part of the total system, a system quite different in concept from previous ideas. Launch processing and mission control prior to the Kennedy developments were based on using a minimum number of mainframe computers. Frank Byrne had the technical vision to develop a distributed computing system, in which dozens of small computers would do the checkout functions. Walton provided the leadership and tenacity to hold to the concept and see it put into place59. Several important advantages result from using distributed computers. First, the tasks more closely fit the power of the machine. Using a mainframe computer for relatively simple procedures such as solid rocket booster checkout would be overkill60. Second, a distributed system would free software [226] developers from worrying about fitting their programs in with others in a big machine's memory. Each discipline, such as engines, cryogenics, and avionics, would have a separate console61. Third, parallel testing could be done62. A mainframe would have to be inordinately large to contain all the checkout programs. Therefore, they would have to be loaded and run serially, as in the RCA 110As, defeating the short countdown requirement. Finally, Paul was convinced that overall hardware costs would be reduced compared with mainframe configurations63.
In 1972 Robert F. Thompson was the head of the Shuttle project office at Johnson and in charge of deciding where to place the checkout development. Faced with a choice between a homegrown system similar to tried and true predecessors and a new concept developed at Kennedy that even had opposition there, he ruled in favor of Kennedy's proposal against the opinions of his advisors. The winners are gracious toward Mr. Thompson, calling him an "honest manager" and a "nonterritorial individual"64. Thompson judged Kennedy's to be the best proposal, but he also thought it more efficient for NASA to develop the Launch Processing System where it would eventually be used and by the people who would use it.
Getting Started: Contracting For the Launch Processing System
Due to the earlier site studies and the building of the prototype, Kennedy Space Center had a good idea of what it wanted in the Launch Processing System. Reflecting the detailed requirements developed for the Shuttle on-board computers, the Design Engineering Directorate's engineers started in March 1973 to prepare the "Launch Processing System Concept Description Document"65. Released in October, the document specified the architecture and concepts of the System in detail, before any major contractor involvement66. Kennedy's efforts on the Launch Processing System are reflected by the fact that nearly 100 civil servants were involved in the planning between 1973 and the March 1976 freeze of the design67.
Plans for the System included extensive remodeling of Saturn facilities. The Processing System itself is largely contained in the Launch Control Center. Hardware is divided into the Checkout, Control, and Monitor Subsystem (CCMS), the Central Data Subsystem (CDS), and the Record and Playback Subsystem (RPS). Small, task-dedicated computers are in the four firing rooms of the Control Center and are the primary component of the CCMS. Large mainframe computers located on the floor below the firing rooms make up the biggest part of the CDS. NASA's Joseph Medlock, Thomas Purer, and Larry [227] Dickison envisioned test engineers developing their own procedures using an engineer-oriented language like ATOLL in concept but better and easier to use68. These procedures would then be developed on the mainframes and tested against simulations stored on the mainframes. When verified, they would be included in the system and stored on disk. When a firing room became active to support a vehicle, the engineer would load his test procedure from the mainframe to the minicomputer attached to his console and execute it from the console.
Depending on which spacecraft subsystem is involved, the tendrils of the Launch Processing System may follow it wherever it goes on the Space Center site. The firing rooms are connected to the Vehicle Assembly Building, the launch pads, the Cargo Integration and Test Equipment, and the new Orbiter Processing Facility, a two-bay horizontal hangar. At each location, hardware interface modules make it possible to test and monitor the orbiter and other parts of the spacecraft from the firing rooms. So the System is computationally distributed but physically centralized, especially compared with the RCA 110As and GE 635s of the Saturn era.
One critical side effect of using a mix of mainframe data base machines and minicomputers for individual system checkout, as well as the need to talk to a pervasive on-board computer system, was that for the first time, several different network architectures had to be combined into one69. The inherent difficulties involved led NASA to award the software contract before choosing hardware so that the software contractor could help in the computer selection70. Further, the minicomputers were chosen apart from the contract for the consoles and other hardware associated with the CCMS. Four source selection boards eventually convened: one each for software, minicomputers, the CDS, and the CCMS71.
Since test engineers would write the applications software, the software contractor would be primarily responsible for the operating system under which the applications would run, the new test language, GOAL (for Ground Operations Aerospace Language), and any modifications to the microcode for the minicomputers and other equipment needed to successfully connect them. Interfacing largely became a software problem because the changes were to be implemented in microcode. Six contractors tried for the job, with IBM beating out General Electric, TRW, Computer Sciences Corporation, McDonnell-Douglas, and Harris Computer Corporation72. The initial $11.5 million contract ran from May 1974 to March 197973. This contract was extended several times due to delays in launching the first Shuttles, but IBM's involvement ceased in the operations era. The company did its usual good job, and users of the eventual system believed it fulfilled the requirements74. IBM used a top-down structured approach in designing the software, holding weekly formal reviews during the development stage so that NASA could closely monitor activities75.
[228] By winning the software contract first, IBM found itself in the unusual position of having to program other people's computers. One IBM employee said that his company was encouraged not to bid on the hardware contracts76. According to Byrne, IBM was not kept out of the hardware bids so much as they lacked a suitable minicomputer to offer. The System 34 was under development at that time, as was the Series/1, but IBM chose not to make its new minicomputers public77. Three companies made the final round of bids on the minicomputers: Prime, Varian Data Machines, and a small new company called Modular Computers, Inc78. Design Engineering had built a prototype of a launch processing console set for the solid rocket boosters using Prime computers. (Later shipped to Marshall for awhile [sic], it finished its career in the Vehicle Assembly Building nearly 10 years after construction.79) Because of this, many thought Prime had the contract won, but it was edged out by Modular Computers, much to the surprise of Byrne and Walton80. ModComp initially contracted for 60 machines at a cost of $4.2 million, a number later extended considerably as console sets were placed in all four firing rooms, the cargo integration facility, the Shuttle Avionics Integration Lab (SAIL) at Johnson, and the hypergolic maintenance facility, as well as at Vandenberg. Two months after the computer contract was let in June of 1975, Martin-Marietta defeated Grumman Aerospace Corp., Aeronutronic Ford, and General Electric for the remaining CCMS hardware.
By November 1976, IBM received the first minicomputer for software development, and by February of 1977, the first station for GOAL applications development was delivered81. Honeywell won the CDS hardware contract in the fourth quarter of 1975, and John Conway of NASA managed the acquisition of equipment and personnel for that Subsystem during 1976-197782. By 1977, the Launch Processing System began to take physical shape.
The Common Data Buffer: Heart of the System
Most diagrams of the physical components of the Launch Processing System show an inordinately large rectangle at the center of the drawing, with all other components either directly or indirectly connected to it. That rectangle represents the common data buffer, which Thomas Walton called the "cornerstone of the system"83. The biggest problem with creating distributed computing systems is devising a method of intercomputer communication that is reliable, fast, and simple. In a system such as the Launch Processing System, which depends on a number of computers "knowing" the same data about the spacecraft, some method of protecting and centralizing the common....



Figure 7-7. Shuttle Launch Processing System hardware structure. (Courtesy IBM)

....data is needed. Frank Byrne, who was involved in the planning for the Processing System from the start, took on the job of designing a device to keep track of commonly needed data that also made it possible for the various computers to communicate with each other. Kennedy followed a plan to use commercially available equipment in as many parts of the Launch Processing System as possible. Minicomputers and mainframe computers are used largely unchanged. However, since no organization had tried to closely connect such large numbers of machines, some of which were quite different in architecture from the others, there was no commercially available solution to the common data problem. Byrne had to design one on his own: the common data buffer.

Byrne noted that as the number of computers in a distributed system increases, the complexity of intercomputer communication increases. He wanted to remove the complexity. By placing the common data at a central location he eliminated the need to update multiple copies of the data in separate machine memories. Possibly he got the idea for a common data area from his work on the GE 635s in the Central Instrumentation Facility. Those machines had a "data core" [230] which could be accessed by both machines84. ACE stations also used common data areas. Basically the common data buffer provides each machine in the system with a set of "post office boxes." Specific parameters, such as valve pressures, voltages, and fuel levels, are assigned a location in the buffer memory. Each machine can read any location in the memory, but only the machines explicitly assigned to the task of maintaining a certain parameter can write to that parameter's location. That way a secondary machine cannot spuriously change a parameter's value. Programmers do not have to worry about where any particular parameter is kept. As long as it is referred to by its proper name in a GOAL program, the system build process will assign it its correct address as the program is compiled and integrated. In addition to acting as a common storage area for data, the buffer maintains the entire system interrupt stack as well as flags and status variables. Since it is centrally located, it is also used to temporarily store application programs as they are loaded from the repository in the CDS to the individual minicomputers. As such, it acts as a "way station."
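The "post office box" discipline, read by anyone, written only by the assigned owner, can be sketched in a few lines of Python. All names here, such as the parameter and device labels, are hypothetical illustrations, not actual Launch Processing System identifiers:

```python
class CommonDataBuffer:
    """Toy model of the 'post office box' scheme: every device may read
    any parameter, but only the assigned owner may write it."""

    def __init__(self):
        self._values = {}   # parameter name -> current value
        self._owners = {}   # parameter name -> device allowed to write

    def assign(self, parameter, owner, initial=0):
        self._owners[parameter] = owner
        self._values[parameter] = initial

    def read(self, parameter):
        # Any machine in the system may read any location.
        return self._values[parameter]

    def write(self, device, parameter, value):
        # Only the machine assigned to maintain this parameter may write
        # it, so a secondary machine cannot spuriously change its value.
        if self._owners.get(parameter) != device:
            raise PermissionError(f"{device} may not write {parameter}")
        self._values[parameter] = value

cdb = CommonDataBuffer()
cdb.assign("LOX_TANK_PRESSURE", owner="FEP-3", initial=0)
cdb.write("FEP-3", "LOX_TANK_PRESSURE", 412)
print(cdb.read("LOX_TANK_PRESSURE"))   # any console may read: 412
```

In the real system the address assignment happens at system build time, when the GOAL compiler resolves parameter names to buffer locations; the dictionary lookup above stands in for that.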


Box 7-1: Inside the Common Data Buffer
Even though the common data buffer is a unique design, it uses standard commercial chips and boards. Nothing was custom built85. Memory chips are n-channel metal-oxide semiconductor (N-MOS) devices, with each section organized as 64K x 1 bit and 32 sections making up 64K 32-bit words, accommodating the 16-bit word size of the ModComp computers in the firing rooms plus the error-correcting code used in transmissions86. Memory can be read in 200 nanoseconds, very fast by any current standard. Motorola 6800 microprocessors are used in the buffer as controllers, each with 2K of read-only memory and 1K of read/write memory87. The 64K main memory has the first 1K words set aside for interrupts, the next 1K as a common read/write area for flags and other variables, and the remainder as the protected memory area88. Data can move through the temporary storage areas and out to the computers at a rate of 8 megabytes per second89.
A common data buffer can have up to 64 devices (computers or other buffers) attached to it at one time. Each device is connected to a buffer access card. The cards are scanned by the buffer in rotation, looking for incoming data or requests for data. If a device needs the buffer and its request is noted on the access card, the device gets a slice of time to do its work, after which the scanner (which has been "looking ahead") goes to the next card indicating a usage request90. In this way, when one machine is writing or even reading, all other machines are shut out, preventing both contention for resources and simultaneous attempts to update data91.
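The rotating scan can be modeled roughly as follows. The card layout and device names are invented for illustration, and the real buffer does this in hardware rather than software:

```python
from collections import deque

def scan(access_cards):
    """Toy round-robin scan of buffer access cards: cards are visited in
    rotation, and only devices with a pending request get a time slice.
    Because one device is served at a time, readers and writers never
    overlap inside the buffer."""
    order = deque(access_cards)
    served = []
    for _ in range(len(order)):
        card = order[0]
        order.rotate(-1)             # "look ahead" to the next card
        if card["request"]:          # usage request flagged on the card
            served.append(card["device"])
            card["request"] = False  # slice consumed
    return served

cards = [
    {"device": "console-1", "request": True},
    {"device": "console-2", "request": False},
    {"device": "FEP-1",     "request": True},
]
print(scan(cards))   # only requesters get slices: ['console-1', 'FEP-1']
```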

[231] An early criticism of the common data buffer concept was that it would be a single point of failure92. Standard protections were built into the buffer, such as dual power supplies and the use of triple modular redundancy in some components93. However, the biggest problem in a system of this type is protecting against communication errors. Millions of bits are speeding throughout the network each second, providing considerable opportunity for lost or garbled data. Byrne's answer was to include a powerful set of error-correcting codes, which he created with the help of Robert W. Hockenberger of IBM, who was brought in specifically to work on the problem94.
The resulting codes enable the data buffer to operate successfully with any 2 bits of the 16-bit words in error! When a word is transmitted between the computers and the buffer, in either direction, it is sent as a 32-bit message. The first 8 bits are data, the next 8 are error-correcting code, the following 8 are code, and the last 8 are data. Data are alternated in this way to protect against "big" signal losses95. Individual bits are checked using the correcting codes at each end of a transmission. One hundred percent of the 1-bit errors can be corrected, 99%+ of the 2-bit errors can be fixed, 70% of 3-bit errors, and even half of the 4-bit errors. Since the memory chips are arranged in 64K by 1-bit banks, the loss of an entire sector of memory means the loss of just 1 bit per word, which can then be corrected. The error-correcting codes themselves are generated by firmware stored in read-only memories96. Even though such extensive protection is provided, in a decade of operation there has been no failure of a common data buffer, and internally never more than 1 bit has been garbled97.
In terms of the architecture of distributed systems, the common data buffer was a pioneer. Currently, many distributed systems exist partly because of the proliferation of minicomputers and microcomputers. Micros, especially, can be connected to common data bases on shared hard disks. NCR Corporation briefly marketed a system called Modus from 1982 to 1984 that featured the ability to connect with different types of microcomputers, a shared data base, and microprocessor control of communications that effectively locked out other computers from corrupting data being updated by another one on the network. In general, though, no commercial system is as effective as the Launch Processing System in terms of speed, simplicity, and reliability. Most intercomputer communication is clouded by different protocols, nonadherence to declared international standards, and lack of speed. Frank Byrne's work stands as an original and brilliant solution to the key problem in implementing the Launch Processing System. Fittingly, Byrne received proper recognition for his achievement: NASA granted a $10,000 bonus and an award98. The buffer itself was patented, a rarity for the government side of the space program99.
CCMS Hardware
[232] The hardware of the CCMS consists of the common data buffer and everything else in the four firing rooms of the Launch Control Center. Even though the buffer appears on the charts as the largest item, in reality it is one of the smallest, filling two electronics racks in the back of a firing room. Most of the equipment in a room consists of blue-colored consoles and boxes: the ModComp computers and their consoles. The number and arrangement of consoles depend on the function of the particular room. Firing rooms one and three are for flight operations, with three capable of being made secure for DOD launches. Rooms two and four are for software development and testing, with number four used for secure operations100. Firing room two has three buffers to facilitate multiple parallel software development101. Operations firing rooms normally are configured for 12 consoles, plus a master, an integration, and a backup console102.
During a countdown, consoles in the adjacent software development room are kept active as a further backup103. Each ModComp has three display terminals, and those three make up one console. These are mounted in a quarter-circle arc, so two computers and their attached consoles located side by side look like a "D" with the rounded part facing the front of the room. Each of the computers contains either a 5-megabyte or an 80-megabyte hard disk for storing applications programs uploaded from the mainframes in the CDS104. Early in the program each engineer had his own disk, and could carry his programs to different computers, but when configuration control began to be needed the removable disks were replaced105. Loading an entire firing room through the buffer to the ModComps takes a full shift. Each computer can run up to six GOAL programs concurrently.
Individual consoles have marvelous capabilities. NASA commissioned Mitre Corporation to do a human factors study for the Launch Processing System106. Some of the resulting concepts make the usability of the system outstanding, and it is superior to many workstations in existence today. Color displays, programmable function keys that make it possible to replace long strings of keystrokes with a single push, full cursor control, and other features make it possible for an engineer to create applications programs that can be run without using the keyboard107. This concept antedates by 10 years the now ubiquitous "mouse" found on such machines as the Apple Macintosh. In addition, the consoles can be switched to become a terminal to the CDS for procedure development and to examine data recorded during operations. Special keys on the console enable program execution to be temporarily halted or single-stepped, aiding debugging of GOAL applications108. As a further convenience, each console has hard-copy capability; "snapshots" of displays can be made, with all [233] the graphics intact, but in black and white. Graphics use is assisted by special keys that provide corners, standard symbols for valves, transducers, and other components that can be put at cursor positions on the screen. Thus, the engineers can build pictorial skeletons of the systems they are testing for greater clarity. In general, these consoles are among the best available in any computer installation and are ideally suited to the purpose of the Launch Processing System.
Besides the use of ModComps attached to consoles, other ModComps are used as front end processors to provide interfaces between the spacecraft and ground service systems and the buffer. ModComps used for applications programs have 64K words of memory, but the front end processors have 48K, 64K, or 304K, depending on their connections to other devices109. Hardware interface modules at the actual points of entry to the ground support equipment plugged into the spacecraft send and receive data from the front end processors. Those, in turn, examine the data for parameters that are approaching their test limits. If a parameter nears a limit, the processor issues an interrupt and calls in a "control logic" program to handle the matter110. Control logic is a subset of GOAL used for making sure things are not done outside their proper order and within specific time constraints. For orbiter communication, a launch data bus front end processor communicates directly with the on-board general-purpose computers. Other "downlink" front end processors only receive pulse code modulated data to be processed for orbiter, main engine, and payload components.
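A rough software analogue of the front end processors' limit watching might look like this; the parameter names, limits, and the 5% margin are invented for illustration:

```python
def check_limits(readings, limits, margin=0.05):
    """Toy front-end-processor pass: flag any parameter within `margin`
    (as a fraction of the allowed band) of a test limit, as the real
    FEPs do before raising an interrupt for a control logic program."""
    alerts = []
    for name, value in readings.items():
        lo, hi = limits[name]
        band = (hi - lo) * margin
        if value <= lo + band or value >= hi - band:
            alerts.append(name)   # real hardware issues an interrupt here
    return alerts

limits = {"LH2_TANK_PSI": (28.0, 34.0), "BUS_A_VOLTS": (26.0, 32.0)}
readings = {"LH2_TANK_PSI": 33.8, "BUS_A_VOLTS": 28.5}
print(check_limits(readings, limits))   # near a limit: ['LH2_TANK_PSI']
```

In the actual system this filtering happens continuously in the front end processors, so the console computers only see parameters that need attention.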
Between the console computers and front end processors, a typical operations firing room contains over 30 minicomputers, each interconnected through the buffer. These computers can control tests and monitor the Shuttle anywhere hardware interface modules are available to connect it to the firing room, whether in the Orbiter Processing Facility, the Vehicle Assembly Building, or at the pad. Since each console can do the functions of any other console simply by changing its software load, the system has tremendous flexibility.
CDS Hardware
Supporting the CCMS is the CDS. Two sets of two Honeywell 66/80 mainframe computers are the heart of the CDS. NASA purchased the original pair of these 36-bit machines with half a million words of main memory each and an additional half a million words for sharing. As software for the Launch Processing System grew in size, the memories were upgraded to 1.5 million words each and 1 million words of shared memory111. One hundred seventy-two disk drives are connected to the machines as mass storage, with a total capacity of almost 30 billion bytes. Originally, the computers used.....



Figure 7-8. Typical firing room layout for the Shuttle. Fewer than 50 engineers are needed for the countdown. (NASA photo 108-KSC-78PC-240)


....Honeywell's 4JS1 operating system, which is no longer supported by the company. One NASA computer scientist said that "we have taken almost every piece of standard software and modified it" to meet the unique needs of the Launch Processing System112. Most often the first pair of Honeywells supports an operations firing room, whereas the second set is used for software development. If one of the pair notices that it has failed self-tests for 10 machine cycles, it automatically switches control to its partner.
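The failover rule, switch to the partner after 10 consecutive failed self-test cycles, can be sketched as follows. The class and machine names are hypothetical; only the 10-cycle threshold comes from the text:

```python
class RedundantPair:
    """Toy model of the CDS mainframe pair: if the active machine fails
    its self-test for 10 consecutive cycles, control switches to its
    partner automatically."""
    FAIL_LIMIT = 10

    def __init__(self):
        self.active = "primary"
        self.consecutive_failures = 0

    def cycle(self, self_test_passed):
        if self_test_passed:
            self.consecutive_failures = 0   # any pass resets the count
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.FAIL_LIMIT:
                self.active = "backup"      # hand control to the partner
                self.consecutive_failures = 0
        return self.active

pair = RedundantPair()
for _ in range(9):
    pair.cycle(self_test_passed=False)
print(pair.active)                          # still 'primary' after 9 failures
print(pair.cycle(self_test_passed=False))   # 10th failure: 'backup'
```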

The third part of the Launch Processing System is the RPS. Initially implemented with Apollo-era equipment, it was later modernized with new recorders and computers. The RPS records most data telemetered from the spacecraft for later playback and produces records and printouts in real time for immediate analysis by system engineers conducting tests113. Firing room engineers can play back tests or other data directly to their firing room consoles for problem resolution or trend analysis114. At first, the RPS only had enough equipment to support one Shuttle at a time, so switching from one to another required rearranging a number of connections115. This situation was corrected during the modernization so that the RPS can now handle multiple Shuttle data flows.

Figure 7-9. Hard copy of a display from one of the firing room consoles.

Launch Processing System Software
[236] Software for the Launch Processing System is of three types: applications software, written in GOAL, performs the test and integration functions; simulations software enables engineers to verify their GOAL programs before using them on the real equipment; and systems software controls the execution of the other types. In the Launch Processing Division at the Kennedy Space Center, two branches support the hardware of the Launch Processing System, one for each major subsystem, whereas the Applications and Simulations Branch supports software by developing simulations and assisting test engineers in GOAL procedure writing. One of the reasons for the utility and success of the System is that civil service operations and maintenance personnel have been included in software planning and design from the beginning in order to make the System better meet their needs116. They essentially built their own tools117. That policy continues in the Applications and Simulations Branch, which helps the engineers refine their test requirements118.
GOAL applications programs are the largest part of the Launch Processing System software, totaling 14.2 million words by the early 1980s. As a comparison, the displays, control logic, and test control software added up to less than 700K words119. Despite the early resistance of engineers to the automation of testing, they found that they learned more about their assigned part of the spacecraft by writing ATOLL or GOAL programs120. In instructing a computer, saying "pressurize the tank until the pressure is high enough" is too vague. Engineers writing programs are forced to think through the proper parameters and values and to account for anomalies ahead of time.
Kennedy engineers abandoned ATOLL because it had deficiencies in ease of use and in comprehensiveness: Too often assembly languages had to be used to do something ATOLL could not. Henry Paul assigned ATOLL veteran Joseph Medlock of Kennedy to head the GOAL development group121. Medlock and his team of civil servants received help from Martin-Marietta Corporation in defining the language, and then IBM implemented GOAL. The result was a highly readable, self-documenting procedural language. Just over four dozen statements are available, and training time is short, taking half days for 3 weeks (see Appendix III for an example of a GOAL program)122. IBM designed GOAL's compiler to disallow any undefined branches or procedures, making it more strict than FORTRAN compilers123. GOAL is highly flexible and permits engineers to decide for themselves the degree of interaction required to do a test124. GOAL programs are run within the computers in time slices of 10 milliseconds125.
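The text notes that GOAL programs run within the computers in 10-millisecond time slices. A minimal round-robin sketch of that kind of slicing follows; it is purely illustrative (in Python, with invented names like `run_round_robin` and `work`), and real slicing would be enforced by the operating system rather than cooperatively as here.

```python
# Illustrative round-robin time-slicing, loosely modeled on the text's
# statement that GOAL programs run in fixed 10 ms slices.
from collections import deque

SLICE_MS = 10  # each program gets a fixed 10 ms slice per turn


def run_round_robin(programs):
    """Grant each program one slice in turn until all have finished.

    `programs` maps names to generators that yield once per slice of
    work; a program finishes on the slice in which it runs out of work.
    Returns the order in which slices were granted."""
    queue = deque(programs.items())
    trace = []
    while queue:
        name, prog = queue.popleft()
        trace.append(name)
        try:
            next(prog)                   # execute one slice of work
            queue.append((name, prog))   # not done yet: go to the back
        except StopIteration:
            pass                         # program ran to completion
    return trace


def work(slices):
    """A stand-in program that yields `slices` times before finishing."""
    for _ in range(slices):
        yield
```

With two programs of different lengths, the shorter one drops out of the rotation as soon as it completes while the longer one keeps receiving slices.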
[237] When an engineer is developing a GOAL procedure, he writes the procedure at his console, using it as a terminal to the CDS. After the procedure is complete, it is tested against a simulation in the Honeywell computers, if a simulation is available. A Shuttle Ground Operations Simulator, consisting of GOAL-like statements, is available for developing models to test programs relating to ground equipment such as fueling systems and external power126. However, due to the lack of an AP-101 processor and Shuttle on-board software, simulations for checking out the flight equipment are limited or nonexistent. There is no way at Kennedy to test those procedures except against an actual spacecraft, so they must be sent to SAIL at the Johnson Space Center127. In addition to that restriction, simulations programs are limited to 256K words, the largest program a Honeywell 66/80 can run, since it is not a virtual memory machine. As a result, some models have to be run in parts128.
A subset of GOAL is used to write control logic. Control logic keeps operations from being performed out of the proper order or outside specific time constraints, averting disaster. It is necessary because of the parallel nature of testing. For example, before liquid oxygen can be moved through pipes and valves, they must be prechilled to near the temperature of the liquid, or the oxygen will flash evaporate. Control logic of the "prerequisite sequence" type checks to make sure the prechilling has been done. Or, if a valve pressure or voltage is nearing a dangerous level, "reactive sequence" control logic programs are automatically called by the front end processors to eliminate the anomaly129. Control logic thus makes parallel operations safe.
GOAL and control logic procedures must be integrated with the rest of the System before use to resolve potential conflicts and assign real addresses in the buffer to logical addresses in the programs. Integration is done in a laboratory containing the "Serial 0" ModComp/console, the first set delivered130. Integration requirements led NASA designers to abandon some of the flexibility envisioned in the early stages of the program131. Originally, they thought the engineers would have more responsibility for their programs and changes, but the complexity of the system required some measure of configuration control. Some 200 GOAL programs are needed just to load the liquid oxygen tank automatically132. With thousands of GOAL procedures to integrate, engineer autonomy had to be limited.
Cargo Integration and Test Equipment
One part of the Kennedy Space Center with an important role in the Shuttle program and also a user of Launch Processing System resources is the Cargo Integration and Test Equipment (CITE). [238] With hundreds of Shuttle flights planned to carry a variety of payloads from all over the world, the process of properly integrating cargo with the orbiter is a large task. Both electronic and physical interfaces must be checked in order to verify, for example, that a Spacelab module built in Germany will properly fit to a vacuum-proof seal in the cargo bay and be able to "talk" to the Shuttle computers as well.
Soon after Rockwell International began work as the prime contractor, it and NASA did a study to find out how much and what kind of Interface Verification Equipment (IVE) would be needed for the operations era. Doing things the "traditional" way, in which each payload supplier did its own interface verification, would have required an estimated 20 sets of very expensive equipment. Robert Thompson favored sense over politics and decided in early 1976 to let Kennedy develop a centralized version of the IVE and do all the final interface testing for all the customers133.
During June, July, and August of 1976 the formal requirements for the cargo facility were baselined. But when a source selection board met to begin choosing equipment, the members realized that they were about to violate the basic principles of the Launch Processing System by bringing in new equipment and doing things in a unique way instead of using existing contracts and computers134. Accordingly, Kennedy stocked the cargo facility with the same physical and electronic interfaces present in the orbiter, permitting the same contractors and maintenance contracts to be used. In 1978, an AP-101 was added to provide a means to test software interfaces. Equipment in the cargo facility can also directly connect with the Launch Processing System so that payloads can be further integrated. CITE is another user of the simulations kept on the CDS135.
Payloads delivered to Kennedy are checked out and further prepared in either the horizontal facility (Spacelab would be worked on there) or the vertical facility (communications satellites are integrated vertically). After completion of the integration tests, a special transporter with cargo space as large as the Shuttle's bay moves the payloads to the Vehicle Assembly Building for installation in the orbiter.
The Launch Processing System in the Operations Era
Originally, Henry Paul had a goal of reducing the number of technicians in a firing room from the 250 of the Apollo era to about 45. Although he succeeded, in the early 1980s, the Launch Processing System was still labor intensive, with 75 civil servants and 700 contractors involved136. However, in late 1983, NASA awarded the Shuttle maintenance contract to Lockheed, which is now responsible for physical equipment and software relating to Shuttle launch [239] processing. That award marks the end of the multicontractor development era and the beginning, for the first time in NASA's history, of an operations era for a manned spacecraft. Before the Shuttle, each flight and the preparations beforehand were idiosyncratic. Now some degree of standardization and routine is possible, largely because of the nature of the Launch Processing System. Carl Delaune, a NASA engineer in the Applications and Simulations Branch, is exploring ways of applying artificial intelligence to checkout procedures, such as creating a program that makes suggestions to test engineers if strange values occur137. If his inquiry bears fruit, eventually the amount of human interaction during checkout will shrink even further.
As the development effort at Kennedy matured, the purposely staggered development of a Launch Processing System at Vandenberg Air Force Base began. Plans were to build the military facility after most of the developmental bugs were out of the NASA model. The Air Force saved money at its installation by modifying facilities built for the Gemini-technology Manned Orbiting Laboratory program in 1966138. Originally designed as a Titan III launch site, the complex provides for mating orbiter, tank, and boosters at the pad, as no Vehicle Assembly Building exists there. Ground checkout facilities are split between locations at North Vandenberg and South Vandenberg, so the CDS, CCMS, and RPS are physically separated139. With the commissioning of the western launch site in the early 1990s, the Shuttle program will have reached its full flowering.
Distributed computing, connecting different vendors' equipment successfully, good user interfaces, and automation are all topics of continued concern and research in the computer industry. The Launch Processing System solves all those problems in a specific arena. It is difficult to think of a system better suited to its task. A marvel of integration, efficiency, and suitability, it reflects the ingenuity and clear-sightedness of its originators. Lessons learned in the 1960s during the first attempts at automating checkout were applied in toto [sic] to the Launch Processing System. Rarely has a second system so completely eliminated the deficiencies of its predecessor.
