Photo: NASA ImmersaDesk
The ImmersaDesk provides semi-immersive virtual reality in a drafting-table format. The system requires stereo glasses and magnetic head and hand tracking.

Photo: Andrea Malagoli, Principal Investigator at the University of Chicago's Department of Astronomy and Astrophysics, describes the Turbulent Convection and Dynamos in Stars project.


Photo: Anne Hammond connects the switched network within the Laboratory for Computational Dynamics to the University of Colorado's local ATM infrastructure.

Photo: University of Colorado meeting via the high-performance network

Fausto Cattaneo (on the bottom half of the photo) of the University of Chicago joins a meeting at the University of Colorado via the high-performance network. NASA HPCC "Turbulent Convection and Dynamos in Stars" researchers use teleconferencing to collaborate on scientific problems. Participants at the table (left to right) are Mark Miesch, GSFC; Juri Toomre, University of Colorado; and Paul Woodward, University of Minnesota; at the console (from left) are Nicholas Brummell, Marc DeRosa and Julian Elliott of the University of Colorado.

The challenge

How do you study the physics of a spinning sphere as formidable as our Sun while it shoots off boiling, exploding gases? Predicting the dynamic interaction of solar processes that affect not only the internal structure of the Sun, but the entire solar-terrestrial environment, is now easier thanks to the Earth and Space Sciences (ESS) project of NASA's High-Performance Computing and Communications (HPCC) Program. Understanding the physics of these processes by means of large-scale supercomputer simulations is only one of the challenges facing team members of the Turbulent Convection and Dynamos in Stars project. The team also has to share its information in real time with colleagues at remote, collaborating institutions so together they can visualize, manipulate and analyze the simulation data.

"Until seven years ago we were studying these problems using unsophisticated models," said Andrea Malagoli, the project's Principal Investigator at the University of Chicago's Department of Astronomy and Astrophysics. "We concluded we would never understand the complex solutions of the model equations until we could solve them numerically. In the past 10 years, computers have become sufficiently powerful to allow us to solve very large models. To perform the largest calculations that include more than 100 million variables, we need NASA supercomputers. To bring these simulations from Goddard to our universities," said Malagoli, "we need a very-high-speed network."

Managed by Jim Fischer at NASA's Goddard Space Flight Center (GSFC), the NASA/ESS Cooperative Agreement Notice includes Turbulent Convection and Dynamos in Stars as one of its nine main projects. The project involves the collaboration of three universities as members of the Grand Challenge team. "We are studying thermal convection," said Malagoli, "the process that transports the heat generated at the center of the Sun by thermonuclear processes all the way up to the surface where it is released in the form of radiation. Convection is important not only because it transports heat, but because it also creates strong stirring motions beneath the Sun's surface. The stirring motions are the main mechanism responsible for generating solar magnetic fields and all the phenomena associated with them. That includes Sun spots, big solar flares and all magnetic wave phenomena caused by solar activity. The magnetic field generated in the Sun extends all the way to the Earth, and convection-driven dynamo processes are the key mechanism for its generation."

In this Grand Challenge project, teams at the University of Colorado at Boulder, University of Chicago and University of Minnesota each modeled a different, but complementary, aspect of the behavior of the extreme temperature and pressure plasmas found in stars like the Sun. Generating up to 500 gigabytes of data each, the models were computed on the Cray T3E supercomputer at GSFC, the ESS testbed. (One gigabyte equals 1,000 megabytes, roughly the amount of data required to encode a human gene sequence.) Parts of the computed data were then transferred over high-performance networks back to the investigators' sites at each participating university so they could be analyzed and visualized on local machines.

So much data

The models encompass a three-dimensional domain, 10,000 kilometers on each edge, that resembles a plug in the side (and near the surface) of the Sun. This way, investigators can observe smaller details (approximately 20 kilometers) without saturating computer processing time and disk space. "We are trying to develop tools that make smarter use of the network," said Malagoli. "Instead of sending lots of raw data, we are doing more image preprocessing on the supercomputer. This way, we can visualize data faster, and we can almost see the simulation's evolution as it takes place."
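
To make the idea concrete, the sketch below is illustrative only; it is not the project's actual pipeline, and the array sizes are assumptions. It shows the kind of reduction such preprocessing buys: a single 2-D slice is cut from a 3-D field and quantized to an 8-bit image on the supercomputer, so only a few hundred kilobytes cross the network instead of the gigabyte-scale raw volume.

    # Illustrative sketch of server-side preprocessing (not the project's code).
    # A 512 x 512 x 512 array of 8-byte floats is roughly 1 GB; the single
    # 512 x 512 8-bit image extracted here is about 256 KB.
    import numpy as np

    def preprocess_slice(field: np.ndarray, axis: int = 0, index: int = 0) -> bytes:
        """Cut one 2-D slice from a 3-D field and scale it to 8-bit grayscale."""
        plane = np.take(field, index, axis=axis)          # 2-D cut through the volume
        lo, hi = float(plane.min()), float(plane.max())
        scaled = (plane - lo) / (hi - lo + 1e-12) * 255.0  # normalize to 0..255
        return scaled.astype(np.uint8).tobytes()           # ship this, not `field`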

"The entire simulation series generates several hundred gigabytes of data," said Malagoli. "We look at a frame sequence of a pre-selected physical quantity to see its evolution in time. Each frame requires us to download several gigabytes of information. We look at a number of evolutions requiring thousands of frames, so it takes several hundred hours before we have a full data set. Each hour we need to transfer about 200 gigabytes of data. Most of our problems are not so much in computing the model as in being able to transfer the data over the network. Today's Internet cannot provide the necessary bandwidth, the rate of data flow through the exchange points or the connection reliability that our project requires."

Getting from data to information

In the Turbulent Convection and Dynamos in Stars project, each university hosts a visualization session, transmitting image data to the other participants. Software developed at Argonne National Laboratory in Chicago displays the visualization on 3-D virtual reality tools (e.g., CAVE or ImmersaDesk) that can be linked across local or wide area networks. "Researchers at different sites can watch visualization of the numerical simulations simultaneously," said Malagoli. "We hope that one day they will all be able to participate actively in the data analysis process."

"High-performance networks are critical for the data-intensive computing and analysis conducted by the Turbulence in Stars team," said Anne Hammond, Networking and Systems Manager of the Laboratory for Computational Dynamics and NASA/SOHO (Solar and Heliospheric Observatory) Helioseismic Analysis Facility at the University of Colorado. "Networks are currently one tool in the evolving kit that includes terabyte (1024 gigabytes) disk systems, high-speed disk arrays and high-speed/high-capacity tape libraries‹tools not available at many university labs or even at some supercomputing sites. The tools are being pushed to their limits as researchers test methods of managing data and of interacting with each other and with the data to gain insight into the most challenging scientific problems."

"One of the key features in this project was the partnership of the networks," said Hammond. "The partnership included HPCC's NASA Research and Education Network/Next Generation Internet (NREN/NGI), the Energy Sciences Network (ESnet), the Metropolitan Research and Education Network (MREN, a high-speed research network local to the Chicago area directed by Joe Mambretti) and the National Science Foundation's very-high-performance Backbone Network Service (vBNS). The coordination by NREN's Marshall Deixler facilitated setup and analysis of several critical areas. Two such areas were unicast and multicast peering at Chicago between the NASA network and the MREN and vBNS networks. This peering networked the three universities, Argonne National Laboratory and the NASA sites (including the HPCC-ESS Cray T3E testbed) at asynchronous transfer mode OC12 (622 Mbps) and OC3 (155 Mbps) speeds."

True collaboration

High-speed networks also make it possible to cooperate with people at different locations. "Previously, we had to communicate by phone or fly to meetings," said Malagoli. "We now use multicasting capabilities to send the same data or images to several users in different locations simultaneously. You can also accomplish the same with voice communication to describe the content of your simulations while you are sending the images. Multicast technology is not standard on the regular Internet, so you need a specialized network capable of supporting it."
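
The idea Malagoli describes, one sender reaching many subscribed receivers without duplicating the stream for each, is what IP multicast provides at the network layer. The short sketch below uses Python's standard socket module to illustrate the mechanism; the group address, port, and payload are illustrative and are not part of the project's actual software.

    # Minimal IP multicast sketch with the standard library (illustrative values).
    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5007   # hypothetical multicast group and port

    def send(message: bytes) -> None:
        """Send one datagram; every host that has joined the group receives it."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
        sock.sendto(message, (GROUP, PORT))
        sock.close()

    def receive() -> bytes:
        """Join the group and wait for one datagram."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # Ask the kernel to subscribe this host to the multicast group.
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, _addr = sock.recvfrom(65536)
        sock.close()
        return data

Because this delivery is UDP-based and unreliable, collaborative tools must layer their own recovery on top, which is part of what makes reliable multicast the research topic described next.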

NREN is investigating alternate approaches to enable real-time, reliable multicast for collaborative visualizations. Such projects require delivery of very large data sets, teleconferencing, Quality of Service and video distribution within specific delay constraints. NREN is experimenting with a native multicast wide area network between the institutions involved in this project.

"The path from structure to infrastructure to a true facility requires the long-term commitment of institutions," said Hammond. "The National Laboratory for Applied Network Research now provides technical, engineering and traffic analysis support of vBNS for over 100 institutions. This is a long way from the not-so-distant early days of the vBNS (1995) when analysis to the university level was fragile. We will need analogous support services by partnering network engineering teams. These services will provide the framework for the complex networking and applications now evolving in the way scientists use the nation's supercomputing facilities and the way scientists work with each other. This cooperative structure is already taking shape as we see the early experiences of high-performance networking coalesce into NGI and Internet2. Internet2 is a nonprofit consortium of public and private companies and universities."

The promise

The success of complex simulations such as those in Turbulent Convection and Dynamos in Stars relies on teamwork and a broad spectrum of expertise. "We have to put the right physics in the model," said Malagoli, "but we also have to make sure that our application programs run efficiently on the large supercomputers, that we can write the data to a large disk, and that we can transport the data to a site where they can be visualized."

The combination of scientific research, computer science and next-generation networking has opened vast new avenues for scientific exploration. In addition to primary research in astrophysics, improved analysis of 3-D space rendering and remote collaborative visualization tools, the Turbulent Convection and Dynamos in Stars project will foster technological advancement in high-performance computer architectures and networking to solve fundamental scientific issues. The infrastructure will allow this ESS scientific team to enhance its data analysis capabilities and use NASA HPCC supercomputing investments more effectively. The end result--enabling researchers to see and interact with the same data at different sites around the country simultaneously on virtual reality platforms--opens myriad possibilities for applications of the future.

http://astro.uchicago.edu/Computing/HPCC
