
Visualization graphic by Ray Gomez


John Vassberg

The OVERFLOW version utilizing Parallel Virtual Machine software helps John Vassberg and his associates compute the aerodynamic characteristics of the Boeing 717-200, shown in the background.


John Vassberg and the MD-11

The corner flow between the upper surface of the wing and the fuselage was evaluated on the MD-11 using the OVERFLOW code.


Dennis Jespersen

Dennis Jespersen implemented a multigrid procedure in the code, a method that reduces the computer time needed to produce results.


Stuart Rogers and Pieter Buning

Stuart Rogers (left) of NASA Ames Research Center and Pieter Buning of Langley Research Center.

OVERFLOW code empowers Computational Fluid Dynamics
by Judy Conlon

The overset grid flow solver OVERFLOW was developed as part of a collaborative effort between NASA Johnson Space Center in Houston, Texas, and NASA Ames Research Center (ARC) at Moffett Field, Calif. The driving force behind this work was the need to evaluate the flow about the Space Shuttle launch vehicle. Developed in the early 1990s by NASA's Pieter Buning, now at Langley Research Center, Dennis Jespersen at ARC and others, the code is an outgrowth of the earlier codes F3D and ARC3D, and a result of ARC's long history of flow-solver development.

Scientists use OVERFLOW to better understand the aerodynamic forces on a vehicle by evaluating the flowfield surrounding it. While wind tunnel testing provides limited data at many flow conditions, computational fluid dynamics (CFD) simulations provide detailed information about selected conditions. CFD also provides a much-needed distribution of forces on the vehicle, aiding in structural design.

Most advanced CFD systems are based on the Navier-Stokes equations of motion for fluids. These partial differential equations express conservation of mass, momentum and energy. CFD codes usually incorporate some simplifications in these equations to reduce the computational burden to an acceptable level. Once the equations to solve a problem have been selected, they are converted to finite difference approximations in which the solution is determined on a grid of points spaced in some regular pattern representing the flow domain and boundary points.
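
For reference, the conservation laws and the finite-difference idea described above can be written compactly as follows. This is standard textbook notation, not a transcription of OVERFLOW's internal formulation, and the flux vectors are left undefined.

```latex
% Conservation form of the 3-D compressible Navier-Stokes equations
% (standard textbook notation, not OVERFLOW's internal variables).
\[
\frac{\partial Q}{\partial t}
+ \frac{\partial (E - E_v)}{\partial x}
+ \frac{\partial (F - F_v)}{\partial y}
+ \frac{\partial (G - G_v)}{\partial z} = 0,
\qquad
Q = \begin{pmatrix} \rho \\ \rho u \\ \rho v \\ \rho w \\ e \end{pmatrix}
\]
% Q holds the conserved quantities (mass, momentum, energy); E, F, G are
% the inviscid fluxes and E_v, F_v, G_v the viscous fluxes.  A central
% finite-difference approximation replaces each derivative by a
% combination of values at neighboring grid points, for example
\[
\left.\frac{\partial E}{\partial x}\right|_{i}
\approx \frac{E_{i+1} - E_{i-1}}{2\,\Delta x}.
\]
```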

OVERFLOW is a compressible 3-D flow solver that solves the time-dependent, Reynolds-averaged, Navier-Stokes equations using multiple overset structured grids. It provides an alternative, cost-effective means of simulating real flows. "A body-conforming mesh represents each object in the flow field or each aircraft component," explains Buning. "No limitations are placed on mesh interaction except that the entire volume of the flow field must be filled by grid points."

OVERFLOW aids CFD

Overset grid technology, or the process of dividing complex shapes into overlapping subdomains called blocks, was developed by the late Joseph Steger, Jack Benek and F. Carroll Dougherty. The user must first generate grids on the surface of each block, followed by volume grids within. Then, because the blocks can overlap in an arbitrary fashion, it is necessary to specifically determine how information is transferred.
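
As a rough illustration of that information transfer, the Python sketch below shows how a boundary ("fringe") point of one block might take its value by bilinear interpolation from an overlapping block. The helper `interpolate_from_donor` is hypothetical, and the donor block is assumed, for simplicity, to be a uniform 2-D Cartesian patch; OVERFLOW and its companion tools use far more general donor-search and interpolation machinery.

```python
# Illustrative sketch only (hypothetical helper, not OVERFLOW's actual
# donor-search code): a boundary ("fringe") point of one overset block
# gets its flow value by bilinear interpolation from an overlapping
# donor block, assumed here to be a uniform 2-D Cartesian patch.
import numpy as np

def interpolate_from_donor(x, y, donor_values, x0, y0, dx, dy):
    """Bilinearly interpolate donor_values, given on a uniform grid with
    origin (x0, y0) and spacing (dx, dy), at the point (x, y)."""
    # Locate the donor cell containing the point, clamped to the grid.
    i = min(max(int((x - x0) // dx), 0), donor_values.shape[0] - 2)
    j = min(max(int((y - y0) // dy), 0), donor_values.shape[1] - 2)
    # Local coordinates of the point inside the donor cell, in [0, 1].
    s = (x - (x0 + i * dx)) / dx
    t = (y - (y0 + j * dy)) / dy
    # Bilinear weights applied to the four surrounding donor points.
    return ((1 - s) * (1 - t) * donor_values[i, j]
            + s * (1 - t) * donor_values[i + 1, j]
            + (1 - s) * t * donor_values[i, j + 1]
            + s * t * donor_values[i + 1, j + 1])

# Example: one fringe point sampling a donor block's field.
donor = np.fromfunction(lambda i, j: i + 10 * j, (5, 5))
print(interpolate_from_donor(1.25, 0.5, donor, x0=0.0, y0=0.0, dx=1.0, dy=1.0))
```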

Once the complete grid is generated, OVERFLOW calculates the conservation of mass, momentum and energy for each domain. Given the geometry and the flow conditions on the boundaries, OVERFLOW proceeds to solve for the flow quantities in the interior of the domain.

Grid-generation bottleneck

Most CFD packages require large amounts of CPU time to produce accurate results. Dennis Jespersen implemented a multigrid procedure in the OVERFLOW code; the multigrid method cuts the required computer time by accelerating convergence of the flow solver.
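
The sketch below shows the general multigrid idea on a toy 1-D Poisson problem: smooth on the fine grid, correct the remaining smooth error on a coarser grid, then smooth again. It is a minimal illustration of the technique, not Jespersen's implementation in OVERFLOW.

```python
# Minimal two-grid V-cycle for a 1-D Poisson problem (-u'' = f, u = 0 at
# the ends). Illustrative only; this is the general multigrid idea, not
# Jespersen's implementation in OVERFLOW.
import numpy as np

def jacobi_smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Damped-Jacobi smoothing: damps high-frequency error components."""
    for _ in range(sweeps):
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def v_cycle(u, f, h):
    u = jacobi_smooth(u, f, h)                     # pre-smooth on the fine grid
    r = np.zeros_like(u)                           # residual r = f + u''
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    rc = r[::2].copy()                             # restrict residual to coarse grid
    ec = jacobi_smooth(np.zeros_like(rc), rc, 2 * h, sweeps=20)   # approximate coarse solve
    e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)  # prolong correction
    u += e                                         # apply coarse-grid correction
    return jacobi_smooth(u, f, h)                  # post-smooth

n, h = 65, 1.0 / 64                                # fine grid: 65 points on [0, 1]
u, f = np.zeros(n), np.ones(n)
for _ in range(10):                                # a few V-cycles converge quickly
    u = v_cycle(u, f, h)
```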

A veteran of parallel computing, Jespersen is quick to point out that the High-Performance Computing and Communications Program provides computing platforms ranging from workstations to highly parallel testbeds to meet the daunting computational demands of these computational fluid dynamics applications. For example, ARC's Jim Taft says, "Typical performance for the standard C90 version of the OVERFLOW code is around 450 million floating-point operations per second (megaflops) on a single CPU, making it one of the better vector codes in the CFD arena."

The meshes needed by OVERFLOW are a key factor in the overall computational process. "The grid generation process represents by far the largest amount of calendar time of a project, though the flow solver consumes all but a fraction of the computer time," states Buning.

OVERFLOW helps designers analyze whole vehicles more easily. "In an overset grid scheme, grids are generated about individual components (such as a wing or fuselage), and then the overlapped grids are tied together to create a grid system about the complete vehicle," Buning explains.

When asked what makes OVERFLOW stand out from other CFD software, Jespersen replied, "These are some of the benefits I've found helpful. The geometric flexibility meets users' needs. The accuracy of the results (compared to wind-tunnel experiments) speaks to the reliability of the tool. We've provided some good support, and the convergence of the code for steady-state problems is outstanding. OVERFLOW is a tool that can easily simulate flow conditions with inherent stability."

Transporting messages

Parallelizing CFD codes that might otherwise tie up single-processor workstations for days offers the best balance of efficiency and CPU time. Versions of OVERFLOW built on Parallel Virtual Machine (PVM) or Message Passing Interface (MPI) software can run on multiple processors. Multicomputers that do not have shared memory must communicate by message passing: messages are passed among processors that all collaborate on solving the same problem. Each processor handles the calculations for one grid block of a complicated geometry, because it needs information only from a surrounding area.
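
The sketch below shows the flavor of that message-passing pattern, written here with the mpi4py bindings for MPI (an assumption made for brevity; the article's parallel OVERFLOW uses PVM or MPI directly). Each process owns one block and swaps only the overlap values it needs with its neighbors; it could be run with, for example, "mpiexec -n 4 python halo.py" (a hypothetical filename).

```python
# Toy halo exchange written with mpi4py (an assumption; the article's
# parallel OVERFLOW uses PVM or MPI directly). Each MPI rank owns one
# 1-D "block" and swaps only the overlap values it needs with its
# neighbors before doing its local computation.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                                  # interior points in this block
u = np.full(n_local + 2, float(rank))          # block data plus two halo cells

# Neighboring ranks; PROC_NULL makes the edge exchanges harmless no-ops.
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

recv_left, recv_right = np.empty(1), np.empty(1)

# Send my first interior value left while receiving the right neighbor's,
# then the mirror-image exchange in the other direction.
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=recv_right, source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=recv_left, source=left)

if left != MPI.PROC_NULL:
    u[0] = recv_left[0]                        # fill left halo
if right != MPI.PROC_NULL:
    u[-1] = recv_right[0]                      # fill right halo
# ...the local stencil computation on u[1:-1] would follow here.
```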

"If some blocks are significantly smaller than others, those processors will finish early and be forced to wait," adds Buning. "This could be mitigated by either running several small blocks on a single node, or splitting larger blocks to achieve a more even set of block sizes."

Recently, the Numerical Aerospace Simulation Facility at ARC has been developing additional coarse-grain parallel techniques that have shown significant promise in improving OVERFLOW's parallel scaling. The new method, termed Multi-Level Parallel, has demonstrated 6.3 billion floating-point operations per second (gigaflops) of sustained performance on a 128-CPU Silicon Graphics Inc. Origin 2000.

Slightly different versions of the software are available for the CRAY C90, IBM SP2 and workstation cluster platforms. "OVERFLOW offers users the ability to select from a wide range of proven solvers, smoothers and turbulence models to solve the 3-D Navier-Stokes equations for fluid motion at a wide range of Mach numbers," states Taft.


"Currently our users save up to $120,000 per week by using OVERFLOW-PVM and have been processing this savings for a year now."

John Vassberg
Boeing


One motive for parallelizing a CFD code is that supercomputer time is expensive; most users cannot justify the cost of a supercomputer to solve everyday design problems. Production runs on workstation cluster platforms at aerospace giant Boeing confirm the cost savings and accuracy of OVERFLOW-PVM, which the company uses to compute the aerodynamic characteristics of its Boeing 717 and blended-wing-body aircraft. "Currently our users save up to $120,000 per week by using OVERFLOW-PVM and have been processing this savings for a year now. This savings is derived from using in-house workstation clusters instead of purchasing commercially available supercomputer time," says John Vassberg, Boeing senior principal engineer and scientist.

Grid choices

The OVERFLOW solver's gridding method has found a strong following among users in the aerospace and scientific community who are confronted with scores of complex problems. These grids, which are made up of multiple overlapping rectangular or brick-shaped elements (curvilinear Cartesian grids), play a necessary role because it is impossible to discretize any reasonably complicated domain with a single curvilinear Cartesian grid.

A relatively recent CFD development that provides a competing strategy is the incorporation of unstructured grids, meaning triangles and tetrahedra. In geometrically complex regions, this method gives great flexibility in the discretization. Unstructured methods also promise easy adaptive-mesh refinement: the grid is modified by adding new points in important regions, where rapid changes in flow quantities such as pressure or density are detected automatically as the computation progresses.


"Adding an adaptive-gridding capability to OVERFLOW would speed up the grid-generation process and drive down the total solution time."

Dennis Jespersen
NASA


A challenge for OVERFLOW is to provide adaptive-mesh refinement to meet the needs of CFD designers. In calculating and analyzing the wake of a wing or a rotating helicopter rotor, for example, adaptive refinement eases the burden on the grid generation process by packing a lot of grid points in regions where the flow properties are varying rapidly. Currently, the researcher or scientist generating the grid must use experience and intuition to guess where to incorporate a high concentration of points. Jespersen states, "Adding an adaptive-gridding capability to OVERFLOW would speed up the grid-generation process and drive down the total solution time." 
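
As a concrete, purely hypothetical illustration of what such an adaptive-gridding sensor might look like, the sketch below flags points where the density gradient is large, the kind of region that would receive extra grid points on the next refinement pass. The function `flag_for_refinement` is an assumption for illustration; OVERFLOW did not have this capability at the time of the article.

```python
# Hypothetical sketch of a gradient-based refinement sensor (not an
# OVERFLOW feature): flag points where density changes rapidly; these
# are the regions that would receive extra grid points when refining.
import numpy as np

def flag_for_refinement(density, spacing, threshold):
    """Return a boolean mask of points whose density-gradient magnitude
    exceeds `threshold`, using simple central differences."""
    d_dx, d_dy = np.gradient(density, spacing)
    return np.hypot(d_dx, d_dy) > threshold

# Example: a smooth field with a sharp jump, a crude stand-in for a shock.
x = np.linspace(0.0, 1.0, 101)
X, _ = np.meshgrid(x, x, indexing="ij")
rho = 1.0 + 0.5 * (np.tanh((X - 0.5) / 0.02) + 1.0)
mask = flag_for_refinement(rho, x[1] - x[0], threshold=5.0)
print(mask.sum(), "of", mask.size, "points flagged for refinement")
```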
