The SPs at NASA research labs help tackle Grand Challenge

The Numerical Aerodynamics Simulation facility at NASA's Ames Research Center in Moffett Field, Calif., has a world-class supercomputing capability, accessible to the nation's aeronautical researchers in government, industry and academia.

The facility seeks to create a highly parallel computing environment for solving critical path problems -- those for which a supercomputer will yield an order-of-magnitude improvement in performance -- thereby reducing or eliminating obstacles for aircraft designers and manufacturers. Its objectives include acting as a pathfinder in advanced, large-scale computing and providing a national computational capability to help ensure continued U.S. leadership in computational fluid dynamics and related aerospace disciplines.

Led by the Numerical Aerodynamics Simulation facility, NASA's research centers established the Computational AeroSciences (CAS) program to address NASA's Grand Challenge in Aeronautics -- to create an environment in which a complete aerospace vehicle system can be simulated within a computing time ranging from one to several hours.

To bring the agency a step closer to meeting that challenge -- and to exploit the benefits of scalable parallel computing for competitive advantage in a global economy -- NASA awarded a three-year, $22 million CAS cooperative research agreement to an IBM-led consortium. As part of the agreement, IBM has installed three Scalable POWERparallel Systems (SP2) as the hardware test beds for the research -- the first of them the 160-node SP2 at the NASA Ames Research Center.

Cooperative research
"We were interested in obtaining a parallel computer that had the best performance available for the dollar," says Thomas Lasinski, chief for the data analysis branch at the Numerical Aerodynamics Simulation facility. "NASA released a request for proposals that was broadly advertised, and there was a lot of interest in it. We received ten proposals, which a team of eight evaluated against a set of performance benchmarks. The decision, made on the basis of those numbers, was that the SP2 offered us the best price/performance."

Agreements like the one NASA signed with the IBM-led consortium are designed to foster cooperative research between industry and government labs by offering private firms advantageous rights to patents and other intellectual property from the joint research. The consortium includes Boeing Computer Services, Rensselaer Polytechnic Institute, Lockheed Missiles and Space Research, Centric Engineering Systems, Intelligent Aerodynamics and Rice University. Members will support NASA's investment in the agreement with equipment, research and facilities that will significantly increase the resources available.

"We found other features interesting as well," Lasinski continues. "The SP2 is based on workstation technology, for example, and a lot of our customers in the aerospace industry have large clusters of workstations. We're impressed with the RS/6000 Model 590 architecture, which has very good memory access. It was also important that the machine chosen be expandable to a reasonably large number of processors. And the SP2 can be updated with new RISC processors in a very timely and economical manner. But mainly we saw a machine that was competitive with the largest supercomputer NASA had -- the Cray C90 -- for a third of the price."

Benchmark winner
The SP2 at NASA Ames has 160 RS/6000 590 nodes, with 128MB of memory on most of them, and 512MB on several. The configuration is equipped with IBM's High Performance Switch, along with High Performance Parallel Interface (HIPPI) connections and FDDI fiber optic capability.

Says Parallel Systems Manager Toby Harness: "Important to us -- and increasingly to the parallel computing community -- are the benchmarks developed here in an attempt to come up with some objective measures of performance. Those benchmarks consist of five kernels and three synthetic computational fluid dynamics applications, which are the ones of most interest to us. For this acquisition we also developed three additional I/O benchmarks -- network, disk and peak I/O. The SP2's performance in the I/O studies was extremely strong, and it came in better than our estimates on almost all the benchmarks."
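As a rough illustration of how a timed parallel benchmark run of this kind is typically structured -- a minimal sketch in C with MPI, offered as an assumption about the general pattern rather than code from the NAS benchmark suite; the problem size and compute kernel are placeholders:

    /* Illustrative parallel timing harness (not part of the NAS benchmarks).
       Build with an MPI C compiler, e.g. mpicc -O2 bench.c -o bench. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int n = 1000000;              /* placeholder problem size per process */
        int rank, nprocs, i;
        double *a, local = 0.0, total = 0.0;
        double t0, t1, elapsed, slowest;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each process sets up its own share of the data. */
        a = malloc(n * sizeof(double));
        for (i = 0; i < n; i++)
            a[i] = (double)(rank + i) / n;

        MPI_Barrier(MPI_COMM_WORLD);        /* start every process together */
        t0 = MPI_Wtime();

        /* Placeholder kernel; a real benchmark would run a CFD kernel here. */
        for (i = 0; i < n; i++)
            local += a[i] * a[i];

        /* Combine the partial results from every node on rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        t1 = MPI_Wtime();
        elapsed = t1 - t0;

        /* Report the slowest process's time as the figure of merit. */
        MPI_Reduce(&elapsed, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("%d processes: %.6f s (checksum %.3f)\n", nprocs, slowest, total);

        free(a);
        MPI_Finalize();
        return 0;
    }

The slowest process's time is the usual figure of merit, since a parallel run finishes only when its last node does.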

IBM's T.J. Watson Research Center -- responsible for many of the SP2's hardware and system software technologies -- provided key contributions to the benchmarking efforts and will participate as a partner in the consortium.

"It took us about a month to get through the initial software installation and acceptance tests, and we had users taking advantage of the system while we were still figuring out how to tune it," says Charles Niggley, senior computer scientist of the parallel systems support group. "I was surprised at how well the High Performance Switch performed right from the start. We figured there'd be several, possibly major, hardware problems -- but we didn't encounter any. Compared to massively parallel systems previously installed, we're still amazed at how fast the SP2 came up, the reliability it's had to date and how quickly we're getting productive use out of the system."

Major payoffs
Numerical Aerodynamics Simulation technology offers major payoffs to the aeronautics industry by reducing cycle time and design cost and by enabling designs not otherwise achievable. Computational fluid dynamics -- the foundation for the work done at the facility -- has, for example, led to a design change that saved a commercial airline millions of dollars in fuel.

"Solving the coupled, partial differential equations that govern how fluid flows over an arbitrary body -- like a wing or a rotor blade -- is very computer intensive and very memory intensive. We use every byte of memory available on the SP2," points out Timothy Barth of the advanced algorithms and applications branch.

"It's the first machine that's really put us in the zone where we can think about understanding flow physics and how to design better high lift geometries. It's really changed the scale of the problems we can work on -- algorithms that parallelize quite well on a machine such as the SP2."

This page illustrates how one customer uses IBM products. Many factors have contributed to the results and benefits described. IBM does not guarantee comparable results. All information contained herein was provided by the featured customer and IBM Business Partners. IBM does not attest to its accuracy.



  