Livermore supercomputers boost scientific progress
Computer simulations take their place alongside theory and experiments as essential elements of scientific research
DOE/Lawrence Livermore National Laboratory
Imagine trying to conduct an experiment that requires reproducing the conditions at the core of a thermonuclear weapon explosion - without actually exploding the weapon. Or replicating centuries of climate change on a scale ranging from a square mile to the entire planet. Or determining the effect of a major earthquake on buildings and underground structures - without waiting for an actual earthquake to hit.
Scientific problems like these can be beyond the reach of experiments, either because the experiment is too expensive, too hard to perform or evaluate, too dangerous - or, as in the case of nuclear weapons testing - against national policy.
When such barriers arise, scientists at Lawrence Livermore National Laboratory (LLNL) and elsewhere increasingly are turning to sophisticated, three-dimensional supercomputer simulations to suggest and verify their theories and to design, complement and sometimes replace experiments.
Powered by the dramatic increase in supercomputer speed over the last decade, today's simulations can mimic the physical world down to the interactions of individual atoms. They can take scientists into the interiors of stars and supernovae and reproduce time scales ranging from trillionths of a second to centuries. Simulations can test theories, reveal new physics, guide the setup of new experiments and help scientists understand past experiments. Within the past few years, simulations have taken their place alongside theory and experiments as essential elements of scientific progress.
"Livermore from its very founding has understood the value of computing and has invested in generation after generation of supercomputers," says Dona Crawford, LLNL's associate director for computation. "The computer, in effect, serves as a virtual laboratory."
"Simulation at the resolution now available represents a revolution in the process of scientific discovery," adds Mike McCoy, LLNL's deputy associate director for computation. "We're augmenting the 300-year-old Newtonian model of 'observation, theory and experiment' with 'observation, theory, experiment and simulation.'"
Public-private partnerships are key
Many of the remarkable advances in high-performance computing over the last 50 years have resulted from collaborations between Livermore and its sister Department of Energy laboratories and the private sector. The Livermore Automatic Research Computer (LARC) project, a late 1950s collaboration with Remington Rand, is thought by many to represent the beginning of supercomputing. Working closely with industry leaders such as IBM, Control Data Corporation and Cray, LLNL shaped and contributed directly to supercomputer architectures, data management and storage hardware. Livermore was the first to require a computer using transistors rather than vacuum tubes; developed new technology such as the first practical time-sharing system; and fashioned a number of hardware and software tools that have found their way into the private sector.
That legacy of computing leadership continues today as Livermore prepares to put the current world champion in computing power, IBM's Blue Gene/L, to work beginning this summer.
For the past decade, the driving force behind supercomputer development at Livermore has been the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) program. ASC unites the resources of three national laboratories (Livermore, Los Alamos and Sandia), the major computer manufacturers, a host of networking, visualization, memory storage and other vendors, and researchers from top-level universities across the country. A common goal of these partnerships is to develop and deploy advanced computing and simulation capabilities that can simulate nuclear weapons tests, so that NNSA can ensure the safety and reliability of the nation's nuclear stockpile without underground testing.
In order to characterize nuclear reactions and the aging of materials, ASC has funded the design and construction of increasingly powerful scalable parallel supercomputers composed of thousands of microprocessors that solve a problem by dividing it into many parts. When it's completed later this year, the last of this series, a Livermore supercomputer named Purple, will run simulations at 100 trillion floating point operations per second (100 teraflops) - the equivalent of 25,000 high-end personal computers, and enough power to begin to model in detail the physics of nuclear weapons performance.
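To make the "dividing it into many parts" idea concrete, here is a minimal, hedged sketch in Python: a single sum is split into slices, each slice is computed by a separate worker process, and the partial results are combined. Production ASC codes do this with message-passing libraries across thousands of nodes; the function and numbers below are purely illustrative.

```python
# Illustrative sketch only: a toy "divide and conquer" calculation using
# Python's multiprocessing module. Real supercomputer codes use message
# passing (e.g., MPI) across thousands of nodes; this stand-in workload
# and these numbers are made up for illustration.
from multiprocessing import Pool

def partial_sum(bounds):
    """Each worker handles one slice of the problem domain."""
    start, end = bounds
    # Stand-in for real physics: sum x*x over the slice [start, end).
    return sum(x * x for x in range(start, end))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    slices = [(i * step, (i + 1) * step) for i in range(workers)]

    with Pool(workers) as pool:
        pieces = pool.map(partial_sum, slices)   # each part computed in parallel

    print("combined result:", sum(pieces))       # results gathered and combined
```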
Still more processing power is needed, however, and that's where Blue Gene/L (BG/L) takes supercomputing in a new direction. Unlike traditional supercomputers such as Purple, which use up to fifteen thousand powerful enterprise-class processors, the final Blue Gene/L configuration will have more than 131,000 low-cost embedded commodity microprocessors like those found in control systems and automobiles, supplemented by floating-point units. At one-half of its final configuration, BG/L is already the world's fastest computer based on the industry-standard LINPACK benchmark, with a sustained performance of more than 135 teraflops out of a peak of 180 teraflops. The final configuration will have a peak of 360 teraflops when installation is completed this summer. Blue Gene/L consumes very little power (just 2.5 megawatts to run and cool the computer) and occupies only 2,500 square feet of floor space. By comparison, the 100-teraflops Purple machine will require up to eight megawatts to run and cool the system and 7,000 square feet of floor space.
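A rough, back-of-the-envelope comparison based only on the figures quoted above (peak teraflops, megawatts to run and cool, and floor space) shows why the efficiency gap matters. The short Python sketch below computes teraflops per megawatt and per 1,000 square feet for the two machines; the inputs come from this article and the comparison is illustrative, not an official benchmark.

```python
# Back-of-the-envelope comparison using the figures quoted in the article:
# peak teraflops, megawatts to run and cool, and square feet of floor space.
systems = {
    "Blue Gene/L (full)": {"tflops": 360, "megawatts": 2.5, "sq_ft": 2500},
    "Purple":             {"tflops": 100, "megawatts": 8.0, "sq_ft": 7000},
}

for name, s in systems.items():
    tflops_per_mw = s["tflops"] / s["megawatts"]
    tflops_per_kft2 = s["tflops"] / (s["sq_ft"] / 1000)
    print(f"{name}: {tflops_per_mw:.0f} teraflops/MW, "
          f"{tflops_per_kft2:.0f} teraflops per 1,000 sq ft")
```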
Blue Gene/L is a collaborative effort between IBM Research and Livermore scientists from the tri-lab ASC program. The computer is "worlds apart" from other high-performance computers in performance, size, appearance and design, says Steve Louis, assistant head of Livermore's Integrated Computing and Communications Department, who is helping scientists from the three labs get early access for "first wave applications."
BG/L employs unique components, such as IBM's system-on-a-chip technology, and offers unusual features, such as three different ways to interconnect computer nodes for applications instead of the usual one, Louis says. "The basic principles that drove the design of this highly scalable system were 'keep it simple' and a 'divide and conquer' approach to software scale-up."
Although BG/L is currently at only half its full configuration, early "first wave" scientific applications are yielding promising and programmatically relevant results, Louis reports. "The machine is working very well, and the code teams are jostling to get time allocations on it."
Scientists in the ASC program from Los Alamos and Sandia also have put codes on BG/L. "This is a tri-lab effort and we've put codes from all three labs on the machine in an effort to share the advantages this new platform provides," Louis says.
"We're achieving some important early results," he says, noting recent simulations with unprecedented detail of pressure-induced solidification of tantalum at high temperatures and pressures - calculations that provided critical new understanding to the process.
"We're enabling great science for a targeted group of applications that are very important to NNSA's stockpile stewardship program, including problems in classical and first-principles molecular dynamics, instability and turbulence, and 3-D dislocation dynamics," he adds. "It's clear that BG/L is becoming a trend-setter.
Moving technology to the marketplace
Just as Livermore has worked with a variety of computer manufacturers to develop new advances in supercomputing, those advances have generated new software tools to capitalize on the capability of the hardware - tools which, in turn, have a variety of commercial applications.
"Supercomputing has generated new software for such things as new computer codes, data storage, visualization and file sharing," says Karena McKinley, director of LLNL's Industrial Partnerships and Commercialization Office, "and these have also enabled advances in other software for commercial applications."
The DYNA3D ("Dynamics in 3 Dimensions") computer program, for example, was developed in the 1970s to model the structural behavior of weapons systems. It was later broadly released to research institutions and industrial companies and gained widespread acceptance as the standard for dynamic modeling. The list of companies that have used DYNA3D reads like a "Who's Who" of American industry: GE, General Motors, Chrysler, Boeing, Alcoa, General Atomics, FMC Corp., Lockheed and more. A 1993 study found that DYNA3D generates $350 million a year in savings for U.S. industry by allowing speedier release of products to market and enabling savings in costly physical tests such as automobile crash tests.
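DYNA3D belongs to the family of explicit structural-dynamics codes, which march the equations of motion forward in small time steps. The hedged sketch below illustrates that general scheme (explicit central-difference integration) for a single spring-mass-damper; it is not DYNA3D itself, and the mass, damping and stiffness values are made up for illustration, while a real crash or impact model involves millions of finite elements.

```python
# Toy sketch of explicit central-difference time integration, the general
# class of scheme used by explicit structural-dynamics codes such as DYNA3D.
# Single degree of freedom, arbitrary units; illustrative only.
import math

m, c, k = 1.0, 0.1, 50.0               # mass, damping, stiffness
omega = math.sqrt(k / m)               # natural frequency
dt = 0.25 * (2.0 / omega)              # well below the explicit stability limit 2/omega
steps = 400

u = 0.0                                # displacement at step n
v = 1.0                                # initial velocity
u_prev = u - v * dt                    # back-step to start the recursion

for _ in range(steps):
    accel = (-c * v - k * u) / m               # Newton's second law
    u_next = 2 * u - u_prev + accel * dt ** 2  # central-difference update
    v = (u_next - u_prev) / (2 * dt)           # central-difference velocity
    u_prev, u = u, u_next

print(f"displacement after {steps * dt:.2f} time units: {u:+.4f}")
```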
Chromium, another widely used technology developed at LLNL, makes it possible to create sophisticated graphics and visualizations from the output of "commodity clusters," which are dozens or hundreds of interconnected personal computers operating in parallel as a supercomputer. Taking its name from clustered rendering, or Cr (the atomic symbol for the element chromium), this free, open-source software allows PC graphics cards to communicate and synchronize their commands to create single images from their combined data. More than 20,000 copies of Chromium have been downloaded since its release in August 2003, and the software received a 2004 R&D 100 Award from R&D Magazine as one of the year's top 100 technological advances.
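As a hedged illustration of what combining the output of many graphics cards into a single picture can involve, the NumPy sketch below performs one common compositing step: each render node contributes a color buffer and a depth buffer, and at every pixel the nearest surface wins. This is not Chromium's actual interface, which supports several parallel-rendering configurations; it simply shows the kind of per-pixel combination such systems perform, with made-up buffer contents.

```python
# Illustrative sketch of depth-based ("sort-last") image compositing, the
# kind of combine step a cluster rendering system performs when several
# nodes each draw part of a scene. Not Chromium's API; buffers are random.
import numpy as np

HEIGHT, WIDTH, NODES = 4, 6, 3
rng = np.random.default_rng(1)

# Each render node produces a color buffer and a depth buffer for the frame.
colors = rng.integers(0, 256, size=(NODES, HEIGHT, WIDTH, 3), dtype=np.uint8)
depths = rng.uniform(0.0, 1.0, size=(NODES, HEIGHT, WIDTH))

# At every pixel, keep the color from whichever node drew the nearest surface.
nearest = np.argmin(depths, axis=0)            # (HEIGHT, WIDTH) index of winning node
rows, cols = np.indices((HEIGHT, WIDTH))
final_image = colors[nearest, rows, cols]      # (HEIGHT, WIDTH, 3) composited frame

print("composited image shape:", final_image.shape)
```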
Much more to come
Thanks to the continuing computing partnerships between government labs, industry and academia, "newer examples of application software are being generated right now," says McKinley. "We can soon expect a whole new generation of these programs for medical simulations, genetic computing, global climate modeling, aerospace and automotive design, financial models and many other domestic applications."
Even Blue Gene/L won't be powerful enough to simulate all the complexities of matter at extreme pressures and temperatures. Looking to the future, Livermore hopes to acquire a petaflops (1 quadrillion operations per second, or 1,000 teraflops) supercomputer by 2010. Such a computer, says Louis, "would require fundamental change in the way high performance systems are designed and managed."
"LINPACK numbers and Top500 lists are certainly exciting news," he says, "but applications results are what the excitement is all about. BlueGene/L applications are enabling revolutionary science and opening a cost-effective path to petaflop computing."
McCoy, the deputy associate director for computation, calls today's supercomputer simulations "science of scale" because they represent extreme efforts to unlock nature's secrets.
"These simulations are similar to very large experiments in terms of the manpower and investment required before one can do the simulation or 'experiment,' McCoy says. "In this sense, computing at this scale is perfectly aligned with the mission of a national laboratory: to provide and apply apparatus for unlocking nature's secrets that can be found nowhere else.
"The goal is to compute at a level of resolution and with a degree of physical accuracy that gives scientists confidence that the numerical error and inaccuracies in their simulations do not becloud the insights that they will enjoy from studying the results," he says. "This is an exciting time to be at Livermore."