ASCR Monthly Computing News Report - August 2010

Modeling Nickel Fractures and Next-Generation Reactors
A multidisciplinary team of physicists, chemists, materials scientists and computer scientists innovated in simulation methods and parallel computing technologies to perform the largest-ever (48 million atoms) chemically reactive molecular dynamics simulation, run on 65,536 IBM Blue Gene/P processors at the Argonne Leadership Computing Facility. The researchers simulated the introduction of small amounts of sulfur into the boundaries between nickel grains to investigate a material property known as “embrittlement.” They found that minute amounts of impurities segregated to the grain boundaries of a material, nickel in this case, essentially alter the material’s fracture behavior.
Seeing how different configurations of nickel function at these exceptionally small scales helps researchers understand some of the basic chemistry necessary to expedite the development of next-generation nuclear reactors. The study was published in Physical Review Letters (April 2010). Read more about the research.
NERSC Calculations Increase Speed of Designing Experimental Accelerators
Researchers computing at the National Energy Research Scientific Computing Center (NERSC) have sped up the modeling, and thus the design, of experimental laser wakefield accelerators by a factor of hundreds. The team, led by principal investigator Warren Mori of the University of California at Los Angeles and his collaborator Luis Silva of the Instituto Superior Tecnico (IST) in Lisbon, Portugal, published its findings in Nature Physics.

Laser wakefield acceleration works by shooting powerful laser pulses through a cloud of ionized gas (plasma). The pulse creates a wave (or wake) on which introduced electrons "surf," much as human surfers ride ocean waves. Using this method, researchers have demonstrated acceleration gradients 1,000 times greater than those of conventional accelerators.
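The accelerating field a plasma can sustain grows with the square root of the electron density, which is where the large advantage over conventional radio-frequency cavities comes from. The sketch below is a standard cold, nonrelativistic wave-breaking estimate, not a calculation from the paper:

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19   # elementary charge, C
E_MASS = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
C_LIGHT = 2.99792458e8       # speed of light, m/s

def wavebreaking_field(n0_per_cm3):
    """Cold wave-breaking field E0 = m_e * c * w_p / e for a plasma
    of electron density n0 (given in cm^-3); returns V/m."""
    n0 = n0_per_cm3 * 1e6  # convert cm^-3 to m^-3
    # Electron plasma frequency (rad/s)
    w_p = math.sqrt(n0 * E_CHARGE**2 / (EPS0 * E_MASS))
    return E_MASS * C_LIGHT * w_p / E_CHARGE

# A typical wakefield plasma density of ~1e18 cm^-3 supports fields of
# order 100 GV/m, versus tens of MV/m for conventional RF structures.
print(f"{wavebreaking_field(1e18):.3e} V/m")
```

The square-root scaling means quadrupling the plasma density doubles the maximum accelerating field.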

Jaguar Helps Researchers Find Path of Least Resistance
A dash of barium added to a lanthanum-copper-oxygen mixture (called a cuprate) produces an especially promising superconductor. There is, however, one catch: The material is a great superconductor in two dimensions, not three. A team led by Thomas Schulthess of the Swiss National Supercomputing Centre corroborated this discovery using Oak Ridge National Laboratory’s Jaguar supercomputer. Its findings appear in the June 16 Physical Review Letters.
The team found that when an eighth of the lanthanum atoms in the cuprate are replaced with barium, stripes consisting of electron holes (locations where expected electrons are missing) appear in the conducting copper-oxygen layer. If the conducting layer is kept to a single layer of atoms, the material remains superconducting at a relatively high temperature. Fortunately, according to team member Thomas Maier of ORNL, manufacturing technology has advanced nearly to the point at which we are able to lay down layers so thin they are essentially two-dimensional.
Contact: Thomas Maier,
ORNL Scientists Explain Graphene Mystery
Nanoscale simulations and theoretical research performed at the Department of Energy's Oak Ridge National Laboratory (ORNL) are bringing scientists closer to realizing graphene's potential in electronic applications. A research team led by ORNL's Bobby Sumpter, Vincent Meunier and Eduardo Cruz-Silva has discovered how loops develop in graphene, an electrically conductive, high-strength, low-weight material that resembles an atomic-scale honeycomb.
Structural loops that sometimes form during a graphene cleaning process can render the material unsuitable for electronic applications. Overcoming these types of problems is of great interest to the electronics industry. The team used quantum molecular dynamics to simulate an experimental graphene cleaning process, as discussed in a paper published in Physical Review Letters. Calculations performed on ORNL supercomputers pointed the researchers to an overlooked intermediate step during processing.
Simulating Brain Blood Flow to Better Diagnose, Treat Aneurysms
Patient-specific simulations of brain blood flow aim to improve the diagnosis and treatment of aneurysms. Researchers from University College London have made significant progress in studying three patients’ internal carotid artery aneurysms. In conducting the simulations, the researchers used HemeLB, a sparse-geometry optimized lattice-Boltzmann code, in standalone mode on Intrepid, the 557-teraflops IBM Blue Gene/P at the Argonne Leadership Computing Facility. Intrepid allows flow calculation at speeds fast enough to be clinically useful. The simulations involved a number of steps—acquiring angiography data, transferring it to local resources, pre-processing locally, staging to remote resources for simulation, and reporting (using interactive steering and visualization).
Contact: Peter Coveney,
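The method at the heart of HemeLB can be illustrated with a toy example. The sketch below is not HemeLB code (which is a sparse-geometry, massively parallel implementation for real vascular geometries); it is a minimal D2Q9 lattice-Boltzmann collide-and-stream update on a small periodic grid:

```python
import numpy as np

# Lattice velocities and weights for the nine D2Q9 directions
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    cu = np.einsum('id,xyd->ixy', C, u)            # c_i . u at every node
    usq = np.einsum('xyd,xyd->xy', u, u)           # |u|^2
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    """One BGK collide-and-stream step; returns (f, density, velocity)."""
    rho = f.sum(axis=0)                            # density (zeroth moment)
    u = np.einsum('id,ixy->xyd', C, f) / rho[..., None]  # momentum / density
    f += -(f - equilibrium(rho, u)) / tau          # relax toward equilibrium
    for i, (cx, cy) in enumerate(C):               # stream along each velocity
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f, rho, u

# Relax a small density bump from rest on a 32x32 periodic grid
nx = ny = 32
rho0 = np.ones((nx, ny))
rho0[nx//2, ny//2] += 0.01
f = equilibrium(rho0, np.zeros((nx, ny, 2)))
for _ in range(50):
    f, rho, u = step(f)
# Mass is conserved exactly by both the collision and the streaming steps
```

Locality is the point of the method: each node updates from its own state and its immediate neighbors, which is what makes the algorithm scale well on machines like Intrepid.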
Dancing in the Dark: LBNL Scientists Get a New Look at Protein-Salt Interactions
Scientists using supercomputers at NERSC and X-ray absorption simulation software developed at Berkeley Lab's Molecular Foundry are getting a new look at how proteins interact with simple salts in water, and what impacts these interactions may have on protein structures at the atomic level. Traditional crystallographic techniques, such as X-ray diffraction, provide a profile of ordered materials with static structures. However, for dynamic or complex systems in which the atomic structure is rapidly changing, researchers need more sophisticated methods.
David Prendergast's molecular dynamics technique can model X-ray spectra of a biological system with known structure to determine its local interactions, what causes it to form a particular structure, and why it takes on a particular conformation, all by simulating the spectra of a series of individual snapshots and comparing with experimental results. These simulations are computationally intensive and rely heavily on the large-scale supercomputing infrastructure provided by DOE's National Energy Research Scientific Computing Center (NERSC).
Researchers Conduct Prototype Fully Coupled Community Climate Model Simulation
Robert Jacob, a computational climate scientist in Argonne’s Mathematics and Computer Science Division, recently collaborated on a two-decade simulation with a prototype version of the Community Climate System Model (CCSM4). This is believed to be the first study in which both the ocean and atmosphere components have horizontal resolution fine enough to explicitly simulate turbulent instabilities of the large-scale circulation. Simulating the Earth system is a major scientific challenge. Current simulations typically suffer from missing or inadequately represented processes in the component models, as well as from poorly represented feedbacks among the components.
The prototype CCSM4 results demonstrated the computational feasibility and model readiness to run coupled high-resolution simulations in order to examine features such as decadal variability and the contribution of ocean mixing in tropical storms to ocean heat transport. Moreover, the simulation shows the potential for simulating important coupled mesoscale features previously impractical in a global coupled model, particularly the generation of intense tropical cyclones. The simulation was conducted with researchers from Los Alamos, Lawrence Livermore, and Oak Ridge national laboratories, NCAR, and the Scripps Institution of Oceanography. Future work will focus on improving the atmospheric dynamics core and the atmospheric physics parameterizations.
Contact: Rob Jacob,
Pounding Away at the Mysteries of the Nuclear Landscape
Approximately 3,000 nuclei are already known, and twice as many could, in principle, still be discovered experimentally. Providing a comprehensive and unified description of all these nuclei is the goal of the DOE-funded SciDAC-2 project “Low-Energy Nuclear Physics National HPC Initiative: Building a Universal Nuclear Energy Density Functional” (UNEDF).
To this end, a team of applied mathematicians in Argonne’s Mathematics and Computer Science Division recently added to their computer code POUNDERS (Practical Optimization Using No DERivatives for Sums of squares) the ability to restrict the search space to finite ranges for some of the parameters. The results have been excellent, with good fits to diverse data on 72 nuclei in much shorter times than with traditional approaches. Moreover, the POUNDERS algorithm has also proved significantly better than standard optimization methods in terms of reliability, accuracy, and precision.
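The idea of derivative-free least-squares fitting with finite parameter ranges can be sketched with a toy example. The code below is not the POUNDERS algorithm (which builds local quadratic models of the residuals); it uses a much simpler compass (pattern) search, with trial points projected back into the box constraints:

```python
import math

def compass_search(residuals, x0, lo, hi, step=0.5, tol=1e-8):
    """Minimize sum(r_i(x)^2) without derivatives, keeping lo <= x <= hi."""
    def ssq(x):
        return sum(r * r for r in residuals(x))
    x, fx = list(x0), ssq(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = list(x)
                # Project the trial point back into the allowed range
                trial[i] = min(hi[i], max(lo[i], trial[i] + d))
                ft = ssq(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2  # no coordinate move helps: refine the mesh
    return x, fx

# Fit y = a*exp(b*t) to synthetic data, restricting a to [0, 5] and b
# to [-2, 0] (hypothetical example; true parameters are a=2, b=-0.5)
data = [(t, 2.0 * math.exp(-0.5 * t)) for t in range(6)]
res = lambda p: [p[0] * math.exp(p[1] * t) - y for t, y in data]
p, fval = compass_search(res, [1.0, -1.0], lo=[0.0, -2.0], hi=[5.0, 0.0])
```

Restricting the parameters to physically sensible ranges both rules out spurious minima and shrinks the region the search must explore, which is the benefit the POUNDERS team reports for the nuclear fits.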


LBNL’s Per-Olof Persson Receives Air Force Early Career Award
Per-Olof Persson, a member of Berkeley Lab’s Math Group and an assistant professor of mathematics at UC Berkeley, was one of 38 researchers who submitted winning research proposals through the Air Force's Young Investigator Research Program. He will work on efficient and robust high-order methods for fluid and solid mechanics. Read more about his research.
According to the Air Force Office of Scientific Research, the award is open to scientists and engineers at research institutions across the U.S. who received Ph.D. or equivalent degrees in the last five years and who show exceptional ability and promise for conducting basic research.
Berkeley Lab’s Jon Wilkening Wins NSF Early Career Award
Jon Wilkening, a member of Berkeley Lab’s Math Group and an assistant professor of mathematics at UC Berkeley, received an NSF Faculty Early Career Development Award (CAREER) to conduct research in optimization and continuation methods in fluid mechanics. As part of his five-year award, Wilkening also plans to host students in DOE’s Computational Science Graduate Fellowship program. He was a fellow in the program and completed his Ph.D. in math at UC Berkeley in 2002.
Read more about Wilkening’s award.
Argonne’s Ian Foster Honored for Contributions to Computational Grid
Ian Foster received an honorary doctorate August 16 from the Research and Advanced Studies Center of the National Polytechnic Institute of Mexico. Foster, a Distinguished Fellow at Argonne National Laboratory and director of the University of Chicago/Argonne Computation Institute, was honored for his contributions to the computational Grid.
Al Gore Explores Green Technologies at ORNL
Former Vice President Al Gore visited Oak Ridge National Laboratory on Aug. 10 in his role as a partner of a venture capital firm providing early-stage investments to accelerate solutions to the climate crisis. Gore, who shared the 2007 Nobel Peace Prize with the Intergovernmental Panel on Climate Change for disseminating knowledge about man-made climate change, was briefed about renewable technologies, nuclear technologies and climate research. His tour included visits to the Spallation Neutron Source, which provides the world's most intense pulsed neutron beam, and the Oak Ridge Leadership Computing Facility, home to the world's fastest supercomputer.


ALCF Advances to Next-Generation Machine
Come 2012, the Argonne Leadership Computing Facility (ALCF) will be home to Mira, a 10-petaflop/s IBM Blue Gene/Q supercomputer that will give scientists a new tool for scientific discovery. Mira, named for the Latin root meaning to wonder or marvel, will be used for a wide range of research, including designing ultra-efficient electric car batteries, predicting fluid flow and convective heat transport in advanced nuclear reactor designs, understanding global climate change, improving combustion efficiency, and exploring the evolution of our universe. At 10 petaflop/s (10 quadrillion calculations a second), Mira will be 20 times faster than Intrepid, Argonne's current Blue Gene/P supercomputer. Mira will also be one of the greenest high-performance computers, thanks to a combination of innovative new chip designs and efficient water cooling.
Between now and when Mira becomes operational, the ALCF will be conducting an "Early Science Program" designed to engage researchers in fine-tuning their codes and in finding the most effective ways to leverage Mira's power. Sixteen projects have been selected from around the world to take part in the program, spanning applications that represent a large fraction of the ALCF's current and projected computational workload.
ESnet Expertise Tapped at the 2010 Interagency IPv6 Conference
As a pioneer in deploying IPv6, the next generation of the Internet's core protocol, ESnet's Kevin Oberman was invited to give a presentation about how the facility uses and implements IPv6. More than 120 agencies were invited to attend the interagency conference sponsored by the Department of Veterans Affairs. Visit ESnet's Network Matters blog for Oberman's take on the IPv6 conference and all the latest news from the facility.
HPC Source: A Big Picture Approach to Scientific Data Storage
Increasingly sophisticated supercomputers and networks are changing the way science is done, whether allowing researchers to scrutinize the last 13 billion years of cosmic expansion or to better understand subatomic particles. Meanwhile, advancements in high-performance networks are facilitating remarkable levels of scientific collaborations. These trends are leading to unprecedented scientific productivity and forcing supercomputing centers to reevaluate how they manage data, writes NERSC's Jason Hick in an editorial for HPC Source magazine. Hick heads the Storage Systems Group at NERSC.


Researchers Gather at ORNL to Explore Petascale While Looking to Exascale Future
About 70 researchers working on some of the nation’s most pressing scientific missions gathered at ORNL for the Scientific Applications (SciApps) Conference and Workshop August 3-6. An interdisciplinary team of computational scientists shared experiences, best practices, and knowledge about how to sustain large-scale applications on leading high-performance computing systems while looking toward building a foundation for exascale research.
SciApps 2010 was funded by the Recovery Act. The OLCF’s Scientific Computing Group Leader Ricky Kendall and Director of Science Doug Kothe co-hosted the conference. “While many of the scientific disciplines have little in common, there is a tremendous algorithmic commonality among some of them, and they all share a need for ever expanding computational resources to help them meet their scientific goals and missions,” Kendall said. “One finding was that all disciplines represented at the meeting had a strong use case for sustained petascale computing and many had well thought-out ideas about the next steps towards exascale computing.”
Berkeley Lab Hosts 11th Annual ACTS Workshop
The 11th Workshop on the DOE Advanced Computational Software (ACTS) Collection, “High Performance Software Tools to Fast-track the Development of Scalable and Sustainable Applications,” drew about 60 participants to the program held Aug. 17-20 at the University of California, Berkeley.
The four-day workshop, organized by Tony Drummond of LBNL’s Computational Research Division (CRD), presented an introduction to the DOE ACTS Collection for application scientists whose research demands include either large amounts of computation, the use of robust numerical algorithms, or combinations of these. The workshop included a range of tutorials on the tools currently available in the collection, discussion sessions aimed at solving specific computational needs raised by the workshop participants, and hands-on practice using state-of-the-art supercomputers at NERSC.
Sherry Li of CRD gave a tutorial on the SuperLU solver library. Researchers from Pacific Northwest National Laboratory provided a short tutorial and hands-on session on the Global Arrays toolkit (GA). The tutorial focused on presenting an introduction to the GA programming model for computer and application scientists.
High School Students Build Their Own Supercomputer - Almost - at OLCF
For the third straight year, students and teachers from around Appalachia gathered at ORNL this summer for interactive training from some of the world’s leading computing experts. The summer camp, a partnership between ORNL and the Appalachian Regional Commission (ARC) Institute for Science and Mathematics, took place July 12-23. The OLCF hosted 10 students from various backgrounds and parts of the region.
The course was titled “Build a Supercomputer—Well Almost.” And that they did. With the help of ORNL staff, collaborators, and interns from universities, the high-school students went to work building a computer cluster, or group of computers communicating with one another to operate as a single machine, out of Mac mini CPUs. The students’ cluster did not compute nearly as fast as the beefed-up cluster right down the hall—ORNL’s Jaguar—but successfully ran the high-performance software installed. Through the program students received a foundation in many of the things that make a supercomputer work.
“They get to learn HPC [high-performance computing] basics, and it’s a chance for them to live on their own for a couple of weeks,” said Bobby Whitten, an HPC specialist at ORNL and facilitator of the OLCF program. ORNL first partnered with ARC on a program of this type in 2008. Whitten happily notes that one of his students from that year is heading off to Cornell University in the fall to study biomechanical engineering.
PNNL Researchers to Present Global Arrays Tutorial at October Workshop
A proposal by researchers from Pacific Northwest National Laboratory to give a tutorial on the Global Arrays toolkit (GA) and the underlying Aggregate Remote Memory Copy Interface (ARMCI) was accepted for the Partitioned Global Address Space conference that will be held in New York in October. This session will be the first time a comprehensive tutorial on ARMCI has been presented.
Contact: Daniel Chavarria,