ASCR Monthly Computing News Report - August 2009

In this issue...


SciDAC 2009 Proceedings Are Now Online

The proceedings of the SciDAC 2009 conference, held June 14-18 in San Diego, are now available online as Volume 180 of the open-access Journal of Physics: Conference Series. Hardbound copies of the proceedings, containing 89 papers, will start shipping to conference participants in the next few weeks.

Efficient Low-Rank Updating Algorithms for Quantum Monte Carlo Simulations at ORNL

Phani Nukala from Oak Ridge National Laboratory (ORNL) has developed two efficient low-rank updating algorithms for quantum Monte Carlo (QMC) simulations. These algorithms are based on low-rank updating of underlying linear algebra problems and result in significant computational savings, often in the range of three to ten times faster than competing algorithms. The first algorithm is used for updating the Slater determinants in QMC simulations and was described in a paper published in the May 28, 2009 edition of the Journal of Chemical Physics (130, 204105). The second algorithm enables computation of transition probability in QMC simulations of strongly correlated systems using the Hubbard model and will appear in an upcoming issue of Physical Review B.
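The specifics of the ORNL algorithms are in the papers cited above, but the general principle behind low-rank determinant updating can be illustrated with a short sketch (NumPy code written for this newsletter, not Nukala's implementation): when a single electron moves, only one row of the Slater matrix changes, so the Monte Carlo acceptance ratio and the updated matrix inverse follow from a Sherman-Morrison rank-one update rather than a full recomputation.

```python
import numpy as np

# Slater matrix D (D[i, j] = phi_j(r_i)) and its inverse, kept up to date
D = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
Dinv = np.linalg.inv(D)

# Electron i proposes a move: only row i of the Slater matrix changes
i = 1
new_row = np.array([1.0, 0.0, 1.0])

# Acceptance ratio det(D') / det(D) in O(n) -- no determinant evaluated
ratio = new_row @ Dinv[:, i]

# Sherman-Morrison rank-one update of the inverse in O(n^2),
# performed only if the move is accepted
u = new_row - D[i]                      # row perturbation: D' = D + e_i u^T
Dinv_new = Dinv - np.outer(Dinv[:, i], u @ Dinv) / ratio

D_new = D.copy()
D_new[i] = new_row
```

Here `ratio` comes out to -0.5, matching det(D')/det(D) = -4/8 computed directly; applying this bookkeeping at every Monte Carlo step replaces an O(n³) determinant evaluation with an O(n) ratio and an O(n²) inverse update, which is where savings of the kind reported above come from.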

Contact:  Phani Nukala,

LBNL Experimentalists, Computational Scientists Track Single Atom

Following the path of a single atom or point defect as it diffuses inside a solid remains one of the most sought-after but undemonstrated feats of microscopy. In a paper in the July 27 edition of Physical Review B, Damien Alloyeau, Sefa Dag, Lin-Wang Wang, and Christian Kisielowski of Lawrence Berkeley National Laboratory, together with Bert Freitag of FEI Company in the Netherlands, report their use of one of the new aberration-corrected transmission electron microscopes and conclude that single-atom imaging can now be used to directly visualize single-defect formation and migration in germanium. Such defects in elemental semiconductors are broadly studied because they have a large impact on electrical properties and diffusion phenomena, which are the basis for integrated circuit technology. Experimental studies are dominated by spectroscopic methods, but those techniques do not permit direct observation of defect structures and diffusion paths; on the theoretical side, calculations have often yielded inconsistent results.

The team used computing resources at NERSC to generate predictions of the material using density-functional theory, and the experimental observations were directly comparable with those predictions. As a result, the researchers conclude they are opening up new ways to study native defects and migration problems in semiconductors.

Size Effects in Statistical Fracture Simulations at ORNL

ORNL researcher Phani Nukala, along with Stefano Zapperi (Università di Modena e Reggio Emilia) and Mikko Alava (Helsinki University of Technology), used large-scale numerical simulations, the largest systems of this kind ever analyzed, to develop a novel scaling law for material strength. The law captures for the first time the experimentally observed crossover from a stress-concentration-controlled scaling regime to a disorder-controlled scaling regime, and it is in excellent agreement with experimental data on notched paper samples. This universal scaling law extends our understanding of the size dependence of material strength and is relevant for the design and analysis of engineering structures made of quasi-brittle materials such as concrete. The work is described in the paper "Size Effects in Statistical Fracture," published in the August 2009 edition of the Journal of Physics D: Applied Physics.

Contact:  Phani Nukala,

Three New Supernova Discoveries Enabled by LBNL's Deep Sky Project

On August 25 in The Astronomer’s Telegram, Peter Nugent of Berkeley Lab’s Computational Research Division and the Analytics Group at NERSC, along with colleagues from the Palomar Transient Factory (PTF), announced the discovery of three nearby Type Ia supernovae. Two were discovered on August 17 and one on August 18; confirmation spectra were taken on August 19. All three supernovae are quite young, and follow-up observations are being taken by the Hubble Space Telescope and others. These discoveries were made possible in part by NERSC’s Deep Sky/PTF processing pipeline, which identifies candidate transients (possible supernovae) from the images taken by PTF each night. Nugent is Realtime Transient Detection Lead for the PTF.

ORNL and SNL Researchers Accelerate Combustion Simulation using GPUs

Recent work by a team of researchers from ORNL (Kyle Spafford, Jeremy Meredith, Jeffrey Vetter, and Ramanan Sankaran) and from Sandia National Laboratories (SNL) (Jacqueline Chen and Ray Grout) has explored the performance benefits and accuracy tradeoffs of using graphics processors (GPUs) to accelerate S3D, one of DOE’s leading computational science applications, which simulates turbulent combustion. Although they were initially designed for 3D graphics, GPUs have evolved into an exciting platform for scientific computing due to their impressive processing capabilities and relatively low cost. The results show that computation on the GPU is able to preserve accuracy by using double precision, and it executes the application’s most time-consuming code up to nine times faster than a traditional CPU. These results will be presented in a paper entitled “Accelerating S3D: A GPGPU Case Study” at the upcoming International Workshop on Algorithms, Models, and Tools for Parallel Computing on Heterogeneous Platforms (HeteroPar 2009).
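The accuracy question the team examined comes down to floating-point precision: historically, GPU speedups came from single precision, but the long reductions in a combustion code accumulate rounding error. A small illustration (plain NumPy, unrelated to S3D's actual kernels) of why preserving double precision matters:

```python
import numpy as np

# float32 carries a 24-bit significand: above 2**24 it cannot even
# represent x + 1, so the addition is silently absorbed
x = np.float32(2.0) ** 24
absorbed = np.float32(x + np.float32(1.0)) == x      # the +1 is lost

# Accumulating many small terms, as in a long reduction over grid cells
N = 100_000
term = np.float32(0.1)

s32 = np.float32(0.0)
for _ in range(N):
    s32 = np.float32(s32 + term)     # naive single-precision running sum

s64 = 0.0
for _ in range(N):
    s64 += float(term)               # same addends, double-precision sum

exact = N * float(term)              # reference value
err32 = abs(float(s32) - exact)
err64 = abs(s64 - exact)             # orders of magnitude smaller than err32
```

In a simulation that integrates millions of time steps, single-precision drift of this kind compounds, which is why demonstrating a GPU speedup while retaining double precision is a meaningful result.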

LBNL's New Astrophysics Code Highlighted at Supernovae Conference

The first applications of CASTRO, the new compressible astrophysics code developed by Berkeley Lab’s Center for Computational Sciences and Engineering (CCSE), to Type II core-collapse supernovae were presented at the conference on “Stellar Death and Supernovae,” held August 17–21 at the Kavli Institute for Theoretical Physics (KITP) at the University of California, Santa Barbara. The CASTRO simulations were highlighted in an invited talk given by Adam Burrows of Princeton University and in a poster presented by his postdoc, Jason Nordhaus. The results focused on new 3D adaptive simulations of the collapse of the cores of massive stars: one that was induced to explode by neutrino heating and one that was allowed to fizzle. These simulations were performed on Franklin, the Cray XT4 supercomputer at NERSC.

According to Burrows, "Both were characterized by hydrodynamic instabilities whose nature was novel and complex. Though preliminary, these simulations suggest that the traditional view of a supernova exploded by neutrino heating may be viable, but perhaps only if carried out in three spatial dimensions at high resolution with careful attention to the neutrino transport."

Global Arrays Toolkit Version 4.2 Released

At Pacific Northwest National Laboratory (PNNL), a team of experts has released a new version of the award-winning Global Arrays (GA) toolkit software, which dramatically simplifies writing code for supercomputers. Version 4.2 of GA helps scientists translate their ideas into highly efficient software that allows mathematical computations to run independently on subsets of the supercomputer's processors. GA Toolkit Version 4.2 includes significant updates to the previous GA version and supports users of the Environmental Molecular Sciences Laboratory (EMSL) and others who do large-scale computational research. GA 4.2 also adds support for several new platforms, including an optimized port for the Cray XT5, a petaflop Linux supercomputer, and for BlueGene/P, the second-generation Blue Gene supercomputer.

GA provides high-level interfaces for writing parallel programs critical to scientific research, such as the NWChem software package, and has also been used in the development of STOMP, ScalaBLAST, a parallel version of IN-SPIRE, and several other scientific applications. A GA tutorial was given at the ACTS workshop at Berkeley (August 19), and another will be given at the IEEE Cluster 2009 conference in New Orleans (September 4). A Birds of a Feather session on GA, “Extending Global Arrays to Future Architectures,” will be held at SC09 in Portland, Oregon, in mid-November.

Contact:  Manojkumar Krishnan,


LLNL's Vassilevski Receives Fulbright Scholarship

Panayot S. Vassilevski, a computational mathematician at Lawrence Livermore’s Center for Applied Scientific Computing (CASC), has received a Fulbright Scholarship and will spend a semester in Bulgaria starting Oct. 1. His research interests include numerical linear algebra and finite element methods broadly, and in particular preconditioned iterative methods, multigrid methods, and the derivation and analysis of discretization schemes for partial differential equations. He is a co-editor of the journal Numerical Linear Algebra with Applications. Vassilevski, who joined LLNL in March 1998, received his Ph.D. in mathematics from the St. Kliment Ohridski University of Sofia in Bulgaria in 1984. At CASC, Vassilevski is involved with the Scalable Linear Solvers project and, more recently, with a project on smooth finite element function recovery.

ScalaBLAST Developer Chris Oehmen Recognized for Early Career Achievements

Chris Oehmen, a computational research scientist at PNNL and originator of the ScalaBLAST software, won the laboratory’s 2009 Ronald L. Brodzinski Award for Early Career Exceptional Achievement. Oehmen is recognized for his outstanding contributions to providing unique and highly practical high-performance and data-intensive computing solutions to major complex data analysis challenges in bioinformatics and cyber security applications.

ScalaBLAST is a massively parallel implementation of the Basic Local Alignment Search Tool (BLAST) used in biological sequence comparison, and it outperforms every other parallel implementation in speed and capacity. The scientific impact of ScalaBLAST is evident from the fact that it is now the primary tool used by DOE’s Joint Genome Institute.
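ScalaBLAST's internals are not detailed here, but the core idea behind parallelizing BLAST-style searches is that each query (and each database slice) can be scored independently, so the work distributes naturally across processors. A toy sketch, with a made-up shared-k-mer count standing in for a real BLAST alignment score:

```python
from concurrent.futures import ThreadPoolExecutor

def kmers(seq, k=3):
    """All k-letter substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Tiny stand-in for a sequence database (hypothetical entries)
DATABASE = {
    "geneA": "ATGGCGTACGTTAGC",
    "geneB": "TTGACCGTAGGCATA",
    "geneC": "ATGGCGTTTTTTAGC",
}

def best_hit(query):
    """Score one query against every database entry; the shared 3-mer
    count is a toy substitute for a BLAST alignment score."""
    q = kmers(query)
    scores = {name: len(q & kmers(seq)) for name, seq in DATABASE.items()}
    return query, max(scores, key=scores.get)

queries = ["ATGGCGTACG", "TTGACCGTAG", "GCGTTTTTTA"]

# Queries are independent, so they can be farmed out to parallel workers;
# a production system like ScalaBLAST adds database partitioning and
# scales the same decomposition to thousands of cores.
with ThreadPoolExecutor(max_workers=3) as pool:
    hits = dict(pool.map(best_hit, queries))
```

The decomposition is embarrassingly parallel across queries; the engineering challenge in a real implementation is sharing a database too large for any one node's memory, which is where capacity gains come from.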

PNNL's Ian Gorton Earns Accolade for Software Engineering Achievements

This year’s PNNL Laboratory Director’s Award for Exceptional Engineering Achievement went to Ian Gorton, Associate Division Director in the Computational Science and Math Division. Gorton is a recognized international leader for solving challenging technical problems using innovative software engineering approaches, particularly in software architectures. Gorton has a reputation for carrying out leading-edge computer science research and producing innovative software tools that have high impact on real-world computational science applications.

MeDICi (Middleware for Data Intensive Computing) and other tools developed by Gorton and his team form the foundation for building distributed, high-performance, data-intensive computing applications in a range of application areas including bioinformatics, cyber security, and atmospheric sciences. MeDICi is a middleware platform (computer software that connects software components or applications) that makes it easy to integrate separate codes into complex applications that operate as a data analysis pipeline. The MeDICi technology is open source and can be freely downloaded for building complex software applications.
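MeDICi's actual API is not shown in this newsletter, but the underlying pipeline pattern it provides, separate components chained so that each stage's output becomes the next stage's input, can be sketched generically (hypothetical stage names, not MeDICi calls):

```python
def make_pipeline(*stages):
    """Chain independent processing components into a data analysis
    pipeline: each stage consumes the previous stage's output."""
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

# Hypothetical stages for a toy text-analysis pipeline
def tokenize(text):
    return text.lower().split()

def drop_short(tokens):
    return [t for t in tokens if len(t) > 4]

def count(tokens):
    return len(tokens)

analyze = make_pipeline(tokenize, drop_short, count)
result = analyze("Data Intensive Computing with MeDICi")   # -> 3
```

The value of middleware like MeDICi is that the stages can be separately written codes (in different languages, on different machines) rather than in-process functions, while the application keeps this same simple pipeline shape.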

LBNL's David Bailey to Spend Four Weeks in Australia as Visiting Professor

David Bailey, Chief Technologist for the Computational Research Division at Lawrence Berkeley National Laboratory (LBNL), has been invited to spend four weeks in Australia as a visiting professor at the University of Newcastle. In his invitation to Bailey, Professor Bill Hogarth (who is also Pro Vice Chancellor, Faculty of Science and Information Technology) asked that he “participate in collaborative research in the area of Mathematics and network with colleagues within the School of Mathematical and Physical Sciences.” Bailey will give four presentations:

  • August 13: “Computing: The Third Mode of Scientific Discovery,” University of Newcastle (also broadcast on Access Grid)
  • August 17: “High-Precision Arithmetic Meets Experimental Mathematics,” Macquarie University, Dept. of Mathematics
  • August 18: “Experimental Mathematics Meets Mathematical Physics,” University of Newcastle, CARMA Workshop on Multidimensional Numerical Integration and Special Function Evaluation
  • August 20: “High-Precision Arithmetic Meets Experimental Mathematics,” University of Melbourne, Dept. of Mathematics

OLCF Science Writers Sweep "Best Feature" Awards

Science writers at the Oak Ridge Leadership Computing Facility (OLCF) recently swept the gold, silver, and bronze categories for best feature article in a web-based/electronic publication in an international competition, the 2009 Magnum Opus Awards for Outstanding Achievement in Custom Media. To win these awards, presented annually in conjunction with the Missouri School of Journalism, the OLCF writers competed with professionals from organizations including the Walt Disney Company; Wyeth; Cargill, Inc.; Toyota Motor Sales, USA, Inc.; and Rodale Custom Publishing. The winning writers are:

  • Scott Jones won the gold for “NCCS System Models Hummingbird Flight.” His article describes the work of researchers from Digital Rocket Science who used the Phoenix supercomputer to dissect the dynamics of hummingbirds flapping their wings to better understand the aerodynamics of flight.
  • Dawn Levy took the silver for “Resolution Revolution,” which describes the critical role of climate simulation software such as the Community Climate System Model (CCSM) in advanced climate prediction. The CCSM is a megamodel coupling four independent models of Earth’s atmosphere, oceans, lands, and sea ice, and it could not run without the powerful supercomputing resources at the OLCF.
  • Leo Williams won the bronze for “Invisible Means of Support,” which describes the work of computational astrophysicists on the Cray XT4 Jaguar supercomputer at the NCCS who made the largest simulation ever of dark matter evolving over billions of years to surround a galaxy.

Green HPC: LBNL’s Horst Simon Helps Sift through the Hype

“Green HPC: a look beyond the hype” is a new, eight-part podcast series from insideHPC that examines green initiatives from all sides of the HPC ecosystem. In episode 1, “Sifting through the hype”, LBNL Associate Laboratory Director Horst Simon joins Wu-chun Feng of the Green500, Wilf Pinfold of Intel, and Dan Reed of Microsoft Research to discuss how the Green HPC conversation has evolved and why energy consumption is an issue everyone should be concerned about.



ESnet Receives $62 Million to Develop World's Fastest Computer Network

As scientists in a wide variety of disciplines increasingly rely on supercomputers and collaboration with colleagues around the world to advance their research, managing and sharing the mountain of data generated by their investigations will soon become a choke point. In order to facilitate such data-intensive research, ESnet, the Department of Energy’s high-performance networking facility managed by the Lawrence Berkeley National Laboratory, is receiving $62 million to develop what will be the world’s fastest computer network, designed specifically to support science. Funded by the American Recovery and Reinvestment Act, the Advanced Networking Initiative will ensure that the United States stays competitive in science and technology. Specifically, ESnet will develop a prototype 100 gigabits per second (Gbps) Ethernet network to connect DOE supercomputer centers at speeds 10 times faster than current technology.

NERSC Awards Supercomputer Contract to Cray

DOE’s National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has awarded a contract for its next-generation supercomputing system to Cray Inc. The multi-year contract includes delivery of a Cray XT5™ massively parallel processor supercomputer, which will later be upgraded to a future-generation Cray supercomputer. When completed, the new system will deliver a peak performance of more than one petaflops, equivalent to more than one quadrillion calculations per second. Like NERSC’s current 355-teraflops Cray XT4™ system, named “Franklin,” the new supercomputing system will help advance open science research across a wide range of disciplines. According to NERSC Director Kathy Yelick, Cray was awarded the contract based on several factors, including performance and energy efficiency on a set of application benchmarks that capture the challenging workload of NERSC’s 3,000 users.

EMSL's Chinook Supercomputer by HP Commissioned for Research

The newest supercomputer in town is almost 15 times faster than its predecessor and ready to take on problems in areas such as climate science, hydrogen storage, and molecular chemistry. The $21.4 million Chinook supercomputer, built by HP, has now been commissioned for use by Pacific Northwest National Laboratory and the Department of Energy. Chinook is featured in the Summer 2009 issue of SciDAC Review. Housed at EMSL, DOE’s Environmental Molecular Sciences Laboratory on the PNNL campus, Chinook can perform more than 160 trillion calculations per second, ranking it among the 40 fastest computers in the world. Its predecessor, EMSL’s MPP2, could run 11.2 trillion calculations per second.

NERSC Posts Results of Latest User Survey

Each year, the National Energy Research Scientific Computing Center (NERSC) surveys its user community to gather feedback about every aspect of its operation, help staff judge the quality of services, give DOE information on how well NERSC is doing, and identify areas for potential improvement. For the 2008–09 survey, 421 users responded, a response rate comparable to last year’s and significantly higher than in previous years. The respondents represent all six DOE Science Offices and a variety of home institutions. Among the highest-ranked areas were reliability and availability of archival storage, timely response to questions, overall consulting and support services, and the reliability and availability of the NERSC Global Filesystem. The area cited for improvement was the overall availability of the Cray XT4 system, which was undergoing an upgrade and acceptance testing during the period covered by the survey.

LBNL Receives Funding to Develop Tools for Science Research

In the next few months, researchers from the Lawrence Berkeley National Laboratory’s Computational Research Division (CRD) will receive funding to develop tools that will help facilitate science breakthroughs in a variety of disciplines.

Dan Gunter of CRD’s Advanced Computing for Science Department will receive $250,000 in funding from the National Science Foundation’s (NSF) Strategic Technologies for Cyberinfrastructure program to develop tools that will monitor workflows on distributed systems. The NSF funding comes from the American Recovery and Reinvestment Act and is part of a nationwide multi-institutional collaboration led by Ewa Deelman of the University of Southern California’s Information Sciences Institute.

Arie Shoshani of CRD and Dantong Yu of the Brookhaven National Laboratory will collaborate to develop an end-to-end storage and network resource provisioning and management system for high performance computing (HPC) data transfers. The new system will enable the Storage Resource Manager (SRM), a framework that is widely used to manage shared storage systems, to interact with DOE's TeraPaths system. TeraPaths reserves bandwidth on the local networks connecting to the DOE’s wide area Energy Sciences Network (ESnet), and also interacts with OSCARS (On-Demand Secure Circuit and Advance Reservation System) to dynamically reserve bandwidth on ESnet. The SRMs on the source and target sites will provide storage reservations as well as reserve bandwidth from the storage systems into the network. ASCR funded the development of BeStMan, TeraPaths and OSCARS. The SRM concept was initiated by the CRD’s Scientific Data Management Group, which Shoshani leads.
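None of the SRM, TeraPaths, or OSCARS interfaces are reproduced here, but the coordination problem described above, reserving storage at both endpoints plus network bandwidth in between as a single all-or-nothing operation, can be sketched schematically (hypothetical classes invented for this illustration, not the real APIs):

```python
class Reservation:
    """Toy stand-in for one reservable resource (storage or bandwidth)."""
    def __init__(self, name, capacity):
        self.name, self.capacity, self.held = name, capacity, 0

    def reserve(self, amount):
        if self.held + amount > self.capacity:
            raise RuntimeError(f"{self.name}: insufficient capacity")
        self.held += amount

    def release(self, amount):
        self.held -= amount

def provision_transfer(resources, amount):
    """Reserve every resource along the path, or roll back all of them.
    An end-to-end system must treat source storage, network path, and
    destination storage as one all-or-nothing reservation."""
    acquired = []
    try:
        for r in resources:
            r.reserve(amount)
            acquired.append(r)
        return True
    except RuntimeError:
        for r in acquired:          # undo any partial reservations
            r.release(amount)
        return False

source = Reservation("source storage", 100)
network = Reservation("network path", 40)
dest = Reservation("destination storage", 100)

ok = provision_transfer([source, network, dest], 30)       # fits everywhere
too_big = provision_transfer([source, network, dest], 20)  # network is now full
```

The first request succeeds end to end; the second fails at the network step and releases the storage it had already claimed, leaving no orphaned reservations. That rollback discipline is the essence of coordinating independently managed storage and network services.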



Argonne, North Dakota Universities Partnering to Conduct More HPC Research

The U.S. Department of Energy’s (DOE) Argonne National Laboratory (ANL), the University of North Dakota (UND), and North Dakota State University (NDSU) are developing a regional partnership to explore complementary scientific research efforts. The partnership will allow the three organizations to conduct more joint research in the areas of high-performance computing, nanoscience, national security, energy, and environmental science. It also will enable Argonne, UND, and NDSU to engage in basic research projects that require a long-term commitment and will enhance each organization’s efforts to develop technologies that will address the nation’s energy and environmental challenges.

UND and NDSU scientists have already accessed Argonne’s Advanced Photon Source and the Center for Nanoscale Materials, two world-class scientific user facilities located at the laboratory. Other opportunities exist to perform research at the Argonne Leadership Computing Facility, which houses one of the world’s fastest public computers for scientific study, and to initiate collaborative research projects.

Contact:  Angela Hardin,

ALCF to Showcase Blue Gene/P's Capabilities at Argonne Open House

The Argonne Leadership Computing Facility (ALCF) will showcase the Blue Gene/P’s speed, storage, and performance capabilities at the Argonne National Laboratory Open House on August 29 from 9 a.m. to 4:30 p.m. Overall, nearly 100 engaging exhibits, demonstrations, tours, and presentations will inform the public about Argonne’s research at the Open House.

HPC Environment Lends Unique Experience to OLCF Interns

A new generation of scientists and engineers is training this summer at the Oak Ridge Leadership Computing Facility (OLCF) in the use and care of high performance computing and its impact on scientific discovery. The OLCF has 12 interns this summer, with backgrounds in computer science, physics, and mechanical and chemical engineering. Each has been assigned a mentor and a project that provides hands-on experience with the center’s Cray XT4/XT5 supercomputer, known as Jaguar. Jesse Mays and Stephanie Poole, computer science and engineering majors, respectively, are developing educational models that can teach computational science to students from kindergarten up to the postdoctoral level.

Mays and Poole are currently working on a project that grew out of a partnership between the OLCF and the Appalachian Regional Commission (ARC), a body of educators and business people whose goal is to support the educational development of Appalachia. The project seeks to open up college and career options to underprivileged and minority high school students, explained Bobby Whitten, a member of the OLCF’s User and Assistance Outreach Group and mentor to Mays and Poole. “Take a complex problem—if you ran it on one computer, it would take hours,” explained Mays, a senior at Morehouse College in Atlanta. “The idea is to run it on multiple computers to get it done faster.” As a demonstration, the students connected six Mac Minis together to effectively create a mini-supercomputer. They were then able to calculate pi at a speed of 19.5 gigaflops, or 19.5 billion calculations per second.
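The demonstration follows the textbook parallel decomposition of a pi calculation: split the integral of 4/(1+x²) over [0, 1] into independent chunks, evaluate each chunk on a different machine, and add up the partial sums. A sketch of that decomposition (evaluated serially here; on a cluster like the students' Mac Minis, each task would run on a separate node, e.g. via MPI):

```python
import math

def partial_pi(start, end, n):
    """Midpoint-rule contribution of subintervals [start, end) out of n
    to the integral of 4 / (1 + x^2) over [0, 1], which equals pi."""
    h = 1.0 / n
    return h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(start, end))

def parallel_pi(n=100_000, workers=6):
    # Carve the n subintervals into one independent task per worker ...
    bounds = [round(k * n / workers) for k in range(workers + 1)]
    tasks = [(bounds[k], bounds[k + 1], n) for k in range(workers)]
    # ... evaluate the tasks (serially here; one per node on a cluster) ...
    partials = [partial_pi(*t) for t in tasks]
    # ... and reduce the partial sums into the final answer.
    return sum(partials)

approx = parallel_pi()
```

Because the chunks share no data, the speedup scales with the number of nodes, which is exactly the point of the classroom exercise.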

ALCF Sponsors "Performance Evaluation Using TAU" Workshop

Blue Gene/P users at the Argonne Leadership Computing Facility (ALCF) can take their code performance to the next level with TAU. On September 22–23, Sameer Shende will lead “Performance Evaluation Using TAU,” a two-day, hands-on workshop sponsored by the ALCF at Argonne National Laboratory.

Topics and demonstrations will include:

  • TAU instrumentation options
  • Features of automatic source-level and compiler-based instrumentation for C, C++, Fortran, and Python
  • Performance data collection (both profile and trace data) using TAU’s automated instrumentation
  • Analyzing performance data, including drilling down for bottlenecks and their causes
  • Code illustrations of instrumentation and measurement options
  • Generating performance profiles and traces with memory utilization and headroom, I/O and hardware performance counters data using PAPI
  • Using hardware counter data to determine which routines take the most time and why
  • Performance data analysis using ParaProf and PerfExplorer using the performance data management framework (PerfDMF) that includes TAU’s performance database
  • Cross-experiment analysis, including the effects of multi-core architectures on code performance.

ANL Researchers to Have Major Presence at Math Programming Meeting

Researchers from the Mathematics and Computer Science Division at Argonne National Laboratory will have a major presence in the 20th International Symposium on Mathematical Programming (ISMP) in Chicago August 23–28. This scientific meeting is held every three years by the Mathematical Programming Society and draws thousands of participants. Mihai Anitescu, a computational mathematician, and Todd Munson, a computational scientist, are on the organizing committee for ISMP09; and Jorge Moré, an Argonne Distinguished Fellow, has co-organized a cluster of seven sessions on “Derivative-Free and Simulation-Based Optimization.” Additionally, Anitescu will give a plenary/semiplenary presentation titled “The Challenge of Large-Scale Differential Variational Inequalities”; Moré, Sven Leyffer, a computational mathematician, and Stefan Wild, a postdoctoral appointee, will (co)chair invited sessions; and Victor Zavala, an Argonne Director’s Postdoctoral Fellow, and Ilya Safro, a postdoctoral researcher, will chair contributed sessions. Several MCS Division researchers and postdocs also will present talks, on topics ranging from manipulation of carbon emission programs to new warm-starting techniques for nonlinear branch-and-bound methods.

Contact:  Jayson Hines,

ACTS Collection Workshop Draws 40 Attendees

The Tenth Workshop on the DOE Advanced CompuTational Software (ACTS) Collection was held August 18–21 at Berkeley Lab, drawing 40 attendees this year. The annual ACTS workshop brings tool developers and users together for discussions on current computational science and engineering challenges. The workshop facilitates interactions between high-end tool development projects and a wide range of applications being developed by research and commercial institutions. Attendees learn about the tool functionalities through hands-on sessions, and are given ample time for discussions on their particular computational needs. Tony Drummond of LBNL’s Computational Research Division was the Technical Workshop Organizer.