ASCR Monthly Computing News Report - October 2008

The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.

In this issue...


Four Argonne Projects Recognized among DOE's Top 10 Scientific Achievements
Four projects spearheaded by Argonne computer scientists and users of the Argonne Leadership Computing Facility (ALCF) have been named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program.
Three of the top ten ASCR projects are using extensive grants of computing time on the Blue Gene/P at the ALCF to conduct large-scale calculations needed to achieve new scientific discoveries. These projects are focusing on modeling the molecular basis of Parkinson's disease (lead: Igor Tsigelny, University of California-San Diego); designing proteins at atomic scale and creating enzymes (lead: David Baker, University of Washington); and exploring the mysteries of water (lead: Giulia Galli, University of California-Davis). Another award-winning project, PETSc, a portable, extensible software toolkit, is being developed by researchers in Argonne's Mathematics and Computer Science Division (lead: Barry Smith, Argonne) to support high-performance petascale and terascale simulations based on partial differential equations.
All ten winning projects have been featured in "Breakthroughs 2008: Report of the Panel on Recent Significant Advancements in Computational Science," published by DOE. Tony Mezzacappa, Panel Chair, Oak Ridge National Laboratory, will discuss the report at SC08 in an ALCF Birds of a Feather session entitled "Petascale Computing Experiences on Blue Gene/P," being held from 12:15 to 1:15 p.m. on Wednesday, Nov. 19.
Contact: Cheryl Drugan, cdrugan@mcs.anl.gov
LBNL Method for Understanding Nanostructures is a Gordon Bell Prize Finalist
To fully understand the energy-harnessing potential of nanostructures, a team of researchers in Berkeley Lab's Computational Research Division (CRD) led by Lin-Wang Wang developed the Linear Scaling Three Dimensional Fragment (LS3DF) method. LS3DF uses a "divide-and-conquer" technique to efficiently gain insights into how nanostructures function in systems with 10,000 or more components. The developers of LS3DF are finalists in the Association for Computing Machinery's (ACM) Gordon Bell Prize competition, which recognizes outstanding achievement in high-performance computing applications. The winners will be announced on November 20, 2008 at the SC08 conference in Austin, TX.
According to Wang, traditional methods for calculating the energy potential of nanostructure systems containing 10,000 or more components can be very time-consuming and resource-intensive. Because these techniques calculate the entire structure as one component, the compute time, disk space, and memory required to determine the energy potential grow rapidly with the system's size. He notes that LS3DF offers a more efficient method for calculating energy potential because it is based on the observation that the total energy of a large nanostructure system can be broken down into two component parts: electrostatic energy and quantum mechanical energy. Wang refers to this technique as "divide-and-conquer."
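The divide-and-conquer idea can be sketched in miniature. The following is a toy illustration only, not the actual LS3DF implementation: a long 1D "system" is split into overlapping fragments, each fragment is evaluated independently (and, in principle, in parallel), and the double-counted overlap regions are subtracted back out, so the cost grows with the number of fragments rather than with the whole system at once. The `fragment_energy` function is a hypothetical stand-in for an expensive fragment calculation.

```python
def fragment_energy(values):
    """Stand-in for an expensive per-fragment quantum-mechanical solve."""
    return sum(v * v for v in values)

def total_energy_direct(values):
    """Reference result: treat the entire structure as one piece."""
    return fragment_energy(values)

def total_energy_fragmented(values, frag_len, overlap):
    """Sum overlapping fragment energies, removing each overlap region once."""
    energy = 0.0
    start = 0
    while start < len(values):
        end = min(start + frag_len, len(values))
        energy += fragment_energy(values[start:end])
        if end == len(values):
            break
        # the slice [end - overlap : end] was counted in this fragment and
        # will be counted again in the next one, so subtract it once
        energy -= fragment_energy(values[end - overlap:end])
        start = end - overlap
    return energy

system = [0.1 * i for i in range(100)]
direct = total_energy_direct(system)
fragmented = total_energy_fragmented(system, frag_len=20, overlap=5)
# for this additive toy energy the two results agree
```

For a purely additive toy energy the fragmented sum is exact; the real method's contribution is making the same bookkeeping work for a genuine quantum-mechanical energy, where each fragment solve is the expensive step.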
Contact: Linda Vu, LVu@lbl.gov
LANL Develops High-Order Discretization Methods on Polyhedral Meshes
The predictions and insights gained from simulations are no better than the physical models or the numerical methods used to solve them. High-order discretization methods are expected to be more efficient than low-order methods in modeling a wide range of physical processes. Developing high-order methods on polygonal and polyhedral meshes is a difficult task, and it becomes even more challenging when the mesh is distorted so that it can conform and adapt to the physical domain and the problem solution. LANL researchers have developed and analyzed, both theoretically and numerically, new high-order mimetic methods for solving diffusion problems on polygonal and polyhedral meshes. The diffusion problem is crucial for modeling heat transfer, contaminant transport in porous media, migration of electrons in semiconductor chips, and other processes.
The mimetic methods combine the analytical power of finite element methods with the mesh flexibility of finite volume methods. The success of mimetic methods in various applications rests on preserving and mimicking essential underlying mathematical properties of the physical systems. Previous mimetic methods were second-order accurate for the conservative variable (temperature, pressure, energy, etc.) but only first-order accurate for its flux. The new methods are second-order accurate for both scalar and vector variables, and the more accurate flux resolution has a significant impact on the evolution of conservative quantities. The accompanying theoretical analysis covers general polygonal and polyhedral meshes that may include non-convex and degenerate elements. Thus, the new methods can be used in moving mesh methods, which can produce non-convex elements, as well as in adaptive mesh refinement methods, which can produce degenerate elements. Co-authors of these results are K. Lipnikov (LANL), V. Gyrya (Penn State), G. Manzini (IMATI, Italy), and L. Beirao da Veiga (UNIMI, Italy).
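The notion of a method's convergence order, central to the claims above, is easy to demonstrate on a model problem. The sketch below is not the mimetic scheme itself; it uses an ordinary second-order central difference for a 1D diffusion (Poisson) problem with a known exact solution, and measures the observed order by halving the mesh spacing: for a second-order method, the error should drop by roughly a factor of four.

```python
import math

def solve_poisson(n):
    """Solve -u'' = pi^2 sin(pi x) on [0,1] with u(0)=u(1)=0 on n cells
    using second-order central differences; exact solution is sin(pi x).
    Returns the maximum nodal error."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    # Tridiagonal system (-1, 2, -1) u = h^2 f, solved by the Thomas algorithm
    a = [-1.0] * (n - 1)              # sub-diagonal
    b = [2.0] * (n - 1)               # main diagonal
    c = [-1.0] * (n - 1)              # super-diagonal
    d = [h * h * f[i] for i in range(1, n)]
    for i in range(1, n - 1):         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (n - 1)               # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return max(abs(u[i] - math.sin(math.pi * x[i + 1])) for i in range(n - 1))

err_coarse = solve_poisson(32)
err_fine = solve_poisson(64)
order = math.log(err_coarse / err_fine, 2)   # observed convergence order, ~2
```

The LANL result is that this second-order behavior now holds for the flux as well as the scalar variable, and on general polygonal and polyhedral meshes rather than the uniform 1D grid used here.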
New Model Provides Better Data on Subsurface Mixing
To trap or treat strontium or other contaminants in the soil, researchers need to know how the contaminated water behaves in the complex subsurface. Researchers from the Pacific Northwest National Laboratory, University of California at San Diego, and Idaho National Laboratory therefore developed a model that provides a more realistic view of how the contaminants are mixing. The new model treats the two types of mixing, advective and diffusive, separately; classical advection-dispersion models lump them together, overestimating key parameters. This new model and the improved understanding it brings are critical for remediating contaminated sites, such as the plutonium production site in southeastern Washington State. Tartakovsky AM, DM Tartakovsky, and P Meakin. 2008. "Stochastic Langevin Model for Flow and Transport in Porous Media," Physical Review Letters 101, 044502.
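The distinction between the two mixing mechanisms can be illustrated with a minimal Langevin-style particle sketch (a generic textbook scheme, not the published model): each tracer particle takes a deterministic advective drift step plus an independent random diffusive step, rather than folding both into a single dispersion coefficient. The mean position then tracks the advection while the spread of the cloud tracks the diffusion.

```python
import random

def langevin_step(x, velocity, diffusion, dt, rng):
    """Advance one tracer particle one time step, keeping the advective
    drift and the diffusive random kick as separate terms."""
    advective = velocity * dt
    diffusive = rng.gauss(0.0, (2.0 * diffusion * dt) ** 0.5)
    return x + advective + diffusive

rng = random.Random(42)
positions = [0.0] * 5000
v, D, dt, steps = 1.0, 0.01, 0.01, 100
for _ in range(steps):
    positions = [langevin_step(x, v, D, dt, rng) for x in positions]

# After time t = steps * dt = 1.0:
mean_x = sum(positions) / len(positions)                         # ~ v * t
var_x = sum((x - mean_x) ** 2 for x in positions) / len(positions)  # ~ 2 * D * t
```

Because the two effects enter as separate terms, each can be calibrated independently, which is the practical advantage the article attributes to separating advective and diffusive mixing.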
Contact: Alex Tartakovsky, alexandre.tartakovsky@pnl.gov
ANL Researchers Funded to Optimize Parallel Performance of Accelerator Codes
Argonne researchers Lois McInnes and Tim Tautges have received funding for a new SAP activity to augment the geometry tools, shape optimization procedures, and optimal meshing capabilities of ComPASS. ComPASS, the Community Petascale Project for Accelerator Science and Simulation, is funded by the DOE SciDAC-2 program. The new funding will support a postdoctoral researcher, who will focus on optimizing the parallel performance of accelerator modeling codes that use particle-in-cell, finite difference, and finite element techniques.
In a separate award, an Argonne proposal headed by PI Sven Leyffer of Argonne and Jeffrey Linderoth of the University of Wisconsin-Madison focuses on next-generation solvers for mixed-integer nonlinear programs. The three-year award was announced at the recent AMR08 conference by Sandy Landsberg, DOE Program Manager for Applied Mathematics Research.
Contact: Gail Pieper, pieper@mcs.anl.gov


ORNL's Thomas Schulthess New Swiss National Supercomputing Centre Director
Thomas Schulthess, former group leader of ORNL's Computational Materials Science Group, is the new director of the Swiss National Supercomputing Centre (CSCS) in Manno. As CSCS director, he will also be a professor of computational physics at ETH Zurich, where he studied physics and earned his Ph.D. Schulthess is a supercomputing expert with an international scientific reputation in the field. He worked for 12 years at Oak Ridge National Laboratory, leading the Computational Materials Science Group since 2002 and, since 2005, the Nanomaterials Theory Institute as well. He will remain a part-time research staff member at ORNL, continuing his research collaborations and expanding collaborations between ETH and ORNL. For more on the announcement, visit:
BER Associate Director Anna Palmisano Visits LBNL and NERSC
Dr. Anna Palmisano, Associate Director of the Office of Biological and Environmental Research (BER) in the DOE Office of Science, recently visited Berkeley Lab Computing Sciences. Associate Laboratory Director Horst Simon discussed computational research collaborations and ESnet with her, and NERSC Director Kathy Yelick gave her a tour of the NERSC Center. In a thank-you note, Dr. Palmisano wrote, "I have heard so much about the Center over the years I have been associated with BER, so it was quite a thrill to actually have a chance to see it in action. NERSC is playing a vital role in many of our BER programs, and I expect that will continue and grow in areas such as climate and computational biology."
ESnet's Bill Johnston Featured in Video on LHC Cyberinfrastructure
Bill Johnston, who recently stepped down as head of ESnet, is featured in a 12-minute video explaining the importance of advanced networking to support the Large Hadron Collider (LHC) experiments at CERN. The video was produced by Internet2, which is partnering with ESnet to provide the cyberinfrastructure that will allow researchers in the U.S. to access data from experiments at the LHC. The LHC will be the first experiment to fully utilize the advanced capabilities of ESnet, Internet2, and US LHCNet, which will connect DOE national laboratories and university researchers across the country to the LHC data. The video can be viewed at http://www.internet2.edu/lhc/.


Princeton's New High-Speed Connection to ESnet's Dynamic ESnet4 Network
DOE's Energy Sciences Network (ESnet) has improved its Internet connections to several institutions on Princeton University's Forrestal Campus, including the Princeton Plasma Physics Laboratory (PPPL), the High Energy Physics (HEP) group within Princeton's Physics Department, and the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory (GFDL). Researchers around the globe can now access data from these science facilities at higher speeds and with greater scalability, helping enable international collaborations on bandwidth-intensive applications and experiments.
The Princeton network upgrade took approximately five months to complete and involved running fiber optic cabling underground from the Forrestal Campus outside Princeton, New Jersey, to Philadelphia, where traffic is carried across the ESnet infrastructure to ESnet's main point of presence in McLean, Va. On the Princeton campus, PPPL's Internet connection now operates at 10 gigabits per second (10 billion bits per second), significantly faster than its previous speed of 155 megabits per second. This is a roughly 6,400 percent improvement in performance, and ESnet's international connectivity will facilitate collaborations on world-class facilities, including the future ITER fusion reactor in France and existing fusion energy facilities such as the superconducting tokamaks in Korea (KSTAR) and China (EAST).
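The quoted improvement figure follows directly from the two link speeds; the arithmetic below simply checks it.

```python
old_bps = 155e6   # previous link: 155 megabits per second
new_bps = 10e9    # upgraded link: 10 gigabits per second

speedup = new_bps / old_bps                                 # ~64.5x faster
percent_improvement = (new_bps - old_bps) / old_bps * 100   # ~6,352 percent,
                                                            # i.e. roughly 6,400
```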
Contact: Linda Vu, LVu@lbl.gov
ORNL's Phoenix Makes Way for Petascale Age
Oak Ridge National Laboratory's (ORNL's) Phoenix supercomputer, still one of the fastest vector systems in the world, was taken out of service at the beginning of October to make way for petascale systems capable of 1,000 trillion calculations a second (one petaflop). Installed in 2003 with a peak performance of 3.2 teraflops, Phoenix was ORNL's most powerful system when the lab's Leadership Computing Facility was established in 2004. It has been upgraded in the years since, and in its latest configuration had more than 1,000 multistreaming vector processors and a peak performance of more than 18 teraflops. In its 5½ years at ORNL, Phoenix rose as high as No. 17 on the TOP500 list of the world's fastest supercomputers.
Phoenix is being removed from ORNL's computer room to make way for two systems that are each more than 300 times as powerful as the 3.2-teraflop configuration. The Oak Ridge Leadership Computing Facility's Cray XT5 Jaguar system will boast a peak performance of one petaflop, while the National Institute of Computational Sciences' Cray XT5 Kraken system will peak at slightly less than one petaflop.
Contact: Jayson Hines, hinesjb@ornl.gov
PNNL Researchers Implement Popular Text Model on Cray XMT
As part of the Pacific Northwest National Laboratory's Center for Adaptive Supercomputing Software, researchers are building programs for multithreaded machines to accelerate discovery within disparate datasets. They are implementing the popular Latent Dirichlet Allocation (LDA) text model on the Cray XMT multithreaded system. LDA is a "latent topic model" that builds a probability model in terms of documents and words, and can be used for classification, clustering, anomaly detection, and summarization. As part of this effort, the researchers have implemented table-lookup-based versions of several numeric algorithms, achieving more than 20x speedup over the built-in functions. Finally, they are implementing code generation for more general hierarchical Bayesian models, such as LDA, which will allow a domain scientist to specify a probability model at a high level, with no knowledge of C or parallel programming, and will generate parallel C code to be executed on the Cray XMT.
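The table-lookup idea can be illustrated with a small, generic sketch (not the PNNL code, and the function names are invented): samplers for models like LDA evaluate transcendental functions of small integer counts millions of times in their inner loops, so precomputing those values once and replacing each call with an array index can pay off substantially on hardware where the lookup is cheap.

```python
import math

# Precompute log(k) for the small integer counts that dominate a sampler's
# inner loop; a table index then replaces each repeated math.log call.
MAX_COUNT = 100000
LOG_TABLE = [0.0] + [math.log(k) for k in range(1, MAX_COUNT + 1)]

def log_count(k):
    """log(k) via table lookup for in-range counts, falling back otherwise."""
    return LOG_TABLE[k] if k <= MAX_COUNT else math.log(k)

def topic_log_weight(n_wt, n_t):
    """Illustrative unnormalized log-weight for assigning a word to a topic,
    built from a word-topic count n_wt and a topic total n_t (add-one
    smoothing folded in); both lookups hit the precomputed table."""
    return log_count(n_wt + 1) - log_count(n_t + 1)

w = topic_log_weight(9, 99)   # log(10) - log(100)
```

The reported 20x speedup on the Cray XMT presumably reflects this trade of floating-point work for memory accesses, which that machine's latency-tolerant multithreading handles well; the sketch here shows only the shape of the technique.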
Contact: Daniel Chavarria, Daniel.chavarria@pnl.go
New System Maintaining Software for Oak Ridge Leadership Computing Facilities Users
One of the challenges facing supercomputing centers is managing third-party software. For example, different researchers need or prefer different outside libraries, editors, management tools, and so on, which are all stored on various systems. The Oak Ridge Leadership Computing Facility (OLCF), also known as the National Center for Computational Science (NCCS), at ORNL currently supports over 70 different (non-vendor) libraries, tools, and applications for its Cray XT4 system alone. And for most applications and libraries, the NCCS supports multiple versions of the software and multiple builds of each version, covering three different compilers and different build configurations.
NCCS staff member Mark Fahey has created a new suite of tools and policies for installing and maintaining users' preferred software, including naming schemes, template scripts for building and testing software, and scripts to police the entire installation "area". While the new tools, dubbed the NCCS Software Environment, were originally designed for the NCCS's Cray XT4 known as Jaguar, they now manage all third-party installs for the Blue Gene/P and two AMD-based quad-core clusters (Lens and Smoky) at the NCCS as well.
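A consistent naming scheme is the backbone of such a system: if every build of every version lands at a predictable path, install and audit scripts can be generated rather than hand-written. The sketch below is hypothetical; the layout and names are invented for illustration and are not the actual NCCS Software Environment conventions.

```python
# Hypothetical install-path scheme: one prefix per (package, version,
# machine, compiler build), so multiple versions and compiler builds of
# the same library can coexist and be policed mechanically.

def install_prefix(root, package, version, machine, compiler):
    """Build the canonical install prefix for one build of one version."""
    return f"{root}/{package}/{version}/{machine}_{compiler}"

def all_builds(root, package, version, machine, compilers):
    """Enumerate every per-compiler build maintained for one version."""
    return [install_prefix(root, package, version, machine, c)
            for c in compilers]

prefix = install_prefix("/sw", "fftw", "3.2.1", "xt4", "pgi7.2")
# e.g. "/sw/fftw/3.2.1/xt4_pgi7.2"

builds = all_builds("/sw", "fftw", "3.2.1", "xt4",
                    ["pgi7.2", "gcc4.2", "pathscale3"])
```

With paths this regular, the template build scripts and policing scripts the article mentions reduce to walking the tree and checking that each leaf matches the scheme.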
Contact: Jayson Hines, hinesjb@ornl.gov


LBNL Researchers Contribute Expertise in all Aspects of SC08 Conference
Researchers from Lawrence Berkeley National Laboratory are making significant contributions to the SC08 Conference Technical Program, contributing four technical papers and one research poster, organizing two workshops, participating in two panel discussions, and hosting or co-hosting six Birds-of-a-Feather (BoF) sessions. Additionally, UC Berkeley Prof. David Patterson, who holds a joint appointment in LBNL's Future Technologies Group, is one of four invited speakers, and Dale Sartor of the Environmental Energy Technologies Division will give a Masterworks presentation on energy-efficient computing.
As a leader in energy-efficient computing, LBNL co-organized the Nov. 16 workshop "Power Efficiency and the Path to Exascale Computing" and is taking part in the panel discussion "Will Electric Utilities Give Away Supercomputers with the Purchase of a Power Contract?" Sartor's presentation will explain how to "Save Energy Now in Computer Centers," and Sartor and Bill Tschudi, along with PNNL's Moe Khaleel, are co-hosting a BoF on "High Energy Performance for High Performance Computing."
Contact: Jon Bashor, JBashor@lbl.gov
ALCF Staff to Host Four SC08 Birds-of-a-Feather Sessions
Argonne Leadership Computing Facility (ALCF) staff will host four Birds-of-a-Feather (BOF) sessions at the SC08 conference. On Tuesday, November 18, Pete Beckman, ALCF Interim Director, will lead a session on "Coordinated Fault Tolerance in High-End Computing Environments". Susan Coghlan, ALCF Associate Division Director, will facilitate another BOF on "Blue Gene System Management Community". This session will provide an opportunity for Blue Gene system administrators to share information and discuss problems. On the same day, Kalyan Kumaran, ALCF Manager, Performance Engineering and Data Analytics, will present a BOF on "SPEC MPI2007: A Benchmark to Measure MPI Application Performance." The talk will describe the benchmark creation process and a roadmap of future additions to the suite.
On Wednesday, November 19, Katherine Riley, ALCF Team Lead, will facilitate a fourth session on "Petascale Computing Experiences on Blue Gene/P". Kalyan Kumaran and Paul Messina, ALCF Director of Science, will also speak at this session. With well over 100 applications ported to the Blue Gene/P at the ALCF, many lessons have been learned. Anyone interested in the Blue Gene/P architecture is invited to make presentations and join a discussion on Blue Gene/P science, algorithms, and performance issues. For more details, see the ALCF events calendar at: http://www.alcf.anl.gov/events/index.php
Fermilab Readies Exhibits for SC08 Conference
Intense work continues to produce exhibits related to science and computing for the Fermilab booth at the SC08 conference to be held Nov. 15-21 in Austin, Texas. To view what Fermilab will be displaying, go to the following link: http://supercomputing.fnal.gov/SC2008/picturedirectory.htm
HPCT Seminar Held at ALCF
The Argonne Leadership Computing Facility and IBM hosted an October 21-22 seminar on the IBM High Performance Computing Toolkit (HPCT) at Argonne. The HPCT is an integrated software environment that addresses the performance analysis and performance debugging of sequential and parallel scientific applications for users at any level of experience with parallel programming. The toolkit supports IBM high performance computing platforms, including Blue Gene and AIX/Power. It abstracts the performance behavior of an application across four architectural dimensions associated with the system: processor, thread, communication, and memory.
The seminar covered the usage and techniques for toolkit components. In addition to the presentation, hands-on lab interaction and discussion helped users improve their individual applications.
Contact: Cheryl Drugan, cdrugan@mcs.anl.gov
Argonne Researchers Give Talks in Chicago Humanities Festival
Two Argonne researchers - Ian Foster and Mark Hereld - participated in the Chicago Humanities Festival in October. Foster, senior scientist at Argonne National Laboratory and director of the University of Chicago/Argonne Computation Institute, discussed the explosion of online information, showing how technology, such as supercomputers and massive datasets, is revolutionizing the way knowledge is generated and used. Hereld, an experimental systems engineer at Argonne and the University of Chicago, participated in a panel on "Emergence: Philosophy Meets Science". Emergence refers to the ways that a multiplicity of seemingly discrete, relatively simple interactions can give rise, without any apparent direction or plan, to astonishing complexity and pattern.
Contact: Gail Pieper, pieper@mcs.anl.gov
DOE Summer School Teaching Material Available on the Web
The Northwest Consortium for Multiscale Mathematics held its summer school on the computational issues surrounding multiscale simulation on August 4-6, 2008, at WSU Tri-Cities/Pacific Northwest National Laboratory. The workshop drew 28 on-site attendees and was broadcast live via streaming media to facilitate off-site involvement, attracting 288 viewers in August alone. The workshop was aimed at graduate students and postdoctoral researchers in mathematics, scientific computing, materials sciences, geophysics, computational physics, and mechanical engineering; with 12-15 faculty typically attending, the student-to-teacher ratio was about 1:1. The materials from the summer school remain available for the research and education community at: http://multiscale.emsl.pnl.gov/sessions.shtml
Two LBNL Presentations at Los Alamos Computer Science Symposium
The Los Alamos Computer Science Symposium held Oct. 13-15 in Santa Fe featured presentations by two LBNL Computing Sciences staff members. Harvey Wasserman of NERSC discussed "Recent Workload Characterization Activities at NERSC," and Paul Hargrove of the Future Technologies Group in CRD talked about "System-level Checkpoint/Restart with BLCR."
Argonne Participates in Grace Hopper Celebration
Argonne staff discussed computational research in progress at the Argonne Leadership Computing Facility and Mathematics and Computer Science Division with attendees at the Grace Hopper Celebration of Women in Computing on October 1-4 at the Keystone Resort, Colorado. Argonne representatives also provided information on the latest job opportunities. The Grace Hopper Celebration offers a series of conferences designed to bring the research and career interests of women in computing to the forefront. Leading researchers present their current work, while special sessions focus on the role of women in today's technology fields, including computer science, information technology, research and engineering. Past Grace Hopper Celebrations have resulted in collaborative proposals, networking, mentoring, and increased visibility for the contributions of women in computing.
Contact: Gail Pieper, pieper@mcs.anl.gov
Why Choose a Career in Scientific Computing?
William Scullin, a senior high-performance computing systems administrator at the Argonne Leadership Computing Facility (ALCF), explained to students why they should consider a career in scientific computing at the 2008 ACM South Central Programming Contest (http://acm2008.cct.lsu.edu/sites/lsu) on October 17. He gave an invited presentation to undergraduate students and coaches about scientific computing research under way at the ALCF and the opportunities available to students there.
Contact: Cheryl Drugan, cdrugan@mcs.anl.gov