ASCR Monthly Computing News Report - May 2008

The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor, with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia national laboratories. Contact information and links to additional information, where available, are included with each article.

In this issue...


LBNL’s “Radical” Supercomputer Design Concept Resonates around the World
An LBNL project to design a specialized supercomputer using highly efficient microprocessors, such as those used in cell phones and other consumer electronics, received worldwide attention. The project was detailed in a May 5 press release announcing Berkeley Lab’s partnership with Tensilica to create a prototype energy-efficient climate computer, and in a paper published in the May issue of the International Journal of High Performance Computing Applications (the article may require login credentials or a fee to view). The paper by Michael Wehner, Lenny Oliker and John Shalf proposes an innovative architecture for modeling climate change with 1-kilometer resolution.
The project was described in the May 14 edition of Nature News (which may also require login credentials or a fee to view) in conjunction with a longer report on a climate summit held in Reading, England, and was called “radical” in the May 22 online edition of Nature Reports on Climate Change. The project also drew coverage and commentary in countries around the world, including the UK, India, Spain, Australia, Germany, Thailand, and Bulgaria. U.S. media coverage included ScienceDaily, Slashdot, MIT Technology Review and Medical News Today.
Contact: John Shalf,
PNNL-Developed Technique Focuses on Mass Spectrometry Peptides
A unique machine-learning approach that searches only for peptides detectable by mass spectrometry may prove more cost effective and accurate than current methods. Developed by researchers at Pacific Northwest National Laboratory, the approach is based on support vector machine (SVM) methodologies. The SVM is an accurate and robust approach to statistical classification, offering nonlinear mapping and straightforward treatment of highly noisy data. These features make the technique particularly attractive for biological analyses, in which data are inherently noisy.
Peptide identification is one of the fundamental challenges of mass spectrometry (MS)-based proteomics because of the complexity and dynamic nature of biological samples. As samples become more complex and include biological communities, the future of proteomics relies on the ability to perform these identifications accurately at a rate that can keep pace with high-throughput techniques such as accurate mass and elution time (AMT) studies. The PNNL-developed SVM method quantitatively predicts which peptides are detectable and, when combined with prediction of elution time (i.e., the time to extract a substance from another by washing it with a solvent), could enable the rapid creation of an AMT database and reduce the need for extensive MS analyses. Furthermore, the ability to define MS-detectable peptides will have a cascading effect that will ultimately yield more accurate statistical prediction of identifications, as has been demonstrated for peptide and protein quantification. The research was published in the May 2008 issue of Bioinformatics.
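The classification step described above can be illustrated with a minimal, generic SVM sketch. This is not PNNL's code: the feature names, toy data, and use of scikit-learn are all illustrative assumptions standing in for the actual physicochemical features and SVM implementation the team used.

```python
# Hedged sketch: classify peptides as MS-detectable or not with an SVM.
# The features (length, mean hydrophobicity, net charge) are hypothetical
# stand-ins for whatever properties the PNNL model actually uses.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy training data: [peptide length, mean hydrophobicity, net charge]
X = [
    [8,  0.9,  1],   # short, hydrophobic, charged
    [9,  0.8,  2],
    [10, 0.7,  1],
    [25, -0.5, 0],   # long, hydrophilic, neutral
    [30, -0.6, 0],
    [28, -0.4, 1],
]
y = [1, 1, 1, 0, 0, 0]  # 1 = detectable by MS, 0 = not detectable

# The RBF kernel supplies the nonlinear mapping highlighted in the article;
# feature scaling keeps the inputs on comparable ranges.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)

print(model.predict([[9, 0.85, 1], [27, -0.55, 0]]))  # expect [1 0]
```

The trained model is then just a function from peptide features to a detectable/undetectable label, which is what allows it to prefilter candidate peptides before expensive MS analysis.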
Contact: Bobbie-Jo Webb-Robertson,
PNNL Hybrid Model Aims to Improve Understanding of Subsurface
Researchers at Pacific Northwest National Laboratory are developing a hybrid multiscale reactive transport model aimed at deepening our understanding of the subsurface. While many subsurface problems central to energy and environmental issues can be addressed effectively by a single model at a single scale, some problems require linking models that run concurrently across multiple scales. Under the "Hybrid Numerical Methods for Multiscale Simulations of Biogeochemical Processes" project, researchers are integrating these tools into a coherent multiscale modeling framework using high-performance computing, a major opportunity to advance both scientific understanding and predictive capability applicable to field-scale problems.
Two ASCR-funded Science Application Partnerships projects support the hybrid model development. Under the "Component Software Infrastructure for Achieving High Level Scalability in Groundwater Reactive Transport Modeling and Simulation" project, researchers are developing a high-performance computational framework, based on the Common Component Architecture, that will build the model from scalable, reusable components. A smooth particle hydrodynamics application has been fully incorporated into the framework, and work is underway to include a continuum subsurface simulation code. Under the "Process Integration, Data Management, and Visualization for Subsurface Simulations" project, researchers are building a modeling environment that combines workflow technology, provenance and data management, and scientific visualization tools to support efficient use of high-end computing resources and simplified organization, tracking, and analysis of studies comprising hundreds of simulations. A prototype that enables scientists to configure, run, and monitor simulations and tracks the inputs and outputs associated with each processing step has been developed.
PNNL's research is featured in the Spring 2008 issue of SciDAC Review.
Contact: Bruce Palmer,
Tim Scheibe,
Karen Schuchardt,
LLNL's WPP Code Selected for SC08 Cluster Challenge Application
The WPP code, developed by an LLNL team of researchers led by Anders Petersson, provides substantial capabilities for modeling 3-D seismic events and has been used to model both historic and recent earthquakes in California and elsewhere. The code has been selected as one of six application codes highlighted as part of the SC08 Cluster Challenge. The Cluster Challenge is designed to showcase the power of cluster architectures and open source software to solve interesting problems. Teams consisting of up to six students, a supervisor, and optional vendor partners will compete in real time on the exhibit floor to run a workload of real-world problems on clusters of their own design. More information on this upcoming event is available online.
Award-Winning LBNL Paper Attracts Online Media Attention
A research paper by Berkeley Lab researchers exploring ways to make a popular scientific analysis code run smoothly on different types of multicore computers won the Best Paper Award in the application track at the IEEE International Parallel and Distributed Processing Symposium (IPDPS) in April and got the attention of two HPC publications. The paper, "Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms," was written by Samuel Williams and Lenny Oliker of the Computational Research Division, and Jonathan Carter, John Shalf, and Kathy Yelick of NERSC. Yelick also was a keynote speaker at the symposium.
HPCwire ran a story about the paper and the prize, "Multicore Code Booster - Research on code optimization explores multicore computing, wins award." Jon Stokes also discussed the work in an Ars Technica commentary, "PS3's Cell CPU tops high-performance computing benchmark."
Contact: Sam Williams,
LBNL's Water Research Project Joins Microsoft Research Silicon Valley Road Show
When Microsoft Research staged its fourth Silicon Valley Road Show on May 22, among the innovative projects showcased at the public event was a research program to improve the collection, access and analysis of water resources data. The program is led by the Berkeley Water Center, a joint effort between Berkeley Lab's Computational Research Division (CRD), UC Berkeley's College of Engineering and UC Berkeley's College of Natural Resources, with support from Microsoft Research.
Local, state and federal governments have long collected detailed information about water supplies, such as measuring river flows and water content in winter snows to make allocation decisions for farms, businesses and residential consumers. However, these agencies use different methods to collect and archive data, posing a challenge for scientists who need to retrieve and integrate all those datasets in order to carry out comprehensive analyses. The Berkeley project strives to ease that headache for scientists and has developed a prototype data server, which runs on Microsoft SQL Server 2005.
The road show was held at Microsoft's Mountain View campus. Following opening presentations, researchers provided demonstrations in an interactive setting, showcasing some of the innovative projects that the Microsoft Research group is working on in the areas of search, graphics, security, privacy and more.
Contact: Deb Agarwal,


Oak Ridge LCF Director Jim Hack Addresses Senate on Climate Modeling
Oak Ridge LCF director James (Jim) Hack recently testified before the U.S. Senate on the intricacies of climate modeling. Hack spoke at the May 8, 2008, hearing on "Improving the Capacity of U.S. Climate Modeling for Decision-Makers and Other End-Users" before the Committee on Commerce, Science, and Transportation. One of the world's foremost experts on computational climate modeling, Hack stressed the need for future investments in computational algorithms to improve the accuracy of predictive climate models, and for a "close and continuing collaboration between the climate, facilities, applied mathematics, and computer science communities."
Hack is in charge of a new program at Oak Ridge National Laboratory to coordinate the various climate-related research activities taking place at the lab in different organizations. "There is no single pacing item to the advancement of climate change science, but a collection of interrelated science and technology challenges," he said. "Many of the issues discussed in this testimony speak to the need for a balanced investment portfolio in computational infrastructure, climate science, computer science, and applied mathematics. In the short and long term, computational capability remains a significant bottleneck and should remain a high-priority investment."
Contact: Cheryl Drugan,
Sandian Bill Spotz Begins IPA at ASCR
William (Bill) Spotz will begin an assignment at the Department of Energy Office of Science on June 9, 2008, where he will serve as a mathematician on an Intergovernmental Personnel Act (IPA) appointment in the Office of Advanced Scientific Computing Research and as a program manager for the SciDAC Applied Mathematics Institutes. Bill is currently a senior staff member in Sandia's Exploratory Simulation Technologies department, conducting research in high-order discretizations, including spectral elements, with applications to climate modeling. Bill is also an expert in Python and is the lead developer of the PyTrilinos package, which has been used to develop a number of multiphysics coupled simulation capabilities for DOE applications.
Contact: Bill Spotz,
Argonne's Robert Ross Receives Outstanding Young Alumni Award
Dr. Robert Ross, Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory, has received an Outstanding Young Alumni Award for 2008 from Clemson University, South Carolina. The award, which was presented at the 13th Annual Engineering and Science Banquet, recognizes his significant career success, as well as his notable contributions to the community and to engineering and science.
Ross, a resident of Argonne, Ill., received a bachelor of science in computer engineering from Clemson in 1994, followed by a Ph.D. in 2000. He has been recognized with a Presidential Early Career Award and an R&D 100 Award, a mark of excellence recognizing the most innovative ideas of the year. His work on MPICH2, a high-performance implementation of the Message Passing Interface (MPI) standard, enables developers to run the same code on a wide variety of platforms, from laptops and workstations to the largest and fastest parallel computers in the world. Applications include materials science, combustion simulation, astrophysics, climate modeling, and bioinformatics.
Contact: Gail Pieper,
Sandia/ASCR-AMR Researcher Serves on SIAM Optimization Plenary Panel
Tamara Kolda of Sandia National Laboratories served on the "Forward Looking" plenary panel at the SIAM Conference on Optimization in May 2008. The topic of the panel was the future of optimization. Topics highlighted by the panelists included user-focused optimization, global optimization, computing in a many-core environment, and challenges in applications where data is a major factor, such as machine learning. Look for a summary of the panel's remarks in a future SIAM publication.
Contact: Tamara Kolda,
LLNL's Panayot Vassilevski Co-Chairs Copper Mountain Conference on Iterative Methods
Panayot Vassilevski co-chaired (with Howard Elman of the University of Maryland) the Program Committee for the 10th Copper Mountain Conference on Iterative Methods, April 6-11, 2008. This annual conference is one of the premier meetings for discussing research results related to linear and nonlinear iterative methods. This year's additional highlighted topics included stochastic PDEs and uncertainty, inverse problems, optimization, multicore architectures, and systems of PDEs. Several ASCR-funded LLNL researchers presented results at this meeting, including Ulrike Yang and Allison Baker, who discussed "Parallel Algebraic Multigrid for Systems of PDEs"; Panayot Vassilevski, who presented "Constrained Minimization Function Recovery in Lagrangian Hydrodynamics"; and Rob Falgout, who presented "Parallel Sweeping Algorithms in SN Transport." More information is available on the conference web site.
Sandia/ASCR-AMR Scientist Helps Judge Student Paper Contest
Ray Tuminaro of Sandia National Laboratories served on the committee for the 10th Copper Mountain Conference on Iterative Methods held this past April in Copper Mountain, Colorado. In addition, Dr. Tuminaro reviewed submissions as a member of the Copper Mountain student paper selection committee. This year's 18 student submissions were particularly strong and covered several topics, including iterative methods, optimization, image reconstruction, and blood flow. Currently, Dr. Tuminaro is serving as Editor-in-Chief for the special issue of the SIAM Journal on Scientific Computing (SISC) dedicated to recent progress in iterative methods.
Contact: Ray Tuminaro,


Upgraded Jaguar at ORNL Passes Acceptance Testing
Upgrades to Oak Ridge National Laboratory's (ORNL's) Jaguar supercomputer have more than doubled its performance. The system recently completed acceptance testing, running applications in climate science, quantum chemistry, combustion science, materials science, nanoscience, fusion science, and astrophysics, as well as benchmarking applications that test supercomputing performance.
The Jaguar system, a Cray XT4 located at ORNL's Leadership Computing Facility (Oak Ridge LCF), now uses more than 31,000 processing cores to deliver up to 263 trillion calculations a second (263 teraflops). Jaguar was among the most powerful computing systems within DOE's Office of Science even before the recent upgrade. "The Department of Energy's Leadership Computing Facility is putting unprecedented computing power in the hands of leading scientists to enable the next breakthroughs in science and technology," said ORNL Director Thom Mason. "This upgrade is an essential step along that path, bringing us ever closer to the era of petascale computing."
Contact: Jayson Hines,
PNNL Rolls Out MeDICi Integration Framework for Data-Intensive Computing
Computer scientists at Pacific Northwest National Laboratory have rolled out the MeDICi Integration Framework, a middleware platform (software that connects software components or applications) that makes it easy to integrate separate codes into complex applications operating as a data analysis pipeline. The framework is the first step in an evolving development project to create an underlying architecture for high-performance analytical applications. Building such analytical pipelines is typically fraught with difficulties: the codes and components that process and transform the data are written in different programming languages, requiring the user to invest time in translating the data for each application. Data-intensive computing applications must also process data from sensors or instruments that produce high-volume data streams and move large data sets from one application to another. The MeDICi Integration Framework is designed to address these issues and is publicly available for free download.
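The pipeline pattern that this kind of middleware supports can be sketched generically. The sketch below is not the MeDICi API; the `Pipeline` class and the stage functions are invented for illustration of how independent components chain together so each stage's output feeds the next.

```python
# Hedged sketch of a data-analysis pipeline: independent components are
# chained so each stage transforms the data stream for the next stage.
# The Pipeline class and stage functions are hypothetical, not MeDICi's API.
from typing import Any, Callable, List


class Pipeline:
    def __init__(self) -> None:
        self.stages: List[Callable[[Any], Any]] = []

    def add_stage(self, stage: Callable[[Any], Any]) -> "Pipeline":
        self.stages.append(stage)
        return self  # return self to allow fluent chaining

    def run(self, data: Any) -> Any:
        for stage in self.stages:
            data = stage(data)  # each component consumes the previous output
        return data


# Example stages: parse raw instrument text, drop out-of-range noise, summarize.
parse = lambda raw: [float(x) for x in raw.split(",")]
denoise = lambda xs: [x for x in xs if abs(x) < 100.0]
summarize = lambda xs: sum(xs) / len(xs)

pipe = Pipeline().add_stage(parse).add_stage(denoise).add_stage(summarize)
print(pipe.run("1.0,2.0,300.0,3.0"))  # -> 2.0
```

In a real integration framework the stages might be codes in different languages connected over the network; the middleware's job is to handle the data translation and movement between them that this single-process sketch glosses over.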
Contact: Ian Gorton,
NERSC's Integrated Performance Monitoring (IPM) Is Gaining Users Worldwide
NERSC's David Skinner, along with Nick Wright of the San Diego Supercomputer Center, went to the Texas Advanced Computing Center (TACC) this month to give a training class on how to collect and analyze performance profiles using the Integrated Performance Monitoring (IPM) framework on TACC's new Ranger system. The training was funded by the National Science Foundation. The main talk was titled "IPM - A Performance Monitoring Environment for Petascale High-Performance Computing Systems," and there was also some hands-on training.
IPM, an open-source software tool developed at NERSC, makes it easy for both non-expert users and HPC center managers to get a simple performance profile for each job that is run. Without that information, both center staff and users can be unaware of potential performance gains that are well within their grasp. IPM has had a growth spurt over the last two months. HPC facilities that installed IPM on their clusters included the University Politehnica of Bucharest, Romania, and the National Research Center for Intelligent Computing Systems (NCIC) in Beijing, China.
Contact: David Skinner,
Cray Center of Excellence Makes Science Happen
To help researchers better tailor their codes to the Cray XT4 architecture, ORNL and Cray founded the Cray Center of Excellence (COE) on the ORNL campus in the spring of 2005. As new systems are delivered to ORNL, the COE staff assist principal investigators in porting and optimizing the most important scientific applications to the new machines. Currently, says COE director John Levesque, the COE is working on six to eight applications, tweaking the various codes so that they achieve maximum performance on the latest Cray systems. The COE also has a strong educational component, featuring workshops for scientists who use supercomputers in their research. Recently, the COE has begun optimizing certain routines in libraries for a variety of applications as well.
Contact: Jayson Hines,


Town Hall Meeting Set to Refine Roadmap for Countering Cyber Threats
The second town hall meeting of the Cyber Security Grass Roots Community will be held June 30 through July 2 at Oak Ridge National Laboratory to discuss and refine a roadmap addressing national concerns about cyber threats for DOE. The discussion will focus on a draft white paper describing a cyber security research and development program that the community would like to pursue. Registration and more information are available on the meeting web site. The community, organized by Deb Frincke, Pacific Northwest National Laboratory; Charlie Catlett, Argonne National Laboratory; Ed Talbot, Sandia National Laboratories; and Brian Worley, Oak Ridge National Laboratory, is working to develop a science-driven, proactive, innovative R&D agenda. The community seeks to apply scientific principles from mathematics, computer science, complex systems and other disciplines to pursue transformational cyber security capabilities and architectures, enabling a quantitative and proactive approach to cyber threats for both classified and unclassified needs. Those interested in participating are welcome to join the twice-monthly teleconferences and to contribute to papers.
Oak Ridge LCF Hosts Series of Workshops
The Oak Ridge LCF recently hosted a series of workshops aimed at educating the wider high performance computing community. A total of three workshops were hosted at Oak Ridge National Laboratory (ORNL) in mid-April, beginning with the Oak Ridge LCF Cray XT workshop on April 14, 15, and 16. Staff from the Oak Ridge LCF, ORNL's Joint Institute for Computational Sciences, Cray, and chipmaker AMD discussed XT issues; researchers participated in hands-on sessions with the Cray XT system; and computational scientists gathered with vendors and ORNL's staff experts to discuss strategies for making the most of their time on Cray XT supercomputers.
The Lustre workshop, held April 16, focused on helping application scientists get the most from the Lustre File System. The workshop was presented by Oleg Drokin and Wang Di, both file system engineers with Sun Microsystems and seasoned Lustre developers. Closing out the workshop series was the Oak Ridge LCF Users Meeting on April 17 and 18. Principal investigators and members of their research teams gathered with Oak Ridge LCF staff and vendors to discuss challenges and solutions in areas such as porting and scaling of applications on the XT system.
Contact: Jayson Hines,
Attendees Give ALCF Performance Workshop High Marks
To help INCITE awardees maximize their allocations on the Blue Gene/P, the Argonne Leadership Computing Facility hosted "Performance," the second in the 2008 INCITE workshop series. Thirty people attended the May 7-8 workshop. Advanced users received hands-on training on performance and debugging tools to enhance their applications on the Blue Gene/P. Feedback from the attendees indicated they found information and presentations about the profiling tools and parallel I/O to be especially useful. Due to the workshop's success, ALCF expects to host another Performance workshop this year. In addition, an INCITE proposal writing workshop is being planned for June.
Contact: Chel Lancaster,
ALCF Hosts Tour for Science Careers in Search of Women Conference
The Argonne Leadership Computing Facility participated in the labwide Science Careers in Search of Women conference on April 3. This annual event brings high school women from 60 Chicago-area schools to Argonne to experience science careers firsthand through interaction with female scientists. Approximately 20 students toured the ALCF's Interim Supercomputing Support Facility and got a close-up look at the Blue Gene/P supercomputer, accompanied by tour leader Sandra Bittner, Senior UNIX Security Engineer, and Chel Lancaster, Marketing and Outreach Coordinator at the ALCF.
Contact: Chel Lancaster,
Oak Ridge LCF Big Presence at CUG 2008
The Oak Ridge LCF once again enjoyed a major presence at the annual Cray User Group meeting, held May 5-8 in Helsinki, Finland. The conference theme this year was "Crossing the Boundaries," and the event featured numerous educational seminars, including general sessions, tutorials, and birds-of-a-feather sessions. Numerous Oak Ridge LCF staff members gave talks and participated in other aspects of the conference. For example, the Oak Ridge LCF's James Rogers spoke on the future of supercomputing in a talk titled "Reaching a Computational Summit: The 1 PFLOP Cray XT at the Oak Ridge National Laboratory," and Richard Graham discussed MPI in "Overview of the Activities of the MPI Forum." "The conference was very informative," said Mark Fahey of the Oak Ridge LCF. "There was a lot of information about the quad-core architecture, and the interactive sessions provided a very useful, engaging setting."
Contact: Jayson Hines,
Supercomputing Careers Highlighted at Collegiate Scholars Roundtable
At the annual career roundtable hosted by the University of Chicago Collegiate Scholars Program (CSP) and Goldman Sachs on April 5, ALCF's Ira Goldberg met with students to discuss careers in supercomputing. The event was part of an array of enriching college-preparatory opportunities that CSP offers Chicago public high school students. CSP's goal is to prepare talented students for academic success at the best colleges and universities.
Contact: Chel Lancaster,