ASCR Monthly Computing News Report - September 2009

LBNL's Code Yields First Full Simulation of Star's Final Hours

The precise conditions inside a white dwarf star in the hours leading up to its explosive end as a Type Ia supernova are among the mysteries confronting astrophysicists studying these massive stellar explosions. But now, a team of SciDAC researchers, including Ann Almgren, John Bell and Andy Nonaka of Berkeley Lab's Center for Computational Sciences and Engineering (CCSE), has created the first full-star simulation of the hours preceding the largest thermonuclear explosions in the universe. In a paper published in the October issue of the Astrophysical Journal, the CCSE researchers, along with Mike Zingale of Stony Brook University and Stan Woosley of the University of California, Santa Cruz, describe the first-ever three-dimensional, full-star simulations of convection in a white dwarf leading up to ignition of a Type Ia supernova.

The researchers are members of the SciDAC Computational Astrophysics Consortium led by Woosley. The team ran their simulations on Jaguar, a Cray XT4 supercomputer at the Oak Ridge Leadership Computing Facility in Tennessee, using an allocation under DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

Read the news release and see the video
Evaluating Energy Cost Savings of Next-Generation Energy Systems in Real Time

Volatile fossil fuel prices and the large-scale adoption of intermittent renewable energy resources pose great challenges to the operation of the next-generation national power grid. Researchers in the Mathematics and Computer Science Division at Argonne National Laboratory have developed and tested a general real-time stochastic optimization strategy that exploits weather forecasts and uncertainty information in a systematic manner. The results show that energy costs can be reduced by as much as 45 percent in a photovoltaic-hydrogen hybrid system (an energy production system that is clean, environmentally friendly, modular, and independent of fossil fuels) and by 10 to 15 percent in a building HVAC system. The strategy is currently being used to perform economic studies of the impact of large-scale adoption (over 20 percent) of wind power in the grid.
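The value of optimizing against a forecast's uncertainty rather than its point value can be sketched in a toy example. All names, prices, and distributions below are illustrative assumptions, not taken from the Argonne study:

```python
import random

random.seed(0)

# Hypothetical toy problem: decide how much grid power (kWh) to buy in
# advance for one period, given an uncertain photovoltaic (PV) forecast.
DEMAND = 100.0          # load to meet, kWh
AHEAD_PRICE = 0.10      # $/kWh for power purchased in advance
SPOT_PRICE = 0.30       # $/kWh for shortfall bought in real time

def pv_scenarios(forecast_mean, n=1000):
    """Sample PV output scenarios around a weather forecast."""
    return [max(0.0, random.gauss(forecast_mean, 15.0)) for _ in range(n)]

def expected_cost(buy_ahead, scenarios):
    """Average cost over scenarios: advance purchase plus spot shortfall."""
    total = 0.0
    for pv in scenarios:
        shortfall = max(0.0, DEMAND - pv - buy_ahead)
        total += AHEAD_PRICE * buy_ahead + SPOT_PRICE * shortfall
    return total / len(scenarios)

scenarios = pv_scenarios(forecast_mean=60.0)
# Stochastic decision: minimize expected cost over the scenario set.
best = min(range(0, 101), key=lambda b: expected_cost(float(b), scenarios))
# Deterministic decision: trust the point forecast exactly.
naive = DEMAND - 60.0
print(best, expected_cost(float(best), scenarios), expected_cost(naive, scenarios))
```

Because real-time shortfalls cost more than advance purchases, the stochastic decision hedges by buying more than the point forecast suggests, lowering expected cost.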

Contact:  Gail Pieper,
Subsurface Simulation Enhanced with New PNNL-Developed Programming Model

A new programming model developed by PNNL researchers is proving to be effective in managing multi-task parallelism on high-performance computers. The model, the Common Component Architecture (CCA) Multiple Component Multiple Data (MCMD), can be deployed to easily manage multi-level programming in subsurface simulations.

The effectiveness of CCA MCMD programming was demonstrated on STOMP, a large-scale (~800K lines of code) PNNL-developed subsurface simulation code. The performance results for a 3D saturated flow problem with the STOMP CCA framework showed that the MCMD programming model is very effective in achieving scalability on thousands of processors with high efficiency. To simulate the saturated flow problem containing ~1.2 million nodes on 2048 processors, the CCA MCMD version of STOMP was an order of magnitude faster than the original parallel version of STOMP.

Contact:  Sue Chin,
PNNL's Free Web Portal Cited as "Hot Paper"

A paper on the Basis Set Exchange (BSE), a Web portal built by scientists from Pacific Northwest National Laboratory for the Environmental Molecular Sciences Laboratory, was recently cited as a "hot paper" by Essential Science Indicators, a major scientific database. BSE is among a small group of papers recognized very soon after publication, reflected by rapid and significant numbers of citations. These papers are often key papers in their fields.

BSE, the focus of the hot paper, is accessed about 20,000 times a month. The publicly accessible Web portal allows researchers to search, browse, retrieve, and download a wide array of basis sets in a variety of formats, making it easier for chemists, biologists, and scientists of every stripe to find the basis sets essential to their simulations. In addition, by enabling researchers to contribute new basis sets to the repository, the portal encourages sharing of data and knowledge with the scientific community.

Contact:  Sue Chin,
ASCR Initiative on Ice Sheet Dynamics Leverages Sandia's Trilinos

A new ASCR initiative aimed at improving the predictability of ice sheet models began in September. The expertise and code base of DOE computational scientists are being leveraged to advance this important climate application. The dynamics of the Antarctic and Greenland ice sheets are currently a large source of uncertainty in climate predictions, particularly with regard to sea-level rise. Two of the six funded projects have significant involvement of Sandia researchers, who will incorporate the scalable solvers and modern software engineering infrastructure of the Trilinos framework into ice sheet simulation codes. This work leverages DOE investments from the ASCR Base Math, SciDAC TOPS2 and CSCAPES centers, and the NNSA's ASC program. One of the projects, led by Columbia University, seeks to devise scalable algorithms for XFEM (extended finite element methods), which have shown promise in the simulation of fracture. Another, the SEA-CISM project led by ORNL in collaboration with LANL, will transform the open-source Glimmer code for ice sheet simulation into a parallel and highly scalable code that interfaces with the other components of the global CCSM (Community Climate System Model).

Contact:  Andy Salinger, or
Ray Tuminaro,
Researchers Develop New Mathematical Theory for Second Order Systems of PDEs

Mathematics researchers Anders Petersson, LLNL, Heinz-Otto Kreiss, Trasko-Storo Institute for Mathematics (Sweden), and Omar Ortiz, Universidad Nacional de Cordoba (Argentina), have developed a new theory for initial-boundary value problems for second order systems of partial differential equations. In particular, they developed a well-posedness theory for second order systems in bounded domains where boundary phenomena like glancing and surface waves play an important role. Attempts have previously been made to write a second order system consisting of N equations as a larger first order system. Unfortunately, the resulting first order system consists, in general, of more than 2N equations which leads to many complications, such as side conditions which must be satisfied by the solution of the larger first order system.

Instead, this work uses the theory of pseudo-differential operators combined with mode analysis. This approach provides several benefits: it ensures that the reduction to a first order system always gives a system of 2N equations, it expands the class of problems that can be treated with 'integration by parts' techniques, and it clarifies the relation between boundary conditions and boundary phenomena. The new theory has implications for a broad range of wave propagation phenomena, including acoustic, electromagnetic, and elastic wave propagation, as well as Einstein's equations of general relativity.
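The 2N count can be illustrated with a schematic example (a model wave system, not taken from the paper). For a second-order system with N unknowns, a naive reduction that carries the solution along with its derivatives enlarges the system beyond 2N and introduces side conditions, whereas reducing in the derivatives alone gives exactly 2N equations:

```latex
% Model second-order system: u_{tt} = A u_{xx}, with u(x,t) in R^N.
% Naive reduction: carry U = (u, u_t, u_x)^T, giving 3N first-order
% equations plus the side condition \partial_x u = u_x, which the
% solution of the enlarged system must satisfy.
% Reducing in the derivatives alone gives exactly 2N equations:
\[
  \partial_t
  \begin{pmatrix} u_t \\ u_x \end{pmatrix}
  =
  \begin{pmatrix} 0 & A \\ I & 0 \end{pmatrix}
  \partial_x
  \begin{pmatrix} u_t \\ u_x \end{pmatrix},
\]
% with no side conditions on the 2N unknowns (u_t, u_x).
```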

Contact:  Lori Diachin,
Professor to Describe New Algorithm Used in Solving Challenging Problems on GPUs

Dan Negrut, an assistant professor at the University of Wisconsin-Madison, has been invited by NVIDIA to deliver one of only three GPU application talks at the first GPU Technology Conference, which will take place in San Jose Sept. 30-Oct. 2. Negrut was a visiting scholar in the Mathematics and Computer Science Division at Argonne National Laboratory in 2004-2005, and he recently spent part of the summer of 2009 conducting research with Argonne computer scientists on the dynamics of complex multibody systems. He will present his work on "Driving on Mars: Simulating Tracked Vehicle Operation on Granular Terrain," which uses a cone complementarity algorithm developed as part of Argonne's applied mathematics program. The highly parallel structure of graphics processing units has attracted considerable interest as an alternative to general-purpose processors for solving challenging problems in science and engineering.

Contact:  Gail Pieper,
LBNL's Center for Computational Sciences and Engineering Shares Research Results

John Bell, group leader of the Center for Computational Sciences and Engineering, and Aleksandar Donev, an Alvarez postdoctoral fellow working in CCSE, both gave presentations at the "DSMC: Theory, Methods and Applications" (DSMC09) conference held Sept. 13-16 in Santa Fe, NM. The Direct Simulation Monte Carlo (DSMC) conference is a four-day workshop bringing together DSMC developers and practitioners.

Bell's invited lecture on "Modeling of Fluctuations in Algorithm Refinement Methods" opened the Wednesday, Sept. 16, session on Fluctuations and Granular Gases. Donev's contributed talk, "Fluctuating Hydrodynamics of Non-Ideal Fluids via Stochastic Hard-Sphere Molecular Dynamics (SHSD)," followed Bell's presentation. Read more about the conference at:

George Pau, an Alvarez postdoctoral fellow working in CCSE, gave a contributed talk and poster at the TOUGH Symposium 2009, held September 14-16 at LBNL. The symposium presented applications and enhancements to the TOUGH simulator for multiphase fluid, heat, and chemical transport. Pau's poster, presented in the "Carbon Dioxide Storage" section of the poster session, was on "High Resolution Studies of the Diffusion-Convection Process During CO2 Storage in Saline Aquifers." His talk on September 16 was entitled "Parallel Second-Order Adaptive Mesh Algorithm for Reactive Flow in Geochemical Systems."

Andy Aspden, holder of LBNL's prestigious Seaborg Fellowship and a member of CCSE, is the co-author of the cover article in the September issue of the Journal of Fluid Mechanics, published by Cambridge University Press. The paper is titled "The effect of sudden buoyancy flux increases on turbulent plumes" and is authored by M. M. Scase, A. J. Aspden, and C. P. Caulfield.

Contact:  Linda Vu
Sandia Develops Predictability Assessment in Stochastic Reaction Networks

Sandia researchers have developed an approach for parametric uncertainty quantification in stochastic reaction networks. Bayesian inference techniques are used to find the best Polynomial Chaos representation of the stochastic system state, taking into account not only intrinsic stochastic noise but also parametric uncertainties. A likelihood-based adaptive domain decomposition is introduced to handle multimodalities in the system state. The approach is shown to capture the correct system behavior for a benchmark bistable Schlögl model over a wide range of parameter variations. A journal article about this work is scheduled to appear in the October issue of the Journal of Computational and Theoretical Nanoscience.
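The Schlögl model named above is a standard bistable benchmark in stochastic chemical kinetics. A minimal trajectory simulation, using Gillespie's direct method with commonly used illustrative rate constants (not values from the Sandia study), can be sketched as:

```python
import math
import random

random.seed(1)

# Gillespie stochastic simulation of the bistable Schlogl model:
#   B1 + 2X -> 3X,  3X -> B1 + 2X,  B2 -> X,  X -> B2
# Rate constants and buffered pool sizes below are commonly used
# illustrative values for this benchmark.
C1, C2, C3, C4 = 3e-7, 1e-4, 1e-3, 3.5
N1, N2 = 1e5, 2e5   # buffered populations of species B1 and B2

def schlogl_ssa(x0=250, t_end=5.0):
    """Simulate one trajectory; return the copy number of X at t_end."""
    x, t = x0, 0.0
    while t < t_end:
        a1 = C1 / 2.0 * N1 * x * (x - 1)          # birth via autocatalysis
        a2 = C2 / 6.0 * x * (x - 1) * (x - 2)     # reverse autocatalysis
        a3 = C3 * N2                              # constant influx from B2
        a4 = C4 * x                               # decay back to B2
        a0 = a1 + a2 + a3 + a4
        t += -math.log(1.0 - random.random()) / a0  # exponential waiting time
        r = random.random() * a0                    # choose which reaction fires
        if r < a1:
            x += 1
        elif r < a1 + a2:
            x -= 1
        elif r < a1 + a2 + a3:
            x += 1
        else:
            x -= 1
    return x

final_x = schlogl_ssa()
print(final_x)
```

Repeated runs from the same initial state settle near one of two distinct stable populations, which is the multimodality that the adaptive domain decomposition described above is designed to handle.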

Contact: Bert Debusschere,
New PNNL Projects Seek Ways to Improve Power Grid Reliability

PNNL researchers recently won ASCR-funded projects aimed at improving the operations of the electrical grid. Researcher Henry Huang has been funded to develop advanced Kalman filtering techniques and formulate dynamic state estimations that will fundamentally change the paradigm of real-time operations and control for complex engineering systems. If successful, the research will have a revolutionary impact on how complex systems are operated and ultimately lead to better decision-making and control of complex systems.
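For readers unfamiliar with the technique, a Kalman filter fuses a dynamic model's prediction with noisy measurements at each time step. A minimal scalar sketch (a toy random-walk state model, entirely illustrative and far simpler than power-grid dynamic state estimation) looks like:

```python
import random

random.seed(2)

# Minimal scalar Kalman filter tracking a slowly drifting quantity
# from noisy measurements.
Q = 0.01   # process noise variance (how fast the true state drifts)
R = 1.0    # measurement noise variance

def kalman_step(x_est, p_est, z):
    """One predict/update cycle for a random-walk state model."""
    # Predict: state persists, uncertainty grows by process noise.
    x_pred, p_pred = x_est, p_est + Q
    # Update: blend prediction with measurement z via the Kalman gain.
    k = p_pred / (p_pred + R)
    return x_pred + k * (z - x_pred), (1 - k) * p_pred

truth, x_est, p_est = 0.0, 0.0, 1.0
err_raw = err_kf = 0.0
for _ in range(500):
    truth += random.gauss(0.0, Q ** 0.5)       # true state random walk
    z = truth + random.gauss(0.0, R ** 0.5)    # noisy measurement
    x_est, p_est = kalman_step(x_est, p_est, z)
    err_raw += (z - truth) ** 2
    err_kf += (x_est - truth) ** 2
print(err_kf / err_raw)   # ratio well below 1: filtering beats raw measurements
```

Dynamic state estimation for the grid extends this idea to large vector-valued system models, but the predict/update structure is the same.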

ASCR also has funded researcher Shuai Lu to build the mathematical framework and develop methodologies for real-time validation and calibration of the models and parameters of large interconnected time-variant systems, using online measurement data continuously collected during system operation. The framework will be applied to the electric power system, where accurate models are critical to system control and reliability. This approach will greatly improve the accuracy of the system models used in online contingency analysis, which simulates the consequences of severe system element failures and identifies remedial control actions.

Contact:  Sue Chin,


Jeff Nichols to Head ORNL Computing Programs

Jeff Nichols has been named associate laboratory director for ORNL's Computing and Computational Sciences Directorate, succeeding Thomas Zacharia. Nichols' appointment is effective October 1. He has filled the job in an interim capacity since April, when Zacharia was named deputy laboratory director for science and technology at ORNL and senior vice president for science and technology at UT-Battelle. Nichols had been the directorate's deputy associate laboratory director since December 2007.

He came to ORNL in 2002, serving as director of the Computer Science and Mathematics Division. Before coming to ORNL he was deputy director of the Environmental Molecular Sciences Laboratory at Pacific Northwest National Laboratory. Nichols received a doctorate in theoretical physical chemistry in 1983 from Texas A&M University and bachelor's degrees in chemistry and mathematics from Malone College in 1978.

Contact:  Jayson Hines,
ALCF's Beckman Named ACM Senior Member

Pete Beckman, director of the Argonne Leadership Computing Facility (ALCF), has been named a Senior Member of the Association for Computing Machinery (ACM). The ACM Senior Member program, initiated in 2006, includes members with at least 10 years of professional experience who have demonstrated performance that sets them apart from their peers through technical leadership and technical or professional contributions. As one of ACM's prestigious Advanced Member Grades, ACM Senior Member status recognizes the top 25% of ACM Professional Members for their demonstrated excellence in the computing field. Along with Fellows and Distinguished Engineers, Scientists, and Members, ACM's Senior Members join a distinguished list of colleagues to whom ACM and its members look for guidance and leadership in computing and information technology.

Contact:  Pete Beckman,
LLNL's Chandrika Kamath Publishes New Book on Scientific Data Mining

Lawrence Livermore researcher Chandrika Kamath has published a new SIAM book entitled "Scientific Data Mining: A Practical Perspective." The book describes how techniques from the multi-disciplinary field of data mining can be used to address the modern problem of data overload in science and engineering domains such as medicine, remote sensing, astronomy and high-energy physics. It starts with a survey of analysis problems in different applications, identifies the common themes across these domains, and uses them to define an end-to-end process of scientific data mining. This multi-step process includes tasks such as processing the raw image or mesh data to identify objects of interest; extracting relevant features describing the objects; detecting patterns among the objects; and displaying the patterns for validation by the scientists. The majority of the book describes how techniques from disciplines such as image and video processing, statistics, machine learning, pattern recognition and mathematical optimization can be used for the tasks in each step. It also includes a description of software systems developed for scientific data mining and general guidelines for getting started on the analysis of massive, complex data sets. More information can be found at the following link:

Michael Dell Visits Lawrence Livermore National Laboratory

Michael Dell, founder and CEO of Dell, Inc., visited LLNL on Sept. 18 to meet with senior managers and discuss collaborative work to advance Linux cluster high-performance computing. Dell is one of the key partners on the Hyperion project, which brings together 10 industrial partners (including Dell) and LLNL to accelerate the development of next-generation Linux clusters. During the visit, he was also shown the recently acquired Dell Coastal system (a Linux cluster with 1,152 nodes, 9,216 cores, and 88.5 Tflop/s peak speed) that will initially be dedicated to science supporting the National Ignition Campaign.

Contact:  Lori Diachin,
NERSC's Kathy Yelick Will Give Keynote Address at PGAS Conference

NERSC Director Kathy Yelick will give an invited keynote address on “Beyond UPC” at the Third Conference on Partitioned Global Address Space (PGAS) Programming Models, October 5–8 at George Washington University in Ashburn, Virginia. Yelick, who is the PI of the Berkeley Unified Parallel C project, will also participate in a panel discussing “Is the PGAS model appropriate for heterogeneous computing?”

ORNL's Buddy Bland Keynotes at 2009 Computing in Atmospheric Sciences Workshop

ORNL Leadership Computing Facility Project Director Arthur S. “Buddy” Bland was a keynote speaker at the 2009 Computing in Atmospheric Sciences Workshop. The workshop, held Sept. 13-16 in Annecy, France, was the ninth meeting in a series of biennial workshops hosted by the National Center for Atmospheric Research, sponsored by the U.S. National Science Foundation. The workshop serves as a forum for international specialists to discuss advances in information technology and the evolving infrastructures that allow scientists to explore atmospheric issues as part of the Earth system model. Topics discussed at the forum included international progress reports and plans from national weather, climate and research laboratories, challenges facing high performance computing facilities, and performance of weather forecasting and Earth system model applications on high-end supercomputers.

Bland’s presentation at the workshop focused on challenges facing high performance computing facilities such as the OLCF located at ORNL, home to the Cray supercomputer, Jaguar. As supercomputers continue to grow in speed, their power usage grows as well. Jaguar, currently at 1.64 petaflops, uses over seven megawatts of power, and it is expected that high performance machines will use in excess of 10 megawatts of power in the next few years. Bland discussed the forces that drive the power requirements of high performance systems, the demands placed on the facilities that house these systems, and how ORNL is dealing with both of these issues.

Contact:  Jayson Hines,
John Shalf to Give Keynote Address at iWAPT2009 in Japan

John Shalf, head of NERSC's Science-Driven System Architecture team, will be a keynote speaker at the Fourth International Workshop on Automatic Performance Tuning (iWAPT2009), October 1-2 at the University of Tokyo, Japan. Shalf's talk, "Green Flash: Extreme-Scale Computing on a Finite Power Budget," will describe the Green Flash research project, which is developing an energy-efficient computing architecture that uses auto-tuning methods for both the hardware and software optimization. (Osni Marques of CRD is on the workshop program committee.)

While in Japan, Shalf will also visit Kyoto University, home of one of the major HPC systems in Japan. He will meet with faculty who are working on Japan's 10 petaflop computing project, in which Japan is investing close to $1 billion, and learn more about the architecture plans. Shalf will give a presentation describing NERSC, its application workload, and its research interests in order to find areas of mutual interest for potential research collaborations.

Contact:  Jon Bashor,
ESnet's Bill Johnston to Give Opening Address at Grid Workshop in Poland

Bill Johnston, who retired last year as head of ESnet and now acts as scientific liaison for the network, has been invited to give the opening plenary address at the Krakow Grid Workshop. Bill's talk is entitled, "Progress in Integrating Networks with Service Oriented Architectures / Grids: ESnet's Guaranteed Bandwidth Service." The workshop is being held Oct. 12-14 in Krakow, Poland.

This workshop is one of several significant European Grid-in-the-service-of-science events that draws science Grid developers from around Europe. According to Johnston, the workshop is an important opportunity to spread the ESnet message of the importance of service-oriented network capabilities to key science communities in Europe. These communities will be ESnet allies in the work to get virtual circuit services instantiated in European networks that connect key collaborators in DOE science.

For more information, go to:


ESnet Honored as One of Top 10 Government IT Innovators

Once a year, InformationWeek magazine honors the most innovative players in the field of information technology, including the top 10 government agency innovators. And on Sept. 14, the DOE's Energy Sciences Network (ESnet) was recognized as a member of this select group for its work helping thousands of researchers worldwide manage the massive amounts of scientific data stemming from the application of petascale supercomputers and high-precision instruments to cutting-edge disciplines such as climate science, high energy physics, astrophysics and genomics.

ESnet, which provides the high-bandwidth networking infrastructure supporting DOE researchers, implemented a highly innovative design consisting of two separate, parallel networks. The first network, called the IP network, provides reliable global Internet connectivity. The second network, called the Science Data Network (SDN), provides circuit-oriented services tailored for large-scale science needs. SDN circuits provide guaranteed, end-to-end connections with a variety of customizable services, including traffic isolation and advance reservation of network capacity.

OLCF Cray XT5 Undergoes Upgrade to 2 Petaflop/s

The Oak Ridge Leadership Computing Facility's Cray XT, Jaguar, is undergoing an upgrade that will increase the peak performance of the machine from approximately 1.6 quadrillion calculations per second (or petaflop/s) to over 2 petaflop/s. The process of upgrading the machine is tentatively scheduled to occur in five phases over the course of 14 weeks, ending in November 2009.

Cray's XT Jaguar is composed of the Jaguar XT5 and XT4 partitions, which together form the world's second fastest supercomputer for open research. Each of the XT5 partition's 18,688 compute nodes presently contains two quad-core AMD Opteron (Barcelona) processors, for a total of over 149,000 processing cores. Funded with Recovery Act money from the Department of Energy, the upgrade will replace each of the XT5 partition's quad-core processors with six-core AMD Opteron processors, code-named Istanbul. The result will bring the Jaguar XT5 partition's total to 224,256 processing cores (18,688 nodes × 2 processors × 6 cores).

The upgrade will proceed on a rolling basis, keeping large portions of the machine available for user access for most of the process. Each of the five phases of the upgrade is scheduled to take between two and four weeks, and involves removing dozens of Jaguar's 200 cabinets, each containing thousands of processing cores, for replacement. The XT4 partition will be available throughout the XT5 upgrade. Once the upgrade is completed, the XT5's new processors will be tested thoroughly to ensure reliable and improved performance.

Contact:  Jayson Hines,
Obama Honors IBM's Blue Gene with National Medal of Technology and Innovation

President Obama recognized IBM (NYSE: IBM) and its Blue Gene family of supercomputers with the National Medal of Technology and Innovation, the country's most prestigious award given to leading innovators for technological achievement. He will personally bestow the award at a special White House ceremony on October 7. IBM, which earned the National Medal of Technology and Innovation on eight other occasions, is the only company recognized with the award this year. The Argonne Leadership Computing Facility houses the powerful Blue Gene/P system named Intrepid, with a peak speed of 557 teraflops (TF) and a Linpack speed of 450 TF. Intrepid debuted in June 2008 as the world's fastest computer for open science and third fastest overall.

Blue Gene's speed and expandability have enabled business and science to address a wide range of complex problems and make more informed decisions - not just in the life sciences, but also in astronomy, climate, simulation and modeling, and many other areas. The influence of the Blue Gene supercomputer's energy-efficient design and computing model can be seen today across the information technology industry. Today, 18 of the top 20 most energy-efficient supercomputers in the world are built on IBM high performance computing technology, according to the latest Supercomputing "Green500 List," announced in July 2009.

Contact:  Pete Beckman,


ORNL Makes Jaguar Available to University Researchers through ORAU

Oak Ridge Associated Universities (ORAU) announces a call for proposals for a series of high-performance computing grants that would allow faculty and student research teams the opportunity to participate in research with the benefit of ORNL's computing resources and staff. The competitive grant program, open to ORAU's 97 member institutions, provides potential funding of up to $75,000 for three years, allowing participant teams the opportunity to take full advantage of ORNL's ultrascale computing resources for scientific discovery in any discipline.

The second annual ORAU/ORNL High-Performance Computing Grant competition presents an opportunity for university researchers to expand their existing research initiatives and demonstrate alignment with ORNL's cross-cutting science agenda as it relates to computing and computational sciences.

Contact:  Jayson Hines,
LBNL to Collaborate with University of Tokyo IT Center

Berkeley Lab Computing Sciences has signed a memorandum of understanding establishing a research collaboration with the University of Tokyo Information Technology Center. The research will involve high performance computing (HPC), computational science, and optimization of applications performance on HPC systems, for example:

  • Numerical libraries for scientific computation
  • Couplers for multi-physics simulations
  • Scientific computing benchmarks
  • Parallel programming models for peta/exascale systems
  • Auto-tuning technologies for scientific computing
  • Parallel algorithms for eigenvalue calculations and sparse direct/iterative solvers for systems of linear equations
  • Runtime system for peta/exascale systems
The agreement also provides mutual access to facilities for research and an employee exchange opportunity.
ALCF "Performance Evaluation Using TAU" Workshop Aids Blue Gene/P Users

On September 22-23, the Argonne Leadership Computing Facility (ALCF) sponsored "Performance Evaluation Using TAU," a two-day, hands-on workshop for Blue Gene/P users. Led by Sameer Shende of ParaTools, Inc., and aided by hands-on help from ALCF staff members, computational scientists evaluated the performance of their parallel scientific applications on the Blue Gene/P with the TAU Performance System®. The scientists learned how to use TAU to collect and analyze performance data in order to enhance the scalability of their applications. The workshop was very well received; one attendee plans to add the TAU manual distributed at the workshop to a collection of performance tools documentation.

Contact:  Nick Romero,
LBNL's Yelick and Shalf Speak at SPEEDUP Workshop on HPC in Switzerland

The 38th SPEEDUP Workshop on High-Performance Computing was held last week (September 7–8) in Lausanne, Switzerland, with a focus on multicore computing and parallel languages. NERSC Director Kathy Yelick was there to give a presentation on Unified Parallel C (UPC), and NERSC Architecture Team lead John Shalf spoke on “Green Flash: Exaflop Computing on a Petaflop Power Budget.”

LBNL Hosts Software Best Practices Workshop in San Francisco

More than 60 HPC experts attended the SciDAC Outreach Center's Third Workshop on HPC Best Practices, "HPC Center Software Lifecycles," held September 28-29 at the Hotel Nikko in San Francisco. The purpose of the workshop was to identify best practices for creating and maintaining reliable and sustainable software for use at HPC centers. David Skinner of NERSC co-chaired a breakout session on Tools; Tony Drummond of CRD co-chaired a session on Libraries; Shane Canon of NERSC co-chaired System Software; Craig Tull of CRD co-chaired Planning; and Deb Agarwal of CRD co-chaired Development.

The workshop is sponsored by two DOE offices: the Office of Advanced Scientific Computing Research (ASCR) in the Office of Science, and Advanced Simulation and Computing in the National Nuclear Security Administration.

Argonne Sponsors Grace Hopper Conference

The 9th Annual Grace Hopper Celebration of Women in Computing Conference, held September 30-October 3 in Tucson, Arizona, focused on “Creating Technology for Social Good.”  The world’s largest gathering of women in computing recognized the significant role that women play in defining technology used to solve social issues.  Conference presenters were leaders in their respective fields, representing industry, academia, and government.  This year, Argonne’s Computing, Environment, and Life Sciences directorate, along with the Leadership Computing Facility, Computing and Information Systems, Mathematics and Computer Science, and Human Resources divisions, were Bronze sponsors of the conference.  Argonne staff members Jan Griffin, Staff Assistant Senior, Computing, Environment and Life Sciences; Yu Huang, Principal Software Engineer, Engineering Support, Advanced Photon Source; and Barbara Kreaseck, Faculty Participant, Mathematics and Computer Science Division (also with La Sierra University) participated in the event.

OLCF, NERSC Specialists Attend HPC User Forum, Share Expertise

Scientific computing specialists from ORNL and NERSC traveled to Broomfield, CO, to share their expertise at the 33rd High-Performance Computing (HPC) User Forum, held Sept. 8-10. The forum is aimed at advancing the state of high-performance computing through open discussions between HPC users in industry, government and academia, as well as HPC vendors and other interested parties.

“It’s a user-oriented forum that provides a venue within which to have quarterly discussions about the state and future of the field of high-performance computing,” explained Doug Kothe, director of science at the Oak Ridge Leadership Computing Facility (OLCF) and member of the HPC User Forum steering committee. Other ORNL scientific computing specialists in attendance were Jim Hack, director of the National Center for Computational Sciences; Galen Shipman, leader of the Technology Integration group at the OLCF; and Trey White, research computer scientist at the OLCF. The application theme of the forum was climate, weather, and Earth sciences.

Hack gave the first presentation of the forum, on the role of high-performance computing in understanding and positively reacting to global climate change. Kothe chaired a panel on HPC application scaling issues, requirements and trends. The nine-member panel responded to open-ended questions that encouraged them not only to reflect on the current state of high-performance computing, but also to look forward to the next generation of challenges that will face the field. Among the panel members were Shipman and White. Shipman closed the forum with a presentation on “Spider,” the largest Lustre-based file system.

NERSC User Services Group Leader Jonathan Carter participated in a “Technical Panel on HPC Application Scaling Issues, Requirements and Trends — Scalability: What works, what doesn't?” This session addressed the subject of application scaling using various computing architectures and programming models. Panelists were asked to discuss the various approaches to scalability for these different architectures/models and comment on how various applications fit or do not fit well onto the architecture/model.