Abstracts


Basic Research Needs Workshop on Synthesis Science for Energy Relevant Technology

This report, which is the result of the Basic Energy Sciences Workshop on Basic Research Needs for Synthesis Science for Energy Technologies, lays out the scientific challenges and opportunities in synthesis science.

The workshop was attended by more than 100 leading national and international scientific experts. Its five topical and two crosscutting panels identified four priority research directions (PRDs) for realizing the vision of predictive, science-directed synthesis:


  1. Achieve mechanistic control of synthesis to access new states of matter

    The opportunities for synthesizing new materials are almost limitless. The challenge is to combine prior experience and examples with new theoretical, computational, and experimental tools in a measured way that will allow us to tease out specific molecular structures with targeted properties. Harnessing the rulebook that atoms and molecules use to self-assemble will accelerate the discovery of new matter and the means to most effectively make it.

  2. Accelerate materials discovery by exploiting extreme conditions, complex chemistries and molecules, and interfacial systems

    Even as our theoretical understanding of synthetic processes increases, many future discoveries will come from regions of parameter space that are relatively unexplored and beyond current predictive capabilities. These include extreme conditions of high fluxes, fields, and forces; complex chemistries and heterogeneous structures; and the high information content made possible by sequence-defined macromolecules such as DNA. This PRD emphasizes that materials synthesis will remain a voyage of discovery, and that synthetic, characterization, and theoretical tools will need to continuously adapt to new developments.

  3. Harness the complex functionality of hierarchical matter

    Hierarchical matter exploits the coupling among the different types of atomic assemblies, or heterogeneities, distributed across multiple length scales. These interactions lead to emergent properties not possible in homogeneous materials. Dramatic advances in the complex functions required for energy production, storage, and use will result from control over the transport of charge, mass, and spin; dissipative response to external stimuli; and localization of sequential and parallel chemical reactions made possible by hierarchical matter.

  4. Integrate emerging theoretical, computational, and in situ characterization tools to achieve directed synthesis with real-time adaptive control

    Theory, computation, and characterization are critical components of the effective discovery and design of new molecules and materials. Predicting the final composition and structure is important but insufficient; it is also critical to know and predict how materials assemble and what the consequences of the assembly are for final material properties. Combining in situ probes with theory and modeling to guide the synthetic process in real time, while allowing adaptive control to accommodate system variations, will dramatically shorten the time and energy requirements for the development of new molecules and materials.

The historical impact of chemistry and materials on society makes a compelling case for developing a foundational science of synthesis. Doing so will enable the quick prediction and discovery of new molecules and materials and mastery of their synthesis for rapid deployment in new technologies, especially those for energy generation and end use. The PRDs identified in this workshop hold the promise of enabling the dream of synthesizing these new molecules and materials on demand by finally realizing the ability to link predictive design to predictive synthesis.


BES Workshop on Future Electron Sources

The DOE Office of Basic Energy Sciences (BES) sponsored the Future Electron Sources workshop to identify opportunities and needs for injector development at existing and future BES facilities. The workshop was held at the SLAC National Accelerator Laboratory on September 8-9, 2016. The workshop assessed the state of the art and future development requirements, with emphasis on the underlying engineering, science, and technology necessary to realize the next generation of electron injectors to advance photon-based science. A major objective was to optimize the performance of free electron laser facilities, which are presently limited in x-ray power and spectral coverage by the unavailability of suitable injectors. An ultrafast, ultrabright electron source is also required for advances in Ultrafast Electron Diffraction (UED) and future Ultrafast Electron Microscopy (UEM). The scope included normal-conducting and superconducting RF injectors, as well as better-performing cathodes and simulation tools. The workshop explored opportunities for discovery enabled by advanced electron sources and the challenges anticipated from advances in source brightness, and it identified processes to enhance interactions and collaborations among DOE laboratories so that their resources and skills are used most effectively to advance scientific frontiers in energy-relevant areas.

The goals of this workshop were to:

  • Evaluate the present state of the art in electron injectors
  • Identify the gaps in current electron source capabilities and the developments that should have high priority to support current and future photon-based science
  • Identify the engineering, science, and technology challenges
  • Identify methods of interaction and collaboration among the facilities so that resources are most effectively focused on key problems
  • Generate a report of the workshop activities, including a prioritized list of research directions to address the key challenges

Workshop participants emphasized that advances in all major technical areas of electron sources are required to meet future X-ray and electron scattering instrument needs.


BES Computing and Data Requirements in the Exascale Age 

Computers have revolutionized every aspect of our lives. Yet in science, the most tantalizing applications of computing lie just beyond our reach. The current quest to build an exascale computer with one thousand times the capability of today’s fastest machines (and more than a million times that of a laptop) will take researchers over the next horizon. The field of materials, chemical reactions, and compounds is inherently complex. Imagine millions of new materials with new functionalities waiting to be discovered — while researchers also seek to extend those materials that are known to a dizzying number of new forms. We could translate massive amounts of data from high precision experiments into new understanding through data mining and analysis. We could have at our disposal the ability to predict the properties of these materials, to follow their transformations during reactions on an atom-by-atom basis, and to discover completely new chemical pathways or physical states of matter. Extending these predictions from the nanoscale to the mesoscale, from the ultrafast world of reactions to long-time simulations to predict the lifetime performance of materials, and to the discovery of new materials and processes will have a profound impact on energy technology. In addition, discovery of new materials is vital to move computing beyond Moore’s law. To realize this vision, more than hardware is needed. New algorithms to take advantage of the increase in computing power, new programming paradigms, and new ways of mining massive data sets are needed as well. This report summarizes the opportunities and the requisite computing ecosystem needed to realize the potential before us.

In addition to pursuing new and more complete physical models and theoretical frameworks, this review found that the following broadly grouped areas relevant to the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) would directly affect the Basic Energy Sciences (BES) mission need.


Basic Research Needs Workshop on Quantum Materials for Energy Relevant Technology

Imagine future computers that can perform calculations a million times faster than today’s most powerful supercomputers at only a tiny fraction of the energy cost. Imagine power being generated, stored, and then transported across the national grid with nearly no loss. Imagine ultrasensitive sensors that keep us in the loop on what is happening at home or work, warn us when something is going wrong around us, keep us safe from pathogens, and provide unprecedented control of manufacturing and chemical processes. And imagine smart windows, smart clothes, smart buildings, supersmart personal electronics, and many other items — all made from materials that can change their properties “on demand” to carry out the functions we want. The key to attaining these technological possibilities in the 21st century is a new class of materials largely unknown to the general public at this time but destined to become as familiar as silicon. Welcome to the world of quantum materials — materials in which the extraordinary effects of quantum mechanics give rise to exotic and often incredible properties.


Sustainable Ammonia Synthesis – Exploring the scientific challenges associated with discovering alternative, sustainable processes for ammonia production

Ammonia (NH3) is essential to all life on our planet. Until about 100 years ago, NH3 produced by reduction of dinitrogen (N2) in air came almost exclusively from bacteria containing the enzyme nitrogenase.

DOE convened a roundtable of experts on February 18, 2016.

Participants in the Roundtable discussions concluded that the scientific basis for sustainable ammonia synthesis is currently lacking and must be substantially strengthened before it can form the foundation for alternative processes. The Roundtable Panel identified an overarching grand challenge and several additional scientific grand challenges and research opportunities:

  • Discovery of active, selective, scalable, long-lived catalysts for sustainable ammonia synthesis
  • Development of relatively low-pressure (<10 atm) and relatively low-temperature (<200 °C) thermal processes (see the worked note following this list)
  • Integration of knowledge from nature (enzyme catalysis), molecular/homogeneous catalysis, and heterogeneous catalysis
  • Development of electrochemical and photochemical routes for N2 reduction based on proton and electron transfer
  • Development of biochemical routes to N2 reduction
  • Development of chemical looping (solar thermochemical) approaches
  • Identification of descriptors of catalytic activity using a combination of theory and experiments
  • Characterization of surface adsorbates and catalyst structures (chemical, physical, and electronic) under conditions relevant to ammonia synthesis
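
The difficulty captured by the second opportunity follows from textbook thermodynamics. As an illustrative, back-of-envelope note using standard handbook values (these figures are not from the Roundtable report):

\[
\mathrm{N_2 + 3H_2 \rightleftharpoons 2NH_3}, \qquad \Delta H^\circ \approx -92\ \mathrm{kJ\,mol^{-1}}, \qquad \Delta S^\circ \approx -198\ \mathrm{J\,mol^{-1}\,K^{-1}}
\]

Because the reaction is exothermic and entropy-reducing, the van 't Hoff relation \( \mathrm{d}\ln K/\mathrm{d}T = \Delta H^\circ/(RT^2) \) implies that equilibrium conversion improves as temperature falls, while Le Chatelier's principle favors high pressure. At 298 K, \( \Delta G^\circ \approx -33\ \mathrm{kJ\,mol^{-1}} \), so ambient-condition synthesis is thermodynamically allowed; Haber-Bosch runs hot (roughly 400-500 °C) and at high pressure (roughly 150-300 atm) only because known catalysts are too slow at lower temperatures. The low-pressure, low-temperature target is thus fundamentally a kinetics problem, which is why catalyst discovery heads the list.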


Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

Computation in its many forms is the engine that fuels our modern civilization. Modern computation—based on the von Neumann architecture—has until now delivered continuous improvements, as predicted by Moore’s law. However, computation using current architectures and materials will inevitably—within the next 10 years—reach a limit for fundamental scientific reasons.

DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions:

Can brain-like (“neuromorphic”) computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS-based technology? If so, what are the basic research challenges for materials science and computing?

The overarching answer that emerged was:

The development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully “neuromorphic” computer.

To address this challenge, the following issues were considered:

  • The main differences between neuromorphic and conventional computing as related to: signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning
  • New neuromorphic architectures needed to deliver lower energy consumption, exploit potentially novel nanostructured materials, and enhance computation
  • Device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance
  • Comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability


Basic Research Needs for Environmental Management

This report is based on a BES/BER/ASCR workshop on Basic Research Needs for Environmental Management, which was held on July 8-11, 2015. The workshop goal was to define priority research directions that will provide the scientific foundations for future environmental management technologies, which will enable more efficient, cost-effective, and safer cleanup of nuclear waste.

One of the US Department of Energy’s (DOE) biggest challenges today is cleanup of the legacy resulting from more than half a century of nuclear weapons production. The research and manufacturing associated with the development of the nation’s nuclear arsenal have left behind staggering quantities of highly complex, highly radioactive wastes and contaminated soils and groundwater. Based on current knowledge of these legacy problems and currently available technologies, DOE projects that hundreds of billions of dollars and more than 50 years of effort will be required for remediation.

Over the past decade, DOE’s progress towards cleanup has been stymied in part by a lack of investment in basic science that is foundational to innovation and new technology development. During this decade, amazing progress has been made in the experimental and computational tools applied to many energy problems, such as catalysis, bioenergy, and solar energy. Our abilities to observe, model, and exploit chemical phenomena at the atomic level, along with our understanding of the translation of molecular phenomena to macroscopic behavior and properties, have advanced tremendously; however, remediation of DOE’s legacy waste problems has not yet benefited from these advances because of the lack of investment in basic science for environmental cleanup.

Advances in science and technology can provide the foundation for completing the cleanup more swiftly, inexpensively, safely, and effectively. The lack of investment in research and technology development by DOE’s Office of Environmental Management (EM) was noted in a report by a task force to the Secretary of Energy Advisory Board (SEAB 2014). Among several recommendations, the report suggested a workshop be convened to develop a strategic plan for a “fundamental research program focused on developing new knowledge and capabilities that bear on the EM challenges.” This report summarizes the research directions identified at a workshop on Basic Research Needs for Environmental Management. This workshop, held July 8-11, 2015, was sponsored by three Office of Science offices: Basic Energy Sciences, Biological and Environmental Research, and Advanced Scientific Computing Research. The workshop participants included 65 scientists and engineers from universities, industry, and national laboratories, along with observers from the DOE Offices of Science, EM, Nuclear Energy, and Legacy Management.

As a result of the discussions at the workshop, participants articulated two Grand Challenges for science associated with EM cleanup needs. They are as follows:

Interrogation of Inaccessible Environments over Extremes of Time and Space

Whether the contamination problem involves highly radioactive materials in underground waste tanks or large volumes of contaminated soils and groundwaters beneath the Earth’s surface, characterizing the problem is often stymied by an inability to safely and cost-effectively interrogate the system. Sensors and imaging capabilities that can operate in the extreme environments typical of EM’s remaining cleanup challenges do not exist. Alternatively, large amounts of data can sometimes be obtained about a system, but appropriate data analytics tools are lacking to enable effective and efficient use of all the information for performance regression or prediction. New approaches for remote and in situ sensing, and new algorithms for data analytics, are critically needed. Depending on the cleanup problem, these new approaches must span temporal and spatial scales—from seconds to millennia, from atoms to kilometers.

Understanding and Exploiting Interfacial Phenomena in Extreme Environments

While many of EM’s remaining cleanup problems involve unprecedented extremes in complexity, an additional layer is provided by the numerous contaminant forms and their partitioning across interfaces in these wastes, including liquid-liquid, liquid-solid, and others. For example, the wastes in the high-level radioactive waste tanks can have consistencies of pastes, gels, or non-Newtonian slurries, where water behaves more like a solute than a solvent. Unexpected chemical forms of the contaminants and radionuclides partition to unusual solids, colloids, and other phases in the tank wastes, complicating their efficient separation. Mastery of the chemistry controlling contaminant speciation and its behavior at the solid-liquid and liquid-liquid interfaces in the presence of large quantities of ionizing radiation is needed to develop improved waste treatment approaches and enhance the operating efficiencies of treatment facilities. These same interfacial processes, if understood, can be exploited to develop entirely new approaches for effective separations technologies, both for tank waste processing and subsurface remediation.

Based on the findings of the technical panels, six Priority Research Directions (PRDs) were identified as the most urgent scientific areas that need to be addressed to enable EM to meet its mission goals. All of these PRDs are also embodied in the two Grand Challenges. Further, these six PRDs are relevant to all aspects of EM waste issues, including tank wastes, waste forms, and subsurface contamination. These PRDs include the following:

  • Elucidating and exploiting complex speciation and reactivity far from equilibrium
  • Understanding and controlling chemical and physical processes at interfaces
  • Harnessing physical and chemical processes to revolutionize separations
  • Mechanisms of materials degradation in harsh environments
  • Mastering hierarchical structures to tailor waste forms
  • Predictive understanding of subsurface system behavior and response to perturbations.

Two recurring themes emerged during the course of the workshop that cut across all of the PRDs. These crosscutting topics give rise to Transformative Research Capabilities. The first such capability, Multidimensional characterization of extreme, dynamic, and inaccessible environments, centers on the need for obtaining detailed chemical and physical information on EM wastes in waste tanks and during waste processing, in waste forms, and in the environment. New approaches are needed to characterize and monitor these highly hazardous and/or inaccessible materials in their natural environment, using either in situ techniques or remote monitoring. These approaches are particularly suited for studying changes in the wastes over time and distance, for example. Such in situ and remote techniques are also critical for monitoring the effectiveness of waste processes, subsurface transport, and long-term waste form stability. However, far more detailed information will be needed to obtain fundamental insight into materials structure and molecular-level chemical and physical processes required for many of the PRDs. For these studies, samples must be retrieved and studied ex situ, but the hazardous nature of these samples requires special handling. Recent advances in nanoscience have catalyzed the development of high-sensitivity characterization tools—many of which are available at DOE user facilities, including radiological user facilities—and the means of handling ultrasmall samples, including micro- and nanofluidics and nanofabrication tools. These advances open the door to obtaining unprecedented information that is crucial to formulating concepts for new technologies to complete EM’s mission.

The sheer magnitude of the data needed to fully understand the complexity of EM wastes is daunting, but it is just the beginning. Additional data will need to be gathered to both monitor and predict changes—in tank wastes, during processing, in waste forms and in the subsurface over broad time and spatial scales. Therefore, the second Transformative Research Capability, Integrated simulation and data-enabled discovery, identified the need to develop curated databases and link experiments and theory through big-deep data methodologies. These state-of-the-art capabilities will be enabled by high-performance computing resources available at DOE user facilities.

The foundational knowledge to support innovation for EM cannot wait as the tank wastes continue to deteriorate and result in environmental, health, and safety issues. As clearly stated in the 2014 Secretary of Energy Advisory Board report, completion of EM’s remaining responsibilities will simply not be possible without significant innovation, and that innovation can be derived from use-inspired fundamental research, as described in this report. The breakthroughs that will evolve from this investment in basic science will reduce the overall risk and financial burden of cleanup while also increasing the probability of success. The time is now ripe to proceed with the basic science in support of more effective solutions for environmental management. The knowledge gleaned from this basic research will also have broad applicability to many other areas central to DOE’s mission, including separations methods for critical materials recovery and isotope production, robust materials for advanced reactor and steam turbine designs, and new capabilities for examining subsurface transport relevant to the water/energy nexus.


Challenges at the Frontiers of Matter and Energy: Transformative Opportunities for Discovery Science

  • FIVE TRANSFORMATIVE OPPORTUNITIES FOR DISCOVERY SCIENCE

    As a result of this effort, it has become clear that the progress made to date on the five Grand Challenges has created a springboard for seizing five new Transformative Opportunities that have the potential to further transform key technologies involving matter and energy. These five new Transformative Opportunities and the evidence supporting them are discussed in this new report, “Challenges at the Frontiers of Matter and Energy: Transformative Opportunities for Discovery Science.”
  • Mastering Hierarchical Architectures and Beyond-Equilibrium Matter

    Complex materials and chemical processes transmute matter and energy, for example from CO2 and water to chemical fuel in photosynthesis, from visible light to electricity in solar cells, and from electricity to light in light-emitting diodes (LEDs). Such functionality requires complex assemblies of heterogeneous materials in hierarchical architectures that display time-dependent, away-from-equilibrium behaviors. Much of the foundation of our understanding of such transformations, however, is based on monolithic single-phase materials operating at or near thermodynamic equilibrium. The emergent functionalities enabling next-generation disruptive energy technologies require mastering the design, synthesis, and control of complex hierarchical materials employing dynamic far-from-equilibrium behavior. A key guide in this pursuit is nature, for biological systems prove the power of hierarchical assembly and far-from-equilibrium behavior. The challenges here are many: a description of the functionality of hierarchical assemblies in terms of their constituent parts, a blueprint of atomic and molecular positions for each constituent part, and a synthesis strategy for (a) placing the atoms and molecules in the proper positions for the component parts and (b) arranging the component parts into the required hierarchical structure. Targeted functionality will open the door to significant advances in the harvesting, transformation (e.g., reducing CO2, splitting water, and fixing nitrogen), storage, and use of energy to create new materials, manufacturing processes, and technologies—the lifeblood of human societies and economic growth.
  • Beyond Ideal Materials and Systems: Understanding the Critical Roles of Heterogeneity, Interfaces, and Disorder

    Real materials, both natural ones and those we engineer, are usually a complex mixture of compositional and structural heterogeneities, interfaces, and disorder across all spatial and temporal scales. It is the fluctuations and disorderly states of these heterogeneities and interfaces that often determine the system’s properties and functionality. Much of our fundamental scientific knowledge is based on “ideal” systems, meaning materials that are observed in “frozen” states or represented by spatially or temporally averaged states. Too often, this approach has yielded overly simplistic models that hide important nuances and do not capture the complex behaviors of materials under realistic conditions. These behaviors drive vital chemical transformations such as catalysis, which initiates most industrial manufacturing processes, and friction and corrosion, the parasitic effects of which cost the U.S. economy billions of dollars annually. Expanding our scientific knowledge from the relative simplicity of ideal, perfectly ordered, or structurally averaged materials to the true complexity of real-world heterogeneities, interfaces, and disorder should enable us to realize enormous benefits in the materials and chemical sciences, which translates to the energy sciences, including solar and nuclear power, hydraulic fracturing, power conversion, airframes, and batteries.
  • Harnessing Coherence in Light and Matter

    Quantum coherence in light and matter is a measure of the extent to which a wave field vibrates in unison with itself at neighboring points in space and time. Although this phenomenon is expressed at the atomic and electronic scales, it can dominate the macroscopic properties of materials and chemical reactions such as superconductivity and efficient photosynthesis. In recent years, enormous progress has been made in recognizing, manipulating, and exploiting quantum coherence. This progress has already elucidated the role that symmetry plays in protecting coherence in key materials, taught us how to use light to manipulate atoms and molecules, and provided us with increasingly sophisticated techniques for controlling and probing the charges and spins of quantum coherent systems. With the arrival of new sources of coherent light and electron beams, thanks in large part to investments by the U.S. Department of Energy’s Office of Basic Energy Sciences (BES), there is now an opportunity to engineer coherence in heterostructures that incorporate multiple types of materials and to control complex, multistep chemical transformations. This approach will pave the way for quantum information processing and next-generation photovoltaic cells and sensors.
  • Revolutionary Advances in Models, Mathematics, Algorithms, Data, and Computing

    Science today is benefiting from a convergence of theoretical, mathematical, computational, and experimental capabilities that put us on the brink of greatly accelerating our ability to predict, synthesize, and control new materials and chemical processes, and to understand the complexities of matter across a range of scales. Imagine being able to chart a path through a vast sea of possible new materials to find a select few with desired properties. Instead of the time-honored forward approach, in which materials with desired properties are found through either trial-and-error experiments or lucky accidents, we have the opportunity to inversely design and create new materials that possess the properties we desire. The traditional approach has allowed us to make only a tiny fraction of all the materials that are theoretically possible. The inverse design approach, through the harmonious convergence of theoretical, mathematical, computational, and experimental capabilities, could usher in a virtual cornucopia of new materials with functionalities far beyond what nature can provide. Similarly, enhanced mathematical and computational capabilities significantly enhance our ability to extract physical and chemical insights from vastly larger data streams gathered during multimodal and multidimensional experiments using advanced characterization facilities.
  • Exploiting Transformative Advances in Imaging Capabilities across Multiple Scales

    Historically, improvements in imaging capabilities have always resulted in improved understanding of scientific phenomena. A prime challenge today is finding ways to reconstruct raw data, obtained by probing and mapping matter across multiple scales, into analyzable images. BES investments in new and improved imaging facilities, most notably synchrotron x-ray sources, free-electron lasers, electron microscopes, and neutron sources, have greatly advanced our powers of observation, as have substantial improvements in laboratory-scale technologies. Furthermore, BES is now planning or actively discussing exciting new capabilities. Taken together, these advances in imaging capabilities provide an opportunity to expand our ability to observe and study matter from the 3D spatial perspectives of today to true “4D” spatially and temporally resolved maps of dynamics that allow quantitative predictions of time-dependent material properties and chemical processes. The knowledge gained will impact data storage, catalyst design, drug delivery, structural materials, and medical implants, to name just a few key technologies.
  • ENABLING SUCCESS

    Seizing each of these five Transformative Opportunities, as well as accelerating further progress on Grand Challenge research, will require specific, targeted investments from BES in the areas of synthesis, meaning the ability to make the materials and architectures that are envisioned; instrumentation and tools, a category that includes theory and computation; and human capital, the most important asset for advancing the Grand Challenges and Transformative Opportunities. While “Challenges at the Frontiers of Matter and Energy: Transformative Opportunities for Discovery Science” could be viewed as a sequel to the original Grand Challenges report, it breaks much new ground in its assessment of the scientific landscape today versus the scientific landscape just a few years ago. In the original Grand Challenges report, it was noted that if the five Grand Challenges were met, our ability to direct matter and energy would be measured only by the limits of human imagination. This new report shows that, prodded by those challenges, the scientific community is positioned today to seize new opportunities whose impacts promise to be transformative for science and society, as well as dramatically accelerate progress in the pursuit of the original Grand Challenges.

Controlling Subsurface Fractures and Fluid Flow: A Basic Research Agenda

From beneath the surface of the earth, we currently obtain about 80 percent of the energy our nation consumes each year. In the future, we have the potential to generate billions of watts of electrical power from clean, green, geothermal energy sources. Our planet’s subsurface can also serve as a reservoir for storing energy produced from intermittent sources such as wind and solar, and it could provide safe, long-term storage of excess carbon dioxide, energy waste products, and other hazardous materials. However, it is impossible to overestimate the complexities of the subsurface world. These complexities challenge our ability to acquire the scientific knowledge needed for the efficient and safe exploitation of its resources.

To more effectively harness subsurface resources while mitigating the impacts of developing and using these resources, the U.S. Department of Energy established SubTER – the Subsurface Technology and Engineering RD&D Crosscut team. This DOE multi-office team engaged scientists and engineers from the national laboratories to assess and make recommendations for improving energy-related subsurface engineering. The SubTER team produced a plan with the overall objective of “adaptive control of subsurface fractures and fluid flow.” This plan revolved around four core technological pillars—Intelligent Wellbore Systems that sustain the integrity of the wellbore environment; Subsurface Stress and Induced Seismicity programs that guide and optimize sustainable energy strategies while reducing the risks associated with subsurface injections; Permeability Manipulation studies that improve methods of enhancing, impeding and eliminating fluid flow; and New Subsurface Signals that transform our ability to see into and characterize subsurface systems.

The SubTER team developed an extensive R&D plan for advancing technologies within these four core pillars and also identified several areas where new technologies would require additional basic research. In response, the Office of Science, through its Office of Basic Energy Sciences (BES), convened a roundtable of 15 national laboratory, university, and industry geoscience experts to brainstorm basic research areas that underpin the SubTER goals but are currently underrepresented in the BES research portfolio. At the roundtable, held in Germantown, Maryland, on May 22, 2015, participants developed the basic research agenda detailed in this report.

Highlights include the following:

  • A grand challenge calling for advanced imaging of stress and geological processes to help understand how stresses and chemical substances are distributed in the subsurface—knowledge that is critical to all aspects of subsurface engineering;
  • A priority research direction aimed at achieving control of fluid flow through fractured media;
  • A priority research direction aimed at better understanding how mechanical and geochemical perturbations to subsurface rock systems are coupled through fluid and mineral interactions;
  • A priority research direction aimed at studying the structure, permeability, reactivity and other properties of nanoporous rocks, like shale, which have become critical energy materials and exhibit important hallmarks of mesoscale materials;
  • A cross-cutting theme that would accelerate development of advanced computational methods to describe heterogeneous time-dependent geologic systems that could, among other potential benefits, provide new and vastly improved models of hydraulic fracturing and its environmental impacts;
  • A cross-cutting theme that would lead to the creation of “geo-architected materials” with controlled repeatable heterogeneity and structure that can be tested under a variety of thermal, hydraulic, chemical and mechanical conditions relevant to subsurface systems;
  • A cross-cutting theme calling for new laboratory studies on both natural and geo-architected subsurface materials that deploy advanced high-resolution 3D imaging and chemical analysis methods to determine the rates and mechanisms of fluid-rock processes, and to test predictive models of such phenomena.

Many of the key energy challenges of the future demand a greater understanding of the subsurface world in all of its complexity. This greater understanding will improve the ability to control and manipulate the subsurface world in ways that will benefit both the economy and the environment. This report provides specific basic research pathways to address some of the most fundamental issues of energy-related subsurface engineering.


Future of Electron Scattering and Diffraction

The ability to correlate the atomic- and nanoscale structure of condensed matter with physical properties (e.g., mechanical, electrical, catalytic, and optical) and functionality forms the core of many disciplines. Directing and controlling materials at the quantum, atomic, and molecular levels creates enormous challenges and opportunities across a wide spectrum of critical technologies, including those involving the generation and use of energy. The workshop identified next-generation electron scattering and diffraction instruments that are uniquely positioned to address these grand challenges. The workshop participants identified four key areas where the next generation of such instrumentation would have major impact:

A – Multidimensional Visualization of Real Materials
B – Atomic-scale Molecular Processes
C – Photonic Control of Emergence in Quantum Materials
D – Evolving Interfaces, Nucleation, and Mass Transport

Real materials are composed of complex three-dimensional arrangements of atoms and defects that directly determine their potential for energy applications. Understanding real materials requires new capabilities for three-dimensional atomic-scale tomography and spectroscopy of atomic and electronic structures with unprecedented sensitivity and with simultaneous spatial and energy resolution. Many molecules are able to selectively and efficiently convert sunlight into other forms of energy, like heat and electric current, or store it in altered chemical bonds. Understanding and controlling such processes at the atomic scale requires unprecedented time resolution. One of the grand challenges in condensed matter physics is to understand, and ultimately control, emergent phenomena in novel quantum materials; doing so necessitates a new generation of instruments that probe the interplay among spin, charge, orbital, and lattice degrees of freedom with intrinsic time- and length-scale resolutions. Molecules and soft matter require imaging and spectroscopy with high spatial resolution that does not damage their structure. The strong interaction of electrons with matter allows high-energy electron pulses to gather structural information before a sample is damaged.

Imaging, diffraction, and spectroscopy are the fundamental capabilities of electron-scattering instruments. The DOE BES-funded TEAM (Transmission Electron Aberration-corrected Microscope) project achieved unprecedented sub-atomic spatial resolution in imaging through aberration-corrected transmission electron microscopy. To further advance electron scattering techniques that directly enable groundbreaking science, instrumentation must advance beyond traditional two-dimensional imaging. Advances in temporal resolution, recording of the full phase and energy spaces, and improved spatial resolution constitute a new frontier in electron microscopy and will directly address the BES Grand Challenges, such as to “control the emergent properties that arise from the complex correlations of atomic and electronic constituents” and to reach the “hidden states” “very far away from equilibrium”. Ultrafast methods, such as the pump-probe approach, enable pathways toward understanding, and ultimately controlling, the chemical dynamics of molecular systems and the evolution of complexity in mesoscale and nanoscale systems. Central to understanding how to synthesize and exploit functional materials is the ability to apply external stimuli (such as heat, light, a reactive flux, and an electrical bias) and to observe the resulting dynamic processes in situ and in operando, under the appropriate environment (e.g., not limited to UHV conditions).

To enable revolutionary advances in electron scattering and science, the participants of the workshop recommended three major new instrumental developments:

A. Atomic-Resolution Multi-Dimensional Transmission Electron Microscope: This instrument would provide quantitative information over the entire real space, momentum space, and energy space for visualizing dopants, interstitials, and light elements; for imaging localized vibrational modes and the motion of charged particles and vacancies; for correlating lattice, spin, orbital, and charge; and for determining the structure and molecular chemistry of organic and soft matter. The instrument will be uniquely suited to answer fundamental questions in condensed matter physics that require understanding the physical and electronic structure at the atomic scale. Key developments include stable cryogenic capabilities that will allow access to emergent electronic phases, as well as hard/soft interfaces and radiation-sensitive materials.

B. Ultrafast Electron Diffraction and Microscopy Instrument: This instrument would be capable of nano-diffraction with 10 fs temporal resolution in stroboscopic mode, and better than 100 fs temporal resolution in single-shot mode. The instrument would also achieve single-shot real-space imaging with a spatial/temporal resolution of 10 nm/10 ps, representing a thousandfold improvement over current microscopes. Such a capability would be complementary to x-ray free electron lasers due to the difference in the nature of electron and x-ray scattering. It would enable space-time mapping of lattice vibrations and energy transport, facilitating the understanding of the molecular dynamics of chemical reactions, the photonic control of emergence in quantum materials, and the dynamics of mesoscopic materials.

C. Lab-In-Gap Dynamic Microscope: This instrument would enable quantitative measurements of materials structure, composition, and bonding evolution in technologically relevant environments, including liquids, gases, and plasmas, thereby enabling understanding of structure-function relationships at the atomic scale with temporal resolution down to nanoseconds. This instrument would employ a versatile, modular sample stage and holder geometry to allow multi-modal (e.g., optical, thermal, mechanical, electrical, and electrochemical) probing of materials’ functionality in situ and in operando. The electron optics would encompass a pole piece that can accommodate the new stage, differential pumping, detectors, aberration correctors, and other electron-optical elements for measurement of materials dynamics.

To realize the proposed instruments in a timely fashion, BES should aggressively support research and development of complementary and enabling instrumentation, including new electron sources, advanced electron optics, new tunable specimen pumps and sample stages, and new detectors. The proposed instruments would have transformative impact on physics, chemistry, materials science, and engineering.


X-ray Optics for BES Light Source Facilities

Each new generation of synchrotron radiation sources has delivered an increase in average brightness of 2 to 3 orders of magnitude over the previous generation. The next evolution toward diffraction-limited storage rings will deliver another 3 orders of magnitude increase. For ultrafast experiments, free electron lasers (FELs) deliver 10 orders of magnitude higher peak brightness than storage rings. Our ability to utilize these ultrabright sources, however, is limited by our ability to focus, monochromate, and manipulate these beams with X-ray optics. X-ray optics technology unfortunately lags behind source technology and limits our ability to maximally utilize even today’s X-ray sources. With ever more powerful X-ray sources on the horizon, a new generation of X-ray optics must be developed that will allow us to fully utilize these beams of unprecedented brightness.
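
As a point of reference for the phrase “diffraction-limited storage ring” (a standard accelerator-physics criterion, not a definition taken from this workshop report): a ring is diffraction limited at wavelength \( \lambda \) when the electron-beam emittance falls below roughly the photon diffraction limit,

\[
\varepsilon_{x,y} \lesssim \frac{\lambda}{4\pi},
\]

which for 1 Å X-rays is about 8 pm·rad, well below the horizontal emittance of most operating rings.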

The increasing brightness of X-ray sources will enable a new generation of measurements that could have revolutionary impact across a broad area of science, if the optical systems necessary for transporting and analyzing X-rays can be perfected. The high coherent flux will facilitate new science utilizing techniques in imaging, dynamics, and ultrahigh-resolution spectroscopy. For example, zone-plate-based hard X-ray microscopes are presently used to look deeply into materials, but today’s resolution and contrast are restricted by limitations of the current lithography used to manufacture nanodiffractive optics. The large penetration length of hard X-rays, combined in principle with very high spatial resolution, makes them an ideal probe of hierarchically ordered mesoscale materials, if zone-plate focusing systems can be improved. Resonant inelastic X-ray scattering (RIXS) probes a wide range of excitations in materials, from charge-transfer processes to the very soft excitations that cause the collective phenomena in correlated electronic systems. However, although RIXS can probe high-energy excitations, the most exciting and potentially revolutionary science involves soft excitations such as magnons and phonons; in general, these lie well below the resolution of today’s optical systems. The study of these low-energy excitations will only move forward if advances are made in high-resolution gratings for the soft X-ray energy region and higher-resolution crystal analyzers for the hard X-ray region. In almost all the forefront areas of X-ray science today, the main limitation is our ability to focus, monochromate, and manipulate X-rays at the level required for these advanced measurements.
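
To make the lithography limitation concrete, the standard Fresnel zone-plate relations (textbook X-ray optics, not results from the workshop) tie performance directly to the outermost zone width \( \Delta r_N \):

\[
\delta \approx 1.22\,\Delta r_N, \qquad f \approx \frac{D\,\Delta r_N}{\lambda},
\]

where \( \delta \) is the Rayleigh resolution, \( f \) the first-order focal length, \( D \) the zone-plate diameter, and \( \lambda \) the wavelength. Finer resolution demands narrower outermost zones, and for hard X-rays those zones must also remain thick enough to diffract efficiently; such high-aspect-ratio structures are exactly what current nanolithography struggles to produce.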

To address these issues, the U.S. Department of Energy (DOE) Office of Basic Energy Sciences (BES) sponsored a workshop, X-ray Optics for BES Light Source Facilities, which was held March 27–29, 2013, near Washington, D.C. The workshop addressed a wide range of technical and organizational issues. Eleven working groups were formed in advance of the meeting and sought over several months to define the most pressing problems and emerging opportunities and to propose the best routes forward for a focused R&D program to solve these problems. The workshop participants identified nine principal research directions (PRDs), as follows:

  • Development of advanced grating lithography and manufacturing for high-energy resolution techniques such as soft X-ray inelastic scattering.
  • Development of higher-precision mirrors for brightness preservation through the use of advanced metrology in manufacturing, improvements in manufacturing techniques, and in mechanical mounting and cooling.
  • Development of higher-accuracy optical metrology that can be used in manufacturing, verification, and testing of optomechanical systems, as well as at wavelength metrology that can be used for quantification of individual optics and alignment and testing of beamlines.
  • Development of an integrated optical modeling and design framework that is designed and maintained specifically for X-ray optics.
  • Development of nanolithographic techniques for improved spatial resolution and efficiency of zone plates.
  • Development of large, perfect single crystals of materials other than silicon for use as beam splitters, seeding monochromators, and high-resolution analyzers.
  • Development of improved thin-film deposition methods for fabrication of multilayer Laue lenses and high-spectral-resolution multilayer gratings.
  • Development of supports, actuator technologies, algorithms, and controls to provide fully integrated and robust adaptive X-ray optic systems.
  • Development of fabrication processes for refractive lenses in materials other than silicon.

The workshop participants also addressed two important nontechnical areas: our relationship with industry and the organization of optics activities within the light source facilities. Optimizing activities in these two areas could markedly improve the effectiveness and efficiency of the overall endeavor. These are crosscutting managerial issues that were identified as needing further in-depth study, coordinated at a level above the individual facilities.

Finally, an issue that cuts across many of the optics improvements listed above is routine access to beamlines that ideally are fully dedicated to optics research and/or development. The success of the BES X-ray user facilities in serving a rapidly increasing user community has led to a squeezing of beam time for vital instrumentation activities. Dedicated development beamlines could be shared with other R&D activities, such as detector programs and novel instrument development.

In summary, to meet the challenges of providing the highest-quality X-ray beams for users and to fully utilize the high-brightness sources of today and those on the horizon, it will be critical to make strategic investments in X-ray optics R&D. This report can provide guidance and direction for effective use of investments in the field of X-ray optics and potential approaches to develop a better-coordinated program of X-ray optics development within the suite of BES synchrotron radiation facilities. Due to the importance and complexity of the field, the need for tight coordination among BES light source facilities and with industry, and the rapid evolution of light source capabilities, the workshop participants recommend holding similar workshops at least biannually.


Neutron and X-ray Detectors

The Basic Energy Sciences (BES) X-ray and neutron user facilities attract more than 12,000 researchers each year to perform cutting-edge science at these state-of-the-art sources. While impressive breakthroughs in X-ray and neutron sources give us the powerful illumination needed to peer into the nano- to mesoscale world, a stumbling block continues to be the distinct lag in detector development, which is slowing data collection and analysis. Urgently needed detector improvements would reveal chemical composition and bonding in 3-D and in real time, allow researchers to watch “movies” of essential life processes as they happen, and make much more efficient use of every X-ray and neutron produced by the source.

The immense scientific potential that will come from better detectors has triggered worldwide activity in this area. Europe in particular has made impressive strides, outpacing the United States on several fronts. Maintaining vital U.S. leadership in this key research endeavor will require targeted investments in detector R&D and infrastructure.

To clarify the gap between detector development and source advances, and to identify opportunities to maximize the scientific impact of BES user facilities, a workshop on Neutron and X-ray Detectors was held August 1-3, 2012, in Gaithersburg, Maryland. Attendees from universities, national laboratories, and commercial organizations in the United States and around the globe took part in plenary sessions, breakout groups, and joint open-discussion summary sessions.

Sources have become immensely more powerful and are now brighter (more particles focused onto the sample per second) and more precise (higher spatial, spectral, and temporal resolution). To fully utilize these source advances, detectors must become faster, more efficient, and more discriminating. In supporting the mission of today’s cutting-edge neutron and X-ray sources, the workshop identified six detector research challenges (and two computing hurdles that result from the corresponding increase in data volume) for the detector community to overcome in order to realize the full potential of BES neutron and X-ray facilities.

Resolving these detector impediments will improve scientific productivity both by enabling new types of experiments, which will expand the scientific breadth at the X-ray and neutron facilities, and by potentially reducing the beam time required for a given experiment. These research priorities are summarized below. Note that multiple, simultaneous detector improvements are often required to take full advantage of brighter sources.

High-efficiency hard X-ray sensors: The fraction of incident particles that are actually detected defines detector efficiency. Silicon, the most common direct-detection X-ray sensor material, is (for typical sensor thicknesses) 100% efficient at 8 keV, 25% efficient at 20 keV, and only 3% efficient at 50 keV. Other materials are needed for hard X-rays.
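
These efficiency figures follow from simple photon attenuation. Below is a minimal sketch in Python of the estimate, assuming a nominal 300 μm sensor and rough, handbook-style 1/e attenuation lengths for silicon (illustrative values, not numbers from the workshop report):

    # Direct-detection efficiency of a silicon sensor: eff = 1 - exp(-t / L),
    # where t is the sensor thickness and L the 1/e attenuation length at the
    # photon energy of interest. Attenuation lengths below are approximate.
    import math

    ATTEN_LENGTH_UM = {8: 70, 20: 1000, 50: 10000}  # keV -> microns (approx.)

    def si_efficiency(thickness_um, energy_kev):
        """Fraction of incident photons absorbed in the sensor."""
        return 1.0 - math.exp(-thickness_um / ATTEN_LENGTH_UM[energy_kev])

    for e_kev in (8, 20, 50):
        print(f"{e_kev:2d} keV: {si_efficiency(300, e_kev):5.1%} efficient")

Running this gives roughly 99%, 26%, and 3%, consistent with the figures above and with the need for denser, higher-Z sensor materials at hard X-ray energies.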

Replacement for 3He in neutron detectors: 3He has long been the neutron detection medium of choice because of its high cross section over a wide neutron energy range for the reaction 3He + n → 3H + 1H + 0.764 MeV. 3He stockpiles are rapidly dwindling, and what remains can be had only at prohibitively high prices. Doped scintillators hold promise for capturing neutrons and converting them into light, although work is needed on brighter, more efficient scintillator solutions. Neutron detectors also require advances in speed and resolution.
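
For scale, a back-of-envelope capture estimate using standard values (not figures from the report): thermal neutrons traversing a gas length \( L \) are captured with probability

\[
P = 1 - e^{-n\sigma L},
\]

where \( \sigma \approx 5330 \) barns for 3He at 25 meV and \( n \) is the gas number density. At 4 atm and room temperature (\( n \approx 1.0\times10^{26}\ \mathrm{m^{-3}} \)) with \( L = 10\ \mathrm{cm} \), \( n\sigma L \approx 5.3 \) and \( P \approx 99\% \). This is the efficiency bar that scintillator-based and other replacement technologies must approach.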

Fast-framing X-ray detectors: Today’s brighter X-ray sources make time-resolved studies possible. For example, hybrid X-ray pixel detectors, initially developed for particle physics, are maturing rapidly as X-ray detectors, with considerable development in Europe. To truly enable time-resolved studies, higher frame rates and greater dynamic range are required, and smaller pixel sizes are desirable.

High-speed spectroscopic X-ray detectors: Improvements in the readout speed and energy resolution of X-ray detectors are essential to enable chemically sensitive microscopies. Advances would make it possible to take images with simultaneous spatial and chemical information.

Very high-energy-resolution X-ray detectors: The energy resolution of semiconductor detectors, while suitable for a wide range of applications, is far less than what can be achieved with X-ray optics. A direct detector that could rival the energy resolution of optics could dramatically improve the efficiency of a multitude of experiments, as experiments are often repeated at a number of different energies. Very high-energy-resolution detectors could make these experiments parallel, rather than serial.

Low-background, high-spatial-resolution neutron detectors: Low-background detectors would significantly improve experiments that probe excitations (phonons, spin excitations, rotation, and diffusion in polymers and molecular substances, etc.) in condensed matter. Improved spatial resolution would greatly benefit radiography, tomography, phase-contrast imaging, and holography.

Improved acquisition and visualization tools: In the past, with the limited variety of slow detectors, it was straightforward to visualize data as it was being acquired (and adjust experimental conditions accordingly) to create a compact data set that the user could easily transport. As detector complexity and data rates explode, this becomes much more challenging. Four goals were identified as important for coping with the growing data volume from high-speed detectors:

  • Facilitate better algorithm development, in particular algorithms that can minimize the quantity of data stored.
  • Improve community-driven data-reduction protocols and enhance quantitative, interactive visualization tools.
  • Develop and distribute community-developed, detector-specific simulation tools.
  • Aim for parallelization to take advantage of high-performance analysis platforms.

Improved analysis workflows: Standardize the format of metadata that accompanies detector data and describes the experimental setup and conditions. Develop a standardized user interface and software framework for analysis and data management.

The diversity of detector improvements required is necessarily as broad as the range of scientific experimentation at BES facilities. This workshop identified a variety of avenues by which detector R&D can enable enhanced science at BES facilities. The Research Directions listed above will be addressed by focused R&D and detector engineering, both of which require specialized infrastructure and skills. While the United States lags behind other countries in several areas of neutron and X-ray detector development, significant talent exists across the complex. A forum of technical experts, facilities management, and BES could provide further definition of these priorities.


From Quanta to the Continuum: Opportunities for Mesoscale Science

We are at a time of unprecedented challenge and opportunity. Our economy is in need of a jump start, and our supply of clean energy needs to dramatically increase. Innovation through basic research is a key means for addressing both of these challenges. The great scientific advances of the last decade and more, especially at the nanoscale, are ripe for exploitation. Seizing this key opportunity requires mastering the mesoscale, where classical, quantum, and nanoscale science meet. It has become clear that—in many important areas—the functionality that is critical to macroscopic behavior begins to manifest itself not at the atomic or nanoscale but at the mesoscale, where defects, interfaces, and non-equilibrium structures are the norm. With our recently acquired knowledge of the rules of nature that govern the atomic and nanoscales, we are well positioned to unravel and control the complexity that determines functionality at the mesoscale. The reward for breakthroughs in our understanding at the mesoscale is the emergence of previously unrealized functionality. The present report explores the opportunity and defines the research agenda for mesoscale science—discovering, understanding, and controlling interactions among disparate systems and phenomena to reach the full potential of materials complexity and functionality. The ability to predict and control mesoscale phenomena and architectures is essential if atomic and molecular knowledge is to blossom into a next generation of technology opportunities, societal benefits, and scientific advances.


  • Imagine the ability to manufacture at the mesoscale: that is, the directed assembly of mesoscale structures whose unique functionality yields faster, cheaper, higher performing, and longer lasting products, as well as products with functionality that we have not yet imagined.
  • Imagine the realization of biologically inspired complexity and functionality with inorganic earth-abundant materials to transform energy conversion, transmission, and storage.
  • Imagine the transformation from top-down design of materials and systems with macroscopic building blocks to bottom-up design with nanoscale functional units producing next-generation technological innovation.
    This is the promise of mesoscale science.

Mesoscale science and technology opportunities build on the enormous foundation of nanoscience that the scientific community has created over the last decade and continues to create. New features arise naturally in the transition to the mesoscale, including the emergence of collective behavior; the interaction of disparate electronic, mechanical, magnetic, and chemical phenomena; the appearance of defects, interfaces, and statistical variation; and the self-assembly of functional composite systems. The mesoscale represents a discovery laboratory for finding new science, a self-assembly foundry for creating new functional systems, and a design engine for new technologies.

The last half-century and especially the last decade have witnessed a remarkable drive to ever smaller scales, exposing the atomic, molecular, and nanoscale structures that anchor the macroscopic materials and phenomena we deal with every day. Given this knowledge and capability, we are now starting the climb up from the atomic and nanoscale to the greater complexity and wider horizons of the mesoscale. The constructionist path up from the atomic and nanoscale to the mesoscale holds a different kind of promise than the reductionist path down: it allows us to rearrange the nanoscale building blocks into new combinations, exploit the dynamics and kinetics of these new coupled interactions, and create qualitatively different mesoscale architectures and phenomena leading to new functionality and ultimately new technology. The reductionist journey to smaller length and time scales gave us sophisticated observational tools and intellectual understanding that we can now apply with great advantage to the wide opportunity of mesoscale science following a bottom-up approach.
Realizing the mesoscale opportunity requires advances not only in our knowledge but also in our ability to observe, characterize, simulate, and ultimately control matter. Mastering mesoscale materials and phenomena requires the seamless integration of theory, modeling, and simulation with synthesis and characterization. The inherent complexity of mesoscale phenomena, often including many nanoscale structural or functional units, requires theory and simulation spanning multiple space and time scales. In mesoscale architectures the positions of individual atoms are often no longer relevant, requiring new simulation approaches beyond density functional theory and molecular dynamics that are so successful at atomic scales. New organizing principles that describe emergent mesoscale phenomena arising from many coupled and competing degrees of freedom wait to be discovered and applied. Measurements that are dynamic, in situ, and multimodal are needed to capture the sequential phenomena of composite mesoscale materials. Finally, the ability to design and realize the complex materials we imagine will require qualitative advances in how we synthesize and fabricate materials and how we manage their metastability and degradation over time. We must move from serendipitous to directed discovery, and we must master the art of assembling structural and functional nanoscale units into larger architectures that create a higher level of complex functional systems.
While the challenge of discovering, controlling, and manipulating complex mesoscale architectures and phenomena to realize new functionality is immense, success in the pursuit of these research directions will have outcomes with the potential to transform society. The body of this report outlines the need, the opportunities, the challenges, and the benefits of mastering mesoscale science.


Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery

This report is based on the Department of Energy (DOE) Workshop on “Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery” that was held at the Bethesda Marriott in Maryland on October 24-25, 2011. The workshop brought together leading researchers from the Basic Energy Sciences (BES) facilities and Advanced Scientific Computing Research (ASCR). The workshop was co-sponsored by these two Offices to identify opportunities and needs for data analysis, ownership, storage, mining, provenance and data transfer at light sources, neutron sources, microscopy centers and other facilities.

The participants' charge was to identify current and anticipated issues in the acquisition, analysis, communication, and storage of experimental data that could impact the progress of scientific discovery; to ascertain what knowledge, methods, and tools are needed to mitigate present and projected shortcomings; and to create the foundation for information exchanges and collaboration between ASCR- and BES-supported researchers and facilities.

The workshop was organized in the context of the impending data tsunami that will be produced by DOE’s BES facilities. Current facilities, like SLAC National Accelerator Laboratory’s Linac Coherent Light Source, can produce up to 18 terabytes (TB) per day, while upgraded detectors at Lawrence Berkeley National Laboratory’s Advanced Light Source will generate ~10 TB per hour (roughly 240 TB per day). The expectation is that these rates will increase by over an order of magnitude in the coming decade. The urgency of developing new strategies and methods to stay ahead of this deluge and extract the most science from these facilities was recognized by all. The four focus areas addressed in this workshop were:

  • Workflow Management - Experiment to Science: Identifying and managing the data path from experiment to publication.
  • Theory and Algorithms: Recognizing the need for new tools for computation at scale, supporting large data sets and realistic theoretical models.
  • Visualization and Analysis: Supporting near-real-time feedback for experiment optimization and new ways to extract and communicate critical information from large data sets.
  • Data Processing and Management: Outlining needs in computational and communication approaches and infrastructure needed to handle unprecedented data volume and information content.

It should be noted that almost all participants recognized that there were unlikely to be any turn-key solutions available due to the unique, diverse nature of the BES community, where research at adjacent beamlines at a given light source facility often spans everything from biology to materials science to chemistry using scattering, imaging, and/or spectroscopy. However, it was also noted that advances supported by other programs in data research, methodologies, and tool development could be implemented on reasonable time scales with modest effort. Adapting available standard file formats, robust workflows, and in-situ analysis tools for user facility needs could pay long-term dividends.

Workshop participants assessed current requirements as well as future challenges and made the following recommendations in order to achieve the ultimate goal of enabling transformative science in current and future BES facilities:

Integrate theory and analysis components seamlessly within the experimental workflow.
Develop new algorithms for data analysis based on common data formats and toolsets.

Move analysis closer to experiment.
Move the analysis closer to the experiment to enable real-time (in-situ) streaming capabilities, live visualization of the experiment, and an increase in overall experimental efficiency (a minimal software sketch follows these recommendations).

Match data management access and capabilities with advancements in detectors and sources.
Remove bottlenecks, provide interoperability across different facilities/beamlines and apply forefront mathematical techniques to more efficiently extract science from the experiments.

This workshop report examines and reviews the status of several BES facilities and highlights the successes and shortcomings of the current data and communication pathways for scientific discovery. It then ascertains what methods and tools are needed to mitigate present and projected data bottlenecks to science over the next 10 years. The goal of this report is to create the foundation for information exchanges and collaborations among ASCR and BES supported researchers, the BES scientific user facilities, and ASCR computing and networking facilities.

To jumpstart these activities, there was a strong desire to see a joint effort between ASCR and BES along the lines of the highly successful Scientific Discovery through Advanced Computing (SciDAC) program, in which integrated teams of engineers, scientists, and computer scientists would be engaged to tackle a complete end-to-end workflow solution at one or more beamlines and to ascertain what challenges must be addressed to handle future increases in data rates.


Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE)

This report is based on a SC/EERE Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE), held March 3, 2011, to determine strategic focus areas that will accelerate innovation in engine design to meet national goals in transportation efficiency.

The U.S. has reached a pivotal moment when pressures of energy security, climate change, and economic competitiveness converge. Oil prices remain volatile and have exceeded $100 per barrel twice in five years. At these prices, the U.S. spends $1 billion per day on imported oil to meet our energy demands. Because the transportation sector accounts for two-thirds of our petroleum use, energy security is deeply entangled with our transportation needs. At the same time, transportation produces one-quarter of the nation’s carbon dioxide output. Increasing the efficiency of internal combustion engines is a technologically proven and cost-effective approach to dramatically improving the fuel economy of the nation’s fleet of vehicles in the near- to mid-term, with the corresponding benefits of reducing our dependence on foreign oil and reducing carbon emissions. Because of their relatively low cost, high performance, and ability to utilize renewable fuels, internal combustion engines—including those in hybrid vehicles—will continue to be critical to our transportation infrastructure for decades. Achievable advances in engine technology can improve the fuel economy of automobiles by over 50% and trucks by over 30%. Achieving these goals will require the transportation sector to compress its product development cycle for cleaner, more efficient engine technologies by 50% while simultaneously exploring innovative design space. Concurrently, fuels will also be evolving, adding another layer of complexity and further highlighting the need for efficient product development cycles. Current design processes, using “build and test” prototype engineering, will not suffice. Current market penetration of new engine technologies is simply too slow—it must be dramatically accelerated.

These challenges present a unique opportunity to marshal U.S. leadership in science-based simulation to develop predictive computational design tools for use by the transportation industry. The use of predictive simulation tools for enhancing combustion engine performance will shrink engine development timescales, accelerate time to market, and reduce development costs, while ensuring the timely achievement of energy security and emissions targets and enhancing U.S. industrial competitiveness.

In 2007 Cummins achieved a milestone in engine design by bringing a diesel engine to market solely with computer modeling and analysis tools; the only physical testing was conducted after the fact to confirm performance. Cummins achieved a reduction in development time and cost. Just as important, the company realized a more robust design and improved fuel economy while meeting all environmental and customer constraints. This important first step demonstrates the potential for computational engine design. But the daunting complexity of engine combustion and the revolutionary increases in efficiency needed require the development of simulation codes and computing platforms far more advanced than those available today.

 

Based on these needs, a Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE) convened over 60 U.S. leaders in the engine combustion field from industry, academia, and national laboratories to focus on two critical areas of advanced simulation, as identified by the U.S. automotive and engine industries. First, modern engines require control of the injection of a broad variety of fuels that is far more precise than is achievable today and that can be obtained only through predictive modeling and simulation. Second, the simulation, understanding, and control of stochastic in-cylinder combustion processes lie on the critical path to realizing more efficient engines with greater power density. Fuel sprays set the initial conditions for combustion in essentially all future transportation engines; yet today designers rely primarily on empirical methods that limit the achievable efficiency. Three primary spray topics were identified as focus areas in the workshop:

  1. The fuel delivery system, which includes fuel manifolds and internal injector flow,
  2. The multi-phase fuel–air mixing in the combustion chamber of the engine, and
  3. The heat transfer and fluid interactions with cylinder walls.

 

Current understanding and modeling capabilities of stochastic processes in engines remain limited and prevent designers from achieving significantly higher fuel economy. To improve this situation, the workshop participants identified three focus areas for stochastic processes:

  1. Improve fundamental understanding that will help to establish and characterize the physical causes of stochastic events,
  2. Develop physics-based simulation models that are accurate and sensitive enough to capture performance-limiting variability, and
  3. Quantify and manage uncertainty in model parameters and boundary conditions, as illustrated in the sketch after this list.

 

Improved models and understanding in these areas will allow designers to develop engines with reduced design margins that operate reliably in more efficient regimes. All of these areas require improved basic understanding, high-fidelity model development, and rigorous model validation. These advances will greatly reduce the uncertainties in current models and improve the understanding of sprays and fuel–air mixture preparation that currently limits the investigation and development of advanced combustion technologies.

The two strategic focus areas have distinctive characteristics but are inherently coupled. Coordinated activities in basic experiments, fundamental simulations, and engineering-level model development and validation can be used to successfully address all of the topics identified in the PreSICE workshop. The outcome will be:

  1. New and deeper understanding of the relevant fundamental physical and chemical processes in advanced combustion technologies,
  2. Implementation of this understanding into models and simulation tools appropriate for both exploration and design, and
  3. Sufficient validation with uncertainty quantification to provide confidence in the simulation results.

 

These outcomes will provide the design tools for industry to reduce development time by up to 30% and improve engine efficiencies by 30% to 50%. The improved efficiencies applied to the national mix of transportation applications have the potential to save over 5 million barrels of oil per day, a current cost savings of $500 million per day.

 


 

 


 

Report of the Basic Energy Sciences Workshop on Compact Light Sources

This report is based on a BES Workshop on Compact Light Sources, held May 11-12, 2010, to evaluate the advantages and disadvantages of compact light source (CLS) approaches and to compare their performance with that of third-generation storage rings and free-electron lasers (FELs). The workshop examined the state of the technology for compact light sources and their expected progress, and it evaluated the cost efficiency, user access, availability, and reliability of such sources. Working groups compared the approaches on three primary aspects: 1) cost effectiveness, 2) technical availability versus time frame, and 3) machine reliability and availability for user access. Five categories of potential sources were analyzed: 1) inverse Compton scattering (ICS) sources, 2) mini storage rings, 3) plasma sources, 4) sources using plasma-based accelerators, and 5) laser high harmonic generation (HHG) sources.

Compact light sources are not a substitute for large synchrotron and FEL light sources, which typically also incorporate extensive user support facilities. Rather, they offer attractive, complementary capabilities at a small fraction of the cost and size of large national user facilities. In the far term they may offer the potential for a new paradigm for future national user facilities. In the course of the workshop, we identified overarching R&D topics for the next five years that would enhance the performance potential of both compact and large-scale sources:

  • Development of infrared (IR) laser systems delivering kW-class average power with femtosecond pulses at kHz repetition rates. These have application to ICS sources, plasma sources, and HHG sources.
  • Development of laser storage cavities that can store 10-mJ picosecond and femtosecond pulses focused to micron beam sizes.
  • Development of high-brightness, high-repetition-rate electron sources.
  • Development of continuous wave (cw) superconducting rf linacs operating at 4 K; while not essential, such linacs would reduce capital and operating costs.

 

 


 

 

 

Basic Research Needs for Carbon Capture: Beyond 2020

This report is based on a SC/FE workshop on Carbon Capture: Beyond 2020, held March 4–5, 2010, to assess the basic research needed to address the current technical bottlenecks in carbon capture processes and to identify key research priority directions that will provide the foundations for future carbon capture technologies.

The problem of thermodynamically efficient and scalable carbon capture stands as one of the greatest challenges for modern energy researchers. The vast majority of US and global energy use derives from fossil fuels, the combustion of which results in the emission of carbon dioxide into the atmosphere. These anthropogenic emissions are now altering the climate. Although many alternatives to combustion are being considered, the fact is that combustion will remain a principal component of the global energy system for decades to come. Today’s carbon capture technologies are expensive, cumbersome, and energy intensive. If scientists could develop practical and cost-effective methods to capture carbon, those methods would at once alter the future of the largest industry in the world and provide a technical solution to one of the most vexing problems facing humanity.

The carbon capture problem is a true grand challenge for today’s scientists. Postcombustion CO2 capture requires major new developments in disciplines spanning fundamental theoretical and experimental physical chemistry, materials design and synthesis, and chemical engineering. To start with, the CO2 molecule itself is thermodynamically stable, and binding to it requires a distortion of the molecule away from its linear and symmetric arrangement. This binding of the gas molecule cannot be too strong, however; the sheer quantity of CO2 that must be captured ultimately dictates that the capture medium must be recycled over and over. Hence the CO2, once bound, must be released with relatively little energy input. Further, the CO2 must be rapidly and selectively pulled out of a mixture that contains many other gaseous components. The related processes of precombustion capture and oxycombustion pose similar challenges. It is this nexus of high-speed capture with high selectivity and minimal energy loss that makes this a true grand challenge problem, far beyond any of today’s artificial molecular manipulation technologies, and one whose solution will drive the advancement of molecular science to a new level of sophistication.

We have only to look to nature, where such chemical separations are performed routinely, to imagine what may be achieved. The hemoglobin molecule transports oxygen in the blood rapidly and selectively and releases it with minimal energy penalty. Despite our improved understanding of how this biological system works, we have yet to engineer a molecular capture system that uses the fundamental cooperativity process that lies at the heart of the functionality of hemoglobin. While such biological examples provide inspiration, we also note that newly developed theoretical and computational capabilities; the synthesis of new molecules, materials, and membranes; and the remarkable advances in characterization techniques enabled by the Department of Energy’s measurement facilities all create a favorable environment for a major new basic research push to solve the carbon capture problem within the next decade.

The Department of Energy has established a comprehensive strategy to meet the nation’s needs in the carbon capture arena. This framework has been developed following a series of workshops that have engaged all the critical stakeholder communities. The strategy that has emerged is based upon a tiered approach, with Fossil Energy taking the lead in a series of applied research programs that will test and extend our current systems. ARPA-E (Advanced Research Projects Agency–Energy) is supporting potential breakthroughs based upon innovative proposals to rapidly harness today’s technical capabilities in ways not previously considered. These needs and plans have been well summarized in the report from a recent workshop—Carbon Capture 2020, held October 5–6, 2009—focused on near-term strategies for carbon capture improvements (http://www.netl.doe.gov/publications/proceedings/09/CC2020/pdfs/Richards_Summary.pdf). Yet the fact remains that, when the carbon capture problem is looked at closely, today’s technologies fall far short of making carbon capture an economically viable process. This situation reinforces the need for a parallel, intensive, use-inspired basic research effort to address the problem. This was the overwhelming conclusion of a recent workshop—Carbon Capture: Beyond 2020, held March 4–5, 2010—and is the subject of the present report. To prepare for the second workshop, an in-depth assessment of current technologies for carbon capture was conducted; the result of this study was a factual document, Technology and Applied R&D Needs for Carbon Capture: Beyond 2020. This document, which was prepared by experts in current carbon capture processes, also summarized the technological gaps or bottlenecks that limit currently available carbon capture technologies. The report considered the separation processes needed for all three CO2 emission reduction strategies—postcombustion, precombustion, and oxycombustion—and assessed three primary separation technologies based on liquid absorption, membranes, and solid adsorption.

The workshop “Carbon Capture: Beyond 2020” convened approximately 80 attendees from universities, national laboratories, and industry to assess the basic research needed to address the current technical bottlenecks in carbon capture processes and to identify key research priority directions that will provide the foundations for future carbon capture technologies. The workshop began with a plenary session including speakers who summarized the extent of the carbon capture challenge, the various current approaches, and the limitations of these technologies. Workshop attendees were then given the charge to identify high-priority basic research directions that could provide revolutionary new concepts to form the basis for separation technologies in 2020 and beyond. The participants were divided into three major panels corresponding to different approaches for separating gases to reduce carbon emissions—liquid absorption, solid adsorption, and membrane separations. Two other panels were instructed to attend each of these three technology panels to assess crosscutting issues relevant to characterization and computation. At the end of the workshop, a final plenary session was convened to summarize the most critical research needs identified by the workshop attendees in each of the three major technical panels and from the two cross-cutting panels.

The reports of the three technical panels included a set of high-level Priority Research Directions meant to serve as inspiration to researchers in multiple disciplines—materials science, chemistry, biology, computational science, engineering, and others—to address the huge scientific challenges facing this nation and the world as we seek technologies for large-scale carbon capture beyond 2020. These Priority Research Directions were clustered around three main areas, all tightly coupled:

  • Understand and control the dynamic atomic-level and molecular-level interactions of the targeted species with the separation media.

  • Discover and design new materials that incorporate designed structures and functionalities tuned for optimum separation properties.

  • Tailor capture/release processes with alternative driving forces, taking advantage of a new generation of materials.

Across the technical panels, the participants identified two major crosscutting research themes. The first was the development of new analytical tools that can characterize materials structure and molecular processes across broad spatial and temporal scales and under realistic conditions that mimic those encountered in actual separation processes. Such tools are needed to examine interfaces and thin films at the atomic and molecular levels, to achieve an atomic- and molecular-scale understanding of gas–host structures, kinetics, and dynamics, and to understand and control nanoscale synthesis in multiple dimensions. The second major crosscutting theme was the development of new computational tools for theory, modeling, and simulation of separation processes. Computational techniques can be used to elucidate mechanisms responsible for observed separations, predict new desired features for advanced separations materials, and guide future experiments, thus complementing synthesis and characterization efforts. These two crosscut areas underscored the fact that the challenge for future carbon capture technologies will be met only with multidisciplinary teams of scientists and engineers. In addition, it was noted that success in this fundamental research area must be closely coupled with successful applied research to ensure the continuing assessment and maturation of new technologies as they undergo scale-up and deployment.

Carbon capture is a very rich scientific problem, replete with opportunity for basic researchers to advance the frontiers of science as they engage on one of the most important technical challenges of our times. This workshop report outlines an ambitious agenda for addressing the very difficult problem of carbon capture by creating foundational new basic science. This new science will in turn pave the way for many additional advances across a broad range of scientific disciplines and technology sectors.



Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation.

The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies.

Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage.

Perhaps the most spectacular increase in capability has been demonstrated in high-performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of abating, has enabled the development of computer simulations and models of unprecedented fidelity.

We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing.

Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness.

The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together 160 experts in materials science, chemistry, and computational science representing more than 65 universities, laboratories, and industries, and four agencies.

The workshop examined seven foundational challenge areas in materials science and chemistry: materials for extreme conditions, self-assembly, light harvesting, chemical reactions, designer fluids, thin films and interfaces, and electronic structure. Each of these challenge areas is critical to the development of advanced energy systems, and each can be accelerated by the integrated application of predictive capability with theory and experiment.

The workshop concluded that emerging capabilities in predictive modeling and simulation have the potential to revolutionize the development of new materials and chemical processes. Coupled with world-leading materials characterization and nanoscale science facilities, this predictive capability provides the foundation for an innovation ecosystem that can accelerate the discovery, development, and deployment of new technologies, including advanced energy systems. Delivering on the promise of this innovation ecosystem requires the following:

  • Integration of synthesis, processing, characterization, theory, and simulation and modeling. Many of the newly established Energy Frontier Research Centers and Energy Hubs are exploiting this integration.

  • Achieving/strengthening predictive capability in foundational challenge areas. Predictive capability in the seven foundational challenge areas described in this report is critical to the development of advanced energy technologies.

  • Developing validated computational approaches that span vast differences in time and length scales. This fundamental computational challenge crosscuts all of the foundational challenge areas. Similarly challenging is coupling of analytical data from multiple instruments and techniques that are required to link these length and time scales.

  • Experimental validation and quantification of uncertainty in simulation and modeling. Uncertainty quantification becomes increasingly challenging as simulations become more complex.

  • Robust and sustainable computational infrastructure, including software and applications. For modeling and simulation, software equals infrastructure. Software is also critical infrastructure for validating computational tools, effectively translating huge arrays of experimental data into useful scientific understanding. An integrated approach for managing this infrastructure is essential.

  • Efficient transfer and incorporation of simulation-based engineering and science in industry. Strategies for bridging the gap between research and industrial applications and for widespread industry adoption of integrated computational materials engineering are needed.



New Science for a Secure and Sustainable Energy Future

This Basic Energy Sciences Advisory Committee (BESAC) report summarizes a 2008 study by the Subcommittee on Facing our Energy Challenges in a New Era of Science to: (1) assimilate the scientific research directions that emerged from the BES Basic Research Needs workshop reports into a comprehensive set of science themes, and (2) identify the new implementation strategies and tools required to accomplish the science.

The United States faces a three-fold energy challenge:

  • Energy Independence. U.S. energy use exceeds domestic production capacity by the equivalent of 16 million barrels of oil per day, a deficit made up primarily by importing oil and natural gas. This deficit has nearly tripled since 1970.

  • Environmental Sustainability. The United States must reduce its emissions of carbon dioxide and other greenhouse gases that accelerate climate change. The primary source of these emissions is the combustion of fossil fuels, which comprise about 85% of the U.S. national energy supply.

  • Economic Opportunity. The U.S. economy is threatened by the high cost of imported energy—as much as $700 billion per year at recent peak prices. We need to create next-generation clean energy technologies that do not depend on imported oil. U.S. leadership would not only provide solutions at home but also create global economic opportunity.

The magnitude of the challenge is so immense that existing energy approaches—even with improvements from advanced engineering and improved technology based on known concepts—will not be enough to secure our energy future. Instead, meeting the challenge will require new technologies for producing, storing and using energy with performance levels far beyond what is now possible. Such technologies spring from scientific breakthroughs in new materials and chemical processes that govern the transfer of energy between light, electricity and chemical fuels. Integrating a major national mobilization of basic energy research—to create needed breakthroughs—with appropriate investments in technology and engineering to accelerate bringing new energy solutions to market will be required to meet our three-fold energy challenge. This report identifies three strategic goals for which transformational scientific breakthroughs are urgently needed:

  • Making fuels from sunlight
  • Generating electricity without carbon dioxide emissions
  • Revolutionizing energy efficiency and use

Meeting these goals implies dramatic changes in our technologies for producing and consuming energy. We will manufacture chemical fuel from sunlight, water and carbon dioxide instead of extracting it from the earth. We will generate electricity from sunlight, wind, and high-efficiency clean coal and advanced nuclear plants instead of conventional coal and nuclear technology. Our cars and light trucks will be driven by efficient electric motors powered by a new generation of batteries and fuel cells.

These new, advanced energy technologies, however, require new materials and control of chemical change that operate at dramatically higher levels of functionality and performance. Converting sunlight to electricity with double or triple today's efficiency, storing electricity in batteries or supercapacitors at ten times today's densities, or operating coal-fired and nuclear power plants at far higher temperatures and efficiencies requires materials designed and controlled atom by atom, with tailored nanoscale structures in which every atom has a specific function. Such high-performing materials would have complexity far higher than today's energy materials, approaching that of biological cells and proteins. They would be able to seamlessly control the ebb and flow of energy between chemical bonds, electrons, and light, and would be the foundation of the alternative energy technologies of the future.

Creating these advanced materials and chemical processes requires characterizing the structure and dynamics of matter at levels beyond our present reach. The physical and chemical phenomena that capture, store and release energy take place at the nanoscale, often involving subtle changes in single electrons or atoms, on timescales faster than we can now resolve. Penetrating the secrets of energy transformation between light, chemical bonds, and electrons requires new observational tools capable of probing the still-hidden realms of the ultrasmall and ultrafast. Observing the dynamics of energy flow in electronic and molecular systems at these resolutions is necessary if we are to learn to control their behavior.

Fundamental understanding of complex materials and chemical change based on theory, computation, and advanced simulation is essential to creating new energy technologies. A working transistor was not developed until the theory of electronic behavior on semiconductor surfaces was formulated. Sweeping changes occurred in the field of superconductivity when a microscopic theory of its mechanism was finally developed. As Nobel Laureate Philip Anderson has written, more is different: at each level of complexity in science, new laws need to be discovered for breakthrough progress to be made. Without such breakthroughs, future technologies will not be realized. The digital revolution was only made possible by transistors—try to imagine the information age with vacuum tubes. Nearly as ubiquitous are lasers, the basis for the modern read-heads used in CDs, DVDs, and bar code scanners. Lasers could not be developed until the quantum theory of light emission by materials was understood.

These advances—high-performance materials enabling precise control of chemical change, characterization tools probing the ultrafast and the ultrasmall, and new understanding based on advanced theory and simulation—are the agents for moving beyond incremental improvements and creating a truly secure and sustainable energy future.

Given these tools, we can imagine, and achieve, revolutionary new energy systems.




Science for Energy Technology: Strengthening the Link between Basic Research and Industry

This Basic Energy Sciences Advisory Committee (BESAC) report summarizes the results of a Workshop on Science for Energy Technology, held January 18-21, 2010, to identify the scientific priority research directions needed to address the roadblocks and accelerate the innovation of clean energy technologies.

The nation faces two severe challenges that will determine our prosperity for decades to come: assuring clean, secure, and sustainable energy to power our world, and establishing a new foundation for enduring economic and jobs growth. These challenges are linked: the global demand for clean sustainable energy is an unprecedented economic opportunity for creating jobs and exporting energy technology to the developing and developed world. But achieving the tremendous potential of clean energy technology is not easy. In contrast to traditional fossil fuel-based technologies, clean energy technologies are in their infancy, operating far below their potential, with many scientific and technological challenges to overcome.

Industry is ultimately the agent for commercializing clean energy technology and for reestablishing the foundation for our economic and jobs growth. For industry to succeed in these challenges, it must overcome many roadblocks and continuously innovate new generations of renewable, sustainable, and low-carbon energy technologies such as solar energy, carbon sequestration, nuclear energy, electricity delivery and efficiency, solid-state lighting, batteries, and biofuels. The roadblocks to higher performing clean energy technology are not just challenges of engineering design; progress is also limited by gaps in scientific understanding. Innovation relies on contributions from basic research to bridge major gaps in our understanding of the phenomena that limit the efficiency, performance, or lifetime of the materials or chemistries of these sustainable energy technologies. Thus, efforts aimed at understanding the scientific issues behind performance limitations can have a real and immediate impact on the cost, reliability, and performance of technology, and ultimately a transformative impact on our economy.

With its broad research base and unique scientific user facilities, the DOE Office of Basic Energy Sciences (BES) is ideally positioned to address these needs. BES has laid out a broad view of the basic and grand challenge science needs for the development of future clean energy technologies in a series of comprehensive "Basic Research Needs" workshops and reports (inside front cover and http://www.sc.doe.gov/bes/reports/list.html) and has structured its programs and launched initiatives to address the challenges.

The basic science needs of industry, however, are often more narrowly focused on solving specific nearer-term roadblocks to progress in existing and emerging clean energy technologies. To better define these issues and identify specific barriers to progress, the Basic Energy Sciences Advisory Committee (BESAC) sponsored the Workshop on Science for Energy Technology, January 18-21, 2010. A wide cross-section of scientists and engineers from industry, universities, and national laboratories delineated the basic science Priority Research Directions most urgently needed to address the roadblocks and accelerate the innovation of clean energy technologies. These Priority Research Directions address the scientific understanding underlying performance limitations in existing but still immature technologies. Resolving these performance limitations can dramatically improve the commercial penetration of clean energy technologies.

A key conclusion of the Workshop is that, in addition to the decadal challenges defined in the "Basic Research Needs" reports, specific research directions addressing industry roadblocks are ripe for further emphasis. Another key conclusion is that identifying and focusing on specific scientific challenges and translating the results to industry require more direct feedback, communication, and collaboration between industrial and BES-supported scientists. BES-supported scientists need to be better informed of the detailed scientific issues facing industry, and industry needs to be more aware of BES capabilities and how to utilize them. An important capability is the suite of BES scientific user facilities, which are seen as playing a key role in advancing the science of clean energy technology.

Working together, industry and BES-supported scientists can achieve the required understanding and control of the performance limitations of clean energy technology, accelerate innovation in its development, and help build the workforce needed to implement the growing clean energy economy.



Next-Generation Photon Sources for Grand Challenges in Science and Energy

This Basic Energy Sciences Advisory Committee (BESAC) report summarizes the results of an October 2008 Photon Workshop of the Subcommittee on Facing our Energy Challenges in a New Era of Science to identify connections between major new research opportunities and the capabilities of the next generation of light sources. Particular emphasis was on energy-related research.

The next generation of sustainable energy technologies will revolve around transformational new materials and chemical processes that convert energy efficiently among photons, electrons, and chemical bonds. New materials that tap sunlight, store electricity, or make fuel from splitting water or recycling carbon dioxide will need to be much smarter and more functional than today's commodity-based energy materials. To control and catalyze chemical reactions or to convert a solar photon to an electron requires coordination of multiple steps, each carried out by customized materials and interfaces with designed nanoscale structures. Such advanced materials are not found in nature the way we find fossil fuels; they must be designed and fabricated to exacting standards, using principles revealed by basic science. Success in this endeavor requires probing, and ultimately controlling, the interactions among photons, electrons, and chemical bonds on their natural length and time scales.

Control science—the application of knowledge at the frontier of science to control phenomena and create new functionality—realized through the next generation of ultraviolet and X-ray photon sources, has the potential to be transformational for the life sciences and information technology, as well as for sustainable energy. Current synchrotron-based light sources have revolutionized macromolecular crystallography. The insights thus obtained are largely in the domain of static structure. The opportunity is for next generation light sources to extend these insights to the control of dynamic phenomena through ultrafast pump-probe experiments, time-resolved coherent imaging, and high-resolution spectroscopic imaging. Similarly, control of spin and charge degrees of freedom in complex functional materials has the potential not only to reveal the fundamental mechanisms of high-temperature superconductivity, but also to lay the foundation for future generations of information science.

This report identifies two aspects of energy science in which next-generation ultraviolet and X-ray light sources will have the deepest and broadest impact:

  • The temporal evolution of electrons, spins, atoms, and chemical reactions, down to the femtosecond time scale.

  • Spectroscopic and structural imaging of nano objects (or nanoscale regions of inhomogeneous materials) with nanometer spatial resolution and ultimate spectral resolution.

The dual advances of temporal and spatial resolution promised by fourth-generation light sources ideally match the challenges of control science. Femtosecond time resolution has opened completely new territory in which atomic motion can be followed in real time and electronic excitations and decay processes can be tracked as they unfold. Coherent imaging with short-wavelength radiation will make it possible to access the nanometer length scale, where intrinsic quantum behavior becomes dominant. Performing spectroscopy on individual nanometer-scale objects rather than on conglomerates will eliminate the blurring of the energy levels induced by particle size and shape distributions and reveal the energetics of single functional units. Energy resolution limited only by the uncertainty relation is enabled by these advances.

Current storage-ring-based light sources and their incremental enhancements cannot meet the need for femtosecond time resolution, nanometer spatial resolution, intrinsic energy resolution, full coherence over energy ranges up to hard X-rays, and peak brilliance required to enable the new science outlined in this report. In fact, the new, unexplored territory is so expansive that no single currently imagined light source technology can fulfill the whole potential. Both technological and economic challenges require resolution as we move forward. For example, femtosecond time resolution and high peak brilliance are required for following chemical reactions in real time, but lower peak brilliance and high repetition rate are needed to avoid radiation damage in high-resolution spatial imaging and to avoid space-charge broadening in photoelectron spectroscopy and microscopy.

But light sources alone are not enough. The photons produced by next-generation light sources must be measured by state-of-the-art experiments installed at fully equipped end stations. Sophisticated detectors with unprecedented spatial, temporal, and spectral resolution must be designed and created. The theory of ultrafast phenomena that have never before been observed must be developed and implemented. Enormous data sets of diffracted signals in reciprocal space and across wide energy ranges must be collected and analyzed in real time so that they can guide the ongoing experiments. These experimental challenges—end stations, detectors, sophisticated experiments, theory, and data handling—must be planned and provided for as part of the photon source. Furthermore, the materials and chemical processes to be studied, often in situ, must be synthesized and developed with equal care. These are the primary factors determining the scientific and technological return on the photon source investment.

Of equal or greater concern is the need for interdisciplinary platforms to solve the grand challenges of sustainable energy, climate change, information technology, biological complexity, and medicine. No longer are these challenges confined to one measurement or one scientific discipline. Fundamental problems in correlated electron materials, where charge, spin, and lattice modes interact strongly, require experiments in electron, neutron, and X-ray scattering that must be coordinated across platforms and user facilities and that integrate synthesis and theory as well. The model of users applying for one-time access to single-user facilities does not promote the coordinated, interdisciplinary approach needed to solve today's grand challenge problems. Next-generation light sources and other user facilities must learn to accommodate the interdisciplinary, cross-platform needs of modern grand challenge science. Only through the development of such future sources, appropriately integrated with advanced end stations and detectors and closely coupled with broader synthesis, measurement, theory, and modeling tools, can we meet the demands of a New Era of Science.



Directing Matter and Energy: Five Challenges for Science and the Imagination

This Basic Energy Sciences Advisory Committee (BESAC) Grand Challenges report identifies the most important scientific questions and science-driven technical challenges facing BES and describes the importance of these challenges to advances in disciplinary science, to technology development, and to energy and other societal needs. The report originated from a January 25, 2005, request from the Office of Science and is the product of numerous BESAC and Grand Challenges Subcommittee meetings and conferences in 2006-2007.

It is frequently said that any sufficiently advanced technology is indistinguishable from magic. Modern science stands at the beginning of what might seem by today's standards to be an almost magical leap forward in our understanding and control of matter, energy, and information at the molecular and atomic levels. Atoms—and the molecules they form through the sharing or exchanging of electrons—are the building blocks of the biological and non-biological materials that make up the world around us. In the 20th century, scientists continually improved their ability to observe and understand the interactions among atoms and molecules that determine material properties and processes. Now, scientists are positioned to begin directing those interactions and controlling the outcomes on a molecule-by-molecule and atom-by-atom basis, or even at the level of electrons. Long the staple of science-fiction novels and films, the ability to direct and control matter at the quantum, atomic, and molecular levels creates enormous opportunities across a wide spectrum of critical technologies. This ability will help us meet some of humanity's greatest needs, including the need for abundant, clean, and cheap energy. However, generating, storing, and distributing adequate and sustainable energy to the nation and the world will require a sea change in our ability to control matter and energy.

One of the most spectacular technological advances in the 20th century took place in the field of information, as computers and microchips became ubiquitous in our society. Vacuum tubes were replaced with transistors and, in accordance with Moore's Law (named for Intel co-founder Gordon Moore), the number of transistors on a microchip has doubled approximately every two years for the past two decades. However, if the time comes when integrated circuits can be fabricated at the molecular or nanoscale level, the limits of Moore's Law will be far surpassed. A supercomputer based on nanochips would comfortably fit in the palm of your hand and use less electricity than a cottage. All the information stored in the Library of Congress could be contained in a memory the size of a sugar cube. Ultimately, if computations can be carried out at the atomic or sub-nanoscale levels, today's most powerful microtechnology will seem as antiquated and slow as an abacus.

For the future, imagine a clean, cheap, and virtually unlimited supply of electrical power from solar-energy systems modeled on the photosynthetic processes utilized by green plants, and power lines that could transmit this electricity from the deserts of the Southwest to the Eastern Seaboard at nearly 100-percent efficiency. Imagine information and communications systems based on light rather than electrons that could predict when and where hurricanes make landfall, along with self-repairing materials that could survive those hurricanes. Imagine synthetic materials fully compatible and able to communicate with biological materials. This is speculative to be sure, but not so very far beyond the scope of possibilities.

Acquiring the ability to direct and control matter all the way down to the molecular, atomic, and electronic levels will require fundamental new knowledge in several critical areas. This report was commissioned to define those knowledge areas and the opportunities that lie beyond. Five interconnected Grand Challenges that will pave the way to a science of control are identified in the regime of science roughly defined by the Basic Energy Sciences portfolio, and recommendations are presented for what must be done to meet them.

FIVE GRAND CHALLENGES FOR BASIC ENERGY SCIENCES

  • How do we control material processes at the level of electrons?

    Electrons are the negatively charged subatomic particles whose dynamics determine materials properties and direct chemical, electrical, magnetic, and physical processes. If we can learn to direct and control material processes at the level of electrons, where the strange laws of quantum mechanics rule, it should pave the way for artificial photosynthesis and other highly efficient energy technologies, and could revolutionize computer technologies.

  • How do we design and perfect atom- and energy-efficient synthesis of revolutionary new forms of matter with tailored properties?

    Humans, through trial and error experiments or through lucky accidents, have been able to make only a tiny fraction of all the materials that are theoretically possible. If we can learn to design and create new materials with tailored properties, it could lead to low-cost photovoltaics, self-repairing and self-regulating devices, integrated photonic (light-based) technologies, and nano-sized electronic and mechanical devices.

  • How do remarkable properties of matter emerge from complex correlations of the atomic or electronic constituents and how can we control these properties?

    Emergent phenomena, in which a complex outcome emerges from the correlated interactions of many simple constituents, can be widely seen in nature, as in the interactions of neurons in the human brain that result in the mind, the freezing of water, or the giant magnetoresistance effect that underlies disk-drive read heads. If we can learn the fundamental rules of correlations and emergence and then learn how to control them, we could produce, among many possibilities, an entirely new generation of materials that supersede present-day semiconductors and superconductors.

  • How can we master energy and information on the nanoscale to create new technologies with capabilities rivaling those of living things?

    Biology is nature's version of nanotechnology, though the capabilities of biological systems can exceed those of human technologies by a vast margin. If we can understand biological functions and harness nanotechnologies with capabilities as effective as those of biological systems, it should clear the way towards profound advances in a great many scientific fields, including energy and information technologies.

  • How do we characterize and control matter away—especially very far away—from equilibrium?

    All natural and most human-induced phenomena occur in systems that are away from equilibrium, the state in which a system would no longer change with time. If we can understand system effects that take place away—especially very far away—from equilibrium and learn to control them, it could yield dramatic new energy-capture and energy-storage technologies, greatly improve our predictions for molecular-level electronics, and enable new mitigation strategies for environmental damage.

    We now stand at the brink of a "Control Age" that could spark revolutionary changes in how we inhabit our planet, paving the way to a bright and sustainable future for us all. But answering the call of the five Grand Challenges for Basic Energy Sciences will require that we change our fundamental understanding of how nature works. This will necessitate a three-fold attack: new approaches to training and funding, development of instruments more precise and flexible than those used up to now for observational science, and creation of new theories and concepts beyond those we currently possess. The difficulties involved in this change of our understanding are huge, but the rewards for success should be extraordinary. If we succeed in meeting these five Grand Challenges, our ability to direct and control matter might one day be measured only by the limits of human imagination.



Basic Research Needs for Materials under Extreme Environments

This report is based on a BES Workshop on Basic Research Needs for Materials under Extreme Environments, June 11-13, 2007, convened to evaluate the potential for developing revolutionary new materials able to meet the demanding requirements of future energy technologies that expose materials to environmental extremes.

Never has the world been so acutely aware of the inextricably linked issues of energy, environment, economy, and security. As the economies of developing countries boom, so does their demand for energy. Today nearly a quarter of the world's population lacks electrical power, yet the demand for electricity is projected to more than double over the next two decades. Increased demand for energy to power factories, transport commodities and people, and heat and cool homes also results in increased CO2 emissions. In 2007 China, a major consumer of coal, surpassed the United States in overall carbon dioxide emissions. As global CO2 emissions grow, so does the urgency to produce energy from carbon-based sources more efficiently in the near term and to move to non-carbon-based energy sources, such as solar, hydrogen, or nuclear, in the longer term. As we look toward the future, two points are very clear: (1) the economy and security of this nation are critically dependent on a readily available, clean, and affordable energy supply; and (2) no single energy solution will meet all future energy demands, requiring investments in the development of multiple energy technologies.

Materials are central to every energy technology, and future energy technologies will place increasing demands on materials performance with respect to extremes in stress, strain, temperature, pressure, chemical reactivity, photon or radiation flux, and electric or magnetic fields. For example, today's state-of-the-art coal-fired power plants operate at about 35% efficiency. Increasing this efficiency to 60% using supercritical steam requires raising operating temperatures by nearly 50% and essentially doubling the operating pressures. Such operating conditions require new materials that can reliably withstand extreme thermal and pressure environments. To lower fuel consumption in transportation, future vehicles will demand lighter-weight, high-strength components. Next-generation nuclear fission reactors require materials capable of withstanding higher temperatures and higher radiation flux in highly corrosive environments for long periods of time without failure. These increasingly extreme operating environments accelerate the aging process in materials, leading to reduced performance and eventually to failure. If one extreme is harmful, two or more can be devastating. High temperature, for example, not only weakens chemical bonds, it also speeds up the chemical reactions of corrosion.
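
The thermodynamic logic behind these efficiency figures can be sketched with the ideal Carnot bound, eta = 1 - T_cold/T_hot (absolute temperatures). The steam temperatures below are typical literature values for subcritical and advanced supercritical plants, not figures from the report, and real plants operate well below the ideal bound:

    # Hedged sketch: ideal Carnot efficiency bound for a steam plant.
    # The steam and cooling-water temperatures are assumed typical values.
    T_COLD = 305.0  # heat-rejection temperature, kelvin (~32 C cooling water)

    def carnot_limit(t_steam_celsius, t_cold=T_COLD):
        """Upper bound on heat-engine efficiency: 1 - T_cold / T_hot."""
        return 1.0 - t_cold / (t_steam_celsius + 273.15)

    print(f"~540 C steam: ideal bound ~{carnot_limit(540):.0%}")  # ~62%
    print(f"~700 C steam: ideal bound ~{carnot_limit(700):.0%}")  # ~69%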

Often materials fail at one-tenth or less of their intrinsic limits, and we do not understand why. This failure of materials is a principal bottleneck for developing future energy technologies that require placing materials under increasingly extreme conditions. Reaching the intrinsic limit of materials performance requires understanding the atomic and molecular origins of this failure. This knowledge would enable an increase in materials performance of an order of magnitude or more. Further, understanding how these extreme environments affect the physical and chemical processes that occur in the bulk material and at its surface would open the door to employing these conditions to make entirely new classes of materials with greatly enhanced performance for future energy technologies. This knowledge will not be achieved by incremental advances in materials science. Indeed, it will be gained only by innovative basic research that unlocks the fundamentals of how extreme environments interact with materials and how these interactions can be controlled to reach the intrinsic limits of materials performance and to develop revolutionary new materials. These new materials would have enormous impact on the development of future energy technologies: extending lifetimes, increasing efficiencies, providing novel capabilities, and lowering costs. Beyond energy applications, these new materials would also be of great value in other areas of importance to this nation, including national security and industry, wherever robust, reliable materials are required.

This report summarizes the research directions identified by a Basic Energy Sciences Workshop on Basic Research Needs for Materials under Extreme Environments, held in June 2007. More than 140 invited scientists and engineers from academia, industry, and the national laboratories attended the workshop, along with representatives from other offices within the Department of Energy, including the National Nuclear Security Administration, the Office of Nuclear Energy, the Office of Energy Efficiency and Renewable Energy, and the Office of Fossil Energy. Prior to the workshop, a technology resource document, Technology and Applied R&D Needs for Materials under Extreme Environments, was prepared that provided the participants with an overview of current and future materials needs for energy technologies. The workshop began with a plenary session that outlined the technology needs and the state of the art in research of materials under extreme conditions. The workshop was then divided into four panels, each focusing on a specific type of extreme environment: Energetic Flux Extremes, Chemically Reactive Extremes, Thermomechanical Extremes, and Electromagnetic Extremes. The four panels were asked to assess the current status of research in each of these four areas and identify the most promising research directions that would bridge the current knowledge gaps in understanding how these four extreme environments impact materials at the atomic and molecular levels. The goal was to outline specific Priority Research Directions (PRDs) that would ultimately lead to the development of vastly improved materials across a broad range of future energy technologies. During the course of the workshop, a number of common themes emerged across these four panels, and a fifth panel was charged with identifying these cross-cutting research areas.

Photons and energetic particles can cause damage to materials that occurs over broad time and length scales. While initiation, characterized by localized melting and re-crystallization, may occur in fractions of a picosecond, this process can produce cascades of point defects that diffuse and agglomerate into larger clusters. These nanoscale clusters can eventually reach macroscopic dimensions, leading to decreased performance and failure. The panel on energetic flux extremes noted that this degradation and failure is a key barrier to achieving more efficient energy generation systems and limits the lifetime of materials used in photovoltaics, solar collectors, nuclear reactors, optics, electronics, and other energy and security systems used in extreme flux environments. The panel concluded that the ability to prevent this degradation from extreme fluxes is critically dependent on being able to elucidate the atomic- and molecular-level mechanisms of defect production and damage evolution triggered by single and multiple energetic particles and photons interacting with materials. Advances in characterization and computational tools have the potential to provide an unprecedented opportunity to elucidate these key mechanisms. In particular, ultrafast and ultra-high spatial resolution characterization tools will allow the initial atomic-scale damage events to be observed. Further, advanced computational capabilities have the potential to capture multiscale damage evolution from atomic to macroscopic dimensions. Elucidation of these mechanisms would allow the complex pathways of damage evolution from the atomic to the macroscopic scale to be understood. This knowledge would ultimately allow atomic and molecular structures to be manipulated in a predictable manner to create new materials that have extraordinary tolerance and can function within an extreme environment without property degradation. Further, it would provide revolutionary capabilities for synthesizing materials with novel structures or, alternatively, for forcing chemical reactions that normally result in damage to proceed along selected pathways that are either benign or that repair damage as it initiates.

Chemically reactive extreme environments are found in many advanced energy systems, including fuel cells, nuclear reactors, and batteries, among others. These conditions include aqueous and non-aqueous liquids (such as mineral acids, alcohols, and ionic liquids) and gaseous environments (such as hydrogen, ammonia, and steam). The panel evaluating extreme chemical environments concluded there is a lack of fundamental understanding of the thermodynamic and kinetic processes that occur at the atomic level in these important reactive environments. The chemically induced degradation of materials is initiated at the interface of a material with its environment. Chemical stability in these environments is often controlled by protective surfaces, either by self-healing, stable films that form on a surface (such as oxides) or by coatings applied to a surface. Besides providing surface stability, these films must also prevent facile mass transport of reactive species into the bulk of the material. While some films can have long lifetimes, increasing severity of environments can cause the films to break down, leading to costly materials failure. A major challenge therefore is to develop a new generation of surface layers that are extremely robust under aggressive chemical conditions. Before this can be accomplished, however, it is critical to understand the equilibrium and non-equilibrium thermodynamics and reaction kinetics that occur at the atomic level at the interface of the protective film with its environment. The stability of the film can be further complicated by differences in the material's morphology, structure, and defects. It is critical that these complex and interrelated chemical and physical processes be understood at the nanoscale using new capabilities in materials characterization and theory, modeling, and simulation. Armed with this information, it will be possible to develop a new generation of robust surface films to protect materials in extreme chemical environments. Further, this understanding will provide insight into developing films that can self-heal and into synthesizing new classes of materials with previously unimaginable stability in aggressive chemical environments.

The need for materials that can withstand thermomechanical extremes—high pressure and stress, strain and strain rate, and high and low temperature—is found across a broad range of energy technologies, such as efficient steam turbines and heat exchangers, fuel-efficient vehicles, and strong wind turbine blades. Failures of materials under thermomechanical extremes can be catastrophic and costly. The panel on thermomechanical extremes concluded that designing new materials with properties specifically tailored to withstand thermomechanical extremes must begin with understanding the fundamental chemical and physical processes involved in materials failure, extending from the nanoscale to the collective behavior at the macroscale. Further, the behavior of materials must be understood under static, quasistatic, and dynamic thermomechanical extremes. This requires learning how atoms and electrons move within a material under extremes to provide insight into defect production and the eventual evolution of defects into microstructural components, such as dislocations, voids, and grain boundaries. This will require advanced analytical tools that can study materials in situ as these defects originate and evolve. Once these processes are understood, it will be possible to predict the responses of materials under thermomechanical extremes using advanced computational tools. Further, this fundamental knowledge will open new avenues for designing and synthesizing materials with unique properties. Using these thermomechanical extremes will allow the very nature of chemical bonds to be tuned to produce revolutionary new materials, such as ultrahard materials.

As electrical energy demand grows, perhaps by more than 70% over the next 50 years, so does the need to develop materials capable of operating at extreme electric and magnetic fields. To develop future electrical energy technologies, new materials are needed for magnets capable of operating at higher fields in generators and motors, insulators resistant to higher electric fields and field gradients, and conductors/superconductors capable of carrying higher current at lower voltage. The panel on electromagnetic extremes concluded that the discovery and understanding of this broad range of new materials requires revealing and controlling the defects that occur at the nanoscale. Defects are responsible for the breakdown of insulators, yet defects are needed within local structures of superconductors to trap magnetic vortices. The ability to observe these defects as materials interact with electromagnetic extremes is just becoming available through advances in characterization tools offering increased spatial and temporal resolution. Understanding how these nanoscale defects evolve to affect the macroscale behavior of materials is a grand challenge, and advances in multiscale modeling are required to understand the behavior of materials under these extremes. Once the behavior of defects in materials is understood, materials could be designed to prevent dielectric breakdown or to enhance magnetic behavior. For example, composite materials having appropriate structures and properties could be tailored using nanoscale self-assembly techniques. The panel projected that understanding how electric and magnetic fields affect materials at the atomic and molecular level could lead to the ability to control materials properties and synthesis. Such control would lead to a new generation of materials that is just emerging today—such as electrooptic materials that can be switched between transparency and opacity through application of electric fields. Beyond energy applications, these tailored materials could have enormous importance in security, computing, electronics, and other applications.

During the course of the workshop, four recurring science issues emerged as important themes: (1) Achieving the Limits of Performance; (2) Exploiting Extreme Environments for Materials Design and Synthesis; (3) Characterization on the Scale of Fundamental Interactions; and (4) Predicting and Modeling Materials Performance. All four of the workshop panels identified the need to understand the complex and interrelated physical and chemical processes that control the various performance limits of materials subjected to extreme conditions as the major technical bottleneck in meeting future energy needs. Understanding most of these processes requires following the cascade of events that is initiated at atomic-level defects and progresses to changes in macroscopic materials properties. By understanding the various mechanisms by which materials fail, for example, it may be possible to increase the performance and lifetime limits of materials by an order of magnitude or more and thereby achieve the true limits of materials performance.

Understanding the atomic and molecular basis of the interaction of extreme environments with materials provides an exciting and unique opportunity to produce entirely new classes of materials. Today materials are made primarily by changing temperature, composition, and, sometimes, pressure. The panels concluded that extreme conditions—in the form of high temperatures, pressures, strain rates, radiation fluxes, or external fields, alone or in combination—can potentially be used as new "knobs" that can be manipulated for the synthesis of revolutionary new materials. All four of the extreme environments offer new strategies for controlling atomic- and molecular-level structure in unprecedented ways to produce materials with tailored functionalities.

Achieving the breakthroughs needed to understand the atomic and molecular processes that occur within the bulk and at the surfaces of materials in extreme environments will require advances in the final two cross-cutting areas, characterization and computation. Elucidating changes in structure and dynamics over broad timescales (femtoseconds to many seconds) and length scales (nanoscale to macroscale) is critical to realizing the revolutionary materials required for future energy technologies. Advances in characterization tools, including diffraction, scattering, spectroscopy, microscopy, and imaging, can provide this critical information. Of particular importance is the need to combine two or more of these characterization tools to permit so-called "multi-dimensional" analysis of materials and surfaces in situ. These advances will enable the elucidation of the fundamental chemical and physical mechanisms that are at the heart of materials performance (and failure) and catalyze the discovery of new materials required for the next generation of energy technologies.

Complementing these characterization techniques are the computational techniques required for modeling and predicting materials behavior under extreme conditions. Recent advances in theory and algorithms, coupled with enormous and growing computational power and ever more sophisticated experimental methods, are opening up exciting new possibilities for using predictive theory and simulation to design new materials and to predict the properties and performance required for extreme environments. New theoretical tools are needed to describe the new phenomena and processes that occur under extreme conditions. These various tools need to be integrated across broad length scales—atomic to macroscopic—to model and predict the properties of real materials in response to extreme environments. Together with advanced synthesis and characterization techniques, these new capabilities in theory and modeling offer powerful means to accelerate scientific discovery and shorten the development cycle from discovery to application.

In concluding the workshop, the panelists were confident that today's gaps in materials performance under extreme conditions could be bridged if the physical and chemical changes that occur in bulk materials and at the interface with the extreme environment could be understood from the atomic to the macroscopic scale. These complex and interrelated phenomena can be unraveled as advances are realized in characterization and computational tools. These advances will allow structural changes, including defects, to be observed in real time and then modeled so that the response of materials can be predicted. The concept of exploiting these extreme environments to create revolutionary new materials was viewed as particularly exciting. Adding these parameters to the toolkit of materials synthesis opens unimaginable possibilities for developing materials with tailored properties. The knowledge needed for bridging these technology gaps requires significant investment in basic research, and this research needs to be coupled closely with the applied research and technology communities and the industries that will drive future energy technologies. These investments in fundamental research on materials under extreme conditions will have a major impact on the development of technologies that can meet future requirements for abundant, affordable, and clean energy. Moreover, this research will enable the development of materials with much broader impact in other applications critical to the security and economy of this nation.



Basic Research Needs: Catalysis for Energy

This report is based on a BES Workshop on Basic Research Needs in Catalysis for Energy Applications, August 6-8, 2007, to identify research needs and opportunities for catalysis to meet the nation's energy needs, provide an assessment of where the science and technology now stand, and recommend the directions for fundamental research that should be pursued to meet the goals described.

The United States continues to rely on petroleum and natural gas as its primary sources of fuels. As the domestic reserves of these feedstocks decline, the volumes of imported fuels grow, and the environmental impacts of fossil fuel combustion become more severe, we as a nation must earnestly reassess our energy future.

Catalysis—the essential technology for accelerating and directing chemical transformation—is the key to realizing environmentally friendly, economical processes for the conversion of fossil energy feedstocks. Catalysis also is the key to developing new technologies for converting alternative feedstocks, such as biomass, carbon dioxide, and water.

With the declining availability of light petroleum feedstocks that are high in hydrogen and low in sulfur and nitrogen, energy producers are turning to ever-heavier fossil feedstocks, including heavy oils, tar sands, shale oil, and coal. Unfortunately, the heavy feedstocks yield less fuel than light petroleum and contain more sulfur and nitrogen. To meet the demands for fuels, a deep understanding of the chemistry of complex fossil-energy feedstocks will be required, together with a comparable understanding of how to design catalysts for processing these feedstocks.

The United States has the capacity to grow and convert enough biomass to replace nearly a third of the nation's current gasoline use. Building on catalysis for petroleum conversion, researchers have identified potential catalytic routes for biomass. However, biomass differs so much in composition and reactivity from fossil fuels that this starting point is inadequate. The technology for economically converting biomass into widely usable fuels does not exist, and the science underpinning its development is only now starting to emerge.

The challenge is to understand the chemistry by which cellulose- and lignin-derived molecules are converted to fuels and to use this knowledge as a basis for identifying the needed catalysts. To obtain energy densities similar to those of currently used fuels, the products of biomass conversion must have an oxygen content lower than that of the parent biomass. Oxygen must be removed by using hydrogen derived from biomass or other sources in a manner that minimizes the yield of carbon dioxide as a byproduct.
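
As an illustrative contrast (the specific reactions below are our example, not the report's), compare two balanced routes from a glucose unit. Fermentation rejects a third of the carbon as CO2, whereas hydrodeoxygenation with externally supplied hydrogen retains all six carbons in the fuel:

    C6H12O6 → 2 C2H5OH + 2 CO2          (fermentation: 2 of 6 carbons lost as CO2)
    C6H12O6 + 7 H2 → C6H14 + 6 H2O      (hydrodeoxygenation: all 6 carbons retained)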

Catalytic conversion of carbon dioxide into liquid fuels using solar and electrical energy would enable the carbon in carbon dioxide to be recycled into fuels, thereby reducing its contribution to atmospheric warming. Likewise, the catalytic generation of hydrogen from water could provide a carbon-free source of hydrogen for fuel and for processing of fossil and biomass feedstocks. The underlying science, however, is far from sufficient for the design of efficient catalysts and economical processes.

Grand Challenges

To realize the full potential of catalysis for energy applications, scientists must develop a profound understanding of catalytic transformations so that they can design and build effective catalysts with atom-by-atom precision and convert reactants to products with molecular precision. Moreover, they must build tools to make real-time, spatially resolved measurements of operating catalysts. Ultimately, scientists must use these tools to achieve a fundamental understanding of catalytic processes occurring in multiscale, multiphase environments.

The first grand challenge identified in this report centers on understanding mechanisms and dynamics of catalyzed reactions. Catalysis involves chemical transformations that must be understood at the atomic scale because catalytic reactions present an intricate dance of chemical bond-breaking and bond-forming steps. Structures of solid catalyst surfaces, where the reactions occur on only a few isolated sites and in the presence of highly complex mixtures of molecules interacting with the surface in myriad ways, are extremely difficult to describe.

To discover new knowledge about mechanisms and dynamics of catalyzed reactions, scientists need to image surfaces at the atomic scale and probe the structures and energetics of the reacting molecules on varying time and length scales. They also need to apply theory to validate the results.

The difficulties of developing a clear understanding of the mechanisms and dynamics of catalyzed reactions are magnified by the high temperatures and pressures at which the reactions occur and the influence of the molecules undergoing transformation on the catalyst. The catalyst structure changes as the reacting molecules become part of it en route to forming products. Although the scientific challenge of understanding catalyst structure and function is great, recent advances in characterization science and facilities provide the means for meeting it in the long term.

The second grand challenge in the report centers on design and controlled synthesis of catalyst structures. Fundamental investigations of catalyst structures and the mechanisms of catalytic reactions provide the necessary foundation for the synthesis of improved catalysts. Theory can serve as a predictive design tool, guiding synthetic approaches for construction of materials with precisely designed catalytic surface structures at the nano and atomic scales.

Success in the design and controlled synthesis of catalytic structures requires an interplay between (1) characterization of catalysts as they function, including evaluation of their performance under technologically realistic conditions, and (2) synthesis of catalyst structures to achieve high activity and product selectivity.

Priority Research Directions

The workshop process identified three priority research directions for advancing catalysis science for energy applications:

Advanced catalysts for the conversion of heavy fossil energy feedstocks

The depletion of light, sweet crude oil has caused increasing use of heavy oils and other heavy feedstocks. The complicated nature of the molecules in these feedstocks, as well as their high heteroatom contents, requires catalysts and processing routes entirely different from those used in today's petroleum refineries.

To advance catalytic technologies for converting heavy feedstocks, scientists must (1) identify and quantify the heavy molecules (now possible with methods such as high-resolution mass spectrometry) and (2) obtain data representing the reactivities of these molecules in the presence of the countless other kinds of molecules interacting with the catalysts.

Methods for determining reactivities of individual compounds within complex feedstocks reacting under industrial conditions soon will be available. Reactivity data, when combined with fundamental understanding of how the reactants interact with the catalysts, will facilitate the selection of new catalysts for heavy feedstocks and the prediction of properties of the fuels produced.

Understanding the chemistry of lignocellulosic biomass deconstruction and conversion to fuels

The United States potentially could harvest 1.3 billion tons of biomass annually. Converting this resource to ethanol would produce more than 60 billion gallons/year, enough to replace 30 percent of the nation's current gasoline use.
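
The arithmetic behind these figures can be checked roughly. In the sketch below, the report supplies the biomass and ethanol volumes; the U.S. gasoline consumption and the ethanol-to-gasoline energy ratio are outside assumptions introduced for illustration:

    # Rough consistency check; the last two inputs are assumptions, not report figures.
    biomass_tons = 1.3e9   # tons of biomass per year (report figure)
    ethanol_gal  = 60e9    # gallons of ethanol per year (report figure)
    gasoline_gal = 140e9   # assumed U.S. gasoline use, gallons/year (circa 2007)
    energy_ratio = 0.66    # assumed energy per gallon, ethanol relative to gasoline

    print(f"implied yield: ~{ethanol_gal / biomass_tons:.0f} gal ethanol per ton")
    print(f"gasoline displaced (energy basis): ~{ethanol_gal * energy_ratio / gasoline_gal:.0%}")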

Scientists must develop fundamental understanding of biomass deconstruction, either through high-temperature pyrolysis or low-temperature catalytic conversion, before engineers can create commercial biomass conversion technologies. Pyrolysis generates gases and liquids for processing into fuels or blending with existing petroleum refinery streams. Low-temperature deconstruction produces sugars and lignin for conversion into molecules with higher energy densities than the parent biomass.

Scientists also must discover and develop new catalysts for targeted transformations of these biomass-derived molecules into fuels. Developing a molecular-scale understanding of deconstruction and conversion of biomass products to fuels would contribute to the development of optimal processes for particular biomass sources. Knowledge of how catalyst structure and composition affect the kinetics of individual processes could lead to new catalysts with properties adjusted for maximum activity and selectivity for high- and low-temperature processing of biomass.

Photo- and electro-driven conversions of carbon dioxide and water

Catalytic conversion of carbon dioxide to liquid fuels facilitated by the input of solar or electrical energy presents an immense opportunity for new sources of energy. Furthermore, the catalytic generation of hydrogen from water could provide a carbon-free source of hydrogen for fuel and for processing of fossil and biomass feedstocks. Although these electrolytic processes are possible, they are not yet economical, because they depend on expensive and rare materials, such as platinum, and require significantly more energy than the minimum dictated by thermodynamics.
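
The "minimum dictated by thermodynamics" can be made concrete for water electrolysis: the reversible cell voltage follows from the standard Gibbs free energy of the reaction, E = ΔG/(nF), about 1.23 V, and practical electrolyzers must run well above it. A minimal sketch using standard constants:

    # Reversible (thermodynamic minimum) voltage for water electrolysis.
    DELTA_G = 237.1e3    # standard Gibbs energy to split 1 mol of liquid water, J/mol
    N_ELECTRONS = 2      # electrons transferred per H2 molecule produced
    FARADAY = 96485.0    # Faraday constant, C/mol

    e_min = DELTA_G / (N_ELECTRONS * FARADAY)
    print(f"reversible cell voltage: ~{e_min:.2f} V")  # ~1.23 V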

Scientists have explored the use of photons to drive thermodynamically uphill reactions, but the efficiencies of the best-known processes are very low. To dramatically increase efficiencies, we need to understand the elementary processes by which photocatalysts and electrocatalysts operate and the phenomena that limit their effectiveness. This knowledge would guide the search for more efficient catalysts.

To address the challenge of increased efficiency, scientists must develop fundamental understanding based on novel spectroscopic methods that probe the surfaces of photocatalysts and electrocatalysts in the presence of liquid electrolytes. New catalysts will have to involve multiple-site structures and be able to drive the multiple-electron and hydrogen-transfer reactions required to produce fuels from carbon dioxide and water. Theoretical investigations also are needed to understand the manifold processes occurring on photocatalysts and electrocatalysts, many of which are unique to the conditions of their use. Basic research to address these challenges will result in fundamental knowledge and expertise crucial for developing efficient, durable, and scalable catalysts.

Crosscutting Research Issues

Two broad issues cut across the grand challenges and the priority research directions for development of efficient, economical, and environmentally friendly catalytic processes for energy applications:

Experimental characterization of catalysts as they function is a theme common to all the processes mentioned here—ranging from heavy feedstock refining to carbon dioxide conversion to fuels. The scientific community needs a fundamental understanding of catalyst structures and catalytic reaction mechanisms to design and prepare improved catalysts and processes for energy conversion. Attainment of this understanding requires development of new techniques and facilities for investigating catalysts as they function in the presence of complex, real feedstocks at high temperatures and pressures.

The community also needs improved methods for characterizing the feedstocks and products—to the point of identifying individual compounds in these complex mixtures. The dearth of information characterizing biomass-derived feedstocks and the growing complexity of the available heavy fossil feedstocks, as well as the intrinsic complexity of catalyst surfaces, magnify the difficulty of this challenge.

Implied in the need for better characterization is the need for advanced methods and instrument hardware and software far beyond today's capabilities. Improved spectroscopic and microscopic capabilities, specifically including synchrotron-based equipment and methods, will provide significantly enhanced temporal, spatial, and energy resolution of catalysts and new opportunities for elucidating their performance under realistic reaction conditions.

Achieving these crosscutting goals for better catalyst characterization will require breakthrough developments in techniques and much improved methodologies for combining multiple complementary techniques.

Advances in theory and computation are also required to significantly advance catalysis for energy applications. A major challenge is to understand the mechanisms and dynamics of catalyzed transformations, enabling rational design of catalysts. Molecular-level understanding is essential to "tune" a catalyst to produce the right products with minimal energy consumption and environmental impact. Applications of computational chemistry and methods derived from advanced chemical theory are crucial to the development of fundamental understanding of catalytic processes and ultimately to first-principles catalyst design. Development of this understanding requires breakthroughs in theoretical and computational methods to allow treatment of the complexity of the molecular reactants and condensed-phase and interfacial catalysts needed to convert new energy feedstocks to useful products.

Computation, when combined with advanced experimental techniques, is already leading to broad new insights into catalyst behavior and the design of new materials. The development of new theories and computational tools that accurately predict thermodynamic properties, dynamical behavior, and coupled kinetics of complex condensed-phase and interfacial processes is a crosscutting priority research direction to address the grand challenges of catalysis science, especially in the area of advanced energy technologies.

Scientific and Technological Impact

The urgent need for fuels in an era of declining resources and pressing environmental concerns demands a resurgence in catalysis science, requiring a massive commitment of programmatic leadership and improved experimental and theoretical methods. These elements will make it possible to follow, in real time, catalytic reactions on an atomic scale on surfaces that are nonuniform and laden with large molecules undergoing complex competing processes. The understanding that will emerge promises to engender technology for economical catalytic processing of ever more challenging fossil feedstocks and for breakthroughs needed to create an industry for energy production from biomass. These new technologies are needed for a sustainable supply of energy from domestic sources and mitigation of the problem of greenhouse gas emissions.



Future Science Needs and Opportunities for Electron Scattering: Next-Generation Instrumentation and Beyond

This report is based on a BES Workshop entitled "Future Science Needs and Opportunities for Electron Scattering: Next-Generation Instrumentation and Beyond," March 1-2, 2007, to identify emerging basic science and engineering research needs and opportunities that will require major advances in electron-scattering theory, technology, and instrumentation.

The workshop was organized to help define the scientific context and strategic priorities for the U.S. Department of Energy's Office of Basic Energy Sciences (DOE-BES) electron-scattering development for materials characterization over the next decade and beyond. Attendees represented university, national laboratory, and commercial research organizations from the United States and around the world. The workshop comprised plenary sessions, breakout groups, and joint open discussion summary sessions. Complete information about this workshop is available at http://www.amc.anl.gov/DoE-ElectronScatteringWorkshop-2007

SCIENTIFIC CHALLENGES FACING THE CHARACTERIZATION OF MATERIALS

In the last 40 years, advances in instrumentation have gradually increased the resolution capabilities of commercial electron microscopes. Within the last decade, however, a revolution has occurred, facilitating 1-nm resolution in the scanning electron microscope and sub-Ångstrom resolution in the transmission electron microscope. This revolution was a direct result of decades-long research efforts concentrating on electron optics, both theoretically and in practice, leading to implementation of aberration correctors that employ multi-pole electron lenses. While this improvement has been a remarkable achievement, it has also inspired the scientific community to ask what other capabilities are required beyond "image resolution" to more fully address the scientific problems of today's technologically complex materials. During this workshop, a number of scientific challenges requiring breakthroughs in electron scattering and/or instrumentation for characterization of materials were identified. Although the individual scientific problems identified in the workshop were wide-ranging, they are well represented by seven major scientific challenges. These are listed in Table 1, together with their associated application areas as proposed by workshop attendees. Addressing these challenges will require dedicated long-term developmental efforts similar to those that have been applied to the electron optics revolution. This report summarizes the scientific challenges identified by attendees and then outlines the technological issues that need to be addressed by a long-term research and development (R&D) effort to overcome these challenges.

TECHNOLOGICAL CHALLENGES

A recurring message voiced during the meeting was that, while improved image resolution in commercially available tools is significant, this is only the first of many breakthroughs required to answer today's most challenging problems. The major technological issues that were identified, as well as a measure of their relative priority, appear in Table 2. These issues require not only the development of innovative instrumentation but also new analytical procedures that connect experiment, theory, and modeling.

Table 1. Scientific Challenges and Application Areas Identified during the Workshop

  1. The nanoscale origin of macroscopic properties
     Applications: high-performance 21st century materials in both structural engineering and electronic applications

  2. The role of individual atoms, point defects, and dopants in materials
     Applications: semiconductors, catalysts, quantum phenomena and confinement, fracture, embrittlement, solar energy, nuclear power, radiation damage

  3. Characterization of interfaces at arbitrary orientations
     Applications: semiconductors, three-dimensional geometries for nanostructures, grain-boundary-dominated processes, hydrogen storage

  4. The interface between ordered and disordered materials
     Applications: dynamic behavior of the liquid-solid interface, organic/inorganic interfaces, friction/wear, grain boundaries, welding, polymer/metal/oxide composites, self-assembly

  5. Mapping of electromagnetic (EM) fields in and around nanoscale matter
     Applications: ferroelectric/magnetic structures, switching, tunneling and transport, quantum confinement/proximity, superconductivity

  6. Probing structures in their native environments
     Applications: catalysis, fuel cells, organic/inorganic interfaces, functionalized nanoparticles for health care, polymers, biomolecular processes, biomaterials, soft condensed matter, non-vacuum environments

  7. The behavior of matter far from equilibrium
     Applications: high radiation, high-pressure and high-temperature environments, dynamic/transient behavior, nuclear and fusion energy, outer space, nucleation, growth and synthesis in solution, corrosion, phase transformations

Table 2. Functionality Required to Address the Challenges in Table 1

  1. In-situ environments permitting observation of processes under conditions that replicate real-world/real-time conditions (temperature, pressure, atmosphere, EM fields, fluids) with minimal loss of image and/or spectral resolution. Priority: A

  2. Detectors that enhance by more than an order of magnitude the temporal, spatial, and/or collection efficiency of existing technologies for electrons, photons, and/or X-rays. Priority: A

  3. Higher-temporal-resolution instruments for dynamic studies with a continuous range of operating conditions from microseconds to femtoseconds. Priority: A

  4. Sources having higher brightness, temporal resolution, and polarization. Priority: B

  5. Electron-optical configurations designed to study complex interactions of nanoscale objects under multiple excitation processes (photons, fields, …). Priority: B

  6. Virtualized instruments operating in connection with experimental tools, allowing real-time quantitative data analysis or simulation, plus community software tools for routine and robust data analysis. Priority: C

Some research efforts have already begun to address these topics. However, a dedicated and coordinated approach is needed to address these challenges more rapidly. For example, the principles of aberration correction for electron-optical lenses were established theoretically by Scherzer (Zeitschrift für Physik 101(9-10), 593-603) in 1936, but practical implementation was not realized until 1997 (a 61-year development cycle). Reducing development time to less than a decade is essential in addressing the scientific issues in the ever-growing nanoscale materials world. To accomplish this, DOE should make a concerted effort to revise how it funds advanced resources and R&D for electron beam instrumentation across its programs.



Basic Research Needs for Electrical Energy Storage

This report is based on a BES Workshop on Basic Research Needs for Electrical Energy Storage (EES), April 2-4, 2007, to identify basic research needs and opportunities underlying batteries, capacitors, and related EES technologies, with a focus on new or emerging science challenges with potential for significant long-term impact on the efficient storage and release of electrical energy.

The projected doubling of world energy consumption within the next 50 years, coupled with the growing demand for low- or even zero-emission sources of energy, has brought increasing awareness of the need for efficient, clean, and renewable energy sources. Energy based on electricity that can be generated from renewable sources, such as solar or wind, offers enormous potential for meeting future energy demands. However, the use of electricity generated from these intermittent, renewable sources requires efficient electrical energy storage. For commercial and residential grid applications, electricity must be reliably available 24 hours a day; even second-to-second fluctuations cause major disruptions with costs estimated to be tens of billions of dollars annually. Thus, for large-scale solar- or wind-based electrical generation to be practical, the development of new EES systems will be critical to meeting continuous energy demands and effectively leveling the cyclic nature of these energy sources. In addition, greatly improved EES systems are needed to progress from today's hybrid electric vehicles to plug-in hybrids or all-electric vehicles. Improvements in EES reliability and safety are also needed to prevent premature, and sometimes catastrophic, device failure. Chemical energy storage devices (batteries) and electrochemical capacitors (ECs) are among the leading EES technologies today. Both are based on electrochemistry, and the fundamental difference between them is that batteries store energy in chemical reactants capable of generating charge, whereas electrochemical capacitors store energy directly as charge.
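
That distinction has a simple textbook consequence (standard relations, not figures from the report): a battery delivers its charge at a roughly constant cell voltage, so its energy is approximately Q·V, whereas a capacitor's voltage falls linearly as it discharges, giving E = CV^2/2, i.e., only QV/2 for the same charge and peak voltage. A minimal sketch:

    # Textbook contrast between battery and capacitor energy for equal charge
    # and peak voltage; the 1 Ah / 3 V example cell is purely illustrative.
    charge = 3600.0  # coulombs of stored charge (1 ampere-hour)
    voltage = 3.0    # battery cell voltage, or capacitor peak voltage, volts

    battery_energy = charge * voltage          # ~10.8 kJ at near-constant voltage
    capacitor_energy = 0.5 * charge * voltage  # ~5.4 kJ, since voltage falls to zero

    print(battery_energy, capacitor_energy)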

The performance of current EES technologies falls well short of requirements for using electrical energy efficiently in transportation, commercial, and residential applications. For example, EES devices with substantially higher energy and power densities and faster recharge times are needed if all-electric/plug-in hybrid vehicles are to be deployed broadly as replacements for gasoline-powered vehicles. Although EES devices have been available for many decades, there are many fundamental gaps in understanding the atomic- and molecular-level processes that govern their operation, performance limitations, and failure. Fundamental research is critically needed to uncover the underlying principles that govern these complex and interrelated processes. With a full understanding of these processes, new concepts can be formulated for addressing present EES technology gaps and meeting future energy storage requirements.

BES worked closely with the DOE Office of Energy Efficiency and Renewable Energy and the DOE Office of Electricity Delivery and Energy Reliability to clearly define future requirements for EES from the perspective of applications relevant to transportation and electricity distribution, respectively, and to identify critical technology gaps. In addition, leaders in EES industrial and applied research laboratories were recruited to prepare a technology resource document, Technology and Applied R&D Needs for Electrical Energy Storage, which provided the groundwork for the workshop and informed the attendees' discussions of basic research needs. The invited workshop attendees, numbering more than 130, included representatives from universities, national laboratories, and industry, among them a significant number of scientists from Japan and Europe. A plenary session at the beginning of the workshop captured the present state of the art in research and development and the technology needs required for EES in the future. The workshop participants were asked to identify key priority research directions that hold particular promise for providing needed advances that will, in turn, revolutionize the performance of EES. Participants were divided between two panels focusing on the major types of EES: chemical energy storage and capacitive energy storage. A third panel focused on cross-cutting research that will be critical to achieving the technical breakthroughs required to meet future EES needs. A closing plenary session summarized the most urgent research needs identified for both chemical and capacitive energy storage. The research directions identified by the panelists are presented in this report in three sections corresponding to the findings of the three workshop panels.

The panel on chemical energy storage acknowledged that progressing to the higher energy and power densities required for future batteries will push materials to the edge of stability; yet these devices must be safe and reliable through thousands of rapid charge-discharge cycles. A major challenge for chemical energy storage is developing the ability to store more energy while maintaining stable electrode-electrolyte interfaces. The need to mitigate the volume and structural changes to the active electrode sites accompanying the charge-discharge cycle encourages exploration of nanoscale structures. Recent developments in nanostructured and multifunctional materials were singled out as having the potential to dramatically increase energy capacity and power densities. However, an understanding of nanoscale phenomena is needed to take full advantage of the unique chemistry and physics that can occur at the nanoscale. Further, there is an urgent need to develop a fundamental understanding of the interdependence of the electrolyte and electrode materials, especially with respect to controlling charge transfer from the electrode to the electrolyte. Combining the power of new computational capabilities and in situ analytical tools could open up entirely new avenues for designing novel multifunctional nanomaterials with the desired physical and chemical properties, leading to greatly enhanced performance.

The panel on capacitive storage recognized that, in general, ECs have higher power densities than batteries, as well as sub-second response times. However, energy storage densities are currently lower than they are for batteries and are insufficient for many applications. As with batteries, the need for higher energy densities requires new materials. Similarly, advances in electrolytes are needed to increase voltage and conductivity while ensuring stability. Understanding how materials store and transport charge at electrode-electrolyte interfaces is critically important and will require a fundamental understanding of charge transfer and transport mechanisms. The capability to synthesize nanostructured electrodes with tailored, high-surface-area architectures offers the potential for storing multiple charges at a single site, increasing charge density. The addition of surface functionalities could also contribute to high and reproducible charge storage capabilities, as well as rapid charge-discharge functions. The design of new materials with tailored architectures optimized for effective capacitive charge storage will be catalyzed by new computational and analytical tools that can provide the needed foundation for the rational design of these multifunctional materials. These tools will also provide the molecular-level insights required to establish the physical and chemical criteria for attaining higher voltages, higher ionic conductivity, and wide electrochemical and thermal stability in electrolytes.

The third panel identified four cross-cutting research directions that were considered to be critical for meeting future technology needs in EES:

  1. Advances in Characterization
  2. Nanostructured Materials
  3. Innovations in Electrolytes
  4. Theory, Modeling, and Simulation

Exceptional insight into the physical and chemical phenomena that underlie the operation of energy storage devices can be afforded by a new generation of analytical tools. This information will catalyze the development of new materials and processes required for future EES systems. New in situ photon- and particle-based microscopic, spectroscopic, and scattering techniques with time resolution down to the femtosecond range and spatial resolution spanning the atomic and mesoscopic scales are needed to meet the challenge of developing future EES systems. These measurements are critical to achieving the ability to design EES systems rationally, including materials and novel architectures that exhibit optimal performance. This information will help identify the underlying reasons behind failure modes and afford directions for mitigating them.

The performance of energy storage systems is limited by the performance of the constituent materials—including active materials, conductors, and inert additives. Recent research suggests that synthetic control of material architectures (including pore size, structure, and composition; particle size and composition; and electrode structure down to nanoscale dimensions) could lead to transformational breakthroughs in key energy storage parameters such as capacity, power, charge-discharge rates, and lifetimes. Investigation of model systems of irreducible complexity will require the close coupling of theory and experiment in conjunction with well-defined structures to elucidate fundamental materials properties. Novel approaches are needed to develop multifunctional materials that are self-healing, self-regulating, failure-tolerant, impurity-sequestering, and sustainable. Advances in nanoscience offer particularly exciting possibilities for the development of revolutionary three-dimensional architectures that simultaneously optimize ion and electron transport and capacity.

The design of EES systems with long cycle lifetimes and high energy-storage capacities will require a fundamental understanding of charge transfer and transport processes. The interfaces of electrodes with electrolytes are astonishingly complex and dynamic. The dynamic structures of interfaces need to be characterized so that the paths of electrons and attendant trafficking of ions may be directed with exquisite fidelity. New capabilities are needed to "observe" the dynamic composition and structure at an electrode surface, in real time, during charge transport and transfer processes. With this underpinning knowledge, wholly new concepts in materials design can be developed for producing materials that are capable of storing higher energy densities and have long cycle lifetimes.

A characteristic common to chemical and capacitive energy storage devices is that the electrolyte transfers ions/charge between electrodes during charge and discharge cycles. An ideal electrolyte provides high conductivity over a broad temperature range, is chemically and electrochemically inert at the electrode, and is inherently safe. Too often the electrolyte is the weak link in the energy storage system, limiting both the performance and the reliability of EES. At present, the myriad interactions that occur in electrolyte systems—ion-ion, ion-solvent, and ion-electrode—are poorly understood. Fundamental research will provide the knowledge needed to formulate novel designed electrolytes, such as ionic liquids and nanocomposite polymer electrolytes, with enhanced performance and lifetimes.

Advances in fundamental theoretical methodologies and computer technologies provide an unparalleled opportunity for understanding the complexities of processes and materials needed to make the groundbreaking discoveries that will lead to the next generation of EES. Theory, modeling, and simulation can effectively complement experimental efforts and can provide insight into mechanisms, predict trends, identify new materials, and guide experiments. Large multiscale computations that integrate methods at different time and length scales have the potential to provide a fundamental understanding of processes such as phase transitions in electrode materials, ion transport in electrolytes, charge transfer at interfaces, and electronic transport in electrodes.

Revolutionary breakthroughs in EES have been singled out as perhaps the most crucial need for this nation's secure energy future. The BES Workshop on Basic Research Needs for Electrical Energy Storage concluded that the breakthroughs required for tomorrow's energy storage needs will not be realized with incremental evolutionary improvements in existing technologies. Rather, they will be realized only with fundamental research to understand the underlying processes involved in EES, which will in turn enable the development of novel EES concepts that incorporate revolutionary new materials and chemical processes. Recent advances have provided the ability to synthesize novel nanoscale materials with architectures tailored for specific performance; to characterize materials and dynamic chemical processes at the atomic and molecular level; and to simulate and predict structural and functional relationships using modern computational tools. Together, these new capabilities provide unprecedented potential for addressing technology and performance gaps in EES devices.



Basic Research Needs for Geosciences: Facilitating 21st Century Energy Systems

This report is based on a BES Workshop on Basic Research Needs for Geosciences: Facilitating 21st Century Energy Systems, February 21-23, 2007, convened to identify the geosciences research needed for improved energy systems, including the behavior of multiphase fluid-solid systems on a variety of scales, chemical migration processes in geologic media, characterization of geologic systems, and modeling and simulation of geologic systems.

Serious challenges must be faced in this century as the world seeks to meet global energy needs and at the same time reduce emissions of greenhouse gases to the atmosphere. Even with a growing energy supply from alternative sources, fossil carbon resources will remain in heavy use and will generate large volumes of carbon dioxide (CO2). To reduce the atmospheric impact of this fossil energy use, it is necessary to capture and sequester a substantial fraction of the produced CO2. Subsurface geologic formations offer a potential location for long-term storage of the requisite large volumes of CO2. Nuclear energy resources could also reduce use of carbon-based fuels and CO2 generation, especially if nuclear energy capacity is greatly increased. Nuclear power generation results in spent nuclear fuel and other radioactive materials that also must be sequestered underground. Hence, regardless of technology choices, there will be major increases in the demand to store materials underground in large quantities, for long times, and with increasing efficiency and safety margins.

Rock formations are composed of complex natural materials and were not designed by nature as storage vaults. If new energy technologies are to be developed in a timely fashion while ensuring public safety, fundamental improvements are needed in our understanding of how these rock formations will perform as storage systems.

This report describes the scientific challenges associated with geologic sequestration of large volumes of carbon dioxide for hundreds of years, and also addresses the geoscientific aspects of safely storing nuclear waste materials for thousands to hundreds of thousands of years. The fundamental crosscutting challenge is to understand the properties and processes associated with complex and heterogeneous subsurface mineral assemblages comprising porous rock formations, and the equally complex fluids that may reside within and flow through those formations. The relevant physical and chemical interactions occur on spatial scales that range from those of atoms, molecules, and mineral surfaces, up to tens of kilometers, and time scales that range from picoseconds to millennia and longer. To predict with confidence the transport and fate of either CO2 or the various components of stored nuclear materials, we need to learn to better describe fundamental atomic, molecular, and biological processes, and to translate those microscale descriptions into macroscopic properties of materials and fluids. We also need fundamental advances in the ability to simulate multiscale systems as they are perturbed during sequestration activities and for very long times afterward, and to monitor those systems in real time with increasing spatial and temporal resolution. The ultimate objective is to predict accurately the performance of the subsurface fluid-rock storage systems, and to verify enough of the predicted performance with direct observations to build confidence that the systems will meet their design targets as well as environmental protection goals.

The report summarizes the results and conclusions of a Workshop on Basic Research Needs for Geosciences held in February 2007. Five panels met, resulting in four Panel Reports, three Grand Challenges, six Priority Research Directions, and three Crosscutting Research Issues. The Grand Challenges differ from the Priority Research Directions in that the former describe broader, long-term objectives while the latter are more focused.

GRAND CHALLENGES

Computational thermodynamics of complex fluids and solids. Predictions of geochemical transport in natural materials must start with detailed knowledge of the chemical properties of multicomponent fluids and solids. New modeling strategies for geochemical systems based on first-principles methods are required, as well as reliable tools for translating atomic- and molecular-scale descriptions to the many orders of magnitude larger scales of subsurface geologic systems. Specific challenges include calculation of equilibrium constants and kinetics of heterogeneous reactions, descriptions of adsorption and other mineral surface processes, properties of transuranic elements and compounds, and mixing and transport properties for multicomponent liquid, solid, and supercritical solutions. Significant advances are required in calculations based on the electronic Schrödinger equation, scaling of solution methods, and representation in terms of equations of state. Calibration of models with a new generation of experiments will be critical.
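The thermodynamic link invoked here is the standard relation between reaction free energy and equilibrium constant,

    \Delta G^{\circ} = -RT \ln K,

which makes the accuracy requirement concrete: since RT\ln 10 \approx 5.7 kJ/mol at 298 K, a first-principles free-energy error of only about 6 kJ/mol shifts a predicted equilibrium constant by an order of magnitude.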

Integrated characterization, modeling, and monitoring of geologic systems. Characterization of the subsurface is inextricably linked to the modeling and monitoring of processes occurring there. More accurate descriptions of the behavior of subsurface storage systems will require that the diverse, independent approaches currently used for characterizing, modeling and monitoring be linked in a revolutionary and comprehensive way and carried out simultaneously. The challenges arise from the inaccessibility and complexity of the subsurface, the wide range of scales of variability, and the potential role of coupled nonlinear processes. Progress in subsurface simulation requires advances in the application of geological process knowledge for determining model structure and the effective integration of geochemical and high-resolution geophysical measurements into model development and parameterization. To fully integrate characterization and modeling will require advances in methods for joint inversion of coupled process models that effectively represent nonlinearities, scale effects, and uncertainties.

Simulation of multiscale geologic systems for ultra-long times. Anthropogenic perturbations of subsurface storage systems will occur over decades, but predictions of storage performance will be needed that span hundreds to many thousands of years, time scales that reach far beyond standard engineering practice. Achieving this simulation capability requires a major advance in modeling capability that accurately couples information across scales, i.e., accounts for the effects of small-scale processes on larger scales, and for fast processes as well as ultra-slow evolution on long time scales. Cross-scale modeling of complex dynamic subsurface systems requires the development of new computational and numerical methods for stochastic systems, new multiscale formulations, data integration, improvements in inverse theory, and new methods for optimization.

PRIORITY RESEARCH DIRECTIONS

Mineral-water interface complexity and dynamics. Natural materials are structurally complex, with variable composition, roughness, defect content, and organic and mineral coatings. There is an overarching need to interrogate the complex structure and dynamics at mineral-water interfaces with increasing spatial and temporal resolution using existing and emerging experimental and computational approaches. The fundamental objectives are to translate a molecular-scale description of complex mineral surfaces to thermodynamic quantities for the purpose of linking with macroscopic models, to follow interfacial reactions in real time, and to understand how minerals grow and dissolve and how the mechanisms couple dynamically to changes at the interface.

Nanoparticulate and colloid chemistry and physics. Colloidal particles play critical roles in dispersion of contaminants from energy production, use, or waste isolation sites. New advances are needed in characterization of colloids, sampling technologies, and conceptual models for reactivity, fate, and transport of colloidal particles in aqueous environments. Specific advances will be needed in experimental techniques to characterize colloids at the atomic level and to build quantitative models of their properties and reactivity.

Dynamic imaging of flow and transport. Improved imaging in the subsurface is needed to allow in situ multiscale measurement of state variables as well as flow, transport, fluid age, and reaction rates. Specific research needs include development of smart tracers, identification of environmental tracers that would allow age dating of fluids in the 50-3000 year range, methods for measuring state variables such as pressure and temperature continuously in space and time, and better models for the interactions of physical fields, elastic waves, or electromagnetic perturbations with fluid-filled porous media.
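As a minimal sketch of the age-dating principle (the idealized closed-system decay clock, independent of any particular tracer),

    t = \frac{1}{\lambda}\ln\!\left(\frac{C_0}{C}\right), \qquad \lambda = \frac{\ln 2}{t_{1/2}},

where C_0 and C are the initial and measured tracer concentrations. Because a tracer is typically useful over only roughly 0.1 to 10 half-lives, covering the 50-3000 year window calls for measurable environmental tracers with half-lives of a few hundred years, which are scarce.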

Transport properties and in situ characterization of fluid trapping, isolation, and immobilization. Mechanisms of immobilization of injected CO2 include buoyancy trapping of fluids by geologic seals, capillary trapping of fluid phases as isolated bubbles within rock pores, and sorption of CO2 or radionuclides on solid surfaces. Specific advances will be needed in our ability to understand and represent the interplay of interfacial tension, surface properties, buoyancy, the state of stress, and rock heterogeneity in the subsurface.

Fluid-induced rock deformation. CO2 injection affects the thermal, mechanical, hydrological, and chemical state of large volumes of the subsurface. Accurate forecasting of the effects requires improved understanding of the coupled stress-strain and flow response to injection-induced pressure and hydrologic perturbations in multiphase-fluid saturated systems. Such effects manifest themselves as changes in rock properties at the centimeter scale, mechanical deformation at meter-to-kilometer scales, and modified regional fluid flow at scales up to 100 km. Predicting the hydromechanical properties of rocks over this scale range requires improved models for the coupling of chemical, mechanical, and hydrological effects. Such models could revolutionize our ability to understand shallow crustal deformation related to many other natural processes and engineering applications.

Biogeochemistry in extreme subsurface environments. Microorganisms strongly influence the mineralogy and chemistry of geologic systems. CO2 and nuclear material isolation will perturb the environments for these microorganisms significantly. Major advances are needed to describe how populations of microbes will respond to the extreme environments of temperature, pH, radiation, and chemistry that will be created, so that a much clearer picture of biogenic products, potential for corrosion, and transport or immobilization of contaminants can be assembled.

CROSSCUTTING RESEARCH ISSUES

The microscopic basis of macroscopic complexity. Classical continuum mechanics relies on the assumption of a separation between the length scales of microscopic fluctuations and macroscopic motions. However, in geologic problems this scale separation often does not exist. There are instead fluctuations at all scales, and the resulting macroscopic behavior can then be quite complex. The essential need is to develop a scientific basis of "emergent" phenomena based on the microscopic phenomena.
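A concrete instance of the micro-to-macro translation, in the simplest limiting case where scale separation does hold, is flow through a perfectly layered medium: the effective permeability is the arithmetic mean of the layer permeabilities k_i (with volume fractions \phi_i) for flow along the layers and the harmonic mean across them,

    k_{\parallel} = \sum_i \phi_i k_i, \qquad k_{\perp} = \left(\sum_i \frac{\phi_i}{k_i}\right)^{-1}.

The crosscutting challenge described above is precisely that such closed-form averaging breaks down when fluctuations are present at all scales.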

Highly reactive subsurface materials and environments. The emplacement of energy system byproducts into geological repositories perturbs temperature and pressure, imposes chemical gradients, creates intense radiation fields, and can cause reactions that alter the minerals, pore fluids, and emplaced materials. Strong interactions between the geochemical environment and emplaced materials are expected. New insight is needed on equilibria in compositionally complex systems, reaction kinetics in concentrated aqueous and other solutions, reaction kinetics under near-equilibrium undersaturated and supersaturated conditions, and transient reaction kinetics.

Thermodynamics of the solute-to-solid continuum. Reactions involving solutes, colloids, particles, and surfaces control the transport of chemical constituents in the subsurface environment. A rigorous structural, kinetic, and thermodynamic description of the complex chemical reality between the molecular and the macroscopic scale is a fundamental scientific challenge. Advanced techniques are needed for characterizing particles in the nanometer-to-micrometer size range, combined with a new description of chemical thermodynamics that does not rely on a sharp distinction between solutes and solids.

TECHNICAL AND SCIENTIFIC IMPACT

The Grand Challenges, Priority Research Directions, and Crosscutting Issues described in this report define a science-based approach to understanding the long-term behavior of subsurface geologic systems in which anthropogenic CO2 and nuclear materials could be stored. The research areas are rich with opportunities to build fundamental knowledge of the physics, chemistry, and materials science of geologic systems that will have impacts well beyond the specific applications. The proposed research is based on development of a new level of understanding—physical, chemical, biological, mathematical, and computational—of processes that happen at the microscopic scale of atoms, molecules, and mineral surfaces, and how those processes translate to material behavior over large length scales and on ultra-long time scales. Addressing the basic science issues described would revolutionize our ability to understand, simulate, and monitor all of the subsurface settings in which transport is critical, including the movement of contaminants, the emplacement of minerals, and the management of aquifers. The results of the research will have a wide range of implications across physics, chemistry, materials science, biology, and earth science.



Basic Research Needs for Clean and Efficient Combustion of 21st Century Transportation Fuels

This report is based on a BES Workshop on Clean and Efficient Combustion of 21st Century Transportation Fuels, October 29-November 1, 2006, to identify basic research needs and opportunities underlying utilization of evolving transportation fuels, with a focus on new or emerging science challenges that have the potential for significant long-term impact on fuel efficiency and emissions.

From the invention of the wheel, advances in transportation have increased the mobility of humankind, enhancing the quality of life and altering our very perception of time and distance. Early carts and wagons driven by human or animal power allowed the movement of people and goods in quantities previously thought impossible. With the rise of steam power, propeller-driven ships and railroad locomotives shrank the world as never before. Ocean crossings were no longer at the whim of the winds, and continental crossings went from grand adventures to routine, scheduled outings. The commercialization of the internal combustion engine at the turn of the twentieth century brought about a new, and very personal, revolution in transportation, particularly in the United States. Automobiles created an unbelievable freedom of movement: A single person could travel to any point in the country in a matter of days, on a schedule of his or her own choosing. Suburbs were built on the promise of cheap, reliable, personal transportation. American industry grew to depend on internal combustion engines to produce and transport goods, and farmers increased yields and efficiency by employing farm machinery. Airplanes, powered by internal combustion engines, shrank the world to the point where a trip between almost any two points on the globe is now measured not in days or months, but in hours.

Transportation is the second largest consumer of energy in the United States, accounting for nearly 60% of our nation's use of petroleum, an amount equivalent to all of the oil imported into the U.S. The numbers are staggering—the transport of people and goods within the U.S. burns almost one million gallons of petroleum each minute of the day. Our Founding Fathers may not have foreseen freedom of movement as an inalienable right, but Americans now view it as such.

Knowledge is power, a maxim that is literally true for combustion. In our global, just-in-time economy, American competitiveness and innovation require an affordable, diverse, stable, and environmentally acceptable energy supply. Currently, 85% of our nation's energy comes from hydrocarbon sources, including natural gas, petroleum, and coal; 97% of transportation energy derives from petroleum, essentially all from combustion in gasoline engines (65%), diesel engines (20%), and jet turbines (12%). The monolithic nature of transportation technologies offers the opportunity for improvements in efficiency of 25-50% through strategic technical investment in advanced fuel/engine concepts and devices. This investment is not a matter of choice but an economic, geopolitical, and environmental necessity. The reality is that the internal combustion engine will remain the primary driver of transport for the next 30-50 years, whether or not one believes that the peak in oil is past or imminent, or that hydrogen-fueled and electric vehicles will power transport in the future, or that geopolitical tensions will ease through international cooperation. Rational evaluation of U.S. energy security must include careful examination of how we achieve optimally efficient and clean combustion of precious transportation fuels in the 21st century.

The Basic Energy Sciences Workshop on Clean and Efficient Combustion of 21st Century Transportation Fuels

Our historic dependence on light, sweet crude oil for our transportation fuels will draw to a close over the coming decades as finite resources are exhausted. New fuel sources, with differing characteristics, are emerging to displace crude oil. As these new fuel streams enter the market, a series of new engine technologies are also under development, promising improved efficiency and cleaner combustion. To date, however, a coordinated strategic effort to match future fuels with evolving engines is lacking.

To provide the scientific foundation to enable technology breakthroughs in transportation fuel utilization, the Office of Basic Energy Sciences in the U.S. Department of Energy (DOE) convened the Workshop on Basic Research Needs for Clean and Efficient Combustion of 21st Century Transportation Fuels from October 30 to November 1, 2006. This report is a summary of that Workshop. It reflects the collective output of the Workshop participants, which included over 80 leading scientists and engineers representing academia, industry, and national laboratories in the United States and Europe. Researchers specializing in basic science and technological applications were well represented, producing a stimulating and engaging forum. Workshop planning and execution involved advance coordination with the DOE Office of Energy Efficiency and Renewable Energy's FreedomCAR and Vehicle Technologies program, which manages applied research and development of transportation technologies.

Priority research directions were identified by three panels, each made up of a subset of the Workshop attendees and interested observers. The first two panels were differentiated by their focus on engines or fuels and were similar in their strategy of working backward from technology drivers to scientific research needs. The first panel focused on Novel Combustion, as embodied in promising new engine technologies. The second panel focused on Fuel Utilization, inspired by the unique (and largely unknown) challenges of the emerging fuel streams entering the market. The third panel explored crosscutting science themes and identified general gaps in our scientific understanding of 21st-century fuel combustion. Subsequent to the Workshop, co-chairs and panel leads distilled the collective output to produce eight distinct, targeted research areas that advance one overarching grand challenge: to develop a validated, predictive, multi-scale combustion modeling capability to optimize the design and operation of evolving fuels in advanced engines for transportation applications.

Fuels and Engines

Transportation fuels for automobile, truck, and aircraft engines are currently produced by refining petroleum-based sweet crude oil, from which gasoline, diesel fuel, and jet fuel are each made with specific physical and chemical characteristics dictated by the type of engine in which they are to be burned. Standardized fuel properties and restricted engine operating domains couple to provide reliable performance. As new fuels derived from oil sands, oil shale, coal, and bio-feedstocks emerge as replacements for light, sweet crude oil, both uncertainties and strategic opportunities arise. Rather than pursue energy-intensive refining of these qualitatively different emerging fuels to match current fuel formulations, we must strive to achieve a "dual revolution" by interdependently advancing both fuel and engine technologies. Spark-ignited gasoline engines equipped with catalytic after-treatment operate cleanly but well below optimal efficiency due to low compression ratios and the throttle-plate losses incurred in controlling air intake. Diesel engines operate more efficiently at higher compression ratios but sample broad realms of fuel/air ratio, thereby producing soot and NOx for which burnout and/or removal can prove problematic. A number of new engine technologies are attempting to overcome these efficiency and emissions compromises. Direct-injection gasoline engines operate without throttle plates, increasing efficiency, while retaining the use of a catalytic converter. Ultra-lean, high-pressure, low-temperature diesel combustion seeks to avoid the conditions that form pollutants, while maintaining very high efficiency. A new form of combustion, homogeneous charge compression ignition (HCCI), seeks to combine the best of diesel and gasoline engines. HCCI employs a premixed fuel-air charge that is ignited by compression, with the ignition timing controlled by in-cylinder fuel chemistry. Each of these advanced combustion strategies must permit and even exploit fuel flexibility as the 21st-century fuel stream matures. The opportunity presented by new fuel sources and advanced engine concepts offers such an overwhelming design and operation parameter space that only those technologies that build upon a predictive science capability will likely mature to a product within a useful timeframe.
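The efficiency leverage of compression ratio noted above is visible even in the ideal Otto-cycle result (an idealization quoted only for orientation),

    \eta = 1 - r^{1-\gamma},

where r is the compression ratio and \gamma the ratio of specific heats. For \gamma = 1.4, raising r from 9 (typical of throttled spark ignition) to 16 (diesel- or HCCI-like) lifts the ideal efficiency from about 58% to about 67%, which is part of why unthrottled, high-compression strategies are so attractive.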

Research Directions

The Workshop identified a single, overarching grand challenge: the development of a validated, predictive, multi-scale combustion modeling capability to optimize the design and operation of evolving fuels in advanced engines for transportation applications. A broad array of discovery research and scientific inquiry that integrates experiment, theory, modeling, and simulation will be required. This predictive capability, if attained, will fundamentally change the process for fuels research and engine development by establishing a scientific understanding of sufficient depth and flexibility to facilitate realistic simulation of fuel combustion in existing and proposed engines. Similar understanding in aeronautics has produced the beautiful and efficient complex curves of modern aircraft wings. These designs could never have been realized through cut-and-try engineering; rather, they rely on the prediction and optimization of complex air flows. An analogous experimentally validated, predictive capability for combustion is a daunting challenge for numerous reasons: (1) spatial scales of importance range from the dimensions of the atom up to that of an engine piston; (2) the combustion chemistry of 21st-century fuels is astonishingly complex, with hundreds of different fuel molecules and many thousands of possible reactions contributing to the oxidative release of energy stored in chemical bonds—chemical details also dictate emissions profiles, engine knock conditions and, for HCCI, ignition timing; (3) evolving engine designs will operate under dilute conditions at very high pressures and compression ratios—we possess neither sufficient concepts nor experimental tools to address these new operating conditions; (4) turbulence, transport, and radiative phenomena have a profound impact on local chemistry in most combustion media but are poorly understood and extremely challenging to characterize; (5) even assuming optimistic growth in computing power for existing and envisioned architectures, combustion phenomena are and will remain too complex to simulate in their complete detail, and methods that condense information and accurately propagate uncertainties across length and time scales will be required to optimize fuel/engine design and operation. Eight priority research directions, each of which focuses on crucial elements of the overarching grand challenge, were identified by the Workshop participants as most critical to the path forward.
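To make item (2) concrete, the sketch below (a deliberately tiny, hypothetical two-step mechanism with invented rate parameters, not drawn from the report) shows the modified-Arrhenius rate evaluations and stiff time integration that a real mechanism repeats for thousands of reactions at every grid point and time step:

    import numpy as np
    from scipy.integrate import solve_ivp

    R = 8.314  # gas constant, J/(mol K)

    def arrhenius(A, n, Ea, T):
        """Modified Arrhenius rate constant k = A * T**n * exp(-Ea/(R*T))."""
        return A * T**n * np.exp(-Ea / (R * T))

    def rhs(t, y, T=1200.0):
        """Toy chain F -> I -> P (fuel, intermediate, product) at fixed temperature."""
        F, I, P = y
        k1 = arrhenius(1.0e10, 0.0, 1.5e5, T)  # slow initiation step (invented)
        k2 = arrhenius(1.0e13, 0.0, 5.0e4, T)  # fast consumption of intermediate (invented)
        r1, r2 = k1 * F, k2 * I
        return [-r1, r1 - r2, r2]

    # Rate constants separated by ~7 orders of magnitude make the system stiff,
    # so an implicit integrator (BDF) is required, as in real reacting-flow codes.
    sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0], method="BDF",
                    rtol=1e-8, atol=1e-12)
    print("final concentrations:", sol.y[:, -1])

Even this two-reaction toy demands an implicit solver; a realistic surrogate-fuel mechanism multiplies that cost by thousands of species and reactions, which is the heart of difficulty (5) above.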

In addition to the unifying grand challenge and specific priority research directions, the Workshop produced a keen sense of urgency and opportunity for the development of revolutionary combustion technology for transportation based upon fundamental combustion science. Internal combustion engines are often viewed as mature technology, developed in an Edisonian fashion over a hundred years. The participants at the Workshop were unanimous in their view that only through the achievable goal of truly predictive combustion science will the engines of the 21st century realize unparalleled efficiency and cleanliness in the challenging environment of changing fuel streams.



Basic Research Needs for Advanced Nuclear Energy Systems

This report is based on a BES Workshop on Advanced Nuclear Energy Systems, July 31-August 3, 2006, to identify new, emerging, and scientifically challenging areas in materials and chemical sciences that have the potential for significant impact on advanced nuclear energy systems.

The global utilization of nuclear energy has come a long way from its humble beginnings in the first sustained nuclear reaction at the University of Chicago in 1942. Today, there are over 440 nuclear reactors in 31 countries producing approximately 16% of the electrical energy used worldwide. In the United States, 104 nuclear reactors currently provide 19% of electrical energy used nationally. The International Atomic Energy Agency projects significant growth in the utilization of nuclear power over the next several decades due to increasing demand for energy and environmental concerns related to emissions from fossil plants. There are 28 new nuclear plants currently under construction including 10 in China, 8 in India, and 4 in Russia. In the United States, there have been notifications to the Nuclear Regulatory Commission of intentions to apply for combined construction and operating licenses for 27 new units over the next decade.

The projected growth in nuclear power has focused increasing attention on issues related to the permanent disposal of nuclear waste, the proliferation of nuclear weapons technologies and materials, and the sustainability of a once-through nuclear fuel cycle. In addition, the effective utilization of nuclear power will require continued improvements in nuclear technology, particularly related to safety and efficiency. In all of these areas, the performance of materials and chemical processes under extreme conditions is a limiting factor. The related basic research challenges represent some of the most demanding tests of our fundamental understanding of materials science and chemistry, and they provide significant opportunities for advancing basic science with broad impacts for nuclear reactor materials, fuels, waste forms, and separations techniques. Of particular importance is the role that new nanoscale characterization and computational tools can play in addressing these challenges. These tools, which include DOE synchrotron X-ray sources, neutron sources, nanoscale science research centers, and supercomputers, offer the opportunity to transform and accelerate the fundamental materials and chemical sciences that underpin technology development for advanced nuclear energy systems.

The fundamental challenge is to understand and control chemical and physical phenomena in multi-component systems from femtoseconds to millennia, at temperatures up to 1000°C, and for radiation doses up to hundreds of displacements per atom (dpa). This is a scientific challenge of enormous proportions, with broad implications for the materials science and chemistry of complex systems. New understanding is required for microstructural evolution and phase stability under relevant chemical and physical conditions, chemistry and structural evolution at interfaces, chemical behavior of actinide and fission-product solutions, and nuclear and thermo-mechanical phenomena in fuels and waste forms. First-principles approaches are needed to describe f-electron systems, design molecules for separations, and explain materials failure mechanisms. Nanoscale synthesis and characterization methods are needed to understand and design materials and interfaces with radiation, temperature, and corrosion resistance. Dynamical measurements are required to understand fundamental physical and chemical phenomena. New multiscale approaches are needed to integrate this knowledge into accurate models of relevant phenomena and complex systems across multiple length and time scales.
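For orientation, the dose unit quoted here is conventionally estimated with the Norgett-Robinson-Torrens (NRT) model, in which a recoil of damage energy T_dam produces

    N_d = \frac{0.8\,T_{dam}}{2E_d}

stable displacements, with E_d the threshold displacement energy (typically tens of eV). A dose of hundreds of dpa therefore means that, on average, every atom in the material has been knocked from its lattice site hundreds of times.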

Workshop

The Department of Energy (DOE) Workshop on Basic Research Needs for Advanced Nuclear Energy Systems was convened in July 2006 to identify new, emerging, and scientifically challenging areas in materials and chemical sciences that have the potential for significant impact on advanced nuclear energy systems. Sponsored by the DOE Office of Basic Energy Sciences (BES), the workshop provided recommendations for priority research directions and crosscutting research themes that underpin the development of advanced materials, fuels, waste forms, and separations technologies for the effective utilization of nuclear power. A total of 235 invited experts from 31 universities, 11 national laboratories, 6 industries, 3 government agencies, and 11 foreign countries attended the workshop.

The workshop was the sixth in a series of BES workshops focused on identifying basic research needs to overcome short-term showstoppers and to formulate long-term grand challenges related to energy technologies. These workshops have followed a common format that includes the development of a technology perspectives resource document prior to the workshop, a plenary session including invited presentations from technology and research experts, and topical panels to determine basic research needs and recommended research directions. Reports from the workshops are available on the BES website at http://www.sc.doe.gov/bes/reports/list.html.

The workshop began with a plenary session of invited presentations from national and international experts on science and technology related to nuclear energy. The presentations included nuclear technology, industry, and international perspectives, and an overview of the Global Nuclear Energy Partnership. Frontier research presentations were given on relevant topics in materials science, chemistry, and computer simulation. Following the plenary session, the workshop divided into six panels: Materials under Extreme Conditions, Chemistry under Extreme Conditions, Separations Science, Advanced Actinide Fuels, Advanced Waste Forms, and Predictive Modeling and Simulation. In addition, there was a crosscut panel that looked for areas of synergy across the six topical panels. The panels were composed of basic research leaders in the relevant fields from universities, national laboratories, and other institutions. In advance of the workshop, panelists were provided with a technology perspectives resource document that described the technology and applied R&D needs for advanced nuclear energy systems. In addition, technology experts were assigned to each of the panels to ensure that the basic research discussions were informed by a current understanding of technology issues.

The panels were charged with defining the state of the art in their topical research area, describing the related basic research challenges that must be overcome to provide breakthrough technology opportunities, and recommending basic research directions to address these challenges. These basic research challenges and recommended research directions were consolidated into Scientific Grand Challenges, Priority Research Directions, and Crosscutting Research Themes. These results are summarized below and described in detail in the full report.

Scientific Grand Challenges

Scientific Grand Challenges represent barriers to fundamental understanding that, if overcome, could transform the related scientific field. Historical examples of scientific grand challenges with far-reaching scientific and technological impacts include the structure of DNA, the understanding of quantum behavior, and the explanation of nuclear fission. Theoretical breakthroughs and new experimental capabilities are often key to addressing these challenges. In advanced nuclear energy systems, scientific grand challenges focus on the fundamental materials and chemical sciences that underpin the performance of materials and processes under extreme conditions of radiation, temperature, and corrosive environments. Addressing these challenges offers the potential of revolutionary new approaches to developing improved materials and processes for nuclear applications. The workshop identified the following three Scientific Grand Challenges.

Resolving the f-electron challenge to master the chemistry and physics of actinides and actinide-bearing materials. The introduction of new actinide-based fuels for advanced nuclear energy systems requires new chemical separations strategies and predictive understanding of fuel and waste-form fabrication and performance. However, current computational electronic-structure approaches are inadequate to describe the electronic behavior of actinide materials, and the multiplicity of chemical forms and oxidation states for these elements complicates their behavior in fuels, solutions, and waste forms. Advances in density functional theory as well as in the treatment of relativistic effects are needed in order to understand and predict the behavior of these strongly correlated electron systems.

Developing a first-principles, multiscale description of material properties in complex materials under extreme conditions. The long-term stability and mechanical integrity of structural materials, fuels, claddings, and waste forms are governed by the kinetics of microstructure and interface evolution under the combined influence of radiation, high temperature, and stress. Controlling the mechanical and chemical properties of materials under these extreme conditions will require the ability to relate phase stability and mechanical behavior to a first-principles understanding of defect production, diffusion, trapping, and interaction. New synthesis techniques based on the nanoscale design of materials offer opportunities for mitigating the effects of radiation damage through the development and control of nanostructured defect sinks. However, a unified, predictive multiscale theory that couples all relevant time and length scales in microstructure evolution and phase stability must be developed. In addition, fundamental advances are needed in nanoscale characterization, diffusion, thermodynamics, and in situ studies of fracture and deformation.

Understanding and designing new molecular systems to gain unprecedented control of chemical selectivity during processing. Advanced separations technologies for nuclear fuel reprocessing will require unprecedented control of chemical selectivity in complex environments. This control requires the ability to design, synthesize, characterize, and simulate molecular systems that selectively trap and release target molecules and ions with high efficiency under extreme conditions and to understand how mesoscale phenomena such as nanophase behavior and energetics in macromolecular systems impact partitioning. New capabilities in molecular spectroscopy, imaging, and computational modeling offer opportunities for breakthroughs in this area.

Priority Research Directions

Priority Research Directions are areas of basic research that have the highest potential for impact in a specific research or technology area. They represent opportunities that align with scientific grand challenges, emerging research opportunities, and related technology priorities. The workshop identified nine Priority Research Directions for basic research related to advanced nuclear energy systems.

Nanoscale design of materials and interfaces that radically extend performance limits in extreme radiation environments. The fundamental understanding of the interaction of defects with nanostructures offers the potential for the design of materials and interfaces that mitigate radiation damage by controlling defect behavior. New research is needed in the design, synthesis, nanoscale characterization, and time-resolved study of nanostructured materials and interfaces that offer the potential to control defect production, trapping, and interaction under extreme conditions.

Physics and chemistry of actinide-bearing materials and the f-electron challenge. A robust theory of the electronic structure of actinides will provide an improved understanding of their physical and chemical properties and behavior, leading to opportunities for advances in fuels and waste forms. New advances in exchange and correlation functionals in density functional theory as well as in the treatment of relativistic effects and in software implementation on advanced computer architectures are needed to overcome the challenges of adequately treating the behavior of 4f and 5f electrons, namely, strong correlation, spin-orbit coupling, and multiplet complexity, as well as additional relativistic effects. Advances are needed in the application of these new electronic structure methods for f-element-containing molecules and solids to calculate the properties of defects in multi-component systems, and in the fundamental understanding of related chemical and physical properties at high temperature.

Microstructure and property stability under extreme conditions. The predictive understanding of microstructural evolution and property changes under extreme conditions is essential for the rational design of materials for structural, fuels, and waste-form applications. Advances are needed to develop a first-principles understanding of the relationship of defect properties and microstructural evolution to mechanical behavior and phase stability. This will require a closely coupled approach of in situ studies of nanoscale and mechanical behavior with multiscale theory.

Mastering actinide and fission product chemistry under all chemical conditions. A more accurate understanding of the electronic structure of the complexes of actinide and fission products will expand our ability to predict their behavior quantitatively under conditions relevant to all stages in fuel reprocessing (separations, dissolution, and stabilization of waste forms) and in new media that are proposed for advanced processing systems. This knowledge must be supplemented by accurate prediction and manipulation of solvent properties and chemical reactivities in non-traditional separation systems such as modern "tunable" solvent systems. This will require quantitative, fundamental understanding of the mechanisms of solvent tunability, the factors limiting control over solvent properties, the forces driving chemical speciation, and modes of controlling reactions. Basic research needs include f-element electronic structure and bonding, speciation and reactivity, thermodynamics, and solution behavior.

Exploiting organization to achieve selectivity at multiple length scales. Harnessing the complexity of organization that occurs at the mesoscale in solution or at interfaces will lead to new separation systems that provide for greatly increased selectivity in the recovery of target species and reduced formation of secondary waste streams through ligand degradation. Research directions include design of ligands and other selectivity agents, expanding the range of selection/release mechanisms, fundamental understanding of phase phenomena and self-assembly in separations, and separations systems employing aqueous solvents.

Adaptive material-environment interfaces for extreme chemical conditions. Chemistry at interfaces will play a crucial role in the fabrication, performance, and stability of materials in almost every aspect of Advanced Nuclear Energy Systems, from fuel, claddings, and pressure vessels in reactors to fuel reprocessing and separations, and ultimately to long-term waste storage. Revolutionary advances in the understanding of interfacial chemistry of materials through developments in new modeling and in situ experimental techniques offer the ability to design material interfaces capable of providing dynamic, universal stability over a wide range of conditions and with much greater "self-healing" capabilities. Achieving the necessary scientific advances will require moving beyond interfacial chemistry in ultra-high-vacuum environments to the development of in situ techniques for monitoring the chemistry at fluid/solid and solid/solid interfaces under conditions of high pressure and temperature and harsh chemical environments.

Fundamental effects of radiation and radiolysis in chemical processes. The reprocessing of nuclear fuel and the storage of nuclear waste present environments that include substantial radiation fields. A predictive understanding of the chemical processes resulting from intense radiation, high temperatures, and extremes of acidity and redox potential on chemical speciation is required to enhance efficient, targeted separations processes and effective storage of nuclear waste. In particular, the effect of radiation on the chemistries of ligands, ionic liquids, polymers, and molten salts is poorly understood. There is a need for an improved understanding of the fundamental processes that affect the formation of radicals and ultimately control the accumulation of radiation-induced damage to separation systems and waste forms.

Fundamental thermodynamics and kinetic processes in multi-component systems for fuel fabrication and performance. The fabrication and performance of advanced nuclear fuels, particularly those containing the minor actinides, present a significant challenge that requires a fundamental understanding of the thermodynamics, transport, and chemical behavior of complex materials during processing and irradiation. Global thermochemical models of complex phases that are informed by ab initio calculations of materials properties and high-throughput predictive models of complex transport and phase segregation will be required for full fuel fabrication and performance calculations. These models, when coupled with appropriate experimental efforts, will lead to significantly improved fuel performance by creating novel tailored fuel forms.

Predictive multiscale modeling of materials and chemical phenomena in multi-component systems under extreme conditions. The advent of large-scale (petaflop) simulations will significantly enhance the prospect of probing important molecular-level mechanisms underlying the macroscopic phenomena of solution and interfacial chemistry in actinide-bearing systems and of materials and fuels fabrication, performance, and failure under extreme conditions. There is an urgent need to develop multiscale algorithms capable of efficiently treating systems whose time evolution is controlled by activated processes and rare events. Although satisfactory solutions are lacking, there are promising directions, including accelerated molecular dynamics (MD) and adaptive kinetic Monte Carlo methods, which should be pursued. Many fundamental problems in advanced nuclear energy systems will benefit from multi-physics, multiscale simulation methods that can span time scales from picoseconds to seconds and longer, including fission product transport in nuclear fuels, the evolution of microstructure of irradiated materials, the migration of radionuclides in nuclear waste forms, and the behavior of complex separations media.
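As a minimal sketch of the kinetic Monte Carlo idea highlighted above (the standard residence-time algorithm; the two event rates are invented placeholders, not values from the report):

    import math
    import random

    def kmc(rates, n_steps, seed=42):
        """Residence-time kinetic Monte Carlo.

        rates: dict mapping event name -> rate (1/s), assumed constant here.
        Returns the simulated time reached and the sequence of chosen events.
        """
        rng = random.Random(seed)
        t, history = 0.0, []
        events = list(rates)
        for _ in range(n_steps):
            total = sum(rates[e] for e in events)
            # Choose one event with probability proportional to its rate.
            x, acc = rng.random() * total, 0.0
            for e in events:
                acc += rates[e]
                if x < acc:
                    history.append(e)
                    break
            # Advance the clock by an exponentially distributed residence time;
            # this is how kMC reaches times far beyond molecular dynamics.
            t += -math.log(1.0 - rng.random()) / total
        return t, history

    # Placeholder rates for two thermally activated defect hops (invented numbers):
    time_reached, hist = kmc({"vacancy_hop": 1.0e6, "interstitial_hop": 1.0e9}, 10000)
    print(f"simulated time: {time_reached:.3e} s over {len(hist)} events")

Adaptive variants construct the event catalog on the fly, for example from saddle-point searches, rather than assuming it in advance, which is where the open research questions identified above lie.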

Crosscutting Research Themes

Crosscutting Research Themes are research directions that transcend a specific research area or discipline, providing a foundation for progress in fundamental science on a broad front. These themes are typically interdisciplinary, leveraging results from multiple fields and approaches to provide new insights and underpinning understanding. Many of the fundamental science issues related to materials, fuels, waste forms, and separations technologies have crosscutting themes and synergies. The workshop identified four crosscutting basic research themes related to materials and chemical processes for advanced nuclear energy systems:

Tailored nanostructures for radiation-resistant functional and structural materials. There is evidence that the design and control of specialized nanostructures and defect complexes can create sinks for radiation-induced defects and impurities, enabling the development of highly radiation-resistant materials. New capabilities in the synthesis and characterization of materials with controlled nanoscale structure offer opportunities for the development of tailored nanostructures for structural applications, fuels, and waste forms. This approach crosscuts advanced materials synthesis and processing, radiation effects, nanoscale characterization, and simulation.

Solution and solid-state chemistry of 4f and 5f electron systems. Advances in the basic science of 4f and 5f electron systems in materials and solutions offer the opportunity to extend condensed matter physics and reaction chemistry on a broad front, including applications that impact the development of nuclear fuels, waste forms, and separations technologies. This is a key enabling science for the fundamental understanding of actinide-bearing materials and solutions.

Physics and chemistry at interfaces and in confined environments. Controlling the structure and composition of interfaces is essential to ensuring the long-term stability of reactor materials, fuels, and waste forms. The fundamental understanding of interface science and related transport and chemical phenomena in extreme environments crosscuts many science and technology areas. New computational and nanoscale structure and dynamics measurement tools offer significant opportunities for advancing interface science with broad impacts on the predictive design of advanced materials and processes for nuclear energy applications.

Physical and chemical complexity in multi-component systems. Advanced fuels, waste forms, and separations technologies are highly interactive, multi-component systems. A fundamental understanding of these complex systems and related structural and phase stability and chemical reactivity under extreme conditions is needed to develop and predict the performance of materials and separations processes in advanced nuclear energy systems. This is a challenging problem in complexity with broad implications across science and technology.

Taken together, these Scientific Grand Challenges, Priority Research Directions, and Crosscutting Research Themes define the landscape for a science-based approach to the development of materials and chemical processes for advanced nuclear energy systems. Building upon new experimental tools and computational capabilities, they presage a renaissance in fundamental science that underpins the development of materials, fuels, waste forms, and separations technologies for nuclear energy applications. Addressing these basic research needs offers the potential to revolutionize the science and technology of advanced nuclear energy systems by enabling new materials, processes, and predictive modeling, with resulting improvements in performance and reduction in development times. The fundamental research outlined in this report offers an outstanding opportunity to advance the materials, chemical, and computational science of complex systems at multiple length and time scales, furthering both fundamental understanding and the technology of advanced nuclear energy systems.


Basic Research Needs for Solid-State Lighting

This report is based on a BES Workshop on Solid-State Lighting (SSL), May 22-24, 2006, to examine the gap separating current state-of-the-art SSL technology from an energy efficient, high-quality, and economical SSL technology suitable for general illumination; and to identify the most significant fundamental scientific challenges and research directions that would enable that gap to be bridged.

Since fire was first harnessed, artificial lighting has gradually broadened the horizons of human civilization. Each new advance in lighting technology, from fat-burning lamps to candles to gas lamps to the incandescent lamp, has extended our daily work and leisure further past the boundaries of sunlit times and spaces. The incandescent lamp did this so dramatically after its invention in the 1870s that the light bulb became the very symbol of a "good idea."

Today, modern civilization as we know it could not function without artificial lighting; artificial lighting is so seamlessly integrated into our daily lives that we tend not to notice it until the lights go out. Our dependence is even enshrined in daily language: an interruption of the electricity supply is commonly called a "blackout."

This ubiquitous resource, however, uses an enormous amount of energy. In 2001, 22% of the nation's electricity, equivalent to 8% of the nation's total energy, was used for artificial light. The cost of this energy to the consumer was roughly $50 billion per year or approximately $200 per year for every person living in the U.S. The cost of this energy to the environment was approximately 130 million tons of carbon emitted into our atmosphere, or about 7% of all the carbon emitted by the U.S. Our increasingly precious energy resources and the growing threat of climate change demand that we reduce the energy and environmental cost of artificial lighting, an essential and pervasive staple of modern life.

There is ample room for reducing this energy and environmental cost. The artificial lighting we take for granted is extremely inefficient primarily because all these technologies generate light as a by-product of indirect processes producing heat or plasmas. Incandescent lamps (a heated wire in a vacuum bulb) convert only about 5% of the energy they consume into visible light, with the rest emerging as heat. Fluorescent lamps (a phosphor-coated gas discharge tube, invented in the 1930s) achieve a conversion efficiency of only about 20%. These low efficiencies contrast starkly with the relatively high efficiencies of other common building technologies: heating is typically 70% efficient, and electric motors are typically 85 to 95% efficient. About 1.5 billion light bulbs are sold each year in the U.S. today, each one an engine for converting the earth's precious energy resources mostly into waste heat, pollution, and greenhouse gases.

SOLID-STATE LIGHTING

There is no physical reason why a 21st century lighting technology should not be vastly more efficient, with correspondingly vast reductions in our energy consumption. If a 50%-efficient technology were to exist and be extensively adopted, it would reduce energy consumption in the U.S. by about 620 billion kilowatt-hours per year by the year 2025 and eliminate the need for about 70 nuclear plants, each generating a billion watts of power.
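The scale of that estimate checks out with simple arithmetic: seventy one-gigawatt plants running continuously supply

    70 \times 1\,\mathrm{GW} \times 8760\,\mathrm{h/yr} \approx 6.1 \times 10^{11}\,\mathrm{kWh/yr},

consistent with the 620 billion kilowatt-hours per year quoted above.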

Solid-state lighting (SSL) is the direct conversion of electricity to visible white light using semiconductor materials and has the potential to be just such an energy-efficient lighting technology. By avoiding the indirect processes (producing heat or plasmas) characteristic of traditional incandescent and fluorescent lighting, it can work at a far higher efficiency, "taking the heat out of lighting," it might be said. Recently, for example, semiconductor devices emitting infrared light have demonstrated an efficiency of 76%. There is no known fundamental physical barrier to achieving similar (or even higher) efficiencies for visible white light, perhaps approaching 100% efficiency.

Despite this tantalizing potential, however, SSL suitable for illumination today has an efficiency that falls short of a perfect 100% by a factor of fifteen. Partly because of this inefficiency, the purchase cost of SSL is too high for the average consumer by a factor of ten to a hundred, and SSL suitable for illumination today has a cost of ownership twenty times higher than that expected for a 100% efficient light source.

The reason is that SSL is a dauntingly demanding technology. To generate light near the theoretical efficiency limit, essentially every electron injected into the material must result in a photon emitted from the device. Furthermore, the voltage required to inject and transport the electrons to the light-emitting region of the device must be no more than that corresponding to the energy of the resulting photon. It is insufficient to generate "simple" white light; the distribution of photon wavelengths must match the spectrum perceived by the human eye to render colors accurately, with no emitted photons outside the visible range. Finally, all of these constraints must be achieved in a single device with an operating lifetime of at least a thousand hours (and preferably ten to fifty times longer), at an ownership cost-of-light comparable to, or lower than, that of existing lighting technology.
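These constraints can be collected in the usual wall-plug efficiency bookkeeping for a light-emitting diode (a standard decomposition, not specific to this report),

    \eta_{wp} = \frac{\hbar\omega}{eV}\,\eta_{inj}\,\eta_{rad}\,\eta_{extr},

where near-100% operation requires each factor (carrier injection into the emitting region, radiative rather than heat-generating recombination, and photon extraction from the chip) to approach unity simultaneously, while the drive voltage V approaches the photon energy \hbar\omega divided by the electron charge e.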

Where promising demonstrations of higher efficiency exist, they are typically achieved in small devices (to enhance light extraction), at low brightness (to minimize losses) or with low color-rendering quality (overemphasizing yellow and green light, to which the eye is most sensitive). These restrictions lead to a high cost of ownership for high-quality light that would prevent the widespread acceptance of SSL. For example, Cree Research recently (June 2006) demonstrated a 131 lm/W white light device, which translates roughly to 35% efficiency but with relatively low lumen output. With all devices demonstrated to date, a very large gap is apparent between what is achievable today and the 100% (or roughly 375 lm/W) efficiency that should be possible with SSL.
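The efficiency figure quoted for that demonstration follows directly from the 375 lm/W benchmark stated above:

    131\ \mathrm{lm/W} \,/\, 375\ \mathrm{lm/W} \approx 0.35,

i.e., the device delivers roughly 35% of the luminous efficacy expected of a 100%-efficient white source with the assumed spectrum.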

Today, we cannot produce white SSL that is simultaneously high in efficiency, low in cost, and high in color-rendering quality. In fact, we cannot get within a factor of ten in either efficiency or cost. Doing so in the foreseeable future will require breakthroughs in technology, stimulated by a fundamental understanding of the science of light-emitting materials.

THE BASIC ENERGY SCIENCES WORKSHOP ON SOLID-STATE LIGHTING

To accelerate the laying of the scientific foundation that would enable such technology breakthroughs, the Office of Basic Energy Sciences in the U.S. Department of Energy (DOE) convened the Workshop on Basic Research Needs for Solid-State Lighting from May 22 to 24, 2006. This report is a summary of that workshop. It reflects the collective output of the workshop attendees, which included 80 scientists representing academia, national laboratories, and industry in the United States, Europe, and Asia. Workshop planning and execution involved advance coordination with the DOE Office of Energy Efficiency and Renewable Energy's Building Technologies Program, which manages applied research and development of SSL technologies and the Next Generation Lighting Initiative.

The Workshop identified two Grand Challenges, seven Priority Research Directions, and five Cross-Cutting Research Directions. These represent the most specific outputs of the workshop.

GRAND CHALLENGES

The Grand Challenges are broad areas of discovery research and scientific inquiry that will lay the groundwork for the future of SSL. The first Grand Challenge aims to change the very paradigm by which SSL structures are designed, moving from serendipitous discovery towards rational design. The second Grand Challenge aims to understand and control the essential roadblock to SSL: the microscopic pathways through which losses occur as electrons produce light.

Rational Design of SSL Structures. Many materials must be combined in order to form a light-emitting device, each individual material working in concert with the others to control the flow of electrons so that all their energy produces light. Today, novel light-emitting and charge-transporting materials tend to be discovered rather than designed "with the end in mind." To approach 100% efficiency, fundamental building blocks should be designed so they work together seamlessly, but such a design process will require much greater insight than we currently possess. Hence, our aim is to understand light-emitting organic and inorganic (and hybrid) materials and nanostructures at a fundamental level to enable the rational design of low-cost, high-color-quality, near-100% efficient SSL structures from the ground up. The anticipated results are tools for rational, informed exploration of technology possibilities, and insights that open the door to as-yet-unimagined ways of creating and using artificial light.

Controlling Losses in the Light-Emission Process. The key to high efficiency SSL is using electrons to produce light but not heat. That this does not occur in today's SSL structures stems from the abundance of decay pathways that compete with light emission for electronic excitations in semiconductors. Hence, our aim is to discover and control the materials and nanostructure properties that mediate the competing conversion of electrons to light and heat, enabling the conversion of every injected electron into useful photons. The anticipated results are ultra-high-efficiency light-emitting materials and nanostructures, and a deep scientific understanding of how light interacts with matter, with broad impact on science and technology areas beyond SSL.

RESEARCH DIRECTIONS

The Priority and Cross-Cutting Research Directions are narrower areas of discovery research and use-inspired basic research targeted at a particular materials set or at a particular area of scientific inquiry believed to be central to one or more roadblocks in the path towards future SSL technology. These Research Directions also support one or both Grand Challenges.

The Research Directions were identified by three panels, each comprising a subset of the workshop attendees and interested observers. The first two panels, which identified the Priority Research Directions, were differentiated by choice of materials set. The first, LED Science, focused on inorganic light-emitting materials such as the Group III nitrides, oxides, and novel oxychalcogenides. The second, OLED Science, considered organic materials that are carbon-based molecular, polymeric, or dendrimeric compounds. The third panel, which identified the Cross-Cutting Research Directions, explored cross-cutting and novel materials science and optical physics themes such as light extraction from solids, hybrid organic-inorganic and unconventional materials, and light-matter interactions.

LED Science. Single-color, inorganic, light-emitting diodes (LEDs) are already widely used and are bright, robust, and long-lived. The challenge is to achieve white-light emission with high efficiency and high color-rendering quality at acceptable cost while maintaining these advantages. The bulk of current research focuses on the Group III-nitride materials. Our understanding of how these materials behave and can be controlled has advanced significantly in the past decade, but significant scientific mysteries remain. These include (1) determining whether there are as-yet undiscovered or undeveloped materials that may offer significant advantages over current materials; (2) understanding and optimizing ways of generating white light from other wavelengths; (3) determining the role of piezoelectric and polar effects throughout the device but particularly at interfaces; and (4) understanding the basis for some of the peculiarities of the nitrides, the dominant inorganic SSL materials today, such as their apparent tolerance of high defect densities, and the difficulty of realizing efficient light emission at all visible wavelengths.

OLED Science. Organic light-emitting devices (OLEDs) based on polymeric or molecular thin films have been under development for about two decades, mostly for applications in flat-panel displays, which are just beginning to achieve commercial success. They have a number of attractive properties for SSL, including ease (and potential affordability) of processing and the ability to tune device properties via chemical modification of the molecular structure of the thin film components. This potential is coupled with challenges that have so far prevented the simultaneous achievement of high brightness, high efficiency, and long device lifetime. Organic thin films are often structurally complex, and thin films that were long considered "amorphous" can exhibit order on the molecular (nano) scale. Research areas of particularly high priority include (1) quantifying local order and understanding its role in the charge transport and light-emitting properties of organic thin films; (2) developing the knowledge and expertise to synthesize and characterize organic compounds at a level of purity approaching that of inorganic semiconductors, and understanding the influence of low-level impurities on device properties in order to control materials degradation under SSL-relevant conditions; and (3) understanding the complex interplay of effects among the many individual materials and layers in an OLED to enable an integrated approach to OLED design.

Cross-Cutting and Novel Materials Science and Optical Physics. Some areas of scientific research are relevant to all materials systems. While research on inorganic and organic materials has thus far proceeded independently, the optimal material system and device architecture for SSL may be as yet undiscovered and, furthermore, may require the integration of both classes of materials in a single system. Research directions that could enable new materials and architectures include (1) the design, synthesis, and integration of novel, nanoscale, heterogeneous building blocks, such as functionalized carbon nanotubes or quantum dots, with properties optimized for SSL, (2) the development of innovative architectures to control the flow of energy in a light emitting material to maximize the efficiency of light extraction, (3) the exploitation of strong coupling between light and matter to increase the quality and efficiency of emitted light, (4) the development of multiscale modeling techniques extending from the atomic or molecular scale to the device and system scale, and (5) the development and use of new experimental, theoretical, and computational tools to probe and understand the fundamental properties of SSL materials at the smallest scales of length and time.

SUMMARY

The workshop participants enthusiastically concluded that the time is ripe for new fundamental science to beget a revolution in lighting technology. SSL sources based on organic and inorganic materials have reached a level of efficiency where it is possible to envision their use for general illumination. The research areas articulated in this report are targeted to enable disruptive advances in SSL performance and realization of this dream. Broad penetration of SSL technology into the mass lighting market, accompanied by vast savings in energy usage, requires nothing less. These new "good ideas" will be represented not by light bulbs, but by an entirely new lighting technology for the 21st century and a bright, energy-efficient future indeed.


Basic Research Needs for Superconductivity

This report is based on a BES Workshop on Superconductivity, May 8-10, 2006, to examine the prospects for superconducting grid technology and its potential for significantly increasing grid capacity, reliability, and efficiency to meet the growing demand for electricity over the next century.

As an energy carrier, electricity has no rival with regard to its environmental cleanliness, flexibility in interfacing with multiple production sources and end uses, and efficiency of delivery. In fact, the electric power grid was named "the greatest engineering achievement of the 20th century" by the National Academy of Engineering. This grid, a technological marvel ingeniously knitted together from local networks growing out from cities and rural centers, may be the biggest and most complex artificial system ever built. However, the growing demand for electricity will soon challenge the grid beyond its capability, compromising its reliability through voltage fluctuations that crash digital electronics, brownouts that disable industrial processes and harm electrical equipment, and power failures like the North American blackout in 2003 and subsequent blackouts in London, Scandinavia, and Italy in the same year. The North American blackout affected 50 million people and caused approximately $6 billion in economic damage over the four days of its duration.

Superconductivity offers powerful new opportunities for restoring the reliability of the power grid and increasing its capacity and efficiency. Superconductors are capable of carrying current without loss, making the parts of the grid they replace dramatically more efficient. Superconducting wires carry up to five times the current carried by copper wires that have the same cross section, thereby providing ample capacity for future expansion while requiring no increase in the number of overhead access lines or underground conduits. Their use is especially attractive in urban areas, where replacing copper with superconductors in power-saturated underground conduits avoids expensive new underground construction. Superconducting transformers cut the volume, weight, and losses of conventional transformers by a factor of two and do not require the contaminating and flammable transformer oils that violate urban safety codes. Unlike traditional grid technology, superconducting fault current limiters are smart. They increase their resistance abruptly in response to overcurrents from faults in the system, thus limiting the overcurrents and protecting the grid from damage. They react fast in both triggering and automatically resetting after the overload is cleared, providing a new, self-healing feature that enhances grid reliability. Superconducting reactive power regulators further enhance reliability by instantaneously adjusting reactive power for maximum efficiency and stability in a compact and economic package that is easily sited in urban grids. Not only do superconducting motors and generators cut losses, weight, and volume by a factor of two, but they are also much more tolerant of voltage sag, frequency instabilities, and reactive power fluctuations than their conventional counterparts.

The challenge facing the electricity grid to provide abundant, reliable power will soon grow to crisis proportions. Continuing urbanization remains the dominant historic demographic trend in the United States and in the world. By 2030, nearly 90% of the U.S. population will reside in cities and suburbs, where increasingly strict permitting requirements preclude bringing in additional overhead access lines, underground cables are saturated, and growth in power demand is highest. The power grid has never faced a challenge so great or so critical to our future productivity, economic growth, and quality of life. Incremental advances in existing grid technology are not capable of solving the urban power bottleneck. Revolutionary new solutions are needed — the kind that come only from superconductivity.

The Basic Energy Sciences Workshop on Superconductivity

The Basic Energy Sciences (BES) Workshop on Superconductivity brought together more than 100 leading scientists from universities, industry, and national laboratories in the United States, Europe, and Asia. Both basic and applied scientists were well represented, creating a valuable and rare opportunity for mutual creative stimulation. Advance planning for the workshop involved two U.S. Department of Energy offices: the Office of Electricity Delivery and Energy Reliability, which manages research and development for superconducting technology, and the Office of Basic Energy Sciences, which manages basic research on superconductivity.

Performance of superconductors

The workshop participants found that superconducting technology for wires, power control, and power conversion had already passed the design and demonstration stages. The discovery of copper oxide superconductors in 1986 was a landmark event, bringing forth a new generation of superconducting materials with transition temperatures of 90 K or above, which allow cooling with inexpensive liquid nitrogen or mechanical cryocoolers. Cables, transformers, and rotating machines using first-generation (1G) wires based on Bi2Sr2Ca2Cu3Ox allowed new design principles and performance standards to be established that enabled superconducting grid technology to compete favorably with traditional copper devices. The early 2000s saw a paradigm shift to second-generation (2G) wires based on YBa2Cu3O7 that use a very different materials architecture; these have the potential for better performance over a larger operating range with respect to temperature and magnetic field. 2G wires have advanced rapidly; their current-carrying ability has increased by a factor of 10, and their usable length has increased to 300 meters, compared with only a few centimeters five years ago.

While 2G superconducting wires now considerably outperform copper wires in both the capacity and the efficiency with which they transport current, significant performance gaps remain. The alternating-current (ac) losses in superconductors are a major source of heat generation and refrigeration costs; these costs decline significantly as the maximum lossless current-carrying capability increases. For the same operating current, a tenfold increase in the maximum current-carrying capability of the wire cuts the heat generated as a result of ac losses by the same factor of 10. For transporting current on the grid, an order-of-magnitude increase in current-carrying capability is needed to reduce the operational cost of superconducting lines and cables to competitive levels. Transformers, fault current limiters, and rotating machinery all contain coils of superconducting wire that create magnetic fields essential to their operation. 2G wires carry significantly less current in magnetic fields as small as 0.1 to 0.5 T, which are found in transformers and fault current limiters, and in fields of 3 to 5 T, which are needed for motors and generators. The fundamental factors that limit the current-carrying performance of 2G wires in magnetic fields must be understood and overcome to produce a five- to tenfold increase in their performance rating.
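
The stated inverse scaling between ac loss and critical current can be illustrated with one standard analytic model, the Norris expression for transport loss in a conductor of elliptical cross section; the choice of model and the example currents here are our assumptions, not taken from the report:

    import math

    MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

    def norris_ellipse_loss(i_peak, i_crit):
        """Transport ac loss per cycle per metre of conductor (J/m),
        after Norris (1970), for an elliptical cross section."""
        i = i_peak / i_crit
        return (MU0 * i_crit**2 / math.pi) * ((1 - i) * math.log(1 - i) + (2 - i) * i / 2)

    # Same 100 A operating current; tenfold higher critical current:
    q_low = norris_ellipse_loss(100.0, 1000.0)
    q_high = norris_ellipse_loss(100.0, 10000.0)
    print(round(q_low / q_high, 1))   # ~10.5: loss falls roughly tenfold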

Increasing the current-carrying capability of superconductors requires blocking the motion of "Abrikosov vortices" — nanoscale tubes of magnetic flux that form spontaneously inside superconductors upon exposure to magnetic fields. Vortices are immobilized by artificial defects in the superconducting material that attract the vortices and pin them in place. To pin vortices effectively, an understanding not only of the pinning strength of individual defects for individual vortices but also of the collective effects of many defects interacting with many vortices is needed. The similarities of vortex pinning and flow to glacier flow around rock obstacles, avalanche flow in landslides, and earthquake motion at fault lines are reflected in the colloquial name "vortex matter." To achieve a five- to tenfold increase in vortex pinning and current-carrying ability in superconductors, we must learn how to bridge the scientific gap separating the microscopic behavior of individual vortices and pinning sites in a superconductor from its macroscopic current-carrying ability.

Cost of superconductors

Although superconducting wires perform significantly better than copper wires in transmitting electricity, their cost is still too high. The cost of manufactured superconducting wires must be reduced by a factor of 10 to 100 to make them competitive with copper. Much of the manufacturing cost arises from the complex architecture of 2G wires, which are made up of a flexible metallic substrate (often of a magnetic material) on which up to seven additional layers must be sequentially deposited while a specific crystalline orientation is maintained from layer to layer. Significant advances in materials science are needed to simplify the architecture and the manufacturing process while maintaining crystalline orientation, flexibility, superconductor composition, and protection from excessive heat if there is an accidental loss of superconductivity.

Beyond their manufacturing cost, the operating cost of superconductors must be reduced. Copper wires require no active cooling to operate, while superconductors must be cooled to temperatures of between 50 and 77 K for most applications. The added cost of refrigeration is a significant factor in superconductor operating cost. Reducing refrigeration costs for future generations of superconducting applications is a major technology driver for the discovery or design of new superconducting materials with higher transition temperatures.
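
Elementary thermodynamics sets a floor on this refrigeration burden: even an ideal (Carnot) refrigerator must spend several watts of electricity per watt of heat removed at these temperatures, and practical cryocoolers fall well short of the Carnot limit. A minimal sketch, with the operating temperatures taken from the text and a 300 K room temperature as our assumption:

    def carnot_input_per_watt(t_cold, t_hot=300.0):
        """Minimum (Carnot-limit) electrical input in watts per watt of
        heat removed at t_cold, rejecting heat at t_hot (kelvin)."""
        return (t_hot - t_cold) / t_cold

    for t_cold in (77.0, 50.0):
        print(f"{t_cold:.0f} K: {carnot_input_per_watt(t_cold):.1f} W per W")
    # 77 K: 2.9 W per W; 50 K: 5.0 W per W.
    # Real cryocoolers typically need several times the Carnot minimum.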

Phenomena of superconductivity

These achievements and challenges in superconducting technology are matched by equally promising achievements and challenges in the fundamental science of superconductivity. Since 1986, new materials discoveries have pushed the superconducting transition temperature in elements from 12 to 20 K (for Li under pressure), in heavy fermion compounds from 1.5 to 18.5 K (for PuCoGa5), in noncuprate oxides from 13 to 30 K (for Ba1-xKxBiO3), in binary borides from 6 to 40 K (for MgB2), and in graphite intercalation compounds from 4.05 to 11.5 K (for CaC6). In addition, superconductivity has been discovered for the first time in carbon compounds like boron-doped diamond (11 K) and fullerides (up to 40 K for Cs3C60 under pressure), as well as in borocarbides (up to 16.5 K with metastable phases up to 23 K). We are finding that superconductivity, formerly thought to be a rare occurrence in special compounds, is a common behavior of correlated electrons or "electron matter" in materials. As of this writing, fully 55 elements display superconductivity at some combination of temperature and pressure; this number is up from 43 in 1986, an increase of 28%.

As the number and classes of materials displaying superconductivity have mushroomed, so also has the variety of pairing mechanisms and symmetries of superconductivity. The superconducting state is built of "Cooper pairs" — composite objects composed of two electrons bound by a pairing mechanism. The spatial relation of the charges in a pair is described by its pairing symmetry. Copper oxides are known to have d-wave pairing symmetry, in contrast to the s-wave pairing of conventional superconductors; Sr2RuO4 and certain organic superconductors appear to be p-wave. Superconductivity has been found close to magnetic order and can either compete against it or coexist with it, suggesting that spin plays a role in the pairing mechanism. Tantalizing glimpses of superconducting-like states at very high temperatures have been seen in the underdoped phase of yttrium barium copper oxide (YBCO), in the form of pseudogaps and of strong transverse electric fields induced by temperature gradients (the "vortex Nernst effect") that typically imply vortex motion. The proliferation of new classes of superconducting materials; of record-breaking transition temperatures in the known classes of superconductors; of unconventional pairing mechanisms and symmetries of superconductivity; and of exotic, superconducting-like features well above the superconducting transition temperature all imply that superconducting electron matter is a far richer field than we suspected even 10 years ago.

While there are many fundamental puzzles in this profusion of intriguing effects, the central challenge with the biggest impact is to understand the mechanisms of high-temperature superconductivity. This is difficult precisely because the mechanisms are entangled with anomalous normal-state effects, which are noticeably absent in the normal states of conventional superconductors. In the underdoped copper oxides (as in other complex oxides), there are many signs of highly correlated normal states, like the spontaneous formation of stripes and pseudogaps that exist above the superconducting transition temperature. They may be necessary precursors to the high-temperature superconducting state, or perhaps competitors, and it seems clear that an explanation of superconductivity will include these correlated normal states in the same framework. For two decades, theorists have struggled and failed to find a solution, even as experimentalists tantalize them with ever more fascinating anomalous features. The more than 50 superconducting compounds in the copper oxide family demonstrate that the mechanism of superconductivity is robust, and that it is likely to apply widely in nature among other complex metals with highly correlated normal states. Although finding the mechanism is frustratingly difficult, its value, once found, makes the struggle compelling.

Research directions

The BES Workshop on Superconductivity identified seven "priority research directions" and two "cross-cutting research directions" that capture the promise of revolutionary advances in superconductivity science and technology. The first seven directions set a course for research in superconductivity that will exploit the opportunities uncovered by the workshop panels in materials, phenomena, theory, and applications. These research directions extend the reach of superconductivity to higher transition temperatures and higher current-carrying capabilities, create new families of superconducting materials with novel nanoscale structures, establish fundamental principles for understanding the rich variety of superconducting behavior within a single framework, and develop tools and materials that enable new superconducting technology for the electric power grid that will dramatically improve its capacity, reliability, and efficiency for the coming century.

The seven priority research directions identified by the workshop take full advantage of the rapid advances in nanoscale science and technology of the last five years. Superconductivity is ultimately a nanoscale phenomenon. Its two composite building blocks — Cooper pairs mediating the superconducting state and Abrikosov vortices mediating its current-carrying ability — have dimensions ranging from a tenth of a nanometer to a hundred nanometers. Their nanoscale interactions among themselves and with structures of comparable size determine all of their superconducting properties. The continuing development of powerful nanofabrication techniques, by top-down lithography and bottom-up self-assembly, creates promising new horizons for designer superconducting materials with higher transition temperatures and current-carrying ability. Nanoscale characterization techniques with ever smaller spatial and temporal resolution — including aberration-corrected electron microscopy, nanofocused x-ray beams from high-intensity synchrotrons, scanning probe microscopy, and ultrafast x-ray laser spectroscopy — allow us to track the motion of a single vortex interacting with a single pinning defect or to observe Cooper pair making and pair breaking near a magnetic impurity atom. The numerical simulation of superconducting phenomena in confined geometries using computer clusters of a hundred or more nodes allows the interaction of Cooper pairs and Abrikosov vortices with nanoscale boundaries and architectures to be isolated. Understanding these nanoscale interactions with artificial boundaries enables the numerical design of functional superconductors. The promise of nanoscale fabrication, characterization, and simulation for advancing the fundamental science of superconductivity and rational design of functional superconducting materials for next-generation grid technology has never been higher.

A key outcome of the BES Workshop on Superconductivity has been a strong sense of optimism and awareness of the opportunity that spans the community of participants in the basic and applied sciences. In the last decade, enormous strides have been made in understanding the science of high-temperature superconductivity and exploiting it for electricity production, distribution, and use. The promise of developing a smart, self-healing grid based on superconductors that require no cooling is an inspiring "grand energy challenge" that drives the frontiers of basic science and applied technology. Meeting this 21st century challenge would rival the 20th century achievement of providing electricity for everyone at the flick of a switch. The seven priority and two cross-cutting research directions identified by the workshop participants offer the potential for achieving this challenge and creating a transformational impact on our electric power infrastructure.



The Path to Sustainable Nuclear Energy: Basic and Applied Research Opportunities for Advanced Fuel Cycles

This report is based on a small DOE-sponsored workshop held in September 2005 to identify new basic science that will be the foundation for advances in nuclear fuel-cycle technology in the near term, and for changing the nature of fuel cycles and of the nuclear energy industry in the long term. The goals are to enhance the development of nuclear energy, to maximize energy production in nuclear reactor parks, and to minimize radioactive wastes, other environmental impacts, and proliferation risks.

The limitations of the once-through fuel cycle can be overcome by adopting a closed fuel cycle, in which the irradiated fuel is reprocessed and its components are separated into streams that are recycled into a reactor or disposed of in appropriate waste forms. The recycled fuel is irradiated in a reactor, where certain constituents are partially transmuted into heavier isotopes via neutron capture or into lighter isotopes via fission. Fast reactors are required to complete the transmutation of long-lived isotopes. Closed fuel cycles are encompassed by the Department of Energy's Advanced Fuel Cycle Initiative (AFCI), to which basic scientific research can contribute.

Two nuclear reactor system architectures can meet the AFCI objectives: a "single-tier" system or a "dual-tier" system. Both begin with light water reactors and incorporate fast reactors. The "dual-tier" systems transmute some plutonium and neptunium in light water reactors and all remaining transuranic elements (TRUs) in a closed-cycle fast reactor.

Basic science initiatives are needed in two broad areas:

  • Near-term impacts that can enhance the development of either "single-tier" or "dual-tier" AFCI systems, primarily within the next 20 years, through basic research. Examples:

    • Dissolution of spent fuel, separations of elements for TRU recycling and transmutation
    • Design, synthesis, and testing of inert matrix nuclear fuels and non-oxide fuels
    • Invention and development of accurate on-line monitoring systems for chemical and nuclear species in the nuclear fuel cycle
    • Development of advanced tools for designing reactors with reduced margins and lower costs
  • Long-term nuclear reactor development requires basic science breakthroughs:

    • Understanding of materials behavior under extreme environmental conditions
    • Creation of new, efficient, environmentally benign chemical separations methods
    • Modeling and simulation to improve nuclear reaction cross-section data, design new materials and separation systems, and propagate uncertainties within the fuel cycle
    • Improvement of proliferation resistance by strengthening safeguards technologies and decreasing the attractiveness of nuclear materials

    A series of translational tools is proposed to advance the AFCI objectives and to bring basic science concepts and processes promptly into the technological sphere. These tools have the potential to revolutionize nuclear engineering R&D by replacing lengthy experimental campaigns with a rigorous approach based on modeling, key fundamental experiments, and advanced simulations.

    Basic Research Needs for Solar Energy Utilization

    This report is based on a BES Workshop on Solar Energy Utilization, April 18-21, 2005, to examine the challenges and opportunities for the development of solar energy as a competitive energy source and to identify the technical barriers to large-scale implementation of solar energy and the basic research directions showing promise to overcome them.

    World demand for energy is projected to more than double by 2050 and to more than triple by the end of the century. Incremental improvements in existing energy networks will not be adequate to supply this demand sustainably. Finding sufficient supplies of clean energy for the future is one of society's most daunting challenges.

    Sunlight provides by far the largest of all carbon-neutral energy sources. More energy from sunlight strikes the Earth in one hour (4.3 × 10²⁰ J) than all the energy consumed on the planet in a year (4.1 × 10²⁰ J). We currently exploit this solar resource through solar electricity — a $7.5 billion industry growing at a rate of 35-40% per annum — and through solar-derived fuel from biomass, which provides the primary energy source for over a billion people.
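
    The one-hour comparison can be checked directly from the two figures quoted above (a minimal sketch; the seconds-per-year constant is standard):

        # Average power implied by the two energy figures in the text.
        solar_j_per_hour = 4.3e20      # sunlight striking Earth in one hour, J
        demand_j_per_year = 4.1e20     # world energy consumption in one year, J
        solar_w = solar_j_per_hour / 3600          # ~1.2e17 W
        demand_w = demand_j_per_year / 3.156e7     # ~1.3e13 W
        # The solar resource exceeds average demand roughly nine-thousand-fold.
        print(f"solar resource ~{solar_w / demand_w:,.0f} times average demand")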

    Yet, in 2001, solar electricity provided less than 0.1% of the world's electricity, and solar fuel from modern (sustainable) biomass provided less than 1.5% of the world's energy.

    The huge gap between our present use of solar energy and its enormous undeveloped potential defines a grand challenge in energy research. Sunlight is a compelling solution to our need for clean, abundant sources of energy in the future. It is readily available and secure from geopolitical tension, and it poses no threat to our environment through pollution or to our climate through greenhouse gases.

    This report of the Basic Energy Sciences Workshop on Solar Energy Utilization identifies the key scientific challenges and research directions that will enable efficient and economic use of the solar resource to provide a significant fraction of global primary energy by the mid 21st century. The report reflects the collective output of the workshop attendees, which included 200 scientists representing academia, national laboratories, and industry in the United States and abroad, and the U.S. Department of Energy's Office of Basic Energy Sciences and Office of Energy Efficiency and Renewable Energy.

    Solar energy conversion systems fall into three categories according to their primary energy product: solar electricity, solar fuels, and solar thermal systems. Each of the three generic approaches to exploiting the solar resource has untapped capability well beyond its present usage. Workshop participants considered the potential of all three approaches, as well as the potential of hybrid systems that integrate key components of individual technologies into novel cross-disciplinary paradigms.

    SOLAR ELECTRICITY

    The challenge in converting sunlight to electricity via photovoltaic solar cells is dramatically reducing the cost/watt of delivered solar electricity — by approximately a factor of 5-10 to compete with fossil and nuclear electricity and by a factor of 25-50 to compete with primary fossil energy. New materials to efficiently absorb sunlight, new techniques to harness the full spectrum of wavelengths in solar radiation, and new approaches based on nanostructured architectures can revolutionize the technology used to produce solar electricity. The technological development and successful commercialization of single-crystal solar cells demonstrates the promise and practicality of photovoltaics, while novel approaches exploiting thin films, organic semiconductors, dye sensitization, and quantum dots offer fascinating new opportunities for cheaper, more efficient, longer-lasting systems. Many of the new approaches outlined by the workshop participants are enabled by (1) remarkable recent advances in the fabrication of nanoscale architectures by novel top-down and bottom-up techniques; (2) advances in nanoscale characterization using electron, neutron, and x-ray scattering and spectroscopy; and (3) sophisticated computer simulations of electronic and molecular behavior in nanoscale semiconductor assemblies using density functional theory. Such advances in the basic science of solar electric conversion, coupled to the new semiconductor materials now available, could drive a revolution in the way that solar cells are conceived, designed, implemented, and manufactured.

    SOLAR FUELS

    The inherent day-night and sunny-cloudy cycles of solar radiation necessitate an effective method to store the converted solar energy for later dispatch and distribution. The most attractive and economical method of storage is conversion to chemical fuels. The challenge in solar fuel technology is to produce chemical fuels directly from sunlight in a robust, cost-efficient fashion.

    For millennia, cheap solar fuel production from biomass has been the primary energy source on the planet. For the last two centuries, however, energy demand has outpaced biomass supply. The use of existing types of plants requires large land areas to meet a significant portion of primary energy demand. Almost all of the arable land on Earth would need to be covered with the fastest-growing known energy crops, such as switchgrass, to produce the amount of energy currently consumed from fossil fuels annually. Hence, the key research goals are (1) application of the revolutionary advances in biology and biotechnology to the design of plants and organisms that are more efficient energy conversion "machines," and (2) design of highly efficient, all-artificial, molecular-level energy conversion machines exploiting the principles of natural photosynthesis. A key element in both approaches is the continued elucidation — by means of structural biology, genome sequencing, and proteomics — of the structure and dynamics involved in the biological conversion of solar radiation to sugars and carbohydrates. The revelation of these long-held secrets of natural solar conversion by means of cutting-edge experiment and theory will enable a host of exciting new approaches to direct solar fuel production. Artificial nanoscale assemblies of new organic and inorganic materials and morphologies, replacing natural plants or algae, can now use sunlight to produce H2 directly by splitting water, and hydrocarbons by reducing atmospheric CO2. While these laboratory successes demonstrate the appealing promise of direct solar fuel production by artificial molecular machines, there is an enormous gap between the present state of the art and a deployable technology. The current laboratory systems are unstable over long time periods, too expensive, and too inefficient for practical implementation. Basic research is needed to develop approaches and systems to bridge the gap between the scientific frontier and practical technology.

    SOLAR THERMAL SYSTEMS

    The key challenge in solar thermal technology is to identify cost-effective methods to convert sunlight into storable, dispatchable thermal energy. Reactors heated by focused, concentrated sunlight in thermal towers reach temperatures exceeding 3,000°C, enabling the efficient chemical production of fuels from raw materials without expensive catalysts. New materials that withstand the high temperatures of solar thermal reactors are needed to drive applications of this technology. New chemical conversion sequences, like those that split water to produce H2 using the heat from nuclear fission reactors, could be used to convert focused solar thermal energy into chemical fuel with unprecedented efficiency and cost effectiveness. At the lower temperatures produced by more modest solar concentration, solar heat can drive turbines that generate electricity mechanically with greater efficiency than the current generation of solar photovoltaics. When combined with solar-driven chemical storage/release cycles, such as those based on the dissociation and synthesis of ammonia, solar engines can produce electricity continuously 24 h/day. Novel thermal storage materials with an embedded phase transition offer the potential of high thermal storage capacity and long release times, bridging the diurnal cycle. Nanostructured thermoelectric materials, in the form of nanowires or quantum dot arrays, promise direct electricity production with efficiencies of 20-30% from temperature differentials of a few hundred degrees Celsius. The much larger differentials in solar thermal reactors make even higher efficiencies possible. New low-cost, high-performance reflective materials for the focusing systems are needed to optimize the cost effectiveness of all concentrated solar thermal technologies.
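
    The quoted 20-30% range is consistent with the standard maximum-efficiency expression for a thermoelectric generator with figure of merit ZT; the temperatures and ZT value in this sketch are illustrative assumptions (the ZT shown is beyond today's best bulk materials), not figures from the report:

        import math

        def te_max_efficiency(t_hot, t_cold, zt):
            """Standard maximum efficiency of a thermoelectric generator:
            the Carnot factor times a ZT-dependent reduction. Temperatures
            in kelvin; ZT evaluated at the mean operating temperature."""
            carnot = (t_hot - t_cold) / t_hot
            m = math.sqrt(1.0 + zt)
            return carnot * (m - 1.0) / (m + t_cold / t_hot)

        # Illustrative: a few-hundred-degree differential, aggressive ZT = 4.
        print(f"{te_max_efficiency(800.0, 300.0, 4.0):.0%}")   # ~30%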

    PRIORITY RESEARCH DIRECTIONS

    Workshop attendees identified thirteen priority research directions (PRDs) with high potential for producing scientific breakthroughs that could dramatically advance solar energy conversion to electricity, fuels, and thermal end uses. Many of these PRDs address issues of concern to more than one approach or technology. These cross-cutting issues include (1) coaxing cheap materials to perform as well as expensive materials in terms of their electrical, optical, chemical, and physical properties; (2) developing new paradigms for solar cell design that surpass traditional efficiency limits; (3) finding catalysts that enable inexpensive, efficient conversion of solar energy into chemical fuels; (4) identifying novel methods for self-assembly of molecular components into functionally integrated systems; and (5) developing materials for solar energy conversion infrastructure, such as transparent conductors and robust, inexpensive thermal management materials.

    A key outcome of the workshop is the sense of optimism in the cross-disciplinary community of solar energy scientists spanning academia, government, and industry. Although large barriers prevent present technology from producing a significant fraction of our primary energy from sunlight by the mid-21st century, workshop participants identified promising routes for basic research that can bring this goal within reach. Much of this optimism is based on the continuing, rapid worldwide progress in nanoscience. Powerful new methods of nanoscale fabrication, characterization, and simulation — using tools that were not available as little as five years ago — create new opportunities for understanding and manipulating the molecular and electronic pathways of solar energy conversion. Additional optimism arises from impressive strides in genetic sequencing, protein production, and structural biology that will soon bring the secrets of photosynthesis and natural bio-catalysis into sharp focus. Understanding these highly effective natural processes in detail will allow us to modify and extend them to molecular reactions that directly produce sunlight-derived fuels that fit seamlessly into our existing energy networks. The rapid advances on the scientific frontiers of nanoscience and molecular biology provide a strong foundation for future breakthroughs in solar energy conversion.



    Advanced Computational Materials Science: Application to Fusion and Generation IV Fission Reactors

    This report is based on a workshop held March 31-April 2, 2004, to determine the degree to which an increased effort in modeling and simulation could help bridge the gap between the data that is needed to support the implementation of advanced nuclear technologies and the data that can be obtained in available experimental facilities.

    The need to develop materials capable of performing in the severe operating environments expected in fusion and fission (Generation IV) reactors represents a significant challenge in materials science. There is a range of potential Gen-IV fission reactor design concepts, and each concept has its own unique demands. Improved economic performance is a major goal of the Gen-IV designs. As a result, most designs call for significantly higher operating temperatures than the current generation of light water reactors (LWRs) to obtain higher thermal efficiency. In many cases, the desired operating temperatures rule out the use of the structural alloys employed today. The very high operating temperature (up to 1000°C) associated with the Next Generation Nuclear Plant (NGNP) is a prime example of an attractive new system that will require the development of new structural materials. Fusion power plants represent an even greater challenge to structural materials development and application. The operating temperatures, neutron exposure levels, and thermo-mechanical stresses are comparable to or greater than those for proposed Gen-IV fission reactors. In addition, the transmutation products created in the structural materials by the high-energy neutrons produced in the deuterium-tritium (DT) plasma can profoundly influence the microstructural evolution and mechanical behavior of these materials.

    Although the workshop addressed issues relevant to both Gen-IV and fusion reactor materials, much of the discussion focused on fusion; the same focus is reflected in this report. Most of the physical models and computational methods presented during the workshop apply equally to both types of nuclear energy systems. The primary factor that differentiates the materials development path for the two systems is that nearly prototypical irradiation environments for Gen-IV materials can be found or built in existing fission reactors. This is not the case for fusion. The only fusion-relevant, 14 MeV neutron sources ever built (such as the rotating target neutron sources, RTNS-I and -II at LLNL) were relatively low-power accelerator-based systems. The RTNS-II "high" flux irradiation volume was quite small, less than 1 cm³, and only low doses could be achieved. The maximum dose achieved was much less than 0.1 displacements per atom (dpa). Thus, RTNS-II, which last operated in 1986, provided only a limited opportunity for fundamental investigations of the effects of 14 MeV neutrons characteristic of DT fusion.

    Historically, both the fusion and fission reactor programs have taken advantage of and built on research carried out by the other program. This leveraging can be expected to continue over the next ten years as both experimental and modeling activities in support of the Gen-IV program grow substantially. The Gen-IV research will augment the fusion studies (and vice versa) in areas where similar materials and exposure conditions are of interest. However, in addition to the concerns that are common to both fusion and advanced fission reactor programs, designers of a future DT fusion reactor have the unique problem of anticipating the effects of the 14 MeV neutron source term. In particular, the question arises whether irradiation data obtained in a near-prototypic irradiation environment such as the International Fusion Materials Irradiation Facility (IFMIF) are needed to verify results obtained from computational materials research. The need for a theory and modeling effort that works hand-in-hand with a complementary experimental program, both for model development and verification and for validation of model predictions, was discussed extensively at the workshop. There was a clear consensus that an IFMIF-like irradiation facility is likely to be required to contribute to this research. However, the question of whether IFMIF itself is needed was explored from two different points of view at the workshop. These complementary (and in some cases opposing) points of view can be coarsely characterized as "scientific" and "engineering."

    The recent and anticipated progress in computational materials science presented at the workshop provides some confidence that many of the scientific questions whose answers will underpin the successful use of structural materials in a DT fusion reactor can be addressed in a reasonable time frame if sufficient resources are devoted to this effort. For example, advances in computing hardware and software should permit improved (and in some cases the first) descriptions of relevant properties in alloys based on ab initio calculations. Such calculations could provide the basis for realistic interatomic potentials for alloys, including alloy-He potentials, that can be applied in classical molecular dynamics simulations. These potentials must describe many-body interactions in more detail than is accounted for in the current generation of potentials, which are generally based on a simple embedding function. In addition, the potentials used under fusion reactor conditions (very high primary knock-on atom, or PKA, energies) should account for the effects of local electronic excitation and electronic energy loss. The computational cost of using more complex potentials will also require the next generation of massively parallel computers. New results of ab initio and atomistic calculations can be coupled with ongoing advances in kinetic and phase field models to dramatically improve predictions of the non-equilibrium, radiation-induced evolution in alloys with unstable microstructures. This includes phase stability and the effects of helium on each microstructural component.
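
    For readers unfamiliar with the "simple embedding function" mentioned above, current-generation potentials of the embedded-atom type write each atom's energy as a pairwise term plus an embedding energy that depends on the local electron density. The sketch below illustrates the functional form only; the specific functions and parameters are toy choices of ours, not a fitted potential for any real alloy:

        import math

        # Toy embedded-atom-method (EAM) energy for one atom:
        #   E_i = F(rho_i) + (1/2) * sum_j phi(r_ij)
        def phi(r):             # pairwise repulsion (toy form)
            return math.exp(-2.0 * r) / r

        def rho_contrib(r):     # electron density from one neighbour (toy form)
            return math.exp(-1.5 * r)

        def embed(rho):         # embedding function F(rho) (toy form)
            return -math.sqrt(rho)

        def atom_energy(neighbour_distances):
            rho = sum(rho_contrib(r) for r in neighbour_distances)
            pair = 0.5 * sum(phi(r) for r in neighbour_distances)
            return embed(rho) + pair

        print(atom_energy([2.5, 2.5, 2.9, 2.9]))   # energy in arbitrary units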

    However, for all its promise, computational materials science is still a house under construction. As such, the current reach of the science is limited. Theory and modeling can be used to develop understanding of known critical physical phenomena, and computer experiments can be, and have been, used to identify new phenomena and mechanisms and to aid in alloy design. Yet it is questionable whether the science will be sufficiently mature in the foreseeable future to provide a rigorous scientific basis for predicting critical materials properties, or for extrapolating well beyond the available validation database.

    Two other issues remain even if the scientific questions appear to have been adequately answered. These are licensing and capital investment. Even a high degree of scientific confidence that a given alloy will perform as needed in a particular Gen-IV or fusion environment is not necessarily transferable to the reactor licensing or capital market regimes. The philosophy, codes, and standards employed for reactor licensing are properly conservative with respect to design data requirements. Experience with the U.S. Nuclear Regulatory Commission suggests that only modeling results that are strongly supported by relevant, prototypical data will have an impact on the licensing process. In a similar way, it is expected that investment on the scale required to build a fusion power plant (several billion dollars) could only be obtained if a very high level of confidence existed that the plant would operate long and safely enough to return the investment.

    These latter two concerns appear to dictate that an experimental facility capable of generating a sufficient, if limited, body of design data under essentially prototypic conditions (i.e., with ~14 MeV neutrons) will ultimately be required for the commercialization of fusion power. An aggressive theory and modeling effort will reduce the time and experimental investment required to develop the advanced materials that can perform in a DT fusion reactor environment. For example, the quantity of design data may be reduced to that required to confirm model predictions for key materials at critical exposure conditions. This will include some data at a substantial fraction of the anticipated end-of-life dose, which raises the issue of when such an experimental facility is required. Long lead times for construction of complex facilities, coupled with several years of irradiation to reach the highest doses, imply that the decision to build any fusion-relevant irradiation facility must be made on the order of 10 years before the design data is needed.

    Two related areas of research can be used as reference points for the expressed need to obtain experimental validation of model predictions. Among the lessons learned from the Accelerated Strategic Computing Initiative (ASCI), the importance of code validation and verification was emphasized at the workshop. Despite an extensive investment in theory and modeling of the relevant physics, the National Ignition Facility (NIF) is being built at LLNL to verify the performance of the physics codes. Similarly, while the U.S. and international fusion community has invested considerable resources in simulating the behavior of magnetically confined plasmas, a series of experimental devices (e.g., DIII-D, TFTR, JET, NSTX, and NCSX) have been, or will be, built and numerous experiments carried out to validate the predicted plasma performance on the route to ITER and a demonstration fusion power reactor.


    Opportunities for Discovery: Theory and Computation in Basic Energy Sciences

    This report is based on the deliberations of the Basic Energy Sciences Advisory Committee (BESAC) Subcommittee on Theory and Computation following meetings on February 22 and April 16-17, 2004, to obtain testimony and discuss input from the scientific community on research directions for theory and computation to advance the scientific mission of the Office of Basic Energy Sciences (BES).

    New scientific frontiers, recent advances in theory, and rapid increases in computational capabilities have created compelling opportunities for theory and computation to advance the science.

    The prospects for success in the experimental programs of BES will be enhanced by pursuing these opportunities. This report makes the case for an expanded research program in theory and computation in BES.

    The Subcommittee on Theory and Computation of the Basic Energy Sciences Advisory Committee was charged on October 17, 2003, by the Director, Office of Science, with identifying current and emerging challenges and opportunities for theoretical research within the scientific mission of BES, paying particular attention to how computing will be employed to enable that research. A primary purpose of the Subcommittee was to identify those investments that are necessary to ensure that theoretical research will have maximum impact in the areas of importance to BES, and to assure that BES researchers will be able to exploit the entire spectrum of computational tools, including leadership class computing facilities. The Subcommittee's Findings and Recommendations are presented in Section VII of the report.

    A confluence of scientific events has enhanced the importance of theory and computation in BES.
    After considering both written and verbal testimony from members of the scientific community, the Subcommittee observed that a confluence of developments in scientific research over the past fifteen years has quietly revolutionized both the present role and future promise of theory and computation in the disciplines that comprise the Basic Energy Sciences. Those developments fall into four broad categories:

    1. a set of striking recent scientific successes that demonstrate the increased impact of theory and computation;
    2. the appearance of new scientific frontiers in which innovative theory is required to lead inquiry and unravel the mysteries posed by new observations;
    3. the development of new experimental capabilities, including large-scale facilities, that provide challenging new data and demand both fundamental and computationally intensive theory to realize their promise;
    4. the ongoing increase of computational capability provided by continued improvements in computers and algorithms, which has dramatically amplified the power and applicability of theoretical research.

    The sum of these events argues powerfully that now is the time for an increase in the investment by BES in theory and computation, including modeling and simulation.

    Emerging themes in the Basic Energy Sciences and nine specific areas of opportunity for scientific discovery.
    The report identifies nine specific areas of opportunity in which expanded investment in theory and computation holds great promise to enhance discovery in the scientific mission of BES. While this list is not exhaustive, it represents a range of persuasive prospects broadly characterized by the themes of "Complexity" and "Control" that describe much of the BES portfolio. The challenges and promise of theory in each of these nine areas are described in detail.

    Connecting theory with experiment.
    Connecting the BES theory and computation programs with experimental research taking place at existing or planned BES facilities deserves a high priority. BES should undertake a major new thrust to significantly augment its theoretical and computational programs coupled to experimental research at its major facilities. We also urge that such an effort not be limited to research at the facilities but extend as well to the coupling of theory and computation with new capabilities in "tabletop" experimental science.

    The unity of modern theory and computation.
    For a number of the research problems in BES, we are fortunate to know the equations that must be solved. For this reason many BES disciplines are presently exploiting high-end computation and are poised to use it at the leadership scale. However, in many other areas of BES, we do not know all the equations, nor do we have all the mathematical and physical insights we need, and therefore we have not yet invented the required algorithms. In an expanded yet balanced theory effort in BES, enhancements in computation must be accompanied by enhancements in the rest of the theoretical endeavor. Conceptual theory and computation are not separate enterprises.

    Resources necessary for success in the BES theory enterprise.
    A successful BES theory effort must provide the full spectrum of computational resources, as well as support the development and maintenance of scientific computer codes as shared scientific instruments. We find that BES is ready for and requires access to leadership-scale computing to perform calculations that cannot be done elsewhere, but also that a large amount of essential BES computation falls between the leadership and the desktop scales. Moreover, BES should provide support for the development and maintenance of shared scientific software to enhance the scientific impact of the BES-supported theory community and to remove a key obstacle to the effective exploitation of high-end computing resources and facilities.

    In summary, the Subcommittee finds that there is a compelling need for BES to expand its programs to capture opportunities created by the combination of new capabilities in theory and computation and the opening of new experimental frontiers. Providing the right resources, supporting new styles of theoretical inquiry, and building a properly balanced program are all essential for the success of an expanded effort in theory and computation. The experimental programs of BES will be enhanced by such an effort.



    Nanoscience Research for Energy Needs

    This report is based upon a BES-cosponsored National Nanotechnology Initiative (NNI) Workshop held March 16-18, 2004, by the Nanoscale Science, Engineering, and Technology (NSET) Subcommittee of the National Science and Technology Council (NSTC) to address the Grand Challenge in Energy Conversion and Storage set out in the NNI. This report was originally released on June 24, 2004, during the Department of Energy NanoSummit. The second edition that is provided here was issued in June 2005.

    The world demand for energy is expected to double to 28 terawatts by the year 2050. Compounding the challenge presented by this projection is the growing need to protect our environment by increasing energy efficiency and developing "clean" energy sources. These are indeed global challenges, and their resolution is vital to our energy security. Recent reports on Basic Research Needs to Assure a Secure Energy Future and Basic Research Needs for the Hydrogen Economy have recognized that scientific breakthroughs and truly revolutionary developments are demanded. Within this context, nanoscience and nanotechnology present exciting and requisite approaches to addressing these challenges.

    An interagency workshop to identify and articulate the relationship of nanoscale science and technology to the nation's energy future was convened on March 16-18, 2004 in Arlington, Virginia. The meeting was jointly sponsored by the Department of Energy and, through the National Nanotechnology Coordination Office, the other member agencies of the Nanoscale Science, Engineering and Technology Subcommittee of the Committee on Technology, National Science and Technology Council. This report is the outcome of that workshop.

    The workshop had 63 invited presenters: 32 from universities, 26 from national laboratories, and 5 from industry. This workshop is one in a series intended to provide input from the research community on the next NNI strategic plan, which the NSTC is required to deliver to Congress on the first anniversary of the signing of the 21st Century Nanotechnology R&D Act (December 3, 2003).

    At the root of the opportunities provided by nanoscience to impact our energy security is the fact that all the elementary steps of energy conversion (charge transfer, molecular rearrangement, chemical reactions, etc.) take place on the nanoscale. Thus, the development of new nanoscale materials, as well as the methods to characterize, manipulate, and assemble them, creates an entirely new paradigm for developing new and revolutionary energy technologies. The primary outcome of the workshop is the identification of nine research targets in energy-related science and technology in which nanoscience is expected to have the greatest impact:

    • Scalable methods to split water with sunlight for hydrogen production
    • Highly selective catalysts for clean and energy-efficient manufacturing
    • Harvesting of solar energy with 20 percent power efficiency and 100 times lower cost
    • Solid-state lighting at 50 percent of the present power consumption
    • Super-strong, light-weight materials to improve efficiency of cars, airplanes, etc.
    • Reversible hydrogen storage materials operating at ambient temperatures
    • Power transmission lines capable of transmitting 1 gigawatt
    • Low-cost fuel cells, batteries, thermoelectrics, and ultra-capacitors built from nanostructured materials
    • Materials synthesis and energy harvesting based on the efficient and selective mechanisms of biology

    The report contains descriptions of many examples indicative of outcomes and expected progress in each of these research targets. For successful achievement of these research targets, participants recognized six foundational and vital crosscutting nanoscience research themes:

    • Catalysis by nanoscale materials
    • Using interfaces to manipulate energy carriers
    • Linking structure and function at the nanoscale
    • Assembly and architecture of nanoscale structures
    • Theory, modeling, and simulation for energy nanoscience
    • Scalable synthesis methods


    DOE-NSF-NIH Workshop on Opportunities in THz Science

    This report is based on a Workshop on Opportunities in Terahertz (THz) Science held February 12-14, 2004, to discuss basic research problems that can be answered using THz radiation. The workshop did not focus on the wide range of potential applications of THz radiation in engineering, defense and homeland security, or the commercial and government sectors of the economy. The workshop was jointly sponsored by DOE, NSF, and NIH.

    The region of the electromagnetic spectrum from 0.3 to 20 THz (10-600 cm⁻¹, 1 mm-15 µm wavelength) is a frontier area for research in physics, chemistry, biology, medicine, and materials sciences. Sources of high-quality radiation in this region have been scarce, but this gap has recently begun to be filled by a wide range of new technologies. Terahertz radiation is now available in both cw and pulsed form, down to single cycles or less, with peak powers up to 10 MW. New sources have led to new science in many areas, as scientists become aware of the opportunities for research progress in their fields using THz radiation.
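
    The quoted ranges follow directly from wavelength = c/frequency and wavenumber = frequency/c. The short Python sketch below is an illustrative aside, not part of the report; the constant and function names are our own:

        # Convert a frequency in THz to vacuum wavelength and wavenumber.
        # Illustrative sketch; not from the report.
        C = 2.998e8  # speed of light in vacuum, m/s

        def thz_to_wavelength_um(f_thz):
            """Vacuum wavelength in micrometers for a frequency given in THz."""
            return C / (f_thz * 1e12) * 1e6

        def thz_to_wavenumber_cm(f_thz):
            """Spectroscopic wavenumber in cm^-1 for a frequency given in THz."""
            return f_thz * 1e12 / (C * 100.0)

        for f in (0.3, 1.0, 20.0):
            print(f"{f:5.1f} THz -> {thz_to_wavelength_um(f):7.1f} um, "
                  f"{thz_to_wavenumber_cm(f):6.1f} cm^-1")

    Running this confirms that 0.3 THz corresponds to roughly 1 mm and 10 cm⁻¹, while 20 THz corresponds to 15 µm and about 667 cm⁻¹ (the 600 cm⁻¹ quoted above is evidently a round figure).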

    Science at a Time Scale Frontier: THz-frequency electromagnetic radiation, with a fundamental period of around 1 ps, is uniquely suited to study and control systems of central importance: electrons in highly-excited atomic Rydberg states orbit at THz frequencies. Small molecules rotate at THz frequencies. Collisions between gas phase molecules at room temperature last about 1 ps. Biologically-important collective modes of proteins vibrate at THz frequencies. Frustrated rotations and collective modes cause polar liquids (such as water) to absorb at THz frequencies. Electrons in semiconductors and their nanostructures resonate at THz frequencies. Superconducting energy gaps are found at THz frequencies. An electron in Intel's THz Transistor races under the gate in ~1 ps. Gaseous and solid-state plasmas oscillate at THz frequencies. Matter at temperatures above 10 K emits black-body radiation at THz frequencies. This report also describes a tremendous array of other studies that will become possible when access to THz sources and detectors is widely available. The opportunities are limitless.
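
    Two of these scales are easy to verify numerically. The sketch below is again an illustrative aside in Python; the Wien displacement law in its frequency form is standard physics, not material from the report:

        # Period of a THz field and the blackbody peak frequency (Wien's law).
        # Illustrative sketch; not from the report.
        H = 6.626e-34   # Planck constant, J s
        KB = 1.381e-23  # Boltzmann constant, J/K

        def period_ps(f_thz):
            """Oscillation period in picoseconds for a frequency in THz."""
            return 1.0 / f_thz

        def blackbody_peak_thz(temp_k):
            """Peak of the blackbody spectrum per unit frequency:
            nu_max = 2.8214 * k * T / h (Wien displacement law)."""
            return 2.8214 * KB * temp_k / H / 1e12

        print(f"1 THz period:        {period_ps(1.0):.2f} ps")
        print(f"10 K blackbody peak: {blackbody_peak_thz(10.0):.2f} THz")

    A 1 THz field indeed has a 1.00 ps period, and a 10 K blackbody peaks near 0.59 THz, just inside the spectral range defined above.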

    Electromagnetic Transition Region: THz radiation lies above the frequency range of traditional electronics but below the range of optical and infrared generators. The fact that the THz frequency range lies in the transition region between photonics and electronics has led to unprecedented creativity in source development. Solid-state electronics, vacuum electronics, microwave techniques, ultrafast visible and NIR lasers, single-mode continuous-wave NIR lasers, electron accelerators ranging in size from a few inches to the mile-long linear accelerator at SLAC, and novel materials have been combined to yield a large variety of sources with widely varying output characteristics. For the purposes of this report, sources are divided into four categories according to their peak power (low or high) and their instantaneous bandwidth (small or large).

    THz experiments: Many classes of experiments can be performed using THz electromagnetic radiation. Each will be enabled or optimized by a THz source with a particular set of specifications. For example, some experiments will be enabled by high average and peak power with impulsive half-cycle excitation; such radiation is available only from a new class of sources based on sub-ps electron bunches produced in large accelerators. Some high-resolution spectroscopy experiments will require cw THz sources with kHz linewidths but only a few hundred microwatts of power. Others will require powerful pulses with ≤1% bandwidth, available from free-electron lasers and, very recently, regeneratively amplified lasers and nonlinear optical materials. Time-domain THz spectroscopy, with its time coherence and extremely broad spectral bandwidth, will continue to expand its reach and range of applications, from spectroscopy of superconductors to subcutaneous imaging of skin cancer.

    What is needed
    The THz community needs a network: Sources of THz radiation are, at this point, very rare in physics and materials science laboratories and almost non-existent in chemistry, biology and medical laboratories. The barriers to performing experiments using THz radiation are enormous.

    One needs not only a THz source but also an appropriate receiver and an understanding of many experimental details: the absorption characteristics of the atmosphere and common materials; where to purchase or construct simple optical components such as polarizers, lenses, and waveplates; and electromagnetic wave propagation, since diffraction always plays a significant role at THz frequencies. There is also significant expense, in both time and money, in setting up any THz apparatus in one's own lab, even for an investigator who enjoys building things.

    Because of the enormous barriers to entry into THz science, the community of users is presently much smaller than the scientific opportunities would support. Symposia on medical applications of THz radiation are already attracting overflow crowds at conferences. The community is growing, with clear potential to support a large THz user's network, including user facilities. The opportunities are great. The most important thing we can do is lower research barriers.

    A THz User's Network would leverage the large existing investment in THz research and infrastructure to considerably grow the THz research community. The Network would inform the scientific community at large of opportunities in THz science, bring together segments of the THz research community that are currently only vaguely aware of one another, and lower the barriers to entry into THz research.

    Specific ideas for network activities include disseminating information about techniques and opportunities in THz science through the World Wide Web, sponsoring sessions about THz technology at scientific conferences, co-locating conferences from different communities within the THz field, funding small-scale user facilities at existing centers of excellence, directing researchers interested in THz science to the most appropriate technology and/or collaborators, encouraging commercialization of critical THz components, conducting outreach to raise public awareness of THz science and technology, and forming teams to work on problems of common interest, such as producing higher peak fields or pulse-shaping schemes.

    Interagency support is crucial: NIH, NSF, and DOE will all benefit, and all must be involved. Eventually, the network will provide the best and most efficient path to defining what new facilities may be needed. New users of THz methodology will also find it easier to learn about the field when there is a network.

    Defining common goals: During the workshop, the community articulated several common and unmet technical needs. This list is far from exhaustive, and it will grow with the network:

    1. Higher peak fields.
    2. Coverage to 10 THz (or higher) with coherent broad-band sources.
    3. Full pulse-shaping.
    4. Excellent stability in sources with the above characteristics.
    5. Easy access to components, such as emitters and receivers, for time-domain THz spectroscopy.
    6. Near-field THz microscopy.
    7. Sensitive non-cryogenic detectors.


    Basic Research Needs for the Hydrogen Economy

    This report is based upon the BES Workshop on Hydrogen Production, Storage, and Use, held May 13-15, 2003, to identify fundamental research needs and opportunities in hydrogen production, storage, and use, with a focus on new, emerging, and scientifically challenging areas that have the potential for significant impact on science and technology.

    The coupled challenges of a doubling in the world's energy needs by the year 2050 and the increasing demands for "clean" energy sources that do not add more carbon dioxide and other pollutants to the environment have resulted in increased attention worldwide to the possibilities of a "hydrogen economy" as a long-term solution for a secure energy future. The hydrogen economy offers a grand vision for energy management in the future. Its benefits are legion, including an ample and sustainable supply, flexible interchange with existing energy media, a diversity of end uses to produce electricity through fuel cells or to produce heat through controlled combustion, convenient storage for load leveling, and a potentially large reduction in harmful environmental pollutants. These benefits provide compelling motivation to mount a major, innovative basic research program in support of a broad effort across the applied research, development, engineering, and industrial communities to enable the use of hydrogen as the fuel of the future.

    There is an enormous gap between our present capabilities for hydrogen production, storage, and use and those required for a competitive hydrogen economy. To be economically competitive with the present fossil fuel economy, the cost of fuel cells must be lowered by a factor of 10 or more and the cost of producing hydrogen must be lowered by a factor of 4. Moreover, the performance and reliability of hydrogen technology for transportation and other uses must be improved dramatically.

    Simple incremental advances in the present state of the art cannot bridge this gap. The only hope of narrowing the gap significantly is a comprehensive, long-range program of innovative, high-risk/high-payoff basic research that is intimately coupled to and coordinated with applied programs. The best scientists from universities and national laboratories and the best engineers and scientists from industry must work in interdisciplinary groups to find breakthrough solutions to the fundamental problems of hydrogen production, storage, and use. The objective of such a program must not be evolutionary advances but revolutionary breakthroughs in understanding and in controlling the chemical and physical interactions of hydrogen with materials.

    The detailed findings and research directions identified by the three panels are presented in this report. They address the four research challenges for the hydrogen economy outlined by Secretary of Energy Spencer Abraham in his address to the National Hydrogen Association: (1) dramatically lower the cost of fuel cells for transportation, (2) develop a diversity of sources for hydrogen production at energy costs comparable to those of gasoline, (3) find viable methods of onboard storage of hydrogen for transportation uses, and (4) develop a safe and effective infrastructure for seamless delivery of hydrogen from production to storage to use.

    The essence of this report is captured in six cross-cutting research directions that were identified as being vital for enabling the dramatic breakthroughs to achieve lower costs, higher performance, and greater reliability that are needed for a competitive hydrogen economy:

    • Catalysis
    • Nanostructured Materials
    • Membranes and Separations
    • Characterization and Measurement Techniques
    • Theory, Modeling, and Simulation
    • Safety and Environmental Issues

    In addition to these research directions, the panels identified biological and bio-inspired science and technology as richly promising approaches for achieving the revolutionary technical advances required for a hydrogen economy.



    Theory and Modeling in Nanoscience

    This report is based upon the May 10-11, 2002, workshop conducted jointly by the Basic Energy Sciences Advisory Committee and the Advanced Scientific Computing Advisory Committee to identify challenges and opportunities for theory, modeling, and simulation in nanoscience and nanotechnology and to investigate the growing and promising role of applied mathematics and computer science in meeting those challenges.

    During the past 15 years, the fundamental techniques of theory, modeling, and simulation have undergone a revolution that parallels the extraordinary experimental advances on which the new field of nanoscience is based. This period has seen the development of density functional algorithms, quantum Monte Carlo techniques, ab initio molecular dynamics, advances in classical Monte Carlo methods and mesoscale methods for soft matter, and fast-multipole and multigrid algorithms. Dramatic new insights have come from the application of these and other new theoretical capabilities. Simultaneously, advances in computing hardware have increased computing power by four orders of magnitude. The combination of new theoretical methods and increased computing power has made it possible to simulate systems with millions of degrees of freedom.

    The application of new and extraordinary experimental tools to nanosystems has created an urgent need for a quantitative understanding of matter at the nanoscale. The absence of quantitative models that describe newly observed phenomena increasingly limits progress in the field. A clear consensus emerged at the workshop that without new, robust tools and models for the quantitative description of structure and dynamics at the nanoscale, the research community would miss important scientific opportunities in nanoscience. The absence of such tools would also seriously inhibit widespread applications in fields of nanotechnology ranging from molecular electronics to biomolecular materials. To realize the unmistakable promise of theory, modeling, and simulation in overcoming fundamental challenges in nanoscience requires new human and computer resources.

    Fundamental Challenges and Opportunities

    With each fundamental intellectual and computational challenge that must be met in nanoscience come opportunities for research and discovery utilizing the approaches of theory, modeling, and simulation. In the broad topical areas of (1) nano building blocks (nanotubes, quantum dots, clusters, and nanoparticles), (2) complex nanostructures and nano-interfaces, and (3) the assembly and growth of nanostructures, the workshop identified a large number of theory, modeling, and simulation challenges and opportunities. Among them are:

    • to bridge electronic through macroscopic length and time scales
    • to determine the essential science of transport mechanisms at the nanoscale
    • to devise theoretical and simulation approaches to study nano-interfaces, which dominate nanoscale systems and are necessarily highly complex and heterogeneous
    • to simulate with reasonable accuracy the optical properties of nanoscale structures and to model nanoscale opto-electronic devices
    • to simulate complex nanostructures involving "soft" biologically or organically based structures and "hard" inorganic ones as well as nano-interfaces between hard and soft matter
    • to simulate self-assembly and directed self-assembly
    • to devise theoretical and simulation approaches to quantum coherence, decoherence, and spintronics
    • to develop self-validating and benchmarking methods

    The Role of Applied Mathematics

    Since mathematics is the language in which theory is expressed and advanced, developments in applied mathematics are central to the success of theory, modeling, and simulation for nanoscience, and the workshop identified important roles for new applied mathematics in the above-mentioned challenges. Novel applied mathematics is required to formulate new theory and to develop new computational algorithms applicable to complex systems at the nanoscale.

    The discussion of applied mathematics at the workshop focused on three areas that are directly relevant to the central challenges of theory, modeling, and simulation in nanoscience: (1) bridging time and length scales, (2) fast algorithms, and (3) optimization and predictability. Each of these broad areas has a recent track record of developments from the applied mathematics community. Recent advances range from fundamental approaches, like mathematical homogenization (whereby reliable coarse-scale results are obtained without detailed knowledge of finer scales), to new numerical algorithms, like the fast-multipole methods that make very large scale molecular dynamics calculations possible. Some of the mathematics that will prove most important cannot be fully anticipated at present, but it is clear that collaborative efforts between scientists in nanoscience and applied mathematicians can yield significant advances central to a successful national nanoscience initiative.
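
    To see why fast summation matters, consider the direct pairwise Coulomb sum that each molecular dynamics step must evaluate: its cost grows quadratically with the number of particles N, which is precisely what the fast-multipole method reduces to O(N). The sketch below is our own illustration of that scaling bottleneck, not code from the report and not an actual fast-multipole implementation:

        # Direct O(N^2) pairwise Coulomb energy: the cost that fast-multipole
        # methods reduce to O(N). Illustrative sketch; not from the report.
        import random

        def direct_coulomb_energy(charges, positions):
            """Sum q_i * q_j / r_ij over all pairs; N*(N-1)/2 distance evaluations."""
            energy = 0.0
            n = len(charges)
            for i in range(n):
                xi, yi, zi = positions[i]
                for j in range(i + 1, n):
                    xj, yj, zj = positions[j]
                    r = ((xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2) ** 0.5
                    energy += charges[i] * charges[j] / r
            return energy

        random.seed(0)
        n = 500  # doubling n quadruples the work: untenable for millions of atoms
        charges = [random.choice((-1.0, 1.0)) for _ in range(n)]
        positions = [(random.random(), random.random(), random.random())
                     for _ in range(n)]
        print(direct_coulomb_energy(charges, positions))

    Fast-multipole methods sidestep this quadratic wall by grouping distant charges into hierarchical multipole expansions, which is what makes simulations with millions of degrees of freedom tractable.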

    The Opportunity for a New Investment

    The consensus of the workshop is that the country's investment in the national nanoscience initiative will pay greater scientific dividends if it is accelerated by a new investment in theory, modeling, and simulation in nanoscience. Such an investment can stimulate the formation of alliances and teams of experimentalists, theorists, applied mathematicians, and computer and computational scientists to meet the challenge of developing a broad quantitative understanding of structure and dynamics at the nanoscale.

    The Department of Energy is uniquely situated to build a successful program in theory, modeling, and simulation in nanoscience. Much of the nation's experimental work in nanoscience is already supported by the Department, and new facilities are being built at the DOE national laboratories. The Department also has an internationally regarded program in applied mathematics, and much of the foundational work on mathematical modeling and computation has emerged from DOE activities. Finally, the Department has unique resources and experience in high performance computing and algorithms. The combination of these areas of expertise makes the Department of Energy a natural home for nanoscience theory, modeling, and simulation.



    Opportunities for Catalysis in the 21st Century

    This report is based upon a Basic Energy Sciences Advisory Committee subpanel workshop that was held May 14-16, 2002, to identify research directions to better understand how to design catalyst structures to control catalytic activity and selectivity.

    Chemical catalysis affects our lives in myriad ways. Catalysis provides a means of changing the rates at which chemical bonds are formed and broken and of controlling the yields of chemical reactions to increase the amounts of desirable products from these reactions and reduce the amounts of undesirable ones. Thus, it lies at the heart of our quality of life: The reduced emissions of modern cars, the abundance of fresh food at our stores, and the new pharmaceuticals that improve our health are made possible by chemical reactions controlled by catalysts. Catalysis is also essential to a healthy economy: The petroleum, chemical, and pharmaceutical industries, contributors of $500 billion to the gross national product of the United States, rely on catalysts to produce everything from fuels to "wonder drugs" to paints to cosmetics.

    Today, our Nation faces a variety of challenges in creating alternative fuels, reducing harmful by-products in manufacturing, cleaning up the environment and preventing future pollution, dealing with the causes of global warming, protecting citizens from the release of toxic substances and infectious agents, and creating safe pharmaceuticals. Catalysts are needed to meet these challenges, but their complexity and diversity demand a revolution in the way catalysts are designed and used.

    This revolution can become reality through the application of new methods for synthesizing and characterizing molecular and material systems. Opportunities to understand and predict how catalysts work at the atomic scale and the nanoscale are now appearing, made possible by breakthroughs in the last decade in computation, measurement techniques, and imaging and by new developments in catalyst design, synthesis, and evaluation.

    A Grand Challenge

    In May 2002, a workshop entitled "Opportunities for Catalysis Science in the 21st Century" was conducted in Gaithersburg, Maryland. The impetus for the workshop grew out of a confluence of factors: the continuing importance of catalysis to the Nation's productivity and security, particularly in the production and consumption of energy and the associated environmental consequences, and the emergence of new research tools and concepts associated with nanoscience that can revolutionize the design and use of catalysts in the search for optimal control of chemical transformations. While research opportunities of an extraordinary variety were identified during the workshop, a compelling, unifying, and fundamental challenge became clear. Simply stated, the Grand Challenge for catalysis science in the 21st century is to understand how to design catalyst structures to control catalytic activity and selectivity.

    The Present Opportunity

    In his address to the 2002 meeting of the American Association for the Advancement of Science, Jack Marburger, the President's Science Advisor, spoke of the revolution that will result from our emerging ability to achieve an atom-by-atom understanding of matter and the subsequent unprecedented ability to design and construct new materials with properties that are not found in nature. "The revolution I am describing," he said, "is one in which the notion that everything is made of atoms finally becomes operational… We can actually see how the machinery of life functions, atom by atom. We can actually build atomic-scale structures that interact with biological or inorganic systems and alter their functions. We can design new tiny objects 'from scratch' that have unprecedented optical, mechanical, electrical, chemical, or biological properties that address needs of human society."

    Nowhere else can this revolution have such an immediate payoff as in the area of catalysis. By investing now in new methods for design, synthesis, characterization, and modeling of catalytic materials, and by employing the new tools of nanoscience, we will achieve the ability to design and build catalytic materials atom by atom, molecule by molecule, nanounit by nanounit.

    The Importance of Catalysis Science to DOE

    For the present and foreseeable future, the major source of energy for the Nation is found in chemical bonds. Catalysis affords the means of changing the rates at which chemical bonds are formed and broken. Catalysis also allows chemistry of extreme specificity, making it possible to select a desired product over an undesired one. Materials and materials properties lie at the core of almost every major issue that the U.S. Department of Energy (DOE) faces, including energy, stockpile stewardship, and environmental remediation. Much of the synthesis of new materials is certainly going to happen through catalysis. When scientists and engineers understand how to design catalysts to control catalytic chemistry, the effects on energy production and use and on the creation of exciting new materials will be profound.

    A Recommendation for Increased Federal Investment in Catalysis Research

    We are approaching a renaissance in catalysis science in this country. With the availability of exciting new laboratory tools for characterization, new designer approaches to synthesis, advanced computational capabilities, and new capabilities at user facilities, we have unparalleled potential for making significant advances in this vital and vibrant field. The convergence of the scientific disciplines that is a growing trend in the catalysis field is spawning new ideas that reach beyond conventional thinking.

    This revolution unfortunately comes at a time when industry has largely abandoned its support of basic research in catalysis. As the only Federal agency that supports catalysis as a discipline, DOE is uniquely positioned to lead the revolution. Our economy and our quality of life depend on catalytic processes that are efficient, clean, and effective. An increased investment in catalysis science in this country is not only important, it is essential.

    Successful research ventures in this area will have an impact on all levels of daily life, leading to enhanced energy efficiency for a range of fuels, reductions in harmful emissions, effective synthesis of new and improved drugs, enhanced homeland security and stockpile stewardship, and new materials with tailored properties. Federal investment is vital for building the scientific workforce needed to address the challenging issues that lie ahead in this field — a workforce that comprises our best and brightest scientists, developing creative new ideas and approaches. This investment is also vital to ensuring that we have the best scientific tools possible for exploiting creative ideas, and that our scientists have ready access to these experimental and computational tools. These tools include both state-of-the-art instrumentation in individual investigator laboratories and unique instrumentation that is only available, because of its size and cost, at DOE's national user facilities.



    Biomolecular Materials

    This report is based upon the January 13-15, 2002, workshop sponsored by the Basic Energy Sciences Advisory Committee to explore the potential impact of biology on the physical sciences, in particular the materials and chemical sciences.

    Twenty-two scientists from around the nation and the world met to discuss the way that the molecules, structures, processes and concepts of the biological world could be used or mimicked in designing novel materials, processes or devices of potential practical significance. The emphasis was on basic research, although the long-term goal is, in addition to increased knowledge, the development of applications to further the mission of the Department of Energy.

    The charge to the workshop was to identify the most important and potentially fruitful areas of research in the field of Biomolecular Materials and to identify challenges that must be overcome to achieve success. This report summarizes the response of the workshop participants to this charge, and provides, by way of example, a description of progress that has been made in selected areas of the field.

    The participants felt that a DOE program in this area should focus on the development of a greater understanding of the underlying biology, and tools to manipulate biological systems both in vitro and in vivo rather than on the attempted identification of narrowly defined applications or devices. The field is too immature to be subject to arbitrary limitations on research and the exclusion of areas that could have great impact.

    These limitations aside, the group developed a series of recommendations. Three major areas of research were identified as central to the exploitation of biology for the physical sciences: 1) Self Assembled, Templated and Hierarchical Structures; 2) The Living Cell in Hybrid Materials Systems; and 3) Biomolecular Functional Systems.

    Workshop participants also discussed the challenges and impediments that stand in the way of our attaining the goal of fully exploiting biology in the physical sciences. Some are cultural, others are scientific and technical.

    Recommendations from the report are:

    Program Relevance. In view of the now generally recognized opinion that biology offers a rich source of structures, functions, and inspiration for the development of novel materials, processes, and devices, support for this research should be a component of the broad Office of Basic Energy Sciences program.

    Broad Support. The field is in its early stages and is not as well defined as other areas. Thus, although it is recommended that support be focused in the three areas identified in this report, it should be broadly applied. Good ideas in other areas proposed by investigators with good track records should be supported as well. There should not be an emphasis on "picking winning applications" because it is simply too difficult to reliably identify them at this time.

    Support of the Underlying Biology. Basic research focused on understanding the biological structures and processes in areas that show potential for applications supporting the DOE mission should be supported.

    Multidisciplinary Teams. Research undertaken by multidisciplinary teams across the spectrum of materials science, physics, chemistry, and biology should be encouraged but not artificially arranged.

    Training. Research that involves the training of students and postdocs in multiple disciplines, preferably co-advised by two or more senior investigators representing different relevant disciplines, should be encouraged without sacrificing the students' thorough studies within the individual disciplines.

    Long-Term Investment. Returns, in terms of functioning materials, processes, or devices, should not be expected in the very short term, although it can reasonably be assumed that applications will, as they already have, arise unexpectedly.



    Basic Research Needs To Assure A Secure Energy Future

    This report is based upon a Basic Energy Sciences Advisory Committee workshop that was held in October 2002 to assess the basic research needs for energy technologies to assure a reliable, economic, and environmentally sound energy supply for the future. The workshop discussions produced a total of 37 proposed research directions.

    Current projections estimate that the energy needs of the world will more than double by the year 2050. This is coupled with increasing demands for "clean" energy – sources of energy that do not add to the already high levels of carbon dioxide and other pollutants in the environment. These coupled challenges simply cannot be met by existing technologies. Major scientific breakthroughs will be required to provide reliable, economic solutions.

    The results of the BESAC workshop are a compilation of 37 Proposed Research Directions. At a higher level, these fell into ten general research areas, all of which are multidisciplinary in nature:

    • Materials Science to Transcend Energy Barriers
    • Energy Biosciences
    • Basic Research Towards the Hydrogen Economy
    • Innovative Energy Storage
    • Novel Membrane Assemblies
    • Heterogeneous Catalysis
    • Fundamental Approaches to Energy Conversion
    • Basic Research for Energy Utilization Efficiency
    • Actinide Chemistry and Nuclear Fuel Cycles
    • Geosciences

    Nanoscale science, engineering, and technology were identified as cross-cutting areas where research may provide solutions and insights to long-standing technical problems and scientific questions. The need for developing quantitative predictive models was also identified in many cases; this requires better understanding of the underlying fundamental mechanisms of the relevant processes, which in turn often requires characterization with very high physical, chemical, structural, and temporal precision. DOE's existing world-leading user facilities provide these capabilities today, but they must be continuously enhanced and new ones developed. In addition, requirements for theory, modeling, and simulation will demand advanced computational tools, including high-end computer user facilities. All the participants agreed that the education of the next generation of research scientists is of crucial importance, and that this should include making the importance of the energy security issue clear to everyone.

    It is clear that assuring the security of the U.S. energy supply over the next few decades will present major problems, for several reasons. The most important is the current reliance on fossil fuels for a high proportion of our energy, a significant fraction of which is imported. Developing countries will have greatly increased needs for energy, in part because of expected population growth and in part because of rising standards of living that are presently very low. A second problem concerns the environmental effects of fossil fuel use. Third, the peaking of fossil fuel production is likely within the next several decades. For these reasons, it is very important that the U.S. undertake a vigorous research and development program to address the issues identified in this report.

    There are a number of actions that can help in the nearer term: increased efficiency in the conversion and use of energy, increased conservation, and aggressive environmental control requirements. However, while these may delay the major impact, they will not in the longer run provide the assured energy future that the U.S. requires. It is also clear that there is no single answer to this problem. Several options are available at the moment, and many, or indeed all, of them must be pursued.

    Basic research will make an important contribution to the solution of this problem by providing the basis on which entities, including DOE's applied mission programs, will develop new technological approaches, and by leading to the discovery of new concepts. The time between basic research and its contribution to new or significantly improved technical solutions that can make major contributions to the future energy supply is often measured in decades. Major new discoveries are needed, and these will come largely from basic research programs.

    It is clear from the analysis presented in this report that there are a number of opportunities. Essentially all of these are interdisciplinary in character. The Office of Basic Energy Sciences should review its current research portfolio to assess how it is contributing to the research directions proposed by this study.

    BESAC expects, however, that a much larger effort will be needed than the current BES program. The magnitude of the energy challenge should not be underestimated. With major scientific discoveries and development of the underlying knowledge base, we must enable vast technological changes in the largest industry in the world (energy), and we must do it quickly. If we are successful, we will both assure energy security at home and promote peace and prosperity worldwide.

    Recommendation: Considering the urgency of the energy problem, the magnitude of the needed scientific breakthroughs, and the historic rate of scientific discovery, current efforts will likely be too little, too late. Accordingly, BESAC believes that a new national energy research program is essential and must be initiated with the intensity and commitment of the Manhattan Project, and sustained until this problem is solved.

    BESAC recommends that BES review its research activities and user facilities to make sure they are optimized for the energy challenge, and develop a strategy for a much more aggressive program in the future.



    Basic Research Needs for Countering Terrorism

    This report documents the results of the Department of Energy, Office of Basic Energy Sciences (BES) Workshop on Basic Research Needs to Counter Terrorism. This two-day Workshop, held in Gaithersburg, MD, February 28-March 1, 2002, brought together BES research participants and experts familiar with counter-terrorism technologies, strategies, and policies. The purpose of the workshop was to: (1) identify direct connections between technology needs for countering terrorism and the critical, underlying science issues that will impact our ability to address those needs and (2) recommend investment strategies that will increase the impact of basic research on our nation's efforts to counter terrorism.

    The workshop focused on science and technology challenges associated with our nation's need to detect, prevent, protect against, and respond to terrorist attacks involving Radiological and Nuclear, Chemical, and Biological threats. While the organizers and participants of this workshop recognize that the threat of terrorism is extremely broad, including food and water safety as well as protection of our public infrastructure, we necessarily limited the scope of our discussions to the principal weapons of mass destruction.

    In order to set the stage for the discussions of critical science and technology challenges, the workshop began with keynote and plenary lectures that provided a realistic context for understanding the broad challenges of countering terrorism. The plenary speakers emphasized the socio-political complexity of terrorism problems, reinforced the need for basic research in addressing these problems, and provided critical advice on how basic research can best contribute to our nation's needs. Their advice highlighted the need to:

    • Invest Strategically – Focus on Cross-Cutting Research that has the potential to have an impact on a broad set of technology needs, thereby providing the greatest return on the research investment.
    • Build Team Efforts – Countering terrorism will require broad, collaborative teams. The research community should focus on: (1) Research Environments and Infrastructures that encourage and enable cross-disciplinary science and technology teams to explore and integrate new scientific discoveries and (2) Exploring Relationships with Other Programs that will strengthen connections between new scientific advances and those groups responsible for technology development and implementation.
    • Consider Dual Use – Identify areas of research that present significant Dual-Use Opportunities for application to countering terrorism and other complementary technology needs.

    During the workshop, participants identified several critical technology needs and the underlying science challenges that, if met, can help to reduce the threat of terrorist attacks in the United States. Some of the key technology needs and limitations that were identified include:

    Detection – Nonintrusive, stand-off, and imaging detection systems; sampling from complex backgrounds and environments; inexpensive and field-deployable sensor systems; highly selective and ultra-sensitive detectors; early warning triggers for continuous monitoring

    Prevention – Methods and materials to control, track, and reduce the availability of hazardous materials; techniques to rapidly characterize and attribute the source of terrorist threats

    Protection – Personal protective equipment; light-weight barrier materials and fabrics; filtration systems; explosive containment structures; methods to protect people, animals, crops, and public spaces

    Response – Coupled models and measurements that can predict the fate and transport of toxic materials, including pre-event background data; pre-symptomatic and point-of-care medical diagnostics; methods to immobilize and neutralize hazardous materials, including self-cleaning and self-decontaminating surfaces

    The workshop discussions of these technology needs and the underlying science challenges are fully documented in the major sections of this report. The results of these discussions, combined with the broad perspective and advice from our plenary speakers, were used to develop a set of high-level workshop recommendations. The following recommendations are offered to help guide our nation's basic research investments in order to maximize our ability to reduce the threat of terrorism.

    • We recommend continuing or increasing funding for a selected set of research directions that are identified in the Workshop Summary and Recommendations (Section 5) of this report. These areas of research underpin many of the technologies that have high probability to impact our nation's ability to counter terrorism.
    • New programs should be supported to stimulate the formation of, and provide needed resources for, cross-disciplinary and multi-institutional teams of scientists and technologists that are needed to address these critical problems. An important component of this strategy is investment in DOE national laboratories and user facilities because they can provide an ideal environment to carry out this highly collaborative work.
    • Governmental organizations and agencies should explore their complementary goals and capabilities and, where appropriate, work to develop agreements that facilitate the formation of multi-organizational teams and the sharing of research and technology capabilities that will improve our nation's ability to counter the threat of terrorism.
    • Increased emphasis should be placed on identifying dual-use applications for key counter-terrorism technologies. Efforts should be focused on building partnerships among government, universities, and industry to capitalize on these opportunities.

    In summary, this workshop made significant progress in identifying the basic research needs and in outlining a strategy to enhance the research community's ability to impact our nation's counter-terrorism needs. We wish to acknowledge the enthusiasm and hard work of all the workshop participants. Their extraordinary contributions were key to the success of this workshop, and their dedication to this endeavor provides strong evidence that the basic research community is firmly committed to supporting our nation's goal of reducing the threat of terrorism in the United States.

    Workshop Presentations - February 28, 2002
    • Keynote Lecture, The Role of Science and Technology in Countering Terrorism: Jay Davis, LLNL
    • Welcome and Brief Overview: Walter Stevens, BES
    • Introduction and Purpose: Terry Michalske, SNL
    • Radiological and Nuclear Threat Area: Michael Anastasio, LLNL
    • Chemical Threat Area: Michael Sailor, UC San Diego
    • Biological Threat Area: David Franz, Southern Research Institute


    Complex Systems: Science for the 21st Century

    This report is based upon a BES workshop, March 5-6, 1999, which was designed to help define new scientific directions related to complex systems in order to create new understanding about the nano world and complicated, multicomponent structures.

    As we look further into this century, we find science and technology at yet another threshold: the study of simplicity will give way to the study of "complexity" as the unifying theme.

    The triumphs of science in the past century, which improved our lives immeasurably, can be described as elegant solutions to problems reduced to their ultimate simplicity. We discovered and characterized the fundamental particles and the elementary excitations in matter and used them to form the foundation for interpreting the world around us and for building devices to work for us. We learned to design, synthesize, and characterize small, simple molecules and to use them as components of, for example, materials, catalysts, and pharmaceuticals. We developed tools to examine and describe these "simple" phenomena and structures.

    The new millennium will take us into the world of complexity. Here, simple structures interact to create new phenomena and assemble themselves into devices. Here also, large complicated structures can be designed atom by atom for desired characteristics. With new tools, new understanding, and a developing convergence of the disciplines of physics, chemistry, materials science, and biology, we will build on our 20th century successes and begin to ask and solve questions that were, until the 21st century, the stuff of science fiction.

    Complexity takes several forms. The workshop participants identified five emerging themes around which research could be organized.

    Collective Phenomena — Can we achieve an understanding of collective phenomena to create materials with novel, useful properties? We already see the first examples of materials with properties dominated by collective phenomena — phenomena that emerge from the interactions of the components of the material and whose behavior thus differs significantly from the behavior of those individual components. In some cases collective phenomena can bring about a large response to a small stimulus — as seen with colossal magnetoresistance, the basis of a new generation of recording memory media. Collective phenomena are also at the core of the mysteries of such materials as the high-temperature superconductors.

    Materials by Design — Can we design materials having predictable, and yet often unusual properties? In the past century we discovered materials, frequently by chance, determined their properties, and then discarded those materials that did not meet our needs. Now we will see the advent of structural and compositional freedoms that will allow the design of materials having specific desired characteristics directly from our knowledge of atomic structure. Of particular interest are "nanostructured" materials, with length scales between 1 and 100 nanometers. In this regime, dimensions "disappear," with zero-dimensional dots or nanocrystals, one-dimensional wires, and two-dimensional films, each with unusual properties distinctly different from those of the same material with "bulk" dimensions. We could design materials for lightweight batteries with high storage densities, for turbine blades that can operate at 2500°C, and perhaps even for quantum computing.
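
    A standard particle-in-a-box estimate (textbook physics offered here as an illustration, not analysis from the report) shows why properties become size-dependent in this regime: the ground-state confinement energy of an electron scales as 1/L², so it is negligible at bulk dimensions but becomes substantial at a few nanometers.

        # Ground-state confinement energy E1 = h^2 / (8 m L^2) for an electron
        # in a one-dimensional box of width L. Illustrative textbook estimate;
        # not from the report.
        H = 6.626e-34    # Planck constant, J s
        M_E = 9.109e-31  # electron mass, kg
        EV = 1.602e-19   # joules per electron volt

        def confinement_energy_ev(box_nm):
            """E1 in eV for a box width given in nanometers."""
            width_m = box_nm * 1e-9
            return H**2 / (8.0 * M_E * width_m**2) / EV

        for width in (100.0, 10.0, 1.0):
            print(f"L = {width:6.1f} nm -> E1 = {confinement_energy_ev(width):8.4f} eV")

    At 100 nm the level spacing (about 0.04 meV) is swamped by room-temperature thermal energy (~25 meV), while at 1 nm it reaches roughly 0.4 eV, far above thermal energy; hence the distinctly size-tunable behavior of quantum dots and nanocrystals.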

    Functional Systems — Can we design and construct multicomponent molecular devices and machines? We have already begun to use designed building blocks to create self-organized structures of previously unimagined complexity. These will form the basis of systems such as nanometer-scale chemical factories, molecular pumps, and sensors. We might even stretch and think of self-assembling electronic/photonic devices.

    Nature's Mastery — Can we harness, control, or mimic the exquisite complexity of Nature to create new materials that repair themselves, respond to their environment, and perhaps even evolve? This is, perhaps, the ultimate goal. Nature tells us it can be done and provides us with examples to serve as our models. We learn about Nature's design rules and try to mimic green plants, which capture solar energy, or genetic variation as a route to "self-improvement" and optimized function. These concepts may seem fanciful, but with the revolution now taking place in biology, progressing from DNA sequence to structure and function, the possibilities seem endless. Nature has done it. Why can't we?

    New Tools — Can we develop the characterization instruments and the theory to help us probe and exploit this world of complexity? Radical enhancement of existing techniques and the development of new ones will be required for the characterization and visualization of structures, properties, and functions — from the atomic, to the molecular, to the nanoscale, to the macroscale. Terascale computing will be necessary for the modeling of these complex systems.

    Now is the time. We can now do this research, make these breakthroughs, and enhance our lives as never before imagined. The work of the past few decades has taken us to this point, solving many of the problems that underlie these challenges, teaching us how to approach problems of complexity, giving us the confidence needed to achieve these goals. This work also gave us the ability to compute on our laps with more power than available to the Apollo astronauts on their missions to the moon. It taught us to engineer genes, "superconduct" electricity, visualize individual atoms, build "plastics" ten times stronger than steel, and put lasers on chips for portable CD players. We are ready to take the next steps.

    Complexity pays dividends. We think of simple silicon for semiconductors, but our CD players depend on dozens of layers of semiconductors made of aluminum, gallium, and arsenic. Copper conducts electricity and iron is magnetic. Superconductors and giant magnetoresistive materials have eight or more elements, all of which are essential and interact with one another to produce the required properties. Nature, too, shows us the value of complexity. Hemoglobin, the protein that transports oxygen from the lungs to, for example, the brain, is made up of four protein subunits which interact to vastly increase the efficiency of delivery. As individual subunits, these proteins cannot do the job.

    The new program. The very nature of research on complexity makes it a "new millennium" program. Its foundations rest on four pillars: physics, chemistry, materials science, and biology. Success will require an unprecedented level of interdisciplinary collaboration. Universities will need to break down barriers between established departments and encourage the development of teams across disciplinary lines. Interactions between universities and national laboratories will need to be increased, both in the use of the major facilities at the laboratories and also through collaborations among research programs. Finally, understanding the interactions among components depends on understanding the components themselves. Although a great deal has been accomplished in this area in the past few decades, far more remains to be done. A complexity program will complement the existing programs and will ensure the success of both. The benefits are, as they have been at the start of all previous scientific "revolutions," beyond anything we can now foresee.



    Nanoscale Science, Engineering and Technology Research Directions

    This report illustrates the wide range of research opportunities and challenges in nanoscale science, engineering and technology. It was prepared in 1999 in connection with the interagency national research initiative on nanotechnology.

    The principal missions of the Department of Energy (DOE) in Energy, Defense, and Environment will benefit greatly from future developments in nanoscale science, engineering and technology. For example, nanoscale synthesis and assembly methods will result in significant improvements in solar energy conversion; more energy-efficient lighting; stronger, lighter materials that will improve efficiency in transportation; greatly improved chemical and biological sensing; use of low-energy chemical pathways to break down toxic substances for environmental remediation and restoration; and better sensors and controls to increase efficiency in manufacturing.

    The DOE's Office of Science has a strong focus on nanoscience discovery, the development of fundamental scientific understanding, and the conversion of these into useful technological solutions. A key challenge in nanoscience is to understand how deliberate tailoring of materials on the nanoscale can lead to novel and enhanced functionalities. The DOE National Laboratories are already making a broad range of contributions in this area. The enhanced properties of nanocrystals for novel catalysts, tailored light emission and propagation, and supercapacitors are being explored, as are hierarchical nanocomposite structures for chemical separations, adaptive/responsive behavior and impurity gettering. Nanocrystals and layered structures offer unique opportunities for tailoring the optical, magnetic, electronic, mechanical and chemical properties of materials. The Laboratories are currently synthesizing layered structures for electronics/photonics, novel magnets and surfaces with tailored hardness. This report supplies numerous other examples of new properties and functionalities that can be achieved through nanoscale materials control. These include:

    • Nanoscale layered materials that can yield a four-fold increase in the performance of permanent magnets;
    • Aluminum oxide nanoparticle additions that convert aluminum metal into a material with wear resistance equal to that of the best bearing steel;
    • New optical properties achieved by fabricating photonic band gap superlattices to guide and switch optical signals with nearly 100% transmission, in very compact architectures;
    • Layered quantum well structures to produce highly efficient, low-power light sources and photovoltaic cells;
    • Novel optical properties of semiconducting nanocrystals that are used to label and track molecular processes in living cells;
    • Novel chemical properties of nanocrystals that show promise as photocatalysts to speed the breakdown of toxic wastes;
    • Meso-porous inorganic hosts with self-assembled organic monolayers that are used to trap and remove heavy metals from the environment; and
    • Meso-porous structures integrated with micromachined components that are used to produce high-sensitivity and highly selective chip-based detectors of chemical warfare agents.

    These and other nanostructures are already recognized as likely key components of 21st century optical communications, printing, computing, chemical sensing and energy conversion technologies.

    The DOE is well prepared to make major contributions to developing nanoscale scientific understanding, and ultimately nanotechnology, through its materials characterization, synthesis, in situ diagnostic and computing capabilities. The DOE and its National Laboratories maintain a large array of major national user facilities that are ideally suited to nanoscience discovery and to developing a fundamental understanding of nanoscale processes. Synchrotron and neutron sources provide exquisitely energy-controlled radiation that can probe structure and properties on length scales ranging from Ångstroms to millimeters. Scanning probe microscope (SPM) and electron microscopy facilities provide unique capabilities for characterizing nanoscale materials and diagnosing processes. DOE also maintains synthesis and prototype manufacturing centers where fundamental and applied research, technology development and prototype fabrication can be pursued simultaneously. Finally, the large computational facilities at the DOE National Laboratories can be key contributors to nanoscience discovery, modeling and understanding.
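
    As a concrete illustration of how energy control maps onto length scale, the textbook conversion lambda (Å) ≈ 12.398 / E (keV) ties photon energy to wavelength, and Bragg's law, lambda = 2d sin(theta), ties wavelength to the smallest spacing d that can be resolved. A minimal Python sketch of these standard relations (the sample energies are illustrative choices, not values from this report):

        def wavelength_angstrom(energy_keV):
            """X-ray wavelength in Angstroms from photon energy in keV (hc ~= 12.398 keV*A)."""
            return 12.398 / energy_keV

        def min_spacing_angstrom(energy_keV):
            """Smallest resolvable lattice spacing at this energy (Bragg's law with sin(theta) = 1)."""
            return wavelength_angstrom(energy_keV) / 2.0

        for E in (1.0, 8.0, 20.0):  # soft through hard X-rays, in keV
            print(f"E = {E:5.1f} keV: wavelength = {wavelength_angstrom(E):5.2f} A, "
                  f"d_min = {min_spacing_angstrom(E):4.2f} A")

    Sweeping the photon energy from about 1 keV to 20 keV thus moves the resolvable spacing from several Ångstroms down to fractions of an Ångstrom; the longer length scales, out to millimeters, are reached with small-angle scattering and imaging geometries.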

    In order to increase the impact of major DOE facilities on the national nanoscience and technology initiative, it is proposed to establish several new Nanomaterials Research Centers. These Centers would be co-located with, and exploit, existing radiation sources and materials characterization and diagnostic facilities at DOE National Laboratories. Each Center would focus on a different area of nanoscale research, such as materials derived from or inspired by nature; hard and crystalline materials, including the structure of macromolecules; magnetic and soft materials, including polymers and ordered structures in fluids; and nanotechnology integration. The Nanomaterials Research Centers will facilitate interdisciplinary research and provide an environment where students, faculty, industrial researchers and national laboratory staff can work together to rapidly advance nanoscience discovery and its application to nanotechnology. Establishing these Centers will focus DOE resources on the most important nanoscale science questions and technology needs, and will ensure strong coupling with the national nanoscience initiative. The synergy of these DOE assets, in partnership with universities and industry, will provide the best opportunity for nanoscience discoveries to be converted rapidly into technological advances that meet a variety of national needs and enable the United States to reap the benefits of a technological revolution.