Water Molecules Play Unexpected Role in Mineral Formation
The arrangement of water molecules aligns particles to attach to each other during mineral formation
Large minerals form from tiny particles continually attaching together. Particles snap to the surface like LEGO® bricks, and a bit of torque is needed to align them. As the particles attach, water is expelled from between the surfaces. Led by Dr. Kevin Rosso, researchers at Pacific Northwest National Laboratory measured and calculated the forces that provide the torque for alignment. They found that in mineral formation, water has a more significant role than previously thought. Water organizes on particle surfaces and transmits structural data to incoming particles.
Why It Matters: Understanding particle attachment enables more accurate predictions of when minerals will form and when they won't. These insights provide new ways of studying and controlling material synthesis at the atomic scale. This work also offers valuable information for geoscientists exploring subsurface processes, such as mineral extraction and carbon storage. Finally, understanding water and particle behavior is vital for those creating materials for new energy storage devices.
Summary: Knowing how minerals form is vital for storing carbon underground, creating tailored catalysts, and more. Minerals can form via particle attachment, which involves amassing particles repeatedly until large crystals emerge. During each step, a nano-sized particle snaps to the surface. As the particles attach, they expel water between their surfaces. The forces involved in this process had not been definitively determined.
The team measured and calculated the forces that provide the torque for alignment, working at the near atomic scale. They did so by conducting experiments that used spectroscopy and microscopy resources, including atomic force microscopy with designer tips, at the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by DOE's Office of Science, Biological and Environmental Research.
In a zinc oxide system, they found that water organizes on the particle surfaces. The water transmits structural data about the underlying surface to incoming particles. Thus, water acts as both a solvent for the particles and a messenger about its properties. If incoming particles are strongly misaligned, water acts as a barrier to incorrect attachment, limiting the growth of defective crystals. Understanding how water behaves in mineral formation offers benefits in materials synthesis, geosciences, and catalyst design.
Sponsors: This material is based upon work supported by the Department of Energy (DOE), Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division through its Geosciences Program at Pacific Northwest National Laboratory (PNNL). The Materials Synthesis and Simulation Across Scales Initiative, a Laboratory Directed Research and Development effort at PNNL, supported development of tip fabrication methods and the large-scale molecular dynamics methods.
Reference: Zhang X, Z Shen, J Liu, SN Kerisit, ME Bowden, ML Sushko, JJ De Yoreo, and KM Rosso. 2017. "Direction-Specific Interaction Forces Underlying Zinc Oxide Crystal Growth by Oriented Attachment." Nature Communications 8:835. DOI: 10.1038/s41467-017-00844-6
Andrew Lumsdaine, Chief Scientist of the Northwest Institute for Advanced Computing, a collaborative center established by Pacific Northwest National Laboratory and the University of Washington in 2013, recently joined the inaugural class of Better Scientific Software Fellows. Known as BSSw, the Better Scientific Software community comprises international researchers, practitioners, and stakeholders from national laboratories, academia, and industry dedicated to advancing computational science and engineering and related technical computing areas. The BSSw Fellowship Program was launched to recognize leaders and advocates of high-quality scientific software, especially to promote best practices, processes, and tools for improving productivity and enhancing software sustainability. As part of the award, Lumsdaine will receive $10,000 to establish an activity that promotes better scientific software in his focus area: Practices for High-Performance and High-Quality Scientific Software in Modern C++.
“Since joining PNNL and NIAC in 2016, Andrew has been an important bridge between the high-quality computer science research happening at the laboratory and the work being done at the University of Washington,” explained Nathan Baker, Advanced Computing, Mathematics, and Data Division Director. “His enthusiasm for and advocacy of CSE-related research, especially impacting extreme-scale systems, perfectly intersects the areas that BSSw aims to reach with its Fellows Program. It is a notable honor to see Andrew join the BSSw Fellows and begin a new avenue of outreach.”
Lumsdaine was one of four elected to the inaugural 2018 BSSw Fellows class along with four honorable mentions. The group was recognized during the Exascale Computing Project 2nd Annual Meeting, held on Feb. 6-9, 2018, in Knoxville, Tenn.
Lumsdaine is an internationally recognized computer scientist whose research addresses important fundamental questions regarding the development of high-performance, scalable applications and algorithms and parallel programming models, libraries, and tools for supercomputers that maximally impact scientific discoveries. He also is active in standardization efforts with important contributions to the C++ programming language, Message Passing Interface specification, and the Graph 500. In addition to his role at NIAC, Lumsdaine is a PNNL Laboratory Fellow and an affiliate computer science professor at UW.
Is Texas the state of big stuff?
Yes. At least in the case of the American Association for the Advancement of Science (AAAS), which had its big annual meeting this year (Feb. 15-19) in Austin.
These annual celebrations of science, engineering, and innovation are a heady crush of plenary and topical lectures, flash talks, seminars, and specialty sessions.
This year there was a spicy dash of PNNL, including lectures by two BSD researchers, and a session moderated by another.
Janet Jansson (presented)
Host Genetics, Diet, Disease, and the Human Gut Microbiome
In an AAAS session titled "Harnessing the Microbiome as a Tool for Prevention and Treatment of Disease," Jansson began her talk by explaining that the collection of microorganisms in the human gut, known as the gut microbiome, has a tremendous influence on human health, and that scientists are just beginning to learn the details of that influence.
In her "Multi-Omics of the Gut Microbiome" presentation, Jansson also outlined some of the research she and her collaborators have already published on the complex interactions between the gut microbiome and host genes, diet, and the environment.
A number of those studies precisely measured gene activity, proteins, and metabolites (the byproducts of metabolism) in environments like the human gut, where human cells and microbes interact.
One study showed the role of the microbiome in inflammatory bowel disease. Another deciphered the molecular details of the response of the gut microbiome to a diet high in resistant starch, which is known to contribute to health. Still another demonstrated that a specific bacterial species of the Lactobacillus family is correlated with genes involved in immune response and inflammatory disease.
Jansson also used her AAAS presentation to outline preliminary, not-yet-published results from another study of the same bacteria, linking them to better memory function in mice that carried them compared to mice without them. (More study is needed, she said, along with peer review and replication.)
In general, said Jansson, understanding how microbes affect our health is crucial, and having detailed molecular information about individuals, including information about the human microbiome, could help usher in an era of personalized medicine.
Justin Teeguarden (presented)
Technologies for Understanding Human Chemical Exposure
Teeguarden delivered a 30-minute talk on "A Convergence of Technologies: Improving our Understanding of Human Chemical Exposure."
He pointed to recent technologies that are transforming our understanding of human exposure to chemicals in our diets, the air, consumer products, and the household.
The presentation explored what these new technologies reveal about human exposure, and how the data they create are translated into knowledge that can be used to set priorities for chemical assessment or to calculate human health risks.
In a pre-AAAS document, Teeguarden wrote that these scientific and technological advances "have the potential to improve assessment of public health risks posed by chemicals by dramatically improving our ability to measure, document, and understand the breadth of human exposure."
He said this "convergence of technological advances" in exposure science included leaps ahead in computational sciences, chemical separation and detection, genetics, and biomonitoring.
During the talk, Teeguarden provided examples of how these advances can be used to measure both regional and personal external exposures to chemicals.
For one, computational chemistry combined with modern analytical instruments can identify chemicals in biological samples, including chemicals for which, he wrote, "no authentic standards exist."
New chemical probes can be used to directly measure the activity of enzymes that both detoxify and toxify chemicals.
Passive sampling devices (like wristbands) and sampling devices based on cell phones can now allow individual measures of exposure. Combine these with remote sensing technology, Teeguarden wrote, and you get "unprecedented capabilities to integrate personal and regional exposure assessment."
Kirsten Hofmockel (moderated session)
Standardized Study Systems and Methods for Laboratory Microbial Ecosystems
Hofmockel moderated a session called "Advancing Health and Environmental Science through Standardized Laboratory Microbial Ecosystems." It explored a well-known problem in studying microbiomes in a range of diverse, complex environments: the lack of agreed-upon study systems and methods.
For instance, there is no such system for investigating plant microbiomes, and so every researcher is studying a different set of microbes in different soil systems. "Inter-sample diversity inherent to microbiomes," the session abstract reads, "yields irreproducible results, limiting ... scientists' ability to build on each other's work."
During the session, speakers highlighted new technologies.
Among those presenting was Robin "Rob" Knight, a University of California, San Diego, professor of computer science and engineering. Last summer he delivered the keynote address at the 2017 PNNL-hosted Multi-Omics for Microbiomes / EMSL Integration conference.
Jo Handelsman was another speaker. Now at the University of Wisconsin, Madison, she was associate director for science (2014-2016) in the Obama-era White House Office of Science and Technology Policy. In 2016 she invited Jansson to take part in the launch of a National Microbiome Initiative.
Back then Handelsman said, "We think it's a microbial future."
It's a quote that could also sum up one of PNNL's strongest research missions.
At PNNL, Hofmockel is co-principal investigator with Jansson of a U.S. Department of Energy-sponsored Soil Microbiome Science Focus Area called "Phenotypic Response of the Soil Microbiome to Environmental Perturbations."
Her AAAS session was part of the Fabricated Ecosystems (EcoFAB) initiative, announced in 2015 to collaboratively create controlled model ecosystems for monitoring how microorganisms and their hosts respond to changing variables. Last year Hofmockel co-organized an EcoFAB summit.
For a view of AAAS presenters from PNNL atmospheric sciences, go to https://www.pnnl.gov/science/highlights/highlight.asp?id=4835.
HPC scientists to showcase SHAD developer framework at upcoming IEEE/ACM CCGrid 2018 Conference
The unprecedented amount of rapidly changing data that needs to be processed in emerging data analytics applications poses novel computational challenges impacting both hardware and software. Options that require customizing architectures, software, or both to target specific problems mean long development times, difficult-to-achieve solutions, and limited flexibility. Computer scientists Vito Giovanni Castellana and Marco Minutoli, from PNNL’s High Performance Computing group, are among those seeking viable solutions to evolving big data problems. Recently, their work documented in “SHAD: the Scalable High-performance Algorithms and Data-structures Library,” was accepted for inclusion in the main program at the upcoming 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, known as CCGrid 2018.
Built to aid application developers, SHAD provides scalability and performance and, unlike other high-performance data analytics frameworks, aims to support multiple application domains, including graph processing, machine learning, and data mining.
“There is a gap in current technologies between productivity, performance, and versatility,” Castellana said. “Developers of high performance data analytics software spend significant effort tuning and optimizing their solutions for best-in-class performance, while data scientists usually trade performance for higher productivity. SHAD wants to fill the gap by providing a unified environment for both classes of users.”
SHAD facilitates application development by providing a high-level shared-memory programming environment and general-purpose data structures with interfaces inspired by common programming languages libraries. The data structures, such as Array, Vector, Map, and Set, are designed to accommodate high data volumes that can be accessed in massively parallel computing environments and used as building blocks for SHAD extensions, such as higher-level software libraries. As both Castellana and Minutoli agree that open-source software is fundamental to engaging other scientists and advancing science and technology, SHAD currently is publicly available under an Apache license at: https://github.com/pnnl/SHAD.
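To make the container-plus-bulk-operations idea above concrete, here is a minimal Python sketch of the pattern: a container whose bulk operations a runtime could distribute across nodes. It is only conceptual; the class and method names are invented for illustration and are not SHAD's actual C++ API, which is available at https://github.com/pnnl/SHAD.

```python
# Conceptual sketch only: a container exposing bulk operations that a
# SHAD-like runtime could execute in parallel. Names are invented; this
# is not the SHAD C++ API.

class Array:
    def __init__(self, size, init=0):
        self._data = [init] * size

    def for_each(self, fn):
        # In a distributed runtime this loop would run in parallel across
        # nodes; here it runs serially to keep the sketch self-contained.
        for i in range(len(self._data)):
            self._data[i] = fn(i, self._data[i])

    def reduce(self, fn, start):
        acc = start
        for v in self._data:
            acc = fn(acc, v)
        return acc

a = Array(8)
a.for_each(lambda i, v: i * i)               # fill with squares 0, 1, 4, ...
total = a.reduce(lambda acc, v: acc + v, 0)  # sum them up
print(total)  # prints 140
```

The point of the pattern is that user code expresses *what* to do per element, while the library controls *where* and *how* the work runs, which is what lets one data structure serve graph, machine-learning, and data-mining workloads.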
In their work to be presented at CCGrid 2018, Castellana and Minutoli evaluated SHAD’s flexibility on a cluster of 24 nodes, each equipped with two Intel Xeon E5-2680 v2 processors running at 2.8 GHz and 768 GB of memory. They scaled their experiments up to 320 cores. In direct comparisons on single-node machines and clusters, for example against C++ standard libraries and graph applications, SHAD demonstrated improved productivity with good performance and scalability. Moreover, when compared with custom solutions, SHAD provided similar performance with greatly reduced development effort.
“Scientists and engineers can use SHAD to quickly prototype their ideas and speed up the development of complex software systems,” Minutoli added. “While our goal is to deliver a productive and user-friendly environment, we are committed to providing the best trade-off between productivity and performance.”
CCGrid 2018 emphasizes research using and impacting cluster, cloud, and grid computing and is the primary international forum for showcasing results and technological developments. Some areas of interest include applications; architecture and networking; programming models and runtime systems; and performance modeling and evaluation. A truly global conference, CCGrid 2018 returns to the United States this year and is being held on May 1-4, 2018, in Washington, D.C.
This work was supported in part by the High Performance Data Analytics (HPDA) Program at Pacific Northwest National Laboratory.
Simulations show that organized storms lasting at least nine hours can modify the surrounding environment to further extend their longevity
Mesoscale convective systems (MCSs)—intermediate-scale thunderstorm clusters lasting up to about 24 hours—are important precipitation producers. They account for 30-70 percent of warm-season (April to August) rainfall between the Rocky Mountains and Mississippi River, and 50-60 percent of tropical rainfall.
The increasing frequency of long-lived MCSs in the past 35 years across the U.S. Great Plains motivates the need to understand the environments that favor their development.
Analyzing realistic simulations of these systems, researchers at the U.S. Department of Energy's Pacific Northwest National Laboratory found that MCSs lasting nine hours or more strengthen the cyclonic (counterclockwise) circulation that feeds dry, cool air into the rear of the MCS region. This process increases evaporative cooling and helps maintain the MCS.
MCSs not only tend to produce floods, but also carry with them a variety of severe weather phenomena. The finding that MCSs can be "self-sustaining" suggests that errors in a model's large-scale environment can greatly limit its ability to simulate long-lived MCSs.
Furthermore, small changes in the large-scale environment may result in large changes in the frequency of long-lived MCSs because the changes can be amplified through interactions between the MCSs and their large-scale environment. Hence, understanding how the large-scale environment may change in the future has important implications for predicting future changes in floods and severe weather in the United States.
Recent research shows that long-lived MCSs over the U.S. Great Plains have become more frequent and produce more extreme rainfall compared to 35 years ago. To better understand interactions between the large-scale environments and MCSs, researchers performed continental-scale, convection-permitting simulations of the 2011 and 2012 warm seasons for analysis. These simulations—conducted using the Weather Research and Forecasting model—realistically reproduced the structure, lifetime, and mean precipitation of MCSs over the central United States.
Researchers analyzed the simulations to determine the environmental conditions conducive to generating long-lived MCSs. The simulations showed that MCSs systematically formed over the central Great Plains ahead of a trough in the upper-level westerlies in combination with an enhanced low-level jet bringing moisture from the Gulf of Mexico. These environmental properties at the time of storm initiation were most prominent for the MCSs that persisted at least nine hours. Those MCSs exhibited the strongest feedback to the environment through diabatic heating produced by condensation and precipitation from the MCSs.
The feedback produced a midlevel cyclonic circulation near the trailing portion of the MCS. Researchers found that the mesoscale low-pressure center fed dry, cool air into the environment at the rear of the MCS region, increasing evaporative cooling and helping to maintain the MCS.
Sponsors: The U.S. Department of Energy (DOE) Office of Science, Biological and Environmental Research supported this study as part of the Regional and Global Climate Modeling program through the Water Cycle and Climate Extremes Modeling (WACCEM) Scientific Focus Area. Professor Houze's University of Washington participation is supported by Pacific Northwest National Laboratory under Task Order 292896 (WACCEM) of Master Agreement 243766.
Facilities: This research used computational resources from the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility. Sounding data were obtained from the Atmospheric Radiation Measurement (ARM) Climate Research Facility, a DOE Office of Science user facility sponsored by the Office of Biological and Environmental Research.
Reference: Q. Yang, R.A. Houze Jr., L.R. Leung, Z. Feng, "Environments of Long-Lived Mesoscale Convective Systems Over the Central United States in Convection Permitting Climate Simulations." Journal of Geophysical Research: Atmospheres 122, 13,288-13,307 (2017). [DOI: 10.1002/2017JD027033]
Ion separation is suppressed at the liquid-air interface, a fundamental finding that alters how we control solutions for energy production
The behavior of ions in water drives everything from desalination to corrosion. But how ions behave at the interface of liquid water and air, where they are not fully surrounded by water, wasn't well known. The challenge was in asking the right questions and in designing the calculations and simulations to get the answers. Led by Dr. Liem Dang at Pacific Northwest National Laboratory, researchers made two discoveries. First, water slows the separation of ions at the interface with air. Second, when surrounded by water, spherical sodium and chloride ions rearrange water's bonding structure, breaking and forming hydrogen bonds.
"This type of research fundamentally changes how we understand water," said Dang, the principal investigator on the project.
Why It Matters: The transport and behavior of ions affect chemical reactions at air-liquid interfaces. Ions at water's boundary are vital to synthesis and corrosion. The team's research shows that the liquid-air interface affects the fundamental behavior of ions. The interface slows dissociation, effectively altering ions' behavior.
Summary: Although working with ionic solutions is fairly common in synthesis, separations, and subsurface science, how the ions change their environment and, in turn, how the environment influences the ions' behavior was not well known. Researchers from Pacific Northwest National Laboratory and Louisiana Tech University discovered that, at the interface, liquid water suppresses the ions' ability to dissociate, or move away from each other. On the solution's surface, the sodium and chloride ions stayed together longer than they did deep in the liquid (bulk water). In addition, the team found that the ions changed the hydrogen bonding structure of the bulk water. Different hydrogen bond patterns form depending on the location of the ions.
"In general, if the ions move into the bulk liquid, it distorts the whole system," Dang said. "The water molecules have to re-adjust and create different interactions."
The team's findings came thanks to complex simulations and calculations that took months to set up and run on massive parallel computers. Unlike other simulations, these folded in both the thermodynamics and kinetics of ions in bulk water and at interfaces in calculating the dissociation rate. "Nobody's ever accomplished this before," said Dang.
The team's next step is delving into reactive phenomena, such as the behaviors of the hydronium and hydroxide ions at interfaces with other counter ions. The team's upcoming efforts are more challenging due to the quantum nature of the ions. The work requires more sophisticated methods, such as ab initio molecular dynamics. "It's been a long journey to understand these interactions and how they change the behavior of water," said Dang. "The next step is going to be much harder."
Facility Use: Calculations were carried out using computer resources provided by the Department of Energy, Office of Science, Basic Energy Sciences.
Reference: Dang LX, GK Schenter, and CD Wick. 2017. "Rate Theory of Ion Pairing at the Water Liquid-Vapor Interface." Journal of Physical Chemistry C 121:10018-10026. DOI: 10.1021/acs.jpcc.7b02223
The data system will allow for more detailed, consistent, and up-to-date global emissions trends that will aid in understanding aerosol effects on Earth system processes
To better understand how aerosols affect the atmosphere and Earth system processes, historical emissions data are necessary as a key input for modeling and analyses.
A research team led by scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory developed a data system, the Community Emissions Data System, which has produced a new, robust data set covering the years 1750-2014 for carbonaceous aerosols, chemically reactive gases—which are precursors to aerosol particles—and carbon dioxide.
Emissions data from different countries vary in methodology, level of detail, source coverage, and consistency across time and space. This project addressed the limitations of existing emission data inventories with a reproducible methodology applied to all emissions types, updated emissions factors, more recent estimates through 2014, and comprehensive documentation. The methodology facilitates transparency, regular updates, and the incorporation of improved information over time.
This new consistent methodology will facilitate uncertainty analyses, leading to improved scientific understanding of the role of aerosols in the atmosphere.
Country-to-country differences in compiling emissions data make it difficult to construct consistent time series of past emissions across regions. This new data set contains annual estimates of CO, CH4, NH3, NOx, SO2, NMVOC, carbonaceous aerosols, and carbon dioxide for the years 1750-2014 by country, fuel, and sector, along with seasonal data. Researchers developed these data with the Community Emissions Data System (CEDS). This system integrates population, energy consumption, and other economic driver data with national and global emissions inventory data to produce consistent emissions trends over time.
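The driver-times-emission-factor methodology described above can be sketched in a few lines: compute default bottom-up estimates from activity data and emission factors, then scale them so country totals match a reported national inventory. All numbers and names below are hypothetical illustrations, not actual CEDS data or code.

```python
# Sketch of the driver-data x emission-factor approach with inventory
# scaling. All values are invented for illustration; not CEDS data.

def default_emissions(activity, emission_factor):
    """Bottom-up estimate: activity (e.g., PJ of fuel) x factor (kt/PJ)."""
    return activity * emission_factor

def scale_to_inventory(estimates, inventory_total):
    """Scale sectoral estimates so their sum matches a national total."""
    bottom_up_total = sum(estimates.values())
    factor = inventory_total / bottom_up_total
    return {sector: e * factor for sector, e in estimates.items()}

# Hypothetical SO2 estimates for one country-year, by sector (kt).
estimates = {
    "power":     default_emissions(activity=1200.0, emission_factor=0.40),
    "industry":  default_emissions(activity=800.0,  emission_factor=0.25),
    "transport": default_emissions(activity=500.0,  emission_factor=0.04),
}

# A national inventory reports 630 kt, so the 700 kt bottom-up total is
# scaled down uniformly; sectoral shares are preserved.
scaled = scale_to_inventory(estimates, inventory_total=630.0)
print(round(sum(scaled.values()), 1))  # prints 630.0
```

Applying one reproducible procedure like this to every country, fuel, and sector, with the scaling step anchoring estimates to the best available inventory, is what yields trends that are consistent across time and space.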
Key methodological developments include the use of open-source software and data, a consistent methodology for all emissions species, and the use of national inventory data sets. The CEDS software and data will be publicly available through an open-source repository to facilitate community involvement and improvement.
Sponsors: This research was based on work supported by the U.S. Department of Energy Office of Science, Biological and Environmental Research as part of the Earth System Modeling program. The National Aeronautics and Space Administration’s Atmospheric Composition Modeling and Analysis Program (ACMAP), award NNH15AZ64I, provided additional support for the development of the gridded data algorithm.
Reference: R.M. Hoesly, S.J. Smith, L. Feng, Z. Klimont, G. Janssens-Maenhout, T. Pitkanen, J.J. Seibert, L. Vu, R.J. Andres, R.M. Bolt, T.C. Bond, L. Dawidowski, N. Kholod, J. Kurokawa, M. Li, L. Liu, Z. Lu, M.C.P. Moura, P.R. O’Rourke, Q. Zhang, “Historical (1750–2014) Anthropogenic Emissions of Reactive Gases and Aerosols from the Community Emissions Data System (CEDS).” Geoscientific Model Development 11, 369-408 (2018). [DOI: 10.5194/gmd-11-369-2018]
Using molecular spectroscopy to study reaction mechanisms
Gasoline, lubricants, and consumer products are improved by chemical additives. Making additives often involves a chemical reaction known as alkylation, the addition of a carbon chain to existing molecules. Chemists know acid catalysts are useful for alkylation, but how one of the most popular catalysts, acidic zeolites, perform alkylation in a condensed phase is not well understood.
Dr. Jian Zhi Hu, Dr. Zhenchao Zhao, Dr. Hui Shi, Dr. Johannes Lercher, and their colleagues from Pacific Northwest National Laboratory identified a key reaction mechanism associated with zeolite-catalyzed alkylation of phenol with cyclohexanol. They made this discovery using in situ high-temperature and high-pressure magic angle spinning nuclear magnetic resonance (MAS-NMR) spectroscopy.
Why It Matters: Scientists now have an understanding of how the catalytic activity, mechanism, and reaction pathways depend on three factors. These factors are the concentration and strength of acid sites, the steric constraints for the reaction, and the identity of the alkylating agent.
Summary: Detailed kinetic and spectroscopy analyses showed that phenol alkylation with cyclohexanol does not appreciably occur before a majority of cyclohexanol has been dehydrated to cyclohexene. Alkylation reactions are slowed as long as the alcohol is present. In contrast, alkylation products are readily formed when the solution initially contains just phenol and cyclohexene.
A combination of in situ MAS-NMR spectroscopy and the use of carbon-13 isotope enriched phenol and cyclohexanol allowed the identification of the reaction pathway that is difficult to probe by other spectroscopy methods. The reaction sequence does not occur as a result of competitive adsorption but by the absence of a reactive electrophile. This is due to the preferential formation of adsorption complexes, i.e., protonated alcohol dimers at Brønsted acid sites, which hinder the adsorption of cyclohexene. At low coverage of the acid sites by protonated dimers, cyclohexene adsorption and protonation yields cyclohexyl carbenium ions, which attack phenol to produce alkylated products. This further implies that protonated cyclohexanol dimers dehydrate without the formation of carbenium ions.
The results show the importance of NMR spectroscopy as a unique in situ analytical method, providing detailed molecular information on the sample studied under real world conditions (operando).
"Experts in catalysis and NMR worked together to allow us to watch important chemical processes occurring at high temperatures and pressures," said Dr. Karl Mueller, PNNL's Chief Science and Technology Officer for Physical and Computational Sciences. "Before, we had to infer what was happening by either stopping the reaction (i.e., ex situ) or only measuring the final products, neither of which can show us a complete picture."
With insights about the crucial reaction pathways and how NMR spectroscopy can contribute, the team is continuing to explore new reactions and catalysts to produce energy carriers, or molecules that store energy in the chemical bonds.
User Facilities: All of the NMR experiments were performed in the Environmental Molecular Sciences Laboratory (EMSL), a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research, and located at Pacific Northwest National Laboratory (PNNL). Computing time was granted by a user proposal at EMSL and by the National Energy Research Scientific Computing Center (NERSC).
Reference: Zhao Z, H Shi, C Wan, MY Hu, Y Liu, D Mei, DM Camaioni, JZ Hu, and JA Lercher. 2017. "Mechanism of Phenol Alkylation in Zeolite H-BEA Using In Situ Solid-State NMR Spectroscopy." Journal of the American Chemical Society 139(27):9178-9185. DOI: 10.1021/jacs.7b02153
Scientists define subtle details for calcium carbonate synthesis, a fundamental and ubiquitous reaction
The details matter when synthesizing materials, whether for a chocolate bar or a solar panel. These materials and others are formed through the phenomenon of nucleation. The mechanisms and pathways that underpin this ubiquitous phenomenon are, in general, unknown. Using a combination of experiments, simulation, and theory, an international team captured a glimpse of the initial stages of the nucleation of calcium carbonate. That is, they uncovered subtle details about the process by which the molecules interact to form clusters that eventually grow into large crystals. Their findings revealed that this well-studied system nucleates through a mechanism first proposed more than 100 years ago: by adding on ions one by one.
For the case of pure calcium carbonate, getting a definitive answer to the question of how nucleation proceeds required experimentalists, materials scientists, and physical chemists with diverse viewpoints. These experts came from DOE's Pacific Northwest National Laboratory (PNNL), the University of Minnesota, and the Paul Scherrer Institute in Switzerland.
Why It Matters: This research pushed the state of the art in both experimental techniques and theory to predict larger behaviors by starting from individual atoms. The work provides new insights into how materials form. "The work addresses fundamental questions about solution speciation and nucleation pathways that are important to numerous fields of science," said Dr. James De Yoreo, a materials expert at PNNL who co-led the research.
Moreover, this achievement required developing theory and simulations that could directly predict experimental measurements on bulk solutions from a molecular framework, an approach that can easily be extended to other systems.
Summary: In spite of its importance in making both human-made and natural materials, understanding the nucleation of crystals has remained a challenge. Why? The mechanisms and pathways are sensitive to subtle changes in molecular interactions and system complexity. As a result, widely differing models of nucleation have been proposed even for simple systems such as calcium carbonate.
Looking at the question from one viewpoint wasn't enough. "You had to get all of these experts together to get definitive answers," said PNNL's Dr. Gregory Schenter, who constructed and analyzed the theoretical solution model along with his national lab and university colleagues.
Nucleation can occur by one of two mechanisms. In the classical mechanism, molecules attach one by one. In contrast, nucleation can occur via a multi-stage pathway involving groups of molecules bonding together. Which mechanism dominates has implications for materials technologies as well as our understanding of the natural world.
To determine which mechanism dominates, the team brought together scientific instruments at facilities in the United States and Switzerland. These instruments included synchrotrons at the Swiss Light Source and powerful computers at three locations. Computers at the National Energy Research Scientific Computing Center and the Environmental Molecular Sciences Laboratory, both DOE Office of Science user facilities, were used along with resources at the Minnesota Supercomputing Institute.
The group found that the clusters of atoms leading to nucleation start as simple ionic species, in agreement with the classical picture first formulated more than 100 years ago. However, small differences in the forces involved can have significant impacts on how the calcium carbonate grows. This research pushed the state of the art in both experimental techniques and theoretical calculations, offering new insights into a fundamental and widely used process.
Sponsors: Research was sponsored by the Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences (BES), Division of Material Sciences and Engineering and the Division of Chemical Sciences, Geosciences, and Biosciences; National Science Foundation; European Community's Seventh Framework Programme; Swiss National Science Foundation; Materials Synthesis and Simulation Across Scales (MS3) Initiative through the Laboratory Directed Research and Development effort at PNNL; Alternate Sponsored Fellow Program at PNNL; and a University of Minnesota Doctoral Dissertation Fellowship.
Facilities: All measurements were performed at the PHOENIX beamline at the Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland; calculations used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science user facility supported by the DOE, Office of Science; the Environmental Molecular Sciences Laboratory, a DOE Office of Science user facility; and PNNL's Institutional Computing resources. Simulations used resources at the Minnesota Supercomputing Institute.
Reference: K Henzler, EO Fetisov, M Galib, MD Baer, BA Legg, C Borca, JM Xto, S Pin, JL Fulton, GK Schenter, N Govind, JI Siepmann, CJ Mundy, T Huthwelker, and JJ De Yoreo. 2017. "Supersaturated Calcium Carbonate Solutions Are Classical." Science Advances 4(1):eaao6283. DOI: 10.1126/sciadv.aao6283
Aluminum's coordination change occurs in extreme environments
Researchers at the Interfacial Dynamics in Radioactive Environments and Materials (IDREAM) Energy Frontier Research Center quantified transient penta-coordinated Al3+ species during the crystallization of gibbsite from hydrous aluminum gels in solutions of concentrated sodium hydroxide. The research shows that concentrated electrolytes in solution affect hydrogen bonding, ion interactions, and coordination geometries in currently unpredictable ways.
Why It Matters: These mechanistic studies support the development of new process flow sheets to accelerate the processing of radioactive wastes at two Department of Energy sites. Further, the studies may provide less energy-intensive routes for industrial aluminum production.
Summary: Gibbsite (α-Al(OH)3) is an important mineral resource for industrial aluminum production. It is also present in large quantities in the high-level radioactive waste tanks at U.S. Department of Energy sites in Washington State and South Carolina. Traditional processing for either aluminum production or radioactive waste treatment is an energy-intensive activity. Processing involves heating to facilitate dissolution of gibbsite in highly alkaline solutions of concentrated electrolytes. Heating is followed by cooling to encourage precipitation from these chemically extreme systems.
For radioactive waste treatment, the dissolution and precipitation steps are often quite slow. Why? In part, both processes involve changes in the coordination geometry of the trivalent aluminum. In the solid phase, the aluminum is six-coordinate, giving an octahedral geometry. To move into the solution phase, the aluminum ion must change to a four-coordinate tetrahedral form.
Led by Dr. Jian Zhi Hu and Dr. Kevin Rosso, the team conducted high-field magic angle spinning nuclear magnetic resonance spectroscopy studies that probed ion interactions, solute organization, and solvent properties during gibbsite precipitation. The team captured real-time system dynamics as a function of experimental conditions, revealing previously unknown mechanistic details.
The team's work shows that the change in coordination is not a simple transition between the octahedral and tetrahedral species. The change involves an intermediate penta-coordinated aluminum metal center. Further, these species are influenced by subtle changes in solute and solvent organization. These changes lead to gel networks that can sometimes facilitate formation or dissolution of the solid phase. Understanding how aluminum coordination changes in extreme environments may lead to efficiencies in aluminum production and accelerate radioactive waste processing.
User Facility: Materials characterization and nuclear magnetic resonance measurements were performed using EMSL, a national scientific user facility sponsored by the Department of Energy, Office of Science, Office of Biological and Environmental Research at Pacific Northwest National Laboratory.
Reference: Hu JZ, X Zhang, NR Jaegers, C Wan, TR Graham, M Hu, CI Pearce, AR Felmy, SB Clark, and KM Rosso. 2017. "Transitions in Al Coordination during Gibbsite Crystallization Using High-Field 27Al and 23Na MAS NMR Spectroscopy." Journal of Physical Chemistry C 121(49):27555-27562. DOI: 10.1021/acs.jpcc.7b10424
Single oxygen atoms are vital to designing better catalysts from the ground up
To create a winning football team, quarterbacks send their teammates to the right spots. Positioned correctly, the players work around obstacles to drive the ball to the end zone. In much the same way, scientists position catalytic atoms to drive reactions that can yield fuels, plastics, or other desired products. A team led by Dr. Roger Rousseau and Dr. Zdenek Dohnálek at Pacific Northwest National Laboratory changed how scientists think about positioning key players: oxygen atoms. These atoms are vital in turning super-thin sheets of carbon, called graphene, into a unique catalytic support.
The team discovered that single oxygen atoms bind to graphene catalytic supports differently than expected. On a freestanding graphene sheet, oxygen most often rests between two carbon atoms. On a graphene sheet resting on a ruthenium metal support, single oxygen atoms bind to just one carbon. Moreover, because the graphene sheet buckles on the metal, these oxygen atoms appear only in certain predictable spots.
"This work let us understand oxygen binding at unprecedented levels," said Rousseau, a PNNL chemist who led the study's theoretical calculations. The team now knows exactly how the oxygen atoms bind, the energies involved, and the influence of the supporting material.
Why It Matters: Creating faster and more efficient catalysts requires designing them from the bottom up. Scientists want to design the right structures to do the job rather than search among countless possibilities. To go back to our football analogy, the quarterback knows what needs to happen and designs the play to get the job done. That's designing the structure for the function.
This fundamental research shows scientists how to take advantage of precise spots on the graphene to build up model catalysts that can be faster and more efficient. "Oxygen atoms on graphene let us bind other groups," said Dr. Vanda Glezakou, a theorist on the study. "As a result, they make it possible to design precise and efficient catalytic arrays, essentially positioning the players where they can best work together."
"This research redefines what we know about oxygen binding to carbon atoms on metal-supported graphene, which is very important for their reactivity," said experiment lead Dohnálek, who holds a joint appointment with PNNL and Washington State University.
Summary: Beginning with a flat piece of ruthenium metal, the scientists grew graphene, a one-atom-thick layer of carbon. The two materials form a superstructure because the carbon and ruthenium atoms do not lie neatly on top of each other. This mismatch causes the graphene to pucker, forcing some carbon atoms to bind to the metal, while others don't. These differences in the graphene binding influence how oxygen atoms bind.
Having created this layered material, they delved into where the oxygen resided and how it behaved. They began with scanning tunneling microscopy. While the instrument is state of the art, its resolution wasn't sufficient to pinpoint the positions of static oxygen atoms. So the team heated the material, causing the oxygen atoms to move. How the oxygen moved told them how it was bound.
Analyzing how and why the oxygen atoms moved required density functional theory calculations and massive simulations involving a thousand atoms. "The data analysis had to be really clever," said Rousseau. "This wasn't something anyone could do. The calculations were backbreaking."
The experiments and calculations required resources from two DOE Office of Science user facilities. The team used microscopes at the Environmental Molecular Sciences Laboratory in Washington State and supercomputers at the National Energy Research Scientific Computing Center in California.
By combining laboratory experiments and computational simulations, the team showed that single oxygen atoms bind preferentially to certain carbon atoms: specifically, those that sit close to the underlying ruthenium but are not bound to it. Less favorable sites include oxygen bridging two carbon atoms, oxygen on carbon atoms that are themselves bound to ruthenium, and oxygen on untethered carbon atoms far from the ruthenium.
Having answered how oxygen behaves on carbon, the team plans to use these oxygen atoms as anchors to build model catalysts consisting of single metal atoms and small oxide clusters.
Sponsors: Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences (Z.N., M.T.N., V.A.G., R.R., Z.D.); an Alternate Sponsored Fellowship at Pacific Northwest National Laboratory (F.P.N.); and the University of Graz (F.P.N.).
User Facilities: EMSL, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory (PNNL); National Energy Research Scientific Computing Center located at Lawrence Berkeley National Laboratory.
Reference: Novotny Z, MT Nguyen, FP Netzer, VA Glezakou, R Rousseau, and Z Dohnalek. 2018. "Formation of Supported Graphene Oxide: Evidence for Enolate Species." Journal of the American Chemical Society. DOI: 10.1021/jacs.7b12791
PNNL scientists spice up electrolyte solution to increase charge cycles
When it comes to the special sauce of batteries, researchers at the Department of Energy's Pacific Northwest National Laboratory have discovered it's all about the salt concentration. By getting the right amount of salt, right where they want it, they've demonstrated that a small lithium-metal battery can recharge about seven times more than batteries with conventional electrolytes.
A battery's electrolyte solution shuttles charged atoms between electrodes to generate electricity. Finding an electrolyte solution that doesn't corrode the electrodes in a lithium-metal battery is a challenge but the PNNL approach, published online in Advanced Materials, successfully creates a protective layer around the electrodes and achieves significantly increased charge/discharge cycles.
Conventional electrolytes used in lithium-ion batteries, which power household electronics like computers and cell phones, are not suitable for lithium-metal batteries. Lithium-metal batteries, which replace a graphite electrode with a lithium electrode, are the 'holy grail' of energy storage systems because lithium has a greater storage capacity; a lithium-metal battery can therefore hold double or triple the capacity of a comparable lithium-ion battery. That extra power enables electric vehicles to drive more than two times longer between charges.
Adding more lithium-based salt to the liquid electrolyte mix creates a more stable interface between the electrolyte and the electrodes which, in turn, extends the life of the battery. But that high concentration of salt comes with distinct downsides, including the high cost of lithium salt. The high concentration also increases viscosity and lowers the conductivity of ions through the electrolyte.
"We were trying to preserve the advantage of the high concentration of salt, but offset the disadvantages," said Ji-Guang "Jason" Zhang, a senior battery researcher at PNNL. "By combining a fluorine-based solvent to dilute the high concentration electrolyte, our team was able to significantly lower the total lithium salt concentration yet keep its benefits."
In this process, they were able to localize the high concentrations of lithium-based salt into "clusters," which still form protective barriers on the electrode and prevent the growth of dendrites, the microscopic, pin-like fibers that cause rechargeable batteries to short circuit and limit their life span.
PNNL's patent-pending electrolyte was tested in PNNL's Advanced Battery Facility on an experimental battery cell similar in size to a watch battery. It was able to retain 80 percent of its initial charge after 700 cycles of discharging and recharging. A battery using a standard electrolyte can only maintain its charge for about 100 cycles.
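As a back-of-the-envelope illustration (not a figure from the study), the cycle counts above imply an average per-cycle capacity retention if one assumes simple exponential fade; reading "about 100 cycles" for a standard electrolyte as reaching 80 percent retention at 100 cycles is our assumption here:

```python
# Illustrative only: assume capacity fades exponentially, so retaining 80%
# of the initial charge after N cycles implies a per-cycle retention r
# satisfying r**N = 0.80.

# Localized high-concentration electrolyte: 80% retention after 700 cycles.
per_cycle_retention = 0.80 ** (1 / 700)       # ~0.99968, i.e. ~0.03% loss per cycle

# Hypothetical comparison: a standard electrolyte reaching 80% at ~100 cycles.
conventional_retention = 0.80 ** (1 / 100)    # ~0.99777, i.e. ~0.22% loss per cycle

print(f"localized high-concentration: {per_cycle_retention:.5f} per cycle")
print(f"conventional electrolyte:     {conventional_retention:.5f} per cycle")
```

Exponential fade is a simplification, since real cells often fade non-linearly, but the comparison conveys the roughly sevenfold improvement in cycle life.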
Researchers will test this localized high concentration electrolyte on 'pouch' batteries developed at the lab, which are the size and power of a cell phone battery, to see how it performs at that scale. They say the concept of using this novel fluorine-based diluent to manipulate salt concentration also works well for sodium-metal batteries and other metal batteries.
This research is part of the Battery500 Consortium led by PNNL which aims to develop smaller, lighter, and less expensive batteries that nearly triple the specific energy found in batteries that power today's electric cars. Specific energy measures the amount of energy packed into a battery based on its weight.
Sponsor: Department of Energy's Office of Energy Efficiency and Renewable Energy's Vehicle Technologies Office
Reference: Chen S, J Zheng, D Mei, KS Han, MH Engelhard, W Zhao, W Xu, J Liu, and JG Zhang. 2018. "High-Voltage Lithium-Metal Batteries Enabled by Localized High Concentration Electrolytes." Advanced Materials. Early online. DOI: 10.1002/adma.201706102
Read the original media release by Susan Bauer here.
Scientists found that cooling from sulfate aerosols can partially offset Arctic warming from absorbing aerosols
Long-range transport of aerosols from mid-latitudes in the Northern Hemisphere can increase aerosol concentrations in the Arctic. Depending on the source, these aerosols perturb the Arctic energy balance by absorbing or scattering energy, heating or cooling the atmosphere and surface.
Scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory analyzed 16 source regions or sectors of sulfate aerosols to determine how much influence each individual source exerts on energy balance in the Arctic.
Researchers found that meteorology drives the seasonality of contributions to Arctic sulfate concentrations from remote sources, and cooling from sulfates can partially offset Arctic warming from absorbing aerosols, such as black carbon. Knowing the source of sulfate in the Arctic and its contribution to energy balance is important for understanding Arctic climate change.
Sulfate compounds (commonly called sulfate) are produced by power plants and industrial processes, and are also found in nature. These aerosols can affect Earth's energy balance by scattering incoming sunlight.
To quantify these interactions and their influence on Arctic climate, researchers performed simulations using the Community Earth System Model equipped with an explicit sulfur source tagging technique, time-varying sulfur dioxide emissions, and meteorological conditions for the time period of 2010-2014. They found that regions with high emissions and/or located near or within the Arctic made relatively large contributions to the Arctic sulfate burden. The largest contribution came from sources in East Asia (27 percent).
Researchers also found that meteorology strongly influenced seasonal variations in the contribution to Arctic sulfate concentrations from remote sources. The mean cooling effect on energy balance from sulfate aerosols offsets the positive top-of-the-atmosphere warming effect from black carbon by one-third. A 20 percent global reduction in sulfur dioxide emissions led to a net Arctic top-of-the-atmosphere warming of 0.02 W m-2.
These findings suggest that jointly reducing future black carbon and sulfur dioxide emissions could prevent at least some of the Arctic warming that would result from reductions in sulfur dioxide emissions alone. Calculations also indicate that sources with shorter transport pathways, and meteorology favoring longer aerosol lifetimes, are more efficient at influencing the Arctic energy balance.
This study concluded that current sulfur emissions would result in an equilibrium Arctic cooling of about -0.19 K, with -0.05 K of that from sources in East Asia.
Sponsors: The U.S. Department of Energy (DOE) Office of Science, Biological and Environmental Research supported this research as part of the Regional and Global Climate Modeling program for the High-Latitude Application and Testing of Global and Regional Climate Models (HiLAT) project. NASA's Atmospheric Composition Modeling and Analysis Program and the Climate Change Division of the U.S. Environmental Protection Agency also supported this research.
User Facility: The research used computational resources at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility.
Reference: Y. Yang, H. Wang, S.J. Smith, R.C. Easter, P.J. Rasch, "Sulfate Aerosol in the Arctic: Source Attribution and Radiative Forcing." Journal of Geophysical Research: Atmospheres 123, 1899-1918 (2018). [DOI: 10.1002/2017JD027298]
Slowed reductions in foreign emissions this century revealed the impact of rising domestic emissions in China
Pollution in China—particularly in the heavily populated North China Plain—has shown steep increases since the beginning of the 21st century. A study by scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory revealed the influence of meteorology and domestic and foreign emissions on aerosol trends in China from 1980-2014.
Researchers found that slowed reductions in foreign emissions, together with weakening winds, explained about 25 percent of the increased aerosol trend in China this century compared to 1980-2000.
Previous studies showed that fine aerosol particles can reach distant and remote areas via long-range transport, resulting in global effects on climate and air quality. However, contributions to aerosol in the North China Plain from distant foreign sources have stabilized since the beginning of the 21st century, and limited further reductions are foreseen. Therefore, reducing local emissions is the most certain way to improve future air quality in the North China Plain.
Rapid population and industrial growth in the North China Plain has led to poor air quality in the region characterized by high concentrations of particulate matter less than 2.5 microns in diameter (PM2.5). During the winter, haze events in the North China Plain and other parts of the country can be particularly extreme, as meteorological conditions stagnate and lead to an accumulation of aerosol particles in the atmosphere.
Researchers quantified the recent intensification of winter haze in China for the time period of 1980-2014, using an aerosol source tagging capability in the Community Atmosphere Model (version 5), a global aerosol-climate model. In particular, they looked at variations of wintertime PM2.5 concentrations on decadal timescales for the North China Plain.
The research team found that, over the last two decades of the 20th century, decreasing foreign emissions offset the effect of China's increasing domestic emissions on PM2.5 concentrations by 13 percent. As foreign emissions stabilized after 2000, their counteracting effect almost disappeared, revealing the impact of China's increasing domestic emissions. Slowed foreign emission reductions, along with weakening winds, explained 25 percent of the increased PM2.5 trend during 2000-2014 as compared to 1980-2000.
These findings highlight a contribution of foreign emissions to historical changes in haze occurrence in the North China Plain that needs to be taken into account in air quality studies.
Sponsors: This research was supported by the U.S. Department of Energy (DOE) Office of Science, Biological and Environmental Research as part of the Regional and Global Climate Modeling program and by NASA's Atmospheric Composition Modeling and Analysis Program.
User Facility: The research used computational resources at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility.
Reference: Y. Yang, H. Wang, S.J. Smith, R. Zhang, S. Lou, Y. Qian, P.-L. Ma, P.J. Rasch, "Recent Intensification of Winter Haze in China Linked to Foreign Emissions and Meteorology." Scientific Reports 8, 2107 (2018). [DOI: 10.1038/s41598-018-20437-7]
Observed large-scale rainfall statistics could be used to directly constrain small-scale microphysical parameters in models
To accurately simulate and predict precipitation, particularly when it is extreme, it is critical to understand how in-cloud microphysical processes, such as condensation of vapor and evaporation of rain and cloud particles, cascade up to influence large-scale precipitation variability. However, because these influences are non-linear and cross a broad range of spatial scales, arriving at this understanding is challenging.
Using high-resolution modeling with theoretical and statistical analysis, a research team led by scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory revealed a direct link between the in-cloud processes and the frequency of precipitation extremes. Their findings led to a new approach for using observations to constrain the representation of cloud microphysical processes in Earth system models.
Precipitation variability and extremes are important aspects of the water cycle that have direct societal implications ranging from water resource management to emergency response. In climate models, these processes are strongly influenced by how cloud microphysical processes are represented.
The approach developed in this study would allow the use of readily available remote sensing observations of large-scale rainfall statistics to estimate difficult-to-observe, small-scale in-cloud parameters.
Researchers sought to provide a theoretical ground for interpreting the sensitivities of precipitation statistics to changes in microphysical parameters, and used observations to constrain those parameters. The researchers simulated rainfall associated with a Madden-Julian Oscillation event—a major fluctuation in tropical weather on weekly to monthly timescales—using the Model for Prediction Across Scales-Atmosphere with a refined region at 4-kilometer grid spacing over the Indian Ocean.
The simulation revealed that because cloud microphysical processes regulate precipitable water (water vapor throughout an atmospheric column), and because of the non-linear relationship between precipitation and precipitable water, the amount of precipitable water above a certain critical threshold contributes disproportionately to precipitation variability. However, the frequency of precipitable water exceeding the threshold decreases rapidly as a function of precipitable water vapor. Therefore, changes in microphysical processes that shift the statistics even slightly relative to the threshold have large effects on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical precipitable water threshold.
Thus, using radar observations from the Dynamics of the Madden-Julian Oscillation (DYNAMO) field campaign that took place in 2011 and 2012 over the equatorial Indian Ocean, researchers demonstrated that observed large-scale precipitation statistics could be used to directly constrain small-scale microphysical parameters in models.
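The threshold sensitivity described above can be sketched with a toy calculation (the threshold, exponent, and spread here are hypothetical values chosen for illustration, not the study's fitted parameters): model precipitation as a non-linear ramp above a critical precipitable water value, and observe how small shifts of the mean toward that threshold inflate precipitation variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration only.
w_crit = 60.0    # critical precipitable water threshold (mm)
alpha = 2.0      # exponent of the non-linear pickup above the threshold

def precip(w):
    """Toy precipitation model: zero below w_crit, non-linear ramp above it."""
    return np.maximum(w - w_crit, 0.0) ** alpha

results = {}
for w_mean in (55.0, 57.0, 59.0):   # shift the mean toward the threshold
    w = rng.normal(loc=w_mean, scale=3.0, size=200_000)
    results[w_mean] = precip(w).var()
    print(f"mean w = {w_mean:4.1f} mm | P(w > w_crit) = {(w > w_crit).mean():.3f} "
          f"| precip variance = {results[w_mean]:8.2f}")
```

In this toy model, a 4 mm shift in mean precipitable water changes precipitation variance by more than an order of magnitude, mirroring the sensitivity that makes large-scale rainfall statistics a useful constraint on small-scale microphysical parameters.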
Sponsors: The U.S. Department of Energy (DOE) Office of Science, Biological and Environmental Research supported this research as part of the Regional and Global Climate Modeling program. Chun Zhao is supported by the "Thousand Talents Plan for Young Professionals" program of China.
User Facility: The research used computational resources at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility.
Reference: S. Hagos, L.R. Leung, C. Zhao, Z. Feng, K. Sakaguchi, "How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?" Geophysical Research Letters 45, 1661-1667 (2018). [DOI: 10.1002/2017GL076375]
Researchers developed a probability-based framework for representing storm clouds and rainfall in climate models
As the resolution of climate and weather models continues to improve, researchers can zoom in and see finer details of cloud processes across space and time. Because clouds are often much smaller than the grid size, existing model representations of intense storm clouds include several assumptions that are not correct for the new generation of models.
Using radar observations and convection-permitting models, scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory led the development of a new probability-based framework for representing convective storm clouds and rainfall in current and future generations of climate models.
The proposed framework holds promise for addressing several challenges and biases in climate models related to cloud size, interactions among clouds, and evolution. This could lead to more accurate models representing the inner workings and evolution of convective storm systems, and better predictions of clouds, rainfall, and global circulation.
Key challenges in the treatment of convective clouds in climate and weather models include representing the size continuum of convective clouds, interactions among clouds, and the evolution of clouds over time. To address these challenges, researchers proposed a novel framework for developing more realistic cloud parameters. Based on the Master Equation, a probability formulation for population dynamics, the framework predicts the growth and decay of the number of convective clouds of a given size.
Under this framework, researchers analyzed observations and used theoretical arguments to build simplified cloud population models. They then evaluated the performance of the simplified models against radar observations and convection-permitting models. The results demonstrated the potential of this probability-based approach to represent the evolution of convective cloud systems, such as internal fluctuations and diurnal cycle.
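The population-dynamics idea can be illustrated with a minimal stochastic birth-death model, a simple special case of a Master Equation (the rates here are invented for illustration and are not the study's parameterization): clouds of a given size form at a constant rate and each decays independently, so the cloud count fluctuates around a stationary mean.

```python
import random

random.seed(42)

# Illustrative rates, not the paper's parameterization.
birth = 2.0     # cloud formation events per unit time
death = 0.1     # per-cloud decay rate; stationary mean count = birth / death

n, t, t_end = 0, 0.0, 10_000.0
time_weighted_count = 0.0

# Gillespie-style simulation: draw the waiting time to the next event from
# the total rate, then pick a birth or a death in proportion to their rates.
while t < t_end:
    rate = birth + death * n
    dt = random.expovariate(rate)
    time_weighted_count += n * min(dt, t_end - t)
    t += dt
    if random.random() < birth / rate:
        n += 1   # a new cloud forms
    else:
        n -= 1   # an existing cloud decays

mean_n = time_weighted_count / t_end
print(f"time-averaged cloud count: {mean_n:.1f} (theory: {birth / death:.1f})")
```

In the full framework, the birth and death rates themselves depend on cloud size and on interactions among clouds, which is what lets the approach capture internal fluctuations and the diurnal cycle.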
Future work will involve generalizing this approach to include physical processes such as cold pools and stratiform cloud formation, followed by implementation and testing in a climate model.
User Facility: The National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility, provided computing resources for the model simulations.
Reference: S. Hagos, Z. Feng, R.S. Plant, R.A. Houze Jr., H. Xiao, "A Stochastic Framework for Modeling Population Dynamics of Convective Clouds." Journal of Advances in Modeling Earth Systems 10, 448-465 (2018). [DOI: 10.1002/2017MS001214]
Working with colleagues at the Environmental Molecular Sciences Laboratory (EMSL), researchers from the Biological Sciences Division (BSD) helped fuse microfluidics and robotics for a new bioanalysis platform that uses far fewer cells than any other technology.
The new sampling platform provides first-ever insights into proteins in human, animal, and plant cells.
The study appears in Nature Communications. BSD co-authors are Paul D. Piehowski, Yufeng Shen, Ronald J. Moore, Anil K. Shukla, Vladislav A. Petyuk, Richard D. Smith, and Wei-Jun Qian.
Detailed analysis of proteins is important for understanding how cells function. Until now, that has only been possible by sampling thousands to millions of cells.
The new robotically controlled processing platform reduces sample volume by more than two orders of magnitude. It also dramatically enhances such analyses.
Last month, the Carbon Capture Simulation Initiative, known as CCSI, released the CCSI Toolset as open-source software, now available on GitHub. The toolset is a suite of computational tools and models designed to maximize learning and reduce risk during the scale-up process for carbon capture technologies used by power plants. CCSI is a partnership led by the National Energy Technology Laboratory for the U.S. Department of Energy’s Office of Fossil Energy. Along with NETL, Lawrence Berkeley, Lawrence Livermore, Los Alamos, and Pacific Northwest national laboratories collaborate on the project with academic partners that include Boston University, Carnegie Mellon University, Princeton University, West Virginia University, and the University of Texas. In 2016, the CCSI Toolset was awarded a prestigious R&D 100 Award by R&D Magazine.
CCSI2 extends the initial CCSI project to further accelerate the commercialization of carbon capture technologies.
At PNNL, Zhijie (Jay) Xu, an engineer with ACMD Division’s Computational Mathematics group, leads CCSI’s computational fluid dynamics (CFD) team that develops the multiscale CFD models required to simulate the complex physical and chemical processes of carbon capture. The team, which includes Jie Bao, a research engineer with the Computational Fluids and Nuclear Processes group (Energy and Environment Directorate), and Rajesh Singh and Chao Wang, both scientists with ACMD Division’s Computational Engineering group, uses these models to gain insights into flow field and reaction behaviors in laboratory- and device-scale reactors. Primarily, the CFD team focuses on modeling various sorbent-based, solvent-based, and other emerging capture technologies.
“High-fidelity CFD models with validated and quantified confidence are important for a better understanding of the science behind—and are useful for—system-level carbon capture technology designs,” Xu explained. “For example, with our CFD models at different scales, we were able to improve our fundamental understanding of diverse carbon capture technologies and provide predictions with quantified confidence at different levels.”
The open-source CCSI Toolset provides several major capabilities, including rapid computational screening at various scales, accelerated design and evaluation to better understand and improve system performance, and risk management via simulations that consider model and parameter uncertainty and identify and incorporate critical data.
By sharing the toolset via GitHub, CCSI expects that the broader energy technology development community will now be able to expand the software by directly engaging with and improving its existing features, adding new ones, and helping to flag and resolve bugs.
The CCSI Toolset is also a critical component of the extended Carbon Capture Simulation for Industry Impact project, or CCSI2, which includes national laboratories, academia, and industry partners.
This research is sponsored by DOE’s Office of Fossil Energy through the Carbon Capture Simulation Initiative.
Researchers discover mechanism that decreases the magnetism of magnetite core particles coated with a metal-organic framework shell, opening doors to new material designs
Surface mining for rare earth elements used in smartphones and wind turbines is difficult and rarely done in the United States. Scientists wanted to know if they could pull the metals, present at trace levels, from geothermal brines using magnetic particles. The particles, wrapped in a molecular framework shell known as a metal-organic framework (MOF), should easily trap the metals and let the rest flow past. However, the team led by Dr. Pete McGrail at Pacific Northwest National Laboratory found the magnetic strength dropped by 70 percent after the MOF shell was formed.
Why It Matters: The use of MOFs may allow for the separation of yttrium, scandium, and other elements from saline water from geothermal sources, produced waters from oil and gas fields, or wastes such as fly ash. "These elements have a lot of applications -- petroleum refining, computer monitors, magnets in wind turbines," said Dr. Praveen Thallapally, the materials design lead on the study. "Right now, 99 percent of these rare earths are imported to the U.S."
The fundamental knowledge gained from this research shows why this MOF affected the magnetic strength so much and offers insights into methods to avoid these problems.
Summary: Scientists began with a core-shell material called Fe3O4@MIL-101-SO3: magnetite (Fe3O4) core particles wrapped in a MOF shell that contains chromium ions connected by organic ligands. The synthesis forms the MOF shell by molecular self-assembly, with the MOF building up a layer around the magnetite core particles. Researchers expected the shell to have little impact on the magnetic strength of the particles but found it dropped by 70 percent.
"We wanted to figure out why," said Thallapally. Theories abounded, but nobody had brought together the materials, expertise, and instrumentation to definitively prove what was happening.
They used imaging capabilities at DOE's Environmental Molecular Sciences Laboratory, an Office of Science user facility located at PNNL. Specifically, they used scanning electron and transmission electron microscopy to study the MOF shell. They found that the particles increased in size as expected. This meant the problem wasn't the magnetite particles dissolving in the liquids used during synthesis, a common theory.
Next, they used 57Fe-Mössbauer spectroscopy to study the oxidation state of the iron core. They found a larger amount of oxidized ferric iron than expected. Digging in further with atom probe tomography, the team found that chromium had crept inside the iron oxide cores. They obtained more details on the chromium oxidation state using X-ray absorption fine structure spectroscopy at the Advanced Light Source, a DOE Office of Science user facility at Lawrence Berkeley National Laboratory.
In the end, the team showed that the chromium penetrated the pores of the magnetite particles and was reduced by capturing an electron from the iron, thus oxidizing it. The magnetic strength of magnetite is strongly determined by the ratio of ferrous to ferric (oxidized) iron in the material, so the iron oxidation degraded the magnetic properties. These fundamental insights will allow materials science researchers to adjust the MOF chemistry to prevent the unwanted oxidation-reduction reactions and better retain the core-shell material's magnetic properties.
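Why the ferrous/ferric ratio matters can be illustrated with a textbook Néel two-sublattice picture (a simplified sketch for illustration, not the analysis used in the study): in stoichiometric magnetite, the tetrahedral Fe3+ moment (5 μB) aligns antiparallel to the octahedral Fe3+ (5 μB) and Fe2+ (~4 μB) moments, leaving a net ~4 μB per formula unit. Oxidizing the Fe2+ toward a maghemite-like composition (γ-Fe2O3, with compensating cation vacancies) lowers the net moment:

```python
# Toy Néel two-sublattice model for magnetite, Fe3O4 (illustrative only).
# Per formula unit of stoichiometric magnetite:
#   A (tetrahedral) sublattice: 1 Fe3+ (5 mu_B), antiparallel to
#   B (octahedral)  sublattice: 1 Fe3+ (5 mu_B) + 1 Fe2+ (~4 mu_B),
#   so the net moment is (5 + 4) - 5 = 4 mu_B.
# Full oxidation gives maghemite (gamma-Fe2O3 = Fe(8/3)O4 with B-site
# vacancies): net moment = (8/3)*5 - 2*5 = 10/3 mu_B per former formula unit.

MU_FE3 = 5.0  # spin-only moment of high-spin Fe3+, in Bohr magnetons
MU_FE2 = 4.0  # approximate moment of high-spin Fe2+, in Bohr magnetons

def net_moment(x: float) -> float:
    """Net moment (mu_B per Fe3O4 formula unit) when a fraction x of the
    material has oxidized from magnetite toward maghemite."""
    magnetite = (MU_FE3 + MU_FE2) - MU_FE3     # 4 mu_B per formula unit
    maghemite = (8.0 / 3.0) * MU_FE3 - 2 * MU_FE3  # 10/3 mu_B per formula unit
    return (1.0 - x) * magnetite + x * maghemite

print(net_moment(0.0))  # 4.0 mu_B: fully ferrous B-site, pristine magnetite
print(net_moment(1.0))  # ~3.33 mu_B: fully oxidized, maghemite-like
```

This toy model ignores effects that also contribute in the real composite, such as the added nonmagnetic mass of the shell, surface spin disorder, and chromium substitution, so it understates the measured 70 percent drop; it only shows the direction of the oxidation effect.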
Sponsors: Department of Energy, Geothermal Technologies Office; Department of Energy, Office of Science, Basic Energy Sciences, Division of Materials Sciences and Engineering (synthesis of the material)
User Facilities: Mössbauer spectroscopy, atom probe tomography, scanning and transmission electron microscopy, and magnetometry experiments were performed in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy (DOE) Office of Science's Office of Biological and Environmental Research; X-ray absorption fine structure spectroscopy was performed at the Advanced Light Source, supported by the Director, Office of Science, Office of Basic Energy Sciences, DOE
Reference: SK Elsaidi, MA Sinnwell, D Banerjee, A Devaraj, RK Kukkadapu, TC Droubay, Z Nie, L Kovarik, M Vijayakumar, S Manandhar, M Nandasiri, BP McGrail, and PK Thallapally. 2017. "Reduced Magnetism in Core-Shell Magnetite@MOF Composites." Nano Letters 17(11):6968-6973. DOI: 10.1021/acs.nanolett.7b03451
Interface interactions delay phase separation in oxide thin films, suggesting new ways to control crystal growth
Controlling the formation of defects is an important aspect of designing next-generation spintronic materials. Such materials would compute using the electron's spin degree of freedom rather than its charge, which today's electronics use. Significantly, defects can effectively erase the information the spin carries, yet it is often difficult to predict how, when, and why such defects form. Researchers have examined the onset of an unusual kind of defect that forms in crystalline films of complex oxides, suggested a new mechanism for its occurrence, and proposed ways to mitigate the process. The group showed that formation of nanoscale secondary phases within the host film material can result from electrostatic interactions between the film and the substrate on which the film is grown.
"By understanding these interactions, we can begin to precisely engineer spintronic materials with exciting and useful properties," said lead author Dr. Steven Spurgeon, a materials scientist at PNNL.
A group at Pacific Northwest National Laboratory examined the film growth of a promising oxide material for spintronics and observed an unusual phase separation. The material grew in a completely uniform fashion up to a certain film thickness, at which point a second, simpler oxide phase nucleated as nanoscale particles within the host material. This phase separation did not correlate with strain relief, the usual cause of defect formation as films get thicker. Rather, the group traced its onset to the formation of oxygen vacancies, which are in turn driven by the electronic interaction of the film with the substrate. The group's findings suggest new ways to control film growth to mitigate this undesirable process.
Materials scientists typically set out to synthesize a specific crystalline material with the desired properties, but their efforts are often foiled by the presence of unwanted defects that can degrade performance. These defects are sometimes hard to detect, particularly when they occur on the nano- or atomic scale. In this study, Pacific Northwest National Laboratory researchers examined thin films of La2MnNiO6, in which they had previously found a network of unexpected nanoscale particles of NiO. They set out to understand the mechanism(s) driving the formation of these particles to see if the process could be controlled.
Using a combination of scanning transmission electron microscopy and atom probe tomography, the researchers examined various films and their underlying substrates in 3-D. Surprisingly, they found that the first several nanometers of the film did not contain any NiO precipitates; only once the film had grown to a certain thickness did the phase separation occur. At first they thought the NiO particle formation might result from the strained La2MnNiO6 film relaxing to its normal lattice dimensions as its thickness increased. However, both X-ray diffraction and electron microscopy measurements showed this wasn’t the case.
The researchers then realized that the substrate on which the films are grown (SrTiO3) could affect the formation of defects in the growing films via an electrostatic interaction between La2MnNiO6 and SrTiO3. Density functional theory calculations confirmed this hypothesis. A voltage drop across the La2MnNiO6 film, which can naturally occur because of differences in the electronic structures of La2MnNiO6 and SrTiO3, can drive the formation of oxygen vacancies. These point defects can in turn initiate the formation of the larger NiO defects.
These results suggest that NiO precipitate formation could be suppressed to greater film thicknesses either by modifying the interface composition to prevent formation of the voltage drop across the La2MnNiO6 film, or by significantly increasing the oxygen flow rate during film growth.
Sponsors: This research was supported by the Department of Energy, Office of Science, Basic Energy Sciences, Division of Materials Science and Engineering (10122, KC0203020, Electronic, Magnetic and Optical Properties of Epitaxial Films and Interfaces).
Reference: S.R. Spurgeon, P.V. Sushko, A. Devaraj, Y. Du, T. Droubay, and S.A. Chambers, "The onset of phase separation in the double perovskite oxide La2MnNiO6." Physical Review B (2018). [DOI: 10.1103/PhysRevB.97.134110]