Manuel Götz | Jonathan Lefebvre | Friedemann Mörs | Amy McDaniel Koch | Frank Graf | Siegfried Bajohr | Rainer Reimert | Thomas Kolb
© 2015 The Authors. The Power-to-Gas (PtG) process chain could play a significant role in the future energy system. Renewable electric energy can be transformed into storable methane via electrolysis and subsequent methanation. This article compares the available electrolysis and methanation technologies with respect to the stringent requirements of the PtG chain such as low CAPEX, high efficiency, and high flexibility. Three water electrolysis technologies are considered: alkaline electrolysis, PEM electrolysis, and solid oxide electrolysis. Alkaline electrolysis is currently the cheapest technology; however, in the future PEM electrolysis could be better suited to the PtG process chain. Solid oxide electrolysis could also be an option in the future, especially if heat sources are available. Several different reactor concepts can be used for the methanation reaction. For catalytic methanation, fixed-bed reactors are typically used; however, novel reactor concepts such as three-phase methanation and micro reactors are currently under development. Another approach is biochemical conversion, a bioprocess that takes place in aqueous solution close to ambient temperature. Finally, the whole process chain is discussed. Critical aspects of the PtG process are the availability of CO2 sources, the dynamic behaviour of the individual process steps, and especially the economics and overall efficiency.
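The efficiency question raised in the abstract comes down to multiplying the chain's step efficiencies (electricity → H2 → CH4). A minimal sketch; the efficiency figures below are placeholder assumptions, not values from the article:

```python
# Illustrative Power-to-Gas chain efficiency: electricity -> H2 -> CH4.
# The step efficiencies here are assumed round numbers for illustration,
# not figures reported in the article.

def ptg_chain_efficiency(eta_electrolysis: float, eta_methanation: float) -> float:
    """Overall electricity-to-methane efficiency as the product of the steps."""
    return eta_electrolysis * eta_methanation

# Example: assumed 70% electrolysis and 80% methanation efficiency
eta = ptg_chain_efficiency(0.70, 0.80)
print(f"Electricity-to-CH4 efficiency: {eta:.0%}")  # 56%
```

The multiplicative structure is why each step's losses matter so much for the overall chain.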
Morgan Bazilian | Ijeoma Onyeji | Michael Liebreich | Ian MacGill | Jennifer Chase | Jigar Shah | Dolf Gielen | Doug Arent | Doug Landfear | Shi Zhengrong
This paper briefly considers the recent dramatic reductions in the underlying costs and market prices of solar photovoltaic (PV) systems, and their implications for decision-makers. In many cases, current PV costs and the associated market and technological shifts witnessed in the industry have not been fully noted by decision-makers. The perception persists that PV is prohibitively expensive, and still has not reached 'competitiveness'. The authors find that the commonly used analytical comparators for PV vis-à-vis other power generation options may add further confusion. In order to help dispel existing misconceptions, some level of transparency is provided on the assumptions, inputs and parameters in calculations relating to the economics of PV. The paper is aimed at informing policy makers, utility decision-makers, investors and advisory services, in particular in high-growth developing countries, as they weigh the suite of power generation options available to them. © 2012 Elsevier Ltd.
Rohit Sen | Subhes C. Bhattacharyya
Renewable energy-based off-grid or decentralised electricity supply has traditionally considered a single technology-based limited level of supply to meet basic needs, without considering reliable energy provision to rural consumers. The purpose of this paper is to propose the best hybrid technology combination for electricity generation from a mix of renewable energy resources to satisfy, in a reliable manner, the electrical needs of an off-grid remote village, Palari, in the state of Chhattisgarh, India. Four renewable resources, namely small-scale hydropower, solar photovoltaic systems, wind turbines and bio-diesel generators, are considered. The paper estimates the residential, institutional, commercial, agricultural and small-scale industrial demand in the pre-HOMER analysis. Using HOMER, the paper identifies the optimal off-grid option and compares this with conventional grid extension. The solution obtained shows that a hybrid combination of renewable energy generators at an off-grid location can be a cost-effective alternative to grid extension, and that it is sustainable, techno-economically viable and environmentally sound. The paper also presents a post-HOMER analysis and discusses issues that are likely to affect the realisation of the optimal solution. © 2013 Elsevier Ltd.
Prakash Parthasarathy | K. Sheeba Narayanan
Steam gasification is considered one of the most effective and efficient techniques for generating hydrogen from biomass. Of all the thermochemical processes, steam gasification offers the highest stoichiometric yield of hydrogen. Several factors influence the yield of hydrogen in steam gasification; some of the most prominent are biomass type, biomass feed particle size, reaction temperature, steam-to-biomass ratio, catalyst addition, and sorbent-to-biomass ratio. This review article focuses on hydrogen production from biomass via steam gasification and the influence of process parameters on hydrogen yield. © 2014 Elsevier Ltd.
Evangelos G. Giakoumis
In the present work, a detailed statistical investigation is conducted in order to a) assess the average values of all properties (incl. fatty acid composition) of the most investigated biodiesels and b) quantify the effects of feedstock unsaturation on the physical and chemical properties of the derived methyl ester. To this aim, the available literature on biodiesel properties and fatty acid composition was gathered (more than 750 papers published in international journals and conferences), and the reported measurements are statistically analyzed with respect to the feedstock and its chemical composition and structure; in total, 26 different biodiesel feedstocks are studied, comprising twenty-two edible and non-edible vegetable oils and four animal fats. From the analysis, collective results and statistical data are derived for each property, which are then compared with the European and American specifications. The effects of unsaturation are investigated with separate best-fit linear curves provided for each property of interest with respect to the average number of double bonds. The various trends observed are discussed and explained based on fundamental aspects of fuel chemistry and on the consequences they have on real engine operation. © 2012 Elsevier Ltd.
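The unsaturation analysis described above amounts to fitting a straight line of each property against the average number of double bonds. A sketch with synthetic, invented data (not the survey's measurements):

```python
import numpy as np

# Sketch of the paper's "best-fit linear curve" step: regress a biodiesel
# property (here, a cetane-number-like quantity) on the average number of
# double bonds per feedstock. All data below are synthetic placeholders,
# not measurements from the 750-paper survey.
avg_double_bonds = np.array([0.3, 0.8, 1.2, 1.6, 2.1])    # per feedstock
cetane_number = np.array([62.0, 58.5, 55.0, 52.0, 48.0])  # assumed values

slope, intercept = np.polyfit(avg_double_bonds, cetane_number, 1)
print(f"CN ≈ {slope:.1f} * DB + {intercept:.1f}")
```

A negative slope here reflects the chemistry the paper discusses: more unsaturated feedstocks tend to have lower cetane numbers.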
D. Vicinanza | P. Contestabile | V. Ferrante
Sardinia (Italy) is the second largest island in the Mediterranean Sea and its economy is penalized by high electricity costs, double those of mainland Italian regions and triple the EU average. In this research, the wave energy potential of north-west Sardinia has been studied by an analysis of wave measurements carried out over a 20-year period by the Italian Wave Buoys Network (1989-2009) and the corresponding hindcast data from the European Centre for Medium-Range Weather Forecasts (ECMWF). The annual offshore wave power was found to range between 8.91 kW/m and 10.29 kW/m, the bulk of which is provided by north-westerly waves. The nearshore energetic patterns have been studied by means of a numerical coastal propagation model (Mike21 NSW). The analyses highlight two "hot spots" where the wave power is 9.95 and 10.91 kW/m, respectively. For these locations, a Wave Energy Converter with maximum efficiency in the ranges of significant wave heights between 3.5 and 4.5 m (energy periods 9.5-11 s) and 4-6 m (energy periods 9.5-11.5 s), respectively, should be selected. In order to find a concrete solution to the problem of harvesting wave energy in this area, the characterization of waves providing energy is considered along with additional factors, such as installation and operational costs, institutional factors, environmental sensitivity and interference with other human activities. On the basis of the information available and the identified circumstances, the site of Bosa Marina has been proposed as a prospective wave farm location. For this site in particular, multifunctional structures such as harbour or coastal protection breakwaters equipped with a WEC are recommended. © 2012 Elsevier Ltd.
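The kW/m figures quoted above follow from the standard deep-water wave energy flux formula, which is textbook material rather than specific to this paper; the Hs and Te inputs below are assumed examples in the ranges the abstract mentions:

```python
import math

# Standard deep-water wave energy flux per metre of wave crest:
#   P = rho * g^2 * Hs^2 * Te / (64 * pi)
# This is the textbook expression, shown to illustrate the magnitudes
# in the abstract; the Hs/Te inputs are assumed example values.
def wave_power_kw_per_m(hs_m: float, te_s: float,
                        rho: float = 1025.0, g: float = 9.81) -> float:
    return rho * g**2 * hs_m**2 * te_s / (64 * math.pi) / 1000.0

# Hs = 4 m, Te = 10 s (inside the WEC design range quoted above)
print(f"{wave_power_kw_per_m(4.0, 10.0):.1f} kW/m")
```

Note how strongly the flux scales with wave height (quadratically), which is why the "hot spots" matter so much for siting.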
Boucar Diouf | Ramchandra Pode
© 2014 Elsevier Ltd. The potential of lithium-ion (Li-ion) batteries to become the major energy storage technology for off-grid renewable energy is presented. A longer lifespan than other technologies, along with higher energy and power densities, are the most favourable attributes of Li-ion batteries, making them a first-choice option for energy storage. Nevertheless, for Li-ion batteries to be fully adopted in the renewable energy sector, a price reduction is needed, most likely achieved through mass production. Progress in Li-ion batteries must also continue in order to reach the energy and power densities required by electric vehicles. We present the electric vehicle sector as the driving force of Li-ion batteries in renewable energies: the development of the electric vehicle industry could make Li-ion batteries more affordable for the renewable sector as a benefit of mass production. In the development of Li-ion technology, the electric automobile will be accompanied by other sectors such as grid storage, consumer electronics, electric bikes, and military and medical applications. We present the advantages of Li-ion batteries over other technologies, even though some challenges remain to be overcome for wider use in stationary energy storage.
Aliasghar Baziar | Abdollah Kavousi-Fard
This paper proposes a new probabilistic framework based on the 2m Point Estimate Method (2m PEM) to consider the uncertainties in the optimal energy management of Micro Grids (MGs) including different renewable power sources such as Photovoltaics (PVs), Wind Turbines (WTs), Micro Turbines (MTs) and Fuel Cells (FCs), as well as storage devices. The proposed probabilistic framework requires 2m runs of the deterministic framework to capture the uncertainty of m uncertain variables in terms of the first three moments of the relevant probability density functions. The uncertainties regarding load demand forecasting error, grid bid changes, and WT and PV output power variations are thus considered concurrently. Investigating the MG problem with uncertainty over a 24-h interval with several equality and inequality constraints requires a powerful optimization technique that can escape local optima and avoid premature convergence. Consequently, a novel self-adaptive optimization algorithm based on θ-Particle Swarm Optimization (θ-PSO) is proposed to explore the total search space globally. The θ-PSO algorithm uses phase angle vectors to update the velocity/position of particles such that faster and more stable convergence is achieved. In addition, the proposed self-adaptive modification method consists of three sub-modification methods which let each particle choose the modification method that best fits its current situation. The feasibility and satisfactory performance of the proposed method are tested on a typical grid-connected MG as the case study. © 2013 Elsevier Ltd.
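The 2m point estimate scheme referenced above can be sketched as follows, under the simplifying assumption of symmetric (zero-skewness) inputs, in which case each variable is evaluated at its mean plus/minus sqrt(m) standard deviations; the model and numbers are illustrative only:

```python
import numpy as np

# Minimal sketch of Hong's 2m point estimate method, assuming symmetric
# (zero-skewness) input distributions. Each of the m uncertain inputs is
# evaluated at mu_k +/- sqrt(m)*sigma_k while the others stay at their
# means, giving 2m deterministic runs, each with weight 1/(2m).
def pem_2m_mean(f, mu, sigma):
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = len(mu)
    estimate = 0.0
    for k in range(m):
        for sign in (+1.0, -1.0):
            x = mu.copy()
            x[k] += sign * np.sqrt(m) * sigma[k]   # concentration point
            estimate += f(x) / (2 * m)             # equal weights
    return estimate

# For a linear "deterministic framework" the estimate is exact:
print(pem_2m_mean(lambda x: x.sum(), [1.0, 2.0, 3.0], [0.1, 0.2, 0.3]))  # 6.0
```

The appeal for MG energy management is the run count: 2m deterministic dispatch runs instead of thousands of Monte Carlo samples.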
Luca Liberti | Adriana Carillo | Gianmaria Sannino
In this paper we present a high resolution assessment of the wave energy resources in the Mediterranean. The energy resources are evaluated through a numerical simulation performed over the entire Mediterranean basin for the period 2001-2010 using a third-generation ocean wave model. The model results are extensively validated against most of the available wave buoy and satellite altimeter data. Starting from the model results, a detailed analysis of wave energy availability in the Mediterranean Sea is carried out. The western Sardinia coast and the Sicily Channel are found to be among the most productive areas in the whole Mediterranean. Simulation results show the presence of significant spatial variations of wave power availability, even on relatively small spatial scales, along these two coastlines. For a number of selected locations in these two areas we present an in-depth investigation of the distribution of wave energy among wave heights, periods and directions. Seasonal and inter-annual variability of the wave energy potential are also analyzed and discussed. © 2012 Elsevier Ltd.
Florian Steinke | Philipp Wolfrum | Clemens Hoffmann
Intermittent renewable power production from wind and sun requires significant backup generation to cover the power demand at all times. This holds even if wind and sun produce on average 100% of the required energy. Backup generation can be reduced through storage - averaging in time - and/or grid extensions - averaging in space. This report examines the interplay of these technologies with respect to the reduction of required backup energy. We systematically explore a wide parameter space of combinations of both technologies. Our simple, yet informative approach quantifies the backup energy demand for each scenario. We also estimate the resulting total system costs which allow us to discuss cost-optimal system designs. © 2012 Elsevier Ltd.
Da Liu | Dongxiao Niu | Hui Wang | Leilei Fan
Affected by various environmental factors, wind speed exhibits high fluctuation, autocorrelation and stochastic volatility, and is therefore hard to forecast with a single model. A hybrid model is proposed that combines input selection by deep quantitative analysis with the Wavelet Transform (WT), a Genetic Algorithm (GA) and Support Vector Machines (SVMs). The WT was exploited to decompose the wind speed signal into two components: an approximation signal that retains the major fluctuations and a detail signal that isolates the stochastic volatility. SVMs were built to model the approximation signal. Autocorrelation and partial correlation were applied to analyze the inner Autoregressive Integrated Moving Average (ARIMA) relationship between the historical speeds, and thus to select the SVM inputs from them; a Granger causality test was applied to select inputs from environmental variables by checking the influence of temperature at different lead times. The parameters of the SVMs were fine-tuned by the GA to ensure generalization. A case study of a wind farm in North China demonstrates that this method outperforms the comparison models. © 2013 Elsevier Ltd.
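The WT preprocessing step above splits the series into an approximation and a detail component. An illustrative one-level Haar split with invented wind-speed values (the paper's actual wavelet choice may differ):

```python
import numpy as np

# One-level Haar wavelet split of a signal into an approximation
# (major fluctuations) and a detail (fast, volatile component),
# mirroring the WT preprocessing step. This is an illustrative Haar
# transform, not necessarily the paper's exact wavelet.
def haar_split(x):
    x = np.asarray(x, float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)   # smooth component
    detail = (even - odd) / np.sqrt(2.0)   # volatile component
    return approx, detail

def haar_merge(approx, detail):
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

speeds = np.array([5.1, 5.4, 6.0, 5.2, 4.8, 5.0, 5.9, 6.3])  # assumed m/s
a, d = haar_split(speeds)
assert np.allclose(haar_merge(a, d), speeds)  # lossless reconstruction
```

The point of the split is that the forecaster (here, the SVM) only has to model the smoother approximation signal.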
Zhengguo Zhang | Guoquan Shi | Shuping Wang | Xiaoming Fang | Xiaohong Liu
Here we demonstrate a thermal energy storage cement mortar (TESCM) fabricated by integrating ordinary cement mortar with a composite phase change material (PCM) based on n-octadecane and expanded graphite (EG). The mass percentage of n-octadecane in the composite PCM can reach as high as 90% due to the excellent adsorption ability of EG, which endows the composite PCM with a large latent heat. SEM images of the composite PCM show that n-octadecane is adsorbed into the pores of EG and uniformly covers its nanosheets, a microstructure that helps prevent leakage of n-octadecane after it melts. The n-octadecane/EG composite PCM has good compatibility with ordinary cement mortar and does not noticeably degrade the apparent densities of the TESCM samples. Based on the thermal energy storage performance evaluation, the TESCM containing the n-octadecane/EG composite PCM is found to reduce the variation of indoor temperature, which helps decrease building energy consumption. © 2012 Elsevier Ltd.
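The 90 wt% loading implies a composite latent heat close to that of pure n-octadecane. A back-of-envelope estimate using a commonly cited literature value for the pure PCM (not a number from this paper) and a simple linear mixing rule:

```python
# Rough estimate of the composite PCM's latent heat from the 90 wt%
# n-octadecane loading stated in the abstract. The pure-PCM latent heat
# (~244 J/g) is an assumed literature value, not a figure from this
# paper, and the linear mixing rule ignores any adsorption effects.
H_OCTADECANE = 244.0   # J/g, assumed literature value for n-octadecane
mass_fraction = 0.90   # from the abstract

h_composite = mass_fraction * H_OCTADECANE
print(f"Estimated composite latent heat: {h_composite:.0f} J/g")
```

This is why a high PCM mass fraction matters: latent heat scales roughly linearly with loading, so the EG scaffold's ability to hold 90 wt% is the key enabler.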
Mohammad H. Ahmadi | Hadi Hosseinzade | Hoseyn Sayyaadi | Amir H. Mohammadi | Farshad Kimiaghalam
In recent years, numerous studies have been conducted on the Stirling cycle and the Stirling engine, resulting in different analyses of output power and engine thermal efficiency. Finite-speed thermodynamic analysis is one of the most prominent methods that considers external irreversibilities. In the present study, output power and engine thermal efficiency are optimized and total pressure losses are minimized using the NSGA algorithm together with finite-speed thermodynamic analysis. The results are successfully verified against experimental data. © 2013 Elsevier Ltd.
Masoud Sharafi | Tarek Y. ELMekkawy
Recently, increasing energy demand has caused dramatic consumption of fossil fuels and unavoidably rising energy prices. Moreover, the environmental effects of fossil fuels have led to the need for renewable energy (RE) to meet the rising demand. Unpredictability and the high cost of renewable energy technologies are the main challenges of renewable energy usage. In this context, the integration of renewable energy sources to meet the energy demand of a given area is a promising way to overcome these challenges. In this study, a novel approach is proposed for the optimal design of hybrid renewable energy systems (HRES) including various generators and storage devices. The ε-constraint method is applied to simultaneously minimize the total cost of the system, unmet load, and fuel emissions. A particle swarm optimization (PSO)-simulation based approach is used to tackle the multi-objective optimization problem. The proposed approach is tested on a case study of an HRES that includes a wind turbine, photovoltaic (PV) panels, a diesel generator, batteries, a fuel cell (FC), an electrolyzer and a hydrogen tank. Finally, a sensitivity analysis is performed to study the sensitivity of the developed model to different parameters. © 2014.
Wenxian Yang | Richard Court | Jiesheng Jiang
Wind turbines are being increasingly deployed in remote onshore and offshore areas due to the richer wind resource there and the advantages of mitigating land use and visual impact issues. However, difficult site access and the shortage of proper transportation and installation vehicles/vessels challenge the operation and maintenance of the giants erected at these remote sites. Combined with the continual pressure to lower the cost of wind energy, condition monitoring is regarded as one of the best solutions to these maintenance issues and is therefore attracting significant interest today. Much effort has been made in developing wind turbine condition monitoring systems and inventing dedicated condition monitoring technologies. However, the high cost and various capability limitations of existing solutions have delayed their extensive use. A cost-effective and reliable wind turbine condition monitoring technique is still sought today. The purpose of this paper is to develop such a technique by interpreting SCADA data from wind turbines, which are already routinely collected but have long been ignored for lack of appropriate data interpretation tools. The major contributions of this paper are to: (1) develop an effective method for processing raw SCADA data; (2) propose an alternative condition monitoring technique based on investigating the correlations among relevant SCADA data; and (3) realise quantitative assessment of the health condition of a turbine under varying operational conditions. Both laboratory and site verification tests have been conducted. The proposed technique not only shows a powerful capability for detecting incipient wind turbine blade and drive train faults, but also a remarkable ability to trace their further deterioration. © 2012 Elsevier Ltd.
Stefan Weitemeyer | David Kleinhans | Thomas Vogt | Carsten Agert
© 2014 Elsevier Ltd. Integrating a high share of electricity from non-dispatchable Renewable Energy Sources in a power supply system is a challenging task. One option considered in many studies dealing with prospective power systems is the installation of storage devices to balance the fluctuations in power production. However, it is not yet clear how soon storage devices will be needed and how the integration process depends on different storage parameters. Using long-term solar and wind energy power production data series, we present a modelling approach to investigate the influence of storage size and efficiency on the pathway towards a 100% RES scenario. Applying our approach to data for Germany, we found that up to 50% of the overall electricity demand can be met by an optimum combination of wind and solar resources without either curtailment or storage devices, provided the remaining energy is supplied by sufficiently flexible power plants. Our findings further show that the installation of small but highly efficient storage devices is already highly beneficial for RES integration, while seasonal storage devices are only needed once more than 80% of the electricity demand can be met by wind and solar energy. Our results imply that a compromise between the installation of additional generation capacities and storage capacities is required.
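Studies of this kind rest on an hourly storage balance: surplus renewable generation is charged into a finite store and shortfalls are met first from storage, then from flexible backup plants. A toy sketch (all numbers illustrative, not from the paper):

```python
import numpy as np

# Toy hourly storage balance of the kind such studies run: renewable
# generation minus demand is charged into (or discharged from) a finite
# store with a one-way charging efficiency; any residual shortfall must
# be met by flexible backup plants. All numbers are illustrative.
def simulate_storage(gen, demand, capacity, eta=0.9):
    soc, backup = 0.0, 0.0          # state of charge, backup energy used
    for g, d in zip(gen, demand):
        surplus = g - d
        if surplus >= 0:                        # charge (losses on the way in)
            soc = min(capacity, soc + eta * surplus)
        else:                                   # discharge what the store holds
            discharge = min(soc, -surplus)
            soc -= discharge
            backup += -surplus - discharge      # remainder from backup plants
    return backup

gen = np.array([3.0, 0.0, 2.0, 0.0])     # GWh per hour, assumed profile
demand = np.array([1.0, 1.0, 1.0, 1.0])
print(simulate_storage(gen, demand, capacity=1.0))
```

Sweeping `capacity` and `eta` over this kind of loop is essentially how the influence of storage size and efficiency on the integration pathway is mapped out.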
Souma Chowdhury | Jie Zhang | Achille Messac | Luciano Castillo
The development of large scale wind farms that can compete with conventional energy resources presents significant challenges to today's wind energy industry. A powerful solution to these daunting challenges can be offered by a synergistic consideration of the key design elements (turbine selection and placement) and the variations in the natural resource. This paper significantly advances the Unrestricted Wind Farm Layout Optimization (UWFLO) method, enabling it to simultaneously optimize the placement and the selection of turbines for commercial-scale wind farms that are subject to varying wind conditions. The advanced UWFLO method avoids the following limiting traditional assumptions: (i) array/grid-wise layout pattern, (ii) fixed wind condition, or unimodal and univariate distribution of wind conditions, and (iii) the specification of a fixed and uniform type of turbine to be installed in the farm. Novel modifications are made to the formulation of the inter-turbine wake interactions, which allow turbines with differing features and power characteristics to be considered in the UWFLO method. The annual energy production is estimated using the joint distribution of wind speed and direction. A recently developed Kernel Density Estimation-based model that can adequately represent multimodal wind data is employed to characterize the wind distribution. A response surface-based wind farm cost model is also developed and implemented to evaluate and favorably constrain the Cost of Energy of the designed farm. The selection of commercially available turbines introduces discrete variables into the optimization problem; this challenging problem is solved using an advanced mixed-discrete Particle Swarm Optimization algorithm. The effectiveness of this wind farm optimization methodology is illustrated by applying it to design a 25-turbine wind farm in North Dakota. A remarkable improvement of 6.4% in the farm capacity factor is accomplished when the farm layout and the turbine selection are simultaneously optimized. © 2012 Elsevier Ltd.
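The KDE step for characterising multimodal wind data can be sketched with a plain Gaussian kernel and the normal-reference bandwidth rule; the bimodal sample below is synthetic, not the paper's wind data:

```python
import numpy as np

# Sketch of kernel density estimation for a wind-speed sample, the kind
# of multimodal-capable model the paper uses to characterise the wind
# distribution. Plain Gaussian kernels with the normal-reference
# bandwidth rule; the sample is synthetic, not the paper's data.
def gaussian_kde(sample, grid):
    sample = np.asarray(sample, float)
    n = sample.size
    h = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)   # normal-reference rule
    diffs = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
# Two wind regimes -> bimodal speed distribution (synthetic)
speeds = np.concatenate([rng.normal(5, 1, 300), rng.normal(11, 1.5, 200)])
grid = np.linspace(0, 18, 500)
density = gaussian_kde(speeds, grid)
area = density.sum() * (grid[1] - grid[0])   # should be close to 1
print(round(area, 3))
```

Unlike a fitted Weibull, the KDE keeps both modes, which is exactly the property the paper needs for realistic annual energy estimates.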
Tao Ma | Hongxing Yang | Lin Lu | Jinqing Peng
The intermittent characteristic of a solar-alone or wind-alone power generation system prevents the standalone renewable energy system from being fully reliable without suitable energy storage capability. In this study, the most traditional and mature storage technology, pumped hydro storage (PHS), is introduced to support a standalone microgrid hybrid solar-wind system. This paper explores a new solution to the challenging task of energy storage. A mathematical model of the hybrid system is developed and the operating principle is introduced. The proposed system is applied in a case study to power a remote island in Hong Kong, and its technical feasibility is then examined. The hour-by-hour simulation results indicate that the intermittent nature of the renewables can be compensated for by introducing the PHS technology. Therefore, a reliable and environmentally friendly power supply can be provided. The results demonstrate that, technically, the PHS-based renewable energy system is an ideal solution for achieving 100% energy autonomy in remote communities. © 2014 Elsevier Ltd.
Salman Ahmad | Razman Mat Tahar
Currently, around 90% of Malaysia's electricity generation depends on fossil fuels. In the long run, this reliance is not a secure option. Renewable energy sources can contribute to a sustainable electricity generation system, but diversifying the fuel supply chain is a complex process. The aim of this paper is therefore twofold: first, the potential of various renewable resources is reviewed; second, an assessment model is developed for prioritizing renewable options. Four major resources are considered: hydropower, solar, wind and biomass (including biogas and municipal solid waste). Their electricity generation potential, along with likely shortcomings, is also discussed. Moreover, using a multi-perspective approach based on the analytic hierarchy process (AHP), an assessment model is developed that employs four main criteria (technical, economic, social and environmental) and twelve sub-criteria. The review found that renewable resources appear to have sufficient potential to develop a sustainable electricity system. Furthermore, the AHP model prioritizes these resources, revealing that solar is the most favourable, followed by biomass; hydropower and wind are ranked third and fourth, respectively. The model also shows that each resource is inclined towards a particular criterion: solar towards economic, biomass towards social, hydropower towards technical, and wind towards environmental aspects. Besides reporting an AHP model for the first time in the Malaysian context, the assessment performed in this study can help decision makers formulate long-term energy policy aiming for sustainability. © 2013 Elsevier Ltd.
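The AHP machinery behind such a ranking is a principal-eigenvector computation on a pairwise comparison matrix, plus a consistency check. A sketch with an invented 4×4 judgment matrix (ordered solar, biomass, hydropower, wind), not the paper's actual judgments:

```python
import numpy as np

# Core AHP computation: derive priority weights from a pairwise
# comparison matrix via its principal eigenvector (power iteration),
# then check Saaty's consistency ratio. The 4x4 matrix below is an
# invented illustration, not the judgments used in the paper.
A = np.array([
    [1.0, 2.0, 3.0, 4.0],   # solar vs (solar, biomass, hydro, wind)
    [1/2, 1.0, 2.0, 3.0],   # biomass
    [1/3, 1/2, 1.0, 2.0],   # hydropower
    [1/4, 1/3, 1/2, 1.0],   # wind
])

w = np.ones(len(A))
for _ in range(100):                  # power iteration to the Perron vector
    w = A @ w
    w /= w.sum()

lam = (A @ w / w).mean()              # principal eigenvalue estimate
ci = (lam - len(A)) / (len(A) - 1)    # consistency index
cr = ci / 0.90                        # random index RI = 0.90 for n = 4
print(np.round(w, 3), round(cr, 3))
```

A CR below 0.1 is the conventional threshold for accepting the judgments as consistent; here the invented matrix passes and yields the same ordering as the paper's result (solar first, wind last).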
The world has agreed to a set of shared targets on climate change. Those targets require deep (80 to 100 percent) decarbonization, relatively quickly.
What’s the best way to get fully decarbonized? In my previous post, I summarized a raging debate on that subject. Let’s quickly review.
We know that deep decarbonization is going to involve an enormous amount of electrification. As we push carbon out of the electricity sector, we pull other energy services like transportation and heating into it. (My slogan for this: electrify everything.) This means lots more demand for electricity, even as electricity decarbonizes.
The sources of carbon-free electricity with the most potential, sun and wind, are variable. They come and go on their own schedule. They are not “dispatchable,” i.e., grid operators can’t turn them on and off as needed. To balance out variations in sun and wind (both short-term and long-term), grid operators need dispatchable carbon-free resources.
Deep decarbonization of the electricity sector, then, is a dual challenge: rapidly ramping up the amount of variable renewable energy (VRE) on the system, while also ramping up carbon-free dispatchable resources that can balance out that VRE and ensure reliability.
Two potentially large sources of dispatchable carbon-free power are nuclear and fossil fuels with carbon capture and sequestration (CCS). Suffice it to say, a variety of people oppose one or both of those sources, for a variety of reasons.
So then the question becomes, can we balance out VRE in a deeply decarbonized grid without them? Do our other dispatchable balancing options add up to something sufficient?
That is the core of the dispute over 100 percent renewable energy: whether it is possible (or advisable) to decarbonize the grid without nuclear and CCS.
In this post I’m going to discuss three papers that examine the subject, try to draw a few tentative conclusions, and issue a plea for open minds and flexibility. It’ll be fun!
Two papers circulated widely among energy nerds in 2017 cast a skeptical eye on the goal of 100 percent renewables.
One was a literature review on the subject, self-published by the Energy Innovation Reform Project (EIRP), authored by Jesse Jenkins and Samuel Thernstrom. It looked at a range of studies on deep decarbonization in the electricity sector and tried to extract some lessons.
The other was a paper in the journal Renewable and Sustainable Energy Reviews that boasted “a comprehensive review of the feasibility of 100% renewable-electricity systems.” It was by B.P. Heard, B.W. Brook, T.M.L. Wigley, and C.J.A. Bradshaw, who, it should be noted, are advocates for nuclear power.
We’ll take them one at a time.
Most current models find that deep decarbonization is cheaper with dispatchable power plants
Jenkins and Thernstrom rounded up 30 studies on deep decarbonization, all published since 2014, when the most recent comprehensive report was released by the Intergovernmental Panel on Climate Change (IPCC). The studies focused on decarbonizing different areas of different sizes, from regional to global, and used different methods, so there is not an easy apples-to-apples comparison across them, but there were some common themes.
To cut to the chase: The models that optimize for the lowest-cost path to zero carbon electricity — and do not rule out nuclear and CCS a priori — generally find that it is cheaper to get there with than without them.
Today’s models, at least, appear to agree that “a diversified mix of low-CO2 generation resources” add up to a more cost-effective path to deep decarbonization than 100 percent renewables. This is particularly true above 60 or 80 percent decarbonization, when the costs of the renewables-only option rise sharply.
Again, it’s all about balancing out VRE. The easiest way to do that is with fast, flexible natural gas plants, but you can’t get past around 60 percent decarbonization with a large fleet of gas plants running. Getting to 80 percent or beyond means closing or idling lots of those plants. So you need other balancing options.
One is to expand the grid with new transmission lines, which connects VRE over a larger geographical area and reduces its variability. (The wind is always blowing somewhere.) Several deep decarbonization studies assume a continental high-voltage super-grid in the US, with all regions linked up. (Needless to say, such a thing does not exist and would be quite expensive.)
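The "wind is always blowing somewhere" effect is just statistics: averaging imperfectly correlated sites shrinks relative variability. A quick numerical illustration with synthetic, fully independent sites (real sites are correlated, so the actual benefit is smaller):

```python
import numpy as np

# Why transmission helps: averaging output across many sites reduces
# variability. For n independent sites of equal variance, the standard
# deviation of the mean falls as 1/sqrt(n). Synthetic data only.
rng = np.random.default_rng(42)
sites = rng.normal(loc=1.0, scale=0.5, size=(10, 100_000))  # 10 "wind sites"

one_site_std = sites[0].std()          # variability of a single site
pooled_std = sites.mean(axis=0).std()  # variability of the 10-site average
print(f"single site sd: {one_site_std:.3f}, 10-site average sd: {pooled_std:.3f}")
```

With ten independent sites the variability drops by about a factor of three; a continental super-grid is an attempt to buy some fraction of that smoothing in the real, correlated world.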
The other way to balance VRE is to maximize carbon-free dispatchable resources, which include dispatchable supply (power plants), dispatchable demand (“demand management,” which can shift energy demand to particular parts of the day or week), and energy storage, which acts as both supply (a source of energy) and demand (a way to absorb it).
Energy storage and demand management are both getting better at balancing out short-term (minute-by-minute, hourly, or daily) variations in VRE.
But there are also monthly, seasonal, and even decadal variations in weather. The system needs to be prepared to deal with worst case scenarios, long concurrent periods of high cloud cover and low wind. That adds up to a lot of backup.
We do not yet have energy storage at anything approaching that scale. Consider pumped hydro, currently the biggest and best-developed form of long-term energy storage. The EIRP paper notes that the top 10 pumped-hydro storage facilities in the US combined could “supply average US electricity needs for just 43 minutes.”
Currently, the only low-carbon sources capable of supplying anything like that scale are hydro, nuclear, and (potentially) CCS.
So if you take nuclear and CCS off the table, you’re cutting out a big chunk of dispatchable capacity. That means other dispatchable resources have to dramatically scale up to compensate — we’d need a lot of new transmission, a lot of new storage, a lot of demand management, and a lot of new hydro, biogas, geothermal, and whatever else we can think of.
Even with tons of new transmission, we'll still need a metric shit-ton of new storage. For comparison:
The US currently has energy storage capacity for around an hour of average electricity consumption. Only 15 weeks, six days, and 23 hours to go!
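That countdown is straightforward arithmetic, assuming a roughly 16-week seasonal storage target (implied by the gap quoted above) and the roughly one hour of storage in hand:

```python
# The "15 weeks, six days, and 23 hours" quip is just arithmetic:
# a ~16-week seasonal storage target (an assumption implied by the
# post's numbers) minus the ~1 hour of average-consumption storage
# the US currently has.
target_hours = 16 * 7 * 24   # 16 weeks of average consumption, in hours
have_hours = 1               # roughly what exists today, per the post

gap = target_hours - have_hours
weeks, rem = divmod(gap, 7 * 24)
days, hours = divmod(rem, 24)
print(f"{weeks} weeks, {days} days, {hours} hours to go")
```

The exact target is debatable; the point is the four-orders-of-magnitude gap between an hour and a season.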
Suffice to say, that would mean building a truly extraordinary amount of energy storage by mid-century.
It gets expensive, progressively more so as decarbonization reaches 80 percent and above. Trying to squeeze out that last bit of carbon without recourse to big dispatchable power plants is extremely challenging, at least for today’s models.
Thus, models that optimize for the lowest-cost pathway to deep decarbonization almost always include lots of dispatchable power plants, including nuclear and CCS.
“It is notable,” the review says, “that of the 30 papers surveyed here, the only deep decarbonization scenarios that do not include a significant contribution from nuclear, biomass, hydropower, and/or CCS exclude those resources from consideration a priori.”
To summarize: Most of today’s models place high value on large dispatchable power sources for deep decarbonization, and it’s difficult to muster enough large dispatchable power sources without nuclear and CCS.
100 percent renewables hasn’t been 100 percent proven feasible
The second review takes a somewhat narrower and more stringent approach. It examines 24 scenarios for 100 percent renewable energy with enough detail to be credible. It then judges them against four criteria for feasibility:
(1) consistency with mainstream energy-demand forecasts; (2) simulating supply to meet demand reliably at hourly, half-hourly, and five-minute timescales, with resilience to extreme climate events; (3) identifying necessary transmission and distribution requirements; and (4) maintaining the provision of essential ancillary services.
(“Ancillary services” are things like frequency regulation and voltage control, which keep the grid stable and have typically been supplied by fossil fuel power plants.)
Long story short, none of the studies passed these feasibility tests. The highest score was four points out of a possible seven.
The authors conclude that “in all individual cases and across the aggregated evidence, the case for feasibility [of 100 percent renewable energy] is inadequate for the formation of responsible policy directed at responding to climate change.”
That is the peer-reviewed version of a sick burn.
Note, though, that these are pretty tough criteria: Researchers must model a full electricity system that responds to both short-term and long-term weather variations, meets demand not appreciably different from mainstream projections, provides all needed services reliably, and uses only technologies already demonstrated at scale.
That’s not easy! It’s reasonable to ask whether we need that much confidence to begin planning for long-term decarbonization. If any new system must demonstrate in advance that it is fully prepared to substitute for today’s system, it’s going to be difficult to change the system at all.
(Renewables advocates might say that nuclear advocates have a vested interest in keeping feasibility criteria as strict and tied to current systems as possible.)
For more in this vein, see “A critical review of global decarbonization scenarios: what do they tell us about feasibility?” from 2014.
The question is how much our current decision-making should be constrained by what today’s models tell us is possible in the distant future.
Energy experts are more optimistic than their models
A third paper worth mentioning is 2017’s Renewables Global Futures Report (GFR) from global renewable-energy group REN21. In it, they interviewed “114 renowned energy experts from around the world, on the feasibility and challenges of achieving a 100% renewable energy future.”
There’s a ton of interesting stuff in the report, but one finding jumps out: 71 percent of the experts surveyed agree that 100 percent renewables is “reasonable and realistic.” Yet the models seem to agree that 100 percent renewables is unrealistic. What gives?
Models are only models
It pays to be careful with literature reviews. They are generally more reliable than single studies, but they are exercises in interpretation, colored by the assumptions of their authors. And there’s always a danger that they are simply compiling common biases and limitations in current models — reifying conventional wisdom.
There are plenty of criticisms of current models of how climate change and human politics and economics interact. Let’s touch on a few briefly, and then I’ll get to a few takeaways.
1) Cost-benefit analysis is incomplete.
Models that “minimize cost” rarely minimize all costs. They leave out many environmental impacts, along with more intangible social benefits like community control, security, or independence.
UC Berkeley’s Mark Delucchi, occasional co-author with Stanford’s Mark Jacobson of work on 100 percent WWS (wind, water, and sun — see more about that at the Solutions Project), says that the ideal analysis of deep decarbonization would be a full cost-benefit analysis that takes every effect into account: “the full range of climate impacts (not just CO2), air-quality benefits, water-quality benefits, habitat destruction, energy security — everything you can think of.” No one, he said, has done that for getting above, say, 90 percent WWS.
“My own view,” he told me, “which is informed but not demonstrated by my work on 100% WWS, is that the very large environmental benefits of WWS probably make it worth paying for close to — but not quite — a 100% WWS system. The ‘not quite’ is important, because it does look to me that balancing supply and demand when you get above 90-95% WWS (for the whole system) starts to get pretty expensive.”
In other words, a full cost-benefit analysis would likely show the benefits of renewables offsetting more of their higher costs than most models capture.
2) Most models are based on current markets, which will change.
“Our traditional energy models are pretty clearly biased against a 100% renewable outcome,” Noah Kaufman told me. He worked on the “US Midcentury Strategy for Deep Decarbonization,” which the US government submitted to the UNFCCC in November 2016 as a demonstration of its long-term commitment to the Paris climate process. “Models like to depict the system largely as it exists today, so of course they prefer baseload replacing baseload.”
(Kaufman cautions that while current models may underestimate renewables, he doesn’t believe we know that with enough certainty “to mandate those [100% renewable] scenarios.”)
Price analyses based on current wholesale energy markets will not tell us much about markets in 20 or 30 years. VRE is already screwing up wholesale markets, even at relatively low penetrations, because the marginal cost of another MWh of wind when the wind is blowing is $0, which undercuts all competitors.
Wholesale power markets will not survive in their current form. Markets will evolve to more accurately value a wider range of grid services — power, capacity, frequency response, rapid ramping, etc. — allowing VRE and its complements to creep into more and more market niches.
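The undercutting works through merit-order dispatch: the clearing price is set by the most expensive unit needed to meet demand, so zero-marginal-cost wind displaces that marginal unit and drags the price down. A minimal sketch, with made-up capacities and costs:

```python
# Toy merit-order market: dispatch offers cheapest-first; the clearing
# price is the marginal cost of the last unit needed to meet demand.
# All capacities and costs below are hypothetical, illustrative numbers.

def clearing_price(offers, demand_mw):
    """offers: list of (marginal_cost_per_mwh, capacity_mw) tuples."""
    remaining = demand_mw
    for cost, capacity in sorted(offers):
        remaining -= capacity
        if remaining <= 0:
            return cost
    raise ValueError("not enough capacity to meet demand")

demand = 900  # MW, hypothetical

fleet = [(20, 300), (35, 400), (60, 400)]   # e.g. nuclear, gas CC, gas peaker
print(clearing_price(fleet, demand))        # price set by the peaker: 60

windy_fleet = [(0, 500)] + fleet            # add 500 MW of $0-cost wind
print(clearing_price(windy_fleet, demand))  # peaker squeezed out: 35
```

In this toy market, adding 500 MW of free wind drops the clearing price from the peaker’s $60/MWh to the mid-cost plant’s $35/MWh, cutting revenue for every generator on the system.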
Financing will evolve as well. As it gets cheaper, VRE and storage start looking more like infrastructure than typical power plant investments. Almost all the costs are upfront, in the financing, planning, and building. After that, “fuel” is free and maintenance costs are low. It pays off over time and then just keeps paying off. Financing mechanisms will adapt to reflect that.
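That upfront-heavy cost structure is why financing terms matter so much. A small sketch of a levelized-cost calculation via the capital recovery factor, for a hypothetical wind plant (all numbers assumed for illustration), shows how sensitive the result is to the discount rate alone:

```python
# Sketch of why financing dominates VRE economics: nearly all cost is
# upfront capital, so levelized cost swings with the discount rate.
# Plant parameters below are illustrative assumptions, not real data.

def lcoe(capex_per_kw, fixed_om_per_kw_yr, capacity_factor, rate, years):
    """Levelized cost of energy ($/MWh) via the capital recovery factor."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_cost = capex_per_kw * crf + fixed_om_per_kw_yr   # $/kW-yr
    annual_mwh_per_kw = capacity_factor * 8760 / 1000       # MWh per kW-yr
    return annual_cost / annual_mwh_per_kw

# Hypothetical wind plant: $1,500/kW capex, $40/kW-yr O&M, 40% CF, 25 years
for rate in (0.03, 0.08):
    print(f"discount rate {rate:.0%}: "
          f"LCOE = ${lcoe(1500, 40, 0.40, rate, 25):.0f}/MWh")
```

In this sketch, cutting the discount rate from 8 percent to 3 percent lowers the levelized cost by roughly 30 percent, with no change to the hardware at all: infrastructure-style financing is itself a cost reduction.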
3) Most models do not, and cannot, model emerging solutions or current costs.
Most energy models today do not account for the full complement of existing strategies to manage and expand VRE — all the different varieties of storage, the growing list of demand-management tools, new business models and regulations — so they neither are, nor claim to be, definitive.
“I don’t want to overstate or improperly extract conclusions from my work,” cautions NREL’s Bethany Frew, who co-authored one of the key studies in the EIRP review. “I didn’t look at an exhaustive set of resources.”
Models today cannot capture the effects of technologies and techniques that have not yet been developed. But this stuff is the subject of intense research, experimentation, and innovation right now.
It is viewed as irresponsible to include speculative new developments in models, but at the same time, it’s a safe bet that the energy world will see dramatic changes in the next few decades. Far more balancing options will be available to future modelers.
In a similar vein, as energy modeler Christopher Clack (formerly of NOAA) told me, it can take two or three years to do a rigorous bit of modeling. And that begins with cost estimates taken from peer-reviewed literature, which themselves took years to publish.
The result is that models almost inevitably use outdated cost estimates, and when costs are changing rapidly, as they are today, that matters.
Speaking of which…
4) Models have always underestimated distributed energy technology.
As I described in detail in this post, energy models have consistently and woefully underestimated the falling costs and rapid growth of renewable energy.
The professional energy community used to be quite convinced that wind and solar could play no serious role in the power system because of their variability. Then, for a long time, conventional wisdom was that they could provide no more than 20 percent of power before the grid started falling apart.
That number has kept creeping up. Now CW has it around 60 percent. Which direction do you suppose it will go in the next few decades?
It’s a similar story with batteries and EVs. They keep outpacing forecasts, getting cheaper and better, finding new applications. Is there any reason to think that won’t continue?
Which brings us to…
5) Pretending we can predict the far future is silly.
Predicting the near future is difficult. Predicting the distant future is impossible. Nothing about fancy modeling makes it any less impossible.
Modelers will be the first to tell you this. They are not in the business of prediction; they aren’t psychics. All they do is construct elaborate if-then statements. If natural gas prices do this, solar and wind prices do that, demand does this, storage does that, and everything else more or less stays the same … then this will happen. They are a way of examining the consequences of a set of assumptions.
Are the assumptions correct? Will all those variables actually unfold that way in the next 20, 30, 40 years? Ask any responsible modeler and they will tell you: “Eff if I know.”
Long-term energy modeling was more tractable when the energy world was mostly composed of very large technologies and projects, with a small set of accredited builders and slow innovation cycles. But as energy and its associated technologies and business models have gotten more and more distributed, innovation has become all the more difficult to even track, much less predict.
Because distributed energy technologies are smaller than big power plants, they iterate faster. They are more prone to complex interactions and emergent effects. Development is distributed as well, across hundreds of companies and research labs.
Energy is going to bend, twist, and accelerate in unpredictable ways even in the next few years, much less the next few decades. We really have no friggin’ idea what’s going to happen.
The lessons to take from all this
Okay, we’ve looked at some of the literature on 100 percent renewables, which is generally pretty skeptical. And we’ve covered some reasons to take the results of current modeling with a grain of salt. What should we take away from all this? Here are a few tentative conclusions.
1) Take variability seriously.
One reason everyone’s so giddy about renewable energy is that it’s been pretty easy to integrate it into grids so far — much easier than naysayers predicted.
But one thing models and modelers agree on is that variability is a serious challenge, especially at high VRE penetrations. As VRE increases, it will begin to run into technical and economic problems. California is already grappling with some of these issues.
Getting deep decarbonization right means thinking, planning, and innovating toward a rich ecosystem of dispatchable resources that can balance VRE at high penetrations. That needs to become as much a priority as VRE deployment itself.
2) Full steam ahead on renewable energy.
We have a solid understanding of how to push VRE up to around 60 percent of grid power. Right now, wind and solar combined generate just over 5 percent of US electricity. (Nuclear generates 20 percent.)
The fight to get 5 percent up to 60 is going to be epic. Political and social barriers will do more to slow that growth than any technical limitation, especially in the short- to mid-term.
This is likely why the energy experts interviewed by REN21, though they believe 100 percent renewables is “reasonable and realistic,” don’t actually expect it to happen by mid-century.
It will be an immense struggle just to deploy the amount of VRE we already know is possible. If we put our shoulder to that wheel for 10 years or so, then we can come up for air, reassess, and recalibrate. The landscape of costs and choices will look very different then. We’ll have a better sense of what’s possible and what’s lacking.
Until then, none of these potential future limitations are any reason to let up on the push for VRE. (Though there should also be a push for storage and other carbon-free balancing options.)
3) Beware natural gas lock-in.
The easy, default path for the next several years will be to continue to lean on natural gas to drive down emissions and balance VRE. And sure enough, there’s a ton of natural gas “in the queue.”
But leaning too hard on natural gas will leave us with a ton of fossil fuel capacity that we end up having to shut down (or leave mostly idle) before the end of its useful life. That will be an economically unfortunate and politically difficult situation.
We need to start thinking about alternatives to natural gas, today.
4) Keep nuclear power plants open as long as possible.
Clack told me something intriguing. He said that there is enough nuclear capacity in the US today to serve as the necessary dispatchable generation in an 80 percent decarbonized grid. We wouldn’t need any big new nuclear or CCS power plants.
It would just mean a) changing market and regulatory rules to make nuclear more flexible (it largely has the technical capacity), and b) keeping the plants open forever.
Obviously those plants are not going to stay open forever, and the ones that are genuinely unsafe should be shut down. And Clack’s models are only models too, not gospel.
But what’s clear is that, from a decarbonization perspective, allowing a nuclear power plant to close (before, say, literally any coal plant) is a self-inflicted wound. It makes the challenges described above all that much more difficult. Every MW of dispatchable, carbon-free power capacity that is operating safely should be zealously guarded.
5) Do relentless RD&D on carbon-free dispatchable resources, including nuclear.
We know we will need a lot of dispatchable carbon-free resources to balance out a large share of VRE.
Storage and demand management can play that role, and in any scenario, we will need lots of both, so they should be researched, developed, and deployed as quickly as possible.
But large-scale, carbon-free dispatchable generation will help as well. That can be hydro, wave, tidal, geothermal, gas from waste, renewable gas, or biomass. It can also be nuclear or CCS.
I personally think fossil fuel with CCS will never pass any reasonable cost-benefit analysis. It’s an environmental nightmare in every way other than carbon emissions, to say nothing of its wretched economics and dodgy politics.
But we’re going to need CCS regardless, so we might as well figure it out.
New nuclear plants have proven uneconomic just about everywhere they’ve been attempted lately (except, oddly, South Korea), and there is no obvious reason to favor them in their market battle with renewables.
But it is certainly worth researching new nuclear generation technologies — the various smaller, more efficient, more meltdown-proof technologies that seem perpetually on the horizon. If they can make good on their promise, with reasonable economics, it would be a blessing. (See Brad Plumer’s piece on radical nuclear innovation.)
Basically, research everything. Test, experiment, deploy, refine.
6) Stay woke.
Above all, the haziness of the long-term view argues for humility on all sides. There’s much we do not yet know and cannot possibly anticipate, so it’s probably best for everyone to keep an open mind, support a range of bet-hedging experiments and initiatives, and maintain a healthy allergy to dogma.
We’ve barely begun this journey. We don’t know what the final few steps will look like, but we know what direction to travel, so we might as well keep moving.