I am preparing a study for the NCEA on the use of levelized costs as a metric for assessing the economic competitiveness of alternative generation technologies. When deployed outside their original framework, levelized costs are a terrible measure of generating costs, one that is deliberately misused by academics and policymakers. I feel a small amount of guilt about this because levelized costs were promoted nearly five decades ago, by people with whom I worked, as a way of simplifying the results from system planning models.
The levelized cost was the logical extension of applying cost-benefit analysis to a very specific investment decision: how to supply baseload power for a vertically integrated electricity utility facing rapidly growing demand. At that time the core choice was between building new nuclear or coal plants. Nuclear power plants cost a lot and required 8 to 10 years to build.[1] Coal plants were much less expensive and could be built more quickly – typically 5-6 years.[2]
For a simple like-for-like comparison of this kind, levelized costs could be used to examine the trade-off between capital and operating (including fuel) costs on a wide range of assumptions about the cost of capital and future fuel prices. However, there was no reason then, or at any point since, to use levelized costs as a basis for comparing projects that are designed to serve entirely different purposes – for example, baseload vs peaking capacity – or that play different roles in an electricity system – dispatchable vs non-dispatchable plants.
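For readers who have not seen the calculation written out, a minimal sketch of the standard levelized-cost arithmetic (discounted lifetime costs divided by discounted lifetime output) follows. All of the input numbers are illustrative assumptions, not figures from the original planning studies.

```python
def lcoe(capex_per_kw, fixed_om_per_kw_yr, fuel_per_mwh, load_factor,
         life_years, discount_rate):
    """Levelized cost in $/MWh: discounted lifetime costs / discounted lifetime output."""
    mwh_per_kw_yr = 8.76 * load_factor                      # 8,760 hours per year, per kW
    disc = [(1 + discount_rate) ** -t for t in range(1, life_years + 1)]
    disc_costs = capex_per_kw + sum(
        d * (fixed_om_per_kw_yr + fuel_per_mwh * mwh_per_kw_yr) for d in disc)
    disc_output = sum(d * mwh_per_kw_yr for d in disc)
    return disc_costs / disc_output

# A capital-heavy plant vs a fuel-heavy plant serving the same baseload duty.
print(round(lcoe(6000, 120, 10, 0.90, 40, 0.05), 1))   # high capex, cheap fuel
print(round(lcoe(1000, 30, 45, 0.90, 30, 0.05), 1))    # low capex, dear fuel
```

The point of the original exercise was that two plants serving the same baseload duty could be ranked on this single number once the discount rate and fuel price assumptions were fixed; nothing in the calculation deals with plants that serve different duties.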
Current uses of levelized costs bring to mind the Shakespearean phrase “comparisons are odorous” (from Much Ado About Nothing).[3] Their only purpose is advocacy and PR in the hope of masking the smell with greenery or other flummery.
The modern equivalent to the original comparison between nuclear power and coal generation would be nuclear vs combined cycle gas plants. Even that is not truly like for like as CCGTs provide far greater operational flexibility than nuclear plants. They can be ramped up quickly and run quite efficiently at low loads.
One problem with the simple comparison is that it masks a critical question: how do we ensure that system demand is always met? Baseload plants are conventionally assumed to operate with load or capacity factors of 85% to 90%. Nuclear plants in the US have an average load factor of 93%. That leaves an average of about 25 days per year for maintenance and refuelling outages. The assumption is that there is enough spare capacity in any large electricity system to cover those outages.
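The arithmetic behind that figure is trivial but worth writing down; the sketch below simply converts the 93% load factor quoted above into days per year off line.

```python
# A 93% load factor implies a nuclear unit is off line roughly 7% of the year.
load_factor = 0.93
days_offline = (1 - load_factor) * 365
print(round(days_offline, 1))   # roughly 25 days per year for maintenance and refuelling
```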
So let us pose an alternative comparison. How can we power a 1 GW data centre that needs to operate, perhaps with minor variations in load, for 24 hours per day throughout the year with 99.99% reliability? The comparison is prompted by Microsoft’s deal with Constellation Energy to upgrade and reopen Unit 1 at Three Mile Island. Even a nuclear plant requires backup when supplying an always-on data centre.
The Microsoft-Constellation deal is a special case because it involves upgrading and reopening an existing plant, so the lead time should be much shorter than for an equivalent new conventional nuclear plant. An alternative approach being promoted by nuclear advocates is to build small modular reactors (SMRs) such as the Natrium sodium-cooled design promoted by TerraPower or the NuScale Power 77 MW PWR design. Many of these are designed to have long refuelling periods – up to 36 months. Since the average refuelling outage for US plants is estimated as 32 days, extending the time between such outages reduces the need for spare capacity.
One pure strategy for ensuring that 1 GW of power is always available is to build 21 x 50 MW SMRs with a refuelling period of 24 or more months, so that full output remains available while one unit is being refuelled. However, most SMR designs have a higher unit capacity. For example, the NuScale design is configured in blocks of 12, so the guaranteed output of a block is only about 850 MW. Many other SMR designs use larger units – 300 MW for the GE-Hitachi design and 470 MW for the Rolls-Royce design. In such cases, the spare capacity required to cover refuelling outages is much larger.
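A short sketch of the fleet-sizing logic in the last two paragraphs may help. The 32-day outage length and the unit sizes are those quoted above; the refuelling cycle lengths are illustrative.

```python
import math

# Build N + 1 units so that 1 GW is still available while one unit is refuelling,
# then see how the refuelling interval changes the share of time each unit is down.
target_mw = 1_000

for unit_mw in (50, 77, 300, 470):
    n_needed = math.ceil(target_mw / unit_mw)   # units needed to deliver full output
    fleet = n_needed + 1                        # one extra unit covers a refuelling outage
    spare = fleet * unit_mw - target_mw
    print(f"{unit_mw} MW units: {fleet} units installed, {spare} MW of spare capacity")

for cycle_months in (18, 24, 36):
    frac_offline = 32 / (cycle_months * 30.4)   # one 32-day outage per refuelling cycle
    print(f"{cycle_months}-month cycle: each unit off line {frac_offline:.1%} of the time")
```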
There are two conclusions to be drawn. First, the smaller SMR designs favoured by China and Russia offer greater flexibility and a better chance of genuine modularity. However, there is a very strong Not Invented Here syndrome in the nuclear industry. The UK has chosen to proceed with 4 designs that are scaled-down versions of conventional plants. That means there is a high chance that both costs and construction times will exceed, perhaps greatly exceed, expectations.
Second, building battery storage to cover, say, a 35-day refuelling period for a 300 MW reactor would be prohibitively expensive. Even using batteries designed for 12-hour storage, the capital cost would exceed $60 billion. The implication is that, no matter what governments claim, some level of gas turbine backup is simply unavoidable. It might not be used if wind and solar conditions were favourable, but in all reliability calculations wind and solar are given very low weights, for good reason.
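The order of magnitude of the battery figure is easy to check. In the sketch below the $60 billion total is the figure quoted above, while the installed cost of roughly $240 per kWh is my assumption, chosen only to show how the total arises.

```python
# Energy needed to replace a 300 MW reactor through a 35-day refuelling outage,
# costed at an assumed installed battery price.
power_mw = 300
outage_days = 35
energy_mwh = power_mw * outage_days * 24            # about 252,000 MWh
cost_per_kwh = 240                                  # assumed installed cost, $/kWh
capital_cost = energy_mwh * 1_000 * cost_per_kwh
print(f"{energy_mwh:,.0f} MWh of storage, roughly ${capital_cost / 1e9:.0f} billion")
```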
To return to our benchmark of a 1 GW data centre which requires 99.99% reliability, there are three realistic low carbon options (a rough tally of the capital costs follows the list):
A. A conventional nuclear reactor with a capacity of at least 1 GW that can be built on time and on budget. Today that means a Korean APR-1400 with a cost at 2024 prices of about $9 billion per unit. To cover refuelling, it would be necessary to rely upon 1 GW of gas turbine backup at a cost of about $850 million.
B. 4 x 300 MW modular reactors (no UK government is likely to authorize the use of either Chinese or Russian designs), plus 300 MW of gas turbines to cover refuelling. The total capital cost would be about $10.6 billion.
C. Some optimised combination of offshore wind, solar and battery storage. To meet the reliability standard this would require at least 1.8 GW of offshore wind, at least 1.5 GW of solar, and 1 GW of 8-hour battery storage. The total capital cost would be about $13.3 billion. Replacing the battery storage by gas turbines would reduce this total to $10.9 billion.
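Pulling the quoted figures together, the comparison looks like this; all numbers are from the text, and since the breakdowns for B and C are not itemised, only the quoted totals are reproduced.

```python
# Capital costs quoted for the options above, in $ billion.
options = {
    "A: APR-1400 plus 1 GW of gas turbine backup": 9.0 + 0.85,
    "B: 4 x 300 MW SMRs plus 300 MW of gas turbines": 10.6,
    "C: offshore wind, solar and 8-hour batteries": 13.3,
    "C (variant): wind and solar with gas turbines instead of batteries": 10.9,
}
for name, capex in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: about ${capex:.2f} billion")
```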
Of course, if no-one were worried about low carbon generation, the no-brainer option would be to build 4 x 300 MW CCGTs plus 300 MW of gas turbines at a capital cost of $1.7 billion. While fuel costs would be higher than for the nuclear or renewable options, saving $8 to $9 billion in capital costs buys a lot of gas. At a US price of $5 per million Btu – double the current spot market price – the cost of gas to produce 1 GW of electricity throughout the year would be about $265 million. That is below the annual cost of financing $8 billion of additional capital expenditure at 4%.
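A rough reconstruction of that gas-cost arithmetic: the $5 per million Btu price, the roughly $265 million fuel bill and the 4% financing rate are from the text, while the CCGT efficiency of about 57% is my assumption, chosen to be consistent with those figures.

```python
# Fuel bill for running 1 GW of CCGT capacity around the clock for a year,
# compared with the financing cost of the avoided capital expenditure.
output_mwh = 1_000 * 8_760                  # 1 GW for 8,760 hours, in MWh
efficiency = 0.57                           # assumed CCGT efficiency
mmbtu_per_mwh = 3.412 / efficiency          # 3.412 MMBtu of heat per MWh of electricity
fuel_cost = output_mwh * mmbtu_per_mwh * 5  # at $5 per million Btu
financing = 8e9 * 0.04                      # 4% per year on $8 billion of avoided capex
print(f"fuel: ${fuel_cost / 1e6:.0f} million per year")
print(f"financing saved: ${financing / 1e6:.0f} million per year")
```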
There is a simple but very important lesson from this analysis. Meeting a very high reliability standard for electricity supply without using natural gas as a fuel – or coal in the past – is very expensive. Nuclear power is the easiest and, perhaps, the cheapest low carbon option but even that relies on some kind of gas backup to cover refuelling and other outages. The combination of wind, solar and batteries is even more expensive – and pretty hard to get right.
[1] The French managed to build the first two 910 MW units of the Gravelines nuclear plant in 6 years.
[2] The recently closed Ratcliffe power plant with a capacity of 2.1 GW was built in 5 years.
[3] Since the phrase was given to Dogberry, whose lines contain various malapropisms, Shakespeare clearly intended this as a play on the better-known “comparisons are odious”.
Thank you for an illuminating article Gordon. I confess puzzlement over the use of 1.8 GW of wind and 1.5 GW of solar generation, with 1 GW of battery storage, to power a 1 GW data centre. I can't readily put numbers to it, but having seen the variations in wind and solar generation on the UK grid, I would have thought a much higher peak generating capacity would be needed to cope with Dunkelflaute with the required supply reliability. I will be happy to be corrected.