LiberalArtsAndScienceAcademy BhFa Aff 07 - Glenbrooks Round 4

The U.S. fusion energy program is hindered by disputes over intellectual property rights, delaying investment and development of pilot reactors. The Department of Energy's control over patents is a significant concern for fusion startups, as it could deter investors and slow technological progress. Achieving commercial fusion power is critical for addressing future energy demands and mitigating existential threats from climate change.

Uploaded by

dev31415

1AC---Fusion---V1---LASA BF

1AC
1AC---Fusion---ADV
Advantage one is fusion.
Current U.S. nuclear patent disputes are the foundation of stagnating fusion
investment---only providing the guarantee of protection fills the void.
Behr ‘24 [Peter; senior energy reporter for Energywire covering power grid reliability, climate policy
and cybersecurity – Nieman Fellowship – Harvard university – BA in English from Colgate University; 2-
27-2024; "Intellectual property fights hobble DOE fusion program"; E&E News by POLITICO;
https://www.eenews.net/articles/intellectual-property-fights-hobble-doe-fusion-program/; ‫]بالل‬

The moonshot U.S. program to accelerate fusion energy has struggled to get started, held up by
disputes over the federal government’s control of scientific discoveries by startup companies, according to
several people familiar with negotiations.

Intellectual property rights have been at the center of months of talks between the Department of Energy and eight fusion technology companies vying for multimillion-dollar federal grants. DOE selected the participants in its “milestone” program last May on the condition that a company meet engineering and scientific benchmarks on the way to designing a pilot fusion reactor.

But after nine months, no technology investment agreements have been announced, and the Biden administration is approaching two years since it rolled out its vision for developing fusion power.

“We’re still in active negotiations with the DOE, and it isn’t all ironed out yet. I can’t comment on specifics, but negotiations are progressing,” said Andy Freeberg, head of communications for Seattle-based Zap Energy, one of the milestone program participants. Several other program participants declined to comment.

Backed by rare bipartisan support in Congress, the administration aims to accelerate progress toward building one or more pilot reactors in the 2030s. The goal is to show that the technical challenges of delivering fusion power at a commercial scale can be overcome. Fusion mimics nuclear reactions inside stars.
Andrew Holland, chief executive of the Fusion Industry Association, said he and the participating fusion companies are confident the program
will move ahead.

Sources close to the program said a delay of months isn’t significant since commercial fusion power is likely decades away. But they said it’s notable that DOE and fusion-tech companies are struggling to find common ground on the government’s right to own or share rights to fusion breakthroughs, and that could affect future development.

A DOE spokesperson declined to discuss the agency’s position on federal rights to fusion discoveries. But leaders in the nascent fusion industry say the ability to own intellectual property and benefit from any commercial success is critical.

“That’s the bread and butter of what they have that appeals to their investors,” said Stephen Dean, president of
Fusion Power Associates, a nonprofit information resource about the technology.

“For any technology startup, whether it’s in fusion or otherwise, the preservation and ongoing ability to commercialize intellectual property is a crucial part of the value proposition for investors,” said Chris Kelsall,
a former fusion company chief executive. He said his comments in an interview with E&E News are solely his own and not expressed on behalf
of or in relation to any former employer.

“To continue providing regular rounds of capital investment, investors want to know that a technology company’s secret sauce — the IP — remains intact and that the company’s ability to derive future revenue from its core IP is not unduly compromised,” Kelsall said.

The DOE [Department of Energy] will assert control over patents, curbing development
of essential reactor technology.
Behr ‘24 [Peter; Senior energy reporter for Energywire covering clean energy, recipient of the Nieman
Fellowship from Harvard university; 2/27/2024; "Intellectual property fights hobble DOE fusion
program"; E&E News by POLITICO; https://www.eenews.net/articles/intellectual-property-fights-hobble-
doe-fusion-program/]//LASA-AB

Going from demonstration reactors to utility-scale fusion plants that can produce reliable and affordable electricity requires more scientific breakthroughs and engineering solutions on an Olympian scale. Now, fusion startups are heading into a vital phase of fundraising as they stretch to produce proof-of-concept working models of their technologies before the end of this decade. The cost of a utility-scale fusion pilot plant could exceed $5 billion, industry leaders say.

That underscores the importance of getting the IP agreement right for both sides, experts said.
Longview Fusion, for example, is led by Edward Moses and Valerie Roberts, both former leaders of the Lawrence Livermore National Laboratory’s ignition facility. Equipped with a bank of powerful lasers designed to test nuclear weapons, the lab made headlines in December by producing the first fusion reaction that generated more energy than was used to trigger it.

The reaction was triggered when laser beams struck a tiny target containing the hydrogen isotope fuel in a test that took an entire day to
accomplish. Moses and Roberts explained that to make this process commercial, their reactor will have to repeat the ignition process
continuously 15 times a second.

Key to that success is the scientific and engineering knowledge and patents they possess that will help them simplify and automate the process, dramatically lowering costs, they said. Their goal is electricity from fusion in 10 years at $50 per megawatt-hour.

“We have our own IP, which is independent of the government. We have a strong technology moat around our process,” said Moses, whose
company isn’t involved in the DOE milestone program.

The companies in the milestone program will have to share some of what they discover.

“DOE probably tells them this will be confidential,” said Dean of the nonprofit Fusion Power Associates. “But DOE can look at it in depth and see what they’re thinking about and what they want to hold secret.”

“If government funding programs insist on control or ownership of IP, they may end up slowing the pace of technology development,” said Kelsall, the fusion entrepreneur.

“Fusion has to be relevant and arrive on time to impact the energy transition,” he added. “If there isn’t a first-of-a-kind pilot up and running by at least the mid- to late 2030s, there’s a risk it will miss the boat.”
That prevents licensing agreements necessary for innovation.
McEachran ’19 [Rich McEachran, MA in Postcolonial Culture & Global Policy, is a journalist;
3/20/2019; Raconteur; “Why companies must go about sharing IP with caution,”
https://www.raconteur.net/risk-regulation/sharing-ip-caution]//LASA-AB

The energy sector has been “far more secretive and disinclined to engage in an ethos of sharing” than other sectors, says Colin Hulme, partner and IP specialist at Scottish law firm Burness Paull, which provides a range of services to exploration and production companies. However, given the rapid technological advances being made, he adds, the sector is under pressure to adapt and innovate to survive.

“To deliver the efficiencies that are necessary for survival, better planning is critical. Sharing IP and information can lead to the power to predict and insights that can turn unviable plays into cash returns,” says Mr Hulme.

Choosing to open source IP is about profit as much as progress

There is another reason why companies in the energy sector might want to open source IP, to build towards a greener and sustainable future.
It’s for this exact reason that Tesla open sourced all its patents in 2014. Entrepreneur Elon Musk believed it would help grow the electric vehicle
industry more rapidly and establish his Tesla brand as the market leader.

While most key players in the energy sector are unlikely to take Tesla’s total open book approach, says Mr Hulme, they are likely to take positive steps to increase collaboration and innovation, while being careful to maintain clear ownership of the IP.

On the face of it, sharing IP may sound altruistic and straightforward. In reality, it’s profit-driven and is no easy process.

No matter their size and the industry they’re operating in, companies need to protect themselves so
they can maintain a competitive advantage.
In what was described as an all-time high, the European Patent Office received nearly 166,000 patent applications in 2017. Closer to home,
there were more than 22,000 patent applications in the UK in the same year and just over 6,300 were granted, according to analysis conducted
by the UK’s Intellectual Property Office.

<<figure omitted>>

Sharing IP is not the opposite of patenting it

Without patents, companies run the risk of their technology being exploited. Applying for and then securing
patents puts them in a stronger position when it eventually comes to sharing IP, says Peter Arrowsmith, patent attorney at one
of the UK’s leading IP law firms Gill Jennings and Every.

“Sharing might seem like the antithesis of patenting IP, since the latter is about restricting the rights of other companies and making sure technology remains proprietary. But forward-thinking companies also need to consider how their technology can be more readily adopted,” says Mr Arrowsmith.
For Rockley Photonics, a company at the forefront of silicon photonics which manufactures chipsets for datacentres and sensors, sharing IP is
critical for successful manufacturing. While the company has an intimate knowledge of photonics manufacturing, it doesn’t own any
manufacturing facilities and instead contracts out the production, known as a fabless operation. This means the company has to share its IP in
full with the partnering foundries.

One of the more effective ways to drive innovation through collaboration is by licensing IP assets. “Startups, in
particular, underestimate the value of licensing. It doesn’t get talked about enough,” says Merlie Calvert, a former lawyer and founder of
Farillio, a legal technology platform which aims to simplify the law for entrepreneurs by providing them with all the legal documents and
guidance they need to grow their business.
“Experts will tell you to protect your IP, and you should, but it’s only a step towards making real value out of creativity. When you license creative ideas, products and technology, you turn them into real assets, market testers and door-openers for bigger sales, orders and partnerships,” says Ms Calvert.

It’ll deter crucial investors from entering the industry, driving bankruptcies across the
sector.
Powers et. al ’24 [Mary B. Powers, MA in journalism, has been reporting on engineering for more
than 30 years; Debra K. Rubin is ENR Editor-at-Large for Energy; Peter Reina is a Correspondent from
London, U.K.; David Godkin is a contributor; 6/27/2024; ENR; “Nuclear Fusion Pushes to Reach
Commercial Power Plant Stage,” https://www.enr.com/articles/58879-nuclear-fusion-pushes-to-reach-
commercial-power-plant-stage]//LASA-AB
Intellectual Property

The issue of government access to startup firms’ patents, trade secrets and other intellectual property has been a controversial part of Energy Dept. funding deal talks with developers, executives told ENR on background. They considered government demand for such information in certain circumstances too onerous, since it is core to a firm’s market valuation and investor support.

One battery technology firm the agency supported financially declared bankruptcy in 2021 and was bought by a Chinese firm that took its intellectual property to China, said a Congressional Research Service report. Terms of Milestone funding program contracts were not disclosed, but developer unease remains, one executive told ENR. Rules also mandate a ban on non-U.S.-based investors in participating companies.

The patent issue becomes more pronounced as more private investors, particularly big tech firms, back fusion
startups to secure a directed power source as more and larger data centers and added artificial intelligence use escalate energy
demand.

Fusion’s possible---patent development checks energy depletion, solves warming, AND gets us off the rock---solving all existential threats.
Turrell ‘21 [Arthur; PhD in plasma physics from Imperial College London, for which he won the Atomic
Weapons Establishment thesis prize, is Deputy Director for Research and Capability at the ONS Data
Science Campus, as well as a visitor to the Bank of England, the plasma physics group at Imperial College
London and the Data Analytics for Finance and Macro Research Centre at King's College London; 08-05-
2021; “Epilogue: Can We Afford Not to Do Fusion?”; The Star Builders: Nuclear Fusion and the Race to
Power the Planet; Orion; https://pubs.aip.org/physicstoday/article/75/10/11/2845209/Fusion-power-s-
future; ‫]بالل‬

The aftermath of these extinction-scale events would be enormous climate change. Dust and earth from asteroid impacts or volcanic eruptions could block out part of the Sun’s light, producing a cooling effect and making it harder to grow food or use solar energy.

What can we do to prepare for these rare but world-changing disasters? Having a source of energy that can keep going despite sweeping and adverse changes in climate seems like a good precaution. As we know, fossil fuels will run out before too long. And renewables that rely on large areas of land are susceptible to environmental changes. Fission could be one solution. Star power is another: the fuels are (relatively) common—deuterium is found in all the world’s oceans while lithium is found on all the world’s inhabited continents—and in any case not that much of either is needed.

This may all sound scary. It is. But it’s also prudent to think about in the long run. We know that these events can happen. On long enough timescales, they’re [they are] almost certain to. Personally, I’d like humanity to thrive far into the future. The choices we make today have enormous consequences for future generations. Star builders say that using a fraction of world research budgets to perfect fusion energy is a small price to pay for a disaster-resilient power source.
Of course, there are reasons beyond just saving our skin to want to see fusion achieved.

The pursuit of fusion has led to scientific discoveries that are among the most extreme and surprising of any field. Think for a second about plasmas. Understanding them is key for fusion, yes, but every plasma discovery also gives us a better understanding of 99 percent of the visible universe. Plasmas are one of the most dramatic examples of how “more is different”: while we might understand the behaviors of the individual components—nuclei and electrons—something changes when they’re combined in large numbers and complexity emerges out of simplicity. 4 Understanding these rich phenomena can be its own reward, as with so many other topics in science. Even if the study of plasmas wasn’t worth it for the joy of discovery alone, their emergent complexity holds practical lessons for other subjects, like economics, where people’s interactions are also different from the sum of their parts. 5

A growing understanding of plasmas has led to more practical applications too, like cleaning surgical equipment or, quite literally, growing
diamonds. 6 Also, using lasers and plasmas together has resulted in new and better ways to fight cancer: lasers can be used to accelerate
protons in a plasma to very high energies, and those protons can more accurately target cancerous cells than, for example, X-rays. 7

Machines like NIF are doing incredible science in addition to the experiments on inertial fusion energy and
stockpile stewardship. NIF has been used to recreate the conditions in the core of stars that have ten times the mass of the Sun, leading to
better estimates of their rate of fusion reactions. 8 NIF experiments have taken us elsewhere in space too. Shortly before his death at the age of
ninety-five, Livermore founder Edward Teller told scientists there that what he wanted for his one hundredth birthday was to get “excellent
predictions— calculations and experiments—about the interiors of the planets.” 9 The strides forward came too late for Teller, who passed
away in 2003, but since NIF opened, Livermore’s scientists have managed to re-create the huge pressures of the interiors of gas giants like
Jupiter and Saturn, albeit on a tiny scale. In the experiments, NIF’s laser beams were used to compress liquid deuterium to 6 million times Earth
pressure and to a temperature of a few thousand degrees. As the pressure increased, usually transparent deuterium liquid first became opaque
and then, most remarkably and bizarrely of all, turned into a shiny metal. It is like squeezing your coffee cup and finding that it has turned into a
plate. 10

The engineering challenges that star builders like Professor Ian Chapman are solving en route to commercializing fusion have industrial spin-offs. Culham’s remote-handling robots can be used in many situations that require dexterity but aren’t safe for humans to enter. Also, developing resilient fusion reactors is pushing engineers to create new materials suitable for extremes. 11 Lawrence Livermore has filed a host of patents based on what they’ve had to invent to make NIF work. As happened with research into crewed space flight, fusion research is driving innovation beyond its own needs.

The scientific and industrial reasons to pursue fusion, good as they are, aren’t the boldest or most ambitious arguments to perfect the power source of stars though. There’s a reason to achieve fusion that speaks even more loudly to our existence as a species—which is that we could spread our wings and explore the universe.
Venturing farther into space sounds like a wild dream, and it is, for now. But we’ve been to the Moon. We’ve landed an un-crewed spacecraft
on an asteroid. We’ve sent probes outside of the solar system. Before too long, we may send a crewed mission to Mars. What person doesn’t
want to open the next door and see what’s waiting for us in the rest of the universe?

The only way we’ll travel to the universe beyond our celestial backyard is with plasma physics and nuclear fusion. Fusion rockets are humanity’s best hope for traveling across the vast distances of space.
Rocket science has a reputation for being complicated, but it all boils down to two simple ideas. The first is this: if you expel stuff in one
direction, you’ll travel in the other direction. The second relates to how much force can be created. That’s determined by how much mass is
being expelled, and how quickly it’s being expelled. Rockets expel a lot of mass quickly to get into orbit. But there’s a problem with expelling
lots of mass; you have to carry the mass with you until it’s expelled. The more force you need, the more mass you have to carry, which
increases the force you need, and so on. To avoid this problem, propulsion methods that work in space won’t be able to rely on expelling lots of
mass; instead they’ll need to maximize the speed of the mass being expelled (and to expel only a little mass).
You can probably guess why fusion is a good choice for the hasty space traveler. Fusion can create enormous exhaust speeds with only small amounts of mass because of its high energy density. While the best chemical rockets can achieve exhaust speeds of 4.5 kilometers (about 2.8 miles) per second, and nuclear fission might reach 8.5 kilometers per second, a working nuclear fusion reactor could potentially produce exhaust speeds of hundreds to thousands of kilometers per second. 12
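The two ideas above combine in the Tsiolkovsky rocket equation, which shows why exhaust speed dominates: the velocity change a rocket can achieve grows only logarithmically with the mass it carries, but linearly with exhaust speed. A minimal sketch using the exhaust-speed figures quoted here (the mass ratio of 5 and the 500 km/s fusion figure are illustrative assumptions, not numbers from the source):

```python
import math

def delta_v(exhaust_speed_km_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: achievable change in velocity (km/s)
    for a given exhaust speed and initial-to-final mass ratio m0/m1."""
    return exhaust_speed_km_s * math.log(mass_ratio)

# Exhaust speeds in km/s; a mass ratio of 5 is assumed purely for illustration.
MASS_RATIO = 5.0
for propulsion, v_e in [("chemical", 4.5), ("fission", 8.5), ("fusion", 500.0)]:
    print(f"{propulsion:>8}: delta-v ~ {delta_v(v_e, MASS_RATIO):.1f} km/s")
```

With the same mass ratio, the assumed fusion exhaust speed yields over a hundred times the delta-v of a chemical rocket, which is the point the passage is making.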


Sticking a fusion reactor on a spacecraft is, surprisingly, not the only fusion-spacecraft option out there. Project Orion was part of Edward Teller’s “Plowshare” program to turn nuclear weapons to peaceful purposes and was co-led by physicist Freeman Dyson. 13 It looked at chucking exploding hydrogen bombs out of the back end of a spacecraft to cause it to accelerate in the other direction. The scheme isn’t quite as insane as it may seem, and Dyson himself estimated that it could produce exhaust speeds of one thousand to ten thousand kilometers (approximately six hundred to six thousand miles) per second. Apart from this approach to fusion-powered space travel posing significant proliferation and safety risks, tests of pulsed nuclear explosion rockets are effectively banned by international treaties—so it seems much more sensible to use controlled fusion reactors to achieve similar ends. However fusion-powered rockets are ultimately achieved, research into fusion for energy will aid their development. 14

Anyway, developing fusion propulsion isn’t just about exploring the universe; fusion
rockets could also help us prevent
planetary-scale extinction of life from happening in the first place. The major challenge in preventing a
humanity-killing asteroid or comet is detecting and reaching it early enough so that mitigating action—
such as steering it out of the way—can be taken. Giving an asteroid a small push early on is as effective as a big push later on. Fusion-
powered rockets could travel through space faster than conventional rockets, buying us more time to take
action. And in the catastrophic event that we couldn’t save the Earth, the ability to travel to a new home
would be the ultimate insurance policy for humanity.

Nuclear fusion reactor–powered spacecraft bring space travel within reach. They’d cut down the time it takes to
get to Mars significantly, making it possible to do round-trips within a year. They could even allow us to travel outside of
the solar system. The closest star system outside of our own is centered on Proxima Centauri, a red dwarf star four light-years away—
that is, it takes light four years to make the journey from Proxima Centauri to Earth. Proxima Centauri has a habitable zone, a region where—in
principle—life could exist. Within that habitable zone sits Proxima Centauri B, the closest exo-planet to Earth. While it’s highly unlikely that
Proxima Centauri B is habitable, we don’t yet know. With a fusion rocket, a trip to Proxima Centauri B would be possible in under forty years, a
remarkably short time. 15
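The forty-year figure can be sanity-checked with back-of-envelope arithmetic (a sketch, not a calculation from the book): covering four light-years in forty years requires an average cruise speed of one-tenth the speed of light.

```python
LIGHT_SPEED_KM_S = 299_792.458
SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIGHT_YEAR_KM = LIGHT_SPEED_KM_S * SECONDS_PER_YEAR

distance_ly = 4.0   # Proxima Centauri B, as stated above
trip_years = 40.0   # "under forty years"

avg_speed_km_s = distance_ly * LIGHT_YEAR_KM / (trip_years * SECONDS_PER_YEAR)
print(f"required average speed: {avg_speed_km_s:,.0f} km/s "
      f"({avg_speed_km_s / LIGHT_SPEED_KM_S:.0%} of light speed)")
```

That works out to roughly 30,000 km/s, which shows why only propulsion with very high exhaust speeds is even in the conversation for interstellar trips.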

I began this book with a crazy idea—to create a slice of star matter and do nuclear fusion reactions in it to produce energy. The scientists,
entrepreneurs, and governments that have pursued this goal aren’t so crazy though. The scientists are some of the best there are. Some whom
we’ve met are trusted with stewardship of the United States’ nuclear arsenal; they really know what they’re doing when it comes to nuclear
physics. Others, like Professor Ian Chapman at Culham, have won multiple awards for the quality of their research. The entrepreneurs in the
race to build a star are daring and ambitious, raising amounts that most start-ups can only dream of and making decades’ worth of progress in
fusion in just a few years. The governments funding fusion are leading the richest countries, and represent a majority of the world’s population.

Their motivations don’t seem crazy either. We’re causing an unprecedented change in our environment, and it’s mostly driven by our use of energy. But our use of energy has improved life in ways that would have seemed impossible to our forebears. Reducing energy demand enough to stop climate change seems impractical; if anything, we’re likely to need more energy, not less. We have technologies to get us partway there—most notably, solar and wind power, and fission too, where it’s accepted. But they won’t get us the whole way. Those pursuing this apparently crazy idea say that we can have it both ways: we can improve quality of life for many more people and protect the environment at the same time. Fusion could deliver CO2-free energy at scale and is likely to be one of the safest power sources, if not the safest, ever devised. Climate change aside, we need new sources of energy because our primary sources of energy, fossil fuels, are dwindling. The ingredients of even the most basic form of fusion, with deuterium and tritium, could last us around 33 million years. It’s not as long as the coelacanths have been swimming around, but it’s a damn good start. And those 33 million years would surely buy us enough time to figure out how to do fusion reactions that use even longer-lasting fuel.

Even if you believe that fusion could save the planet, you might not be convinced that it’s possible to make it work. And yet nature tells us
not only that fusion can happen, but that it’s by far the universe’s most ubiquitous power source—the one that lights
each and every day on Earth, and maps the heavens at night. The universe’s visible matter began with nuclear fusion, and stars end with it
when they go supernova. We couldn’t exist without the atoms in our bodies being forged by it. Fusion is everywhere, so, star builders
say, why not on Earth too? As a species, we’ve already harnessed controlled fission (in today’s nuclear plants), and both uncontrolled fission
and fusion in nuclear weapons. Is it really so crazy to think that we might be able to harness controlled fusion too?

Star builders say “no!” and the machines that they’ve built have come very close to demonstrating that scientifically with net energy gain.
Magnetic confinement fusion has reached 67 percent of gain in fusion power, inertial confinement
fusion 3 percent in energy. More important, we know from pen-and-paper physics that net energy gain from
fusion is possible. Experimental evidence strongly suggests that inertial confinement fusion will produce net
energy gain with a big enough laser. The conditions for self-sustaining nuclear fusion, or ignition, are close,
and there have been millionfold improvements toward them since the first fusion machines were built. That progress has
taken a long time, and now entrepreneurs are challenging the slow-moving government laboratories, pushing progress to become faster,
cheaper, and more commercially viable. While there is not complete agreement on how, and the when will depend on money and luck, star
builders all say that net energy gain is coming.
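Those percentages refer to scientific gain Q: fusion power out divided by heating power in, where Q = 1 is breakeven, i.e. net energy gain. A minimal sketch (the 16.1 MW / 24 MW figures are an illustrative assumption chosen to match the "67 percent" cited above, not numbers given in the source):

```python
def fusion_gain(fusion_power_mw: float, heating_power_mw: float) -> float:
    """Scientific gain Q: fusion power produced divided by heating power
    supplied. Q = 1 is scientific breakeven (net energy gain)."""
    return fusion_power_mw / heating_power_mw

# Assumed illustrative figures: ~16.1 MW of fusion power from ~24 MW of heating.
q_magnetic = fusion_gain(16.1, 24.0)
print(f"magnetic confinement Q ~ {q_magnetic:.2f}")  # about 67% of breakeven
```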

Star builders are now looking beyond net energy gain, and setting their sights on putting fusion energy on the grid. Huge engineering and
commercial challenges remain. Entire plants must first produce more energy than they use (every day, and not just in individual experiments).
Fusion energy must be extracted and turned into electricity in a safe and sustainable way. And fusion energy must be both widely available and affordable. To go from no fusion power to significant fusion power is a change on a scale that is difficult to appreciate, requiring thousands of plants to be constructed all over the world. But many star builders won’t think they’ve really succeeded unless fusion is delivered quickly enough, and on big enough scales, to help the planet avert a climate catastrophe that—right now—is coming toward us like a juggernaut.

Warming triggers extinction. Tipping points and knock-on effects overwhelm defense.
Kemp ‘23 [Luke; August 23; Postdoctoral Researcher at the Center for the Study of Existential Risk,
Research Associate at Darwin College, Ph.D. in Political Science and International Relations from the
Australian National University; OpenBook Publishers, “The Era of Global Risk,” Chapter 7: Ecological
Breakdown and Human Extinction]

Our focus will largely be on climate change. This is because it is the most well-researched and visible contributor to global ecological risk. Yet, it cannot be easily disentangled from our other planetary boundaries. This analysis should be seen as a partial and likely conservative overview. For this chapter I will use the definitions for terms such as catastrophic and existential risk that are outlined in our previous paper Climate Endgame.

The state of the science

Uncertainty, tail-risks, and tipping points

For many ecological risks, it appears that the more we know, the worse the threats appear.

For climate change, the best indication for this is a change in the ‘reasons for concern’ across consecutive IPCC assessment reports. The IPCC identifies five ‘reasons for concern’: unique and threatened ecosystems; frequency and severity of extreme weather events; global distribution and balance of impacts; total economic and ecological impact; and irreversible, large-scale, abrupt transitions. These are intended to be indicators to inform the world of how close we are to “dangerous anthropogenic interference with the climate system”, the central mission of international climate policy.3 These reasons for concern are determined by IPCC authors as a reflection of expert opinion, and underpin the famous ‘burning embers’ diagram. The diagram shows, in a thermostat fashion, at what temperature the risk of these different concerns is. Over time, with each successive report, the risk levels for any given temperature have risen. That is, these reasons for concern have become more worrisome, even at lower temperatures, as the science has progressed.4 In the fifth Assessment Report (AR5), all of the reasons for concern were ‘high’ or ‘very high’ likelihood for just 2–3°C of warming.5

Tipping elements in the Earth System have followed the same trend as the reasons for concern. That is, over time the likelihood of crossing tipping points at low levels of warming has been rising. Tipping elements refer to when warming breaches a critical threshold, causing a change in one part of the climate system to become self-perpetuating, resulting in potentially significant Earth System impacts. This includes Arctic Winter Sea ice collapse and dieback of the Amazon Rainforest. The most recent assessment of evidence on tipping elements found that out of 16 tipping elements, six are at a high likelihood of being tipped at 1.5–2°C of global heating. This includes events such as the die-off of low-latitude coral reefs, as well as the long-term collapse of the West Antarctic and Greenland Ice Sheets. Hence, even the ambitious goal of limiting warming to 1.5°C above pre-industrial temperatures would likely activate multiple tipping elements.6

The study of tipping points and regime shifts in ecosystems has progressed significantly, leading to new insights.7 We now have nascent findings suggesting that such radical changes often occur in a domino effect.8 For climate change, this has been termed a ‘tipping cascade’.9 Moreover, it appears that the larger and more complex the ecosystem, the more rapid and complete its potential collapse. Such lessons are not causes for comfort.

<<TEXT CONDENSED, NONE OMITTED>>


There is more mixed news on equilibrium climate sensitivity. Climate sensitivity refers to the response of the climate system to a doubling of greenhouse gas concentrations. Since approximately the 1970s and 80s, such a response has been estimated to be between 1.5–4.5°C—that is, until the most recent sixth Assessment Report (AR6) of the IPCC. AR6 reports a narrower likely range (66–100%) of 2.5–4°C and very likely range (90–100%) of 2–5°C. The upside of this is that high sensitivities of >4°C are less likely than previously expected. The downside is that the IPCC is now ‘virtually certain’ (99–100%) that climate sensitivity will be above 1.5°C, since all lines of evidence run strongly against these lower levels of warming.11 Unfortunately, a climate sensitivity of greater than 4.5°C, while unlikely, could not be ruled out as lower levels have been. These findings echo a major study on climate sensitivity in 2020, which used a Bayesian approach with multiple strands of evidence.12 These new findings imply that a doubling of greenhouse gas concentration (which could occur this century) would run an 18% chance of causing 4.5°C or more of
warming. This echoes earlier estimates of surprisingly high likelihoods of disturbingly high temperatures. Wagner and Weitzman estimate that under a concentration of 700 parts per million (ppm) (which falls within a mid-high scenario),13 there is an approximately 10% chance of exceeding 6°C by the end of the century (note that this would be slightly lower under the latest ECS estimates).14 Temperatures this high last occurred 50 million years ago and have never been experienced by hominids.15 Such rapid warming is geologically unprecedented, and a rise that is an order of magnitude faster than what occurred during the worst mass extinction event: the End-Permian Extinction. In the slightly longer term, even more radical pulses in heat may be possible. One basic model found that stratocumulus cloud decks may abruptly be lost, causing ~8°C global warming, with CO2 concentrations that could be approached by the end of the century.16 This 8°C would be additional to the previous level of warming needed to trigger this tipping point. Other studies have shown the potential for strong cloud feedbacks to push rapid and irreversible
warming.17 Over the past decades, knowledge of catastrophic climate change has risen alongside—but not kept pace with—global emissions. Unfortunately, the higher-end warming scenarios that matter the most are those we know least about. One recent study, text-mining IPCC reports, found that there was a significant mismatch between coverage of different levels of warming and their likelihood. Similarly, a recent survey by Nature of 234 IPCC authors found that over 60% of people surveyed expected warming of 3°C or above by the end of century.18 However, in existing assessment reports, less than 10% of the mentions of temperature rise refer to 3°C or above.19 IPCC reports have given disproportionate attention to lower temperature scenarios (2°C or lower) relative to their likelihood and impact. This trend is increasing over time, with each subsequent Assessment Report covering extreme temperature rise less.20 Indeed, the IPCC notes in its 2014 Fifth Assessment Report that there have been few quantitative investigations of the global impacts of warming above 3°C.21 Regardless of their likelihood, the higher impact of these
scenarios makes them even more vital to robust decision-making under uncertainty. The gap between likely scenarios and our knowledge is disconcerting. One of the glimmers of hope over past decades has been some limited progress in emission reductions. The falling prices and increasing deployment of renewable energy have made the worst-case emissions scenario (previously RCP8.5, now SSP5–8.5) increasingly unlikely.22 This should not be grounds for complacency. High temperatures and extreme impacts can still be reached even with lower anthropogenic emissions. That is because emissions concentrations are reflective not just of human emissions, but also the reaction of the Earth System. Moreover, there is still substantial uncertainty over greenhouse gas trajectories. Cumulative emissions to date have most closely tracked the RCP8.5 scenario.23 Long-run changes in technology, energy demand, and economic growth are all highly uncertain and will have a significant impact on how much carbon is released. One study using an expert survey and econometric modelling found that annual economic growth rates of 2.1% (with a
standard deviation of 1.1%) over the next century were plausible. These high growth rates yield a >35% likelihood that emissions would exceed the RCP8.5 pathway.24 Moreover, even the best super-forecasters of geopolitical events cannot make accurate predictions for events over a year away.25 We need to maintain a healthy skepticism over our ability to predict what the world’s geopolitical and energy systems—and, hence, our emissions—will look like in a century. Despite some improvements, the overall emissions picture remains dire. Assuming full implementation of the climate pledges under the Paris Agreement (nationally determined contributions, or NDCs), emissions will have increased by 13.7% in 2030 relative to 2010.26 One of the least discussed and most important obstacles is the reality of delay. Previous studies have found that the delay in undertaking emissions reductions is the largest influence on the costs and likelihood of meeting a given target.27 This is an ‘emerging consensus’ across climate economics.28 The main impediment is the lock-in of fossil-fuel-intensive infrastructure. Delay to date has been primarily due to
one key factor: the fossil-fuel industry and the wealthy who benefit from a fossil-based economy. We should be careful not to tie climate risk solely to the level of warming. Under the right conditions, climate change could have catastrophic impacts, even at just 2°C of warming.29 When thinking through extreme climate risk, we need to consider not just emissions and the associated level of warming, but also the impacts, social vulnerability to these impacts, and the response of domestic and international communities.30 Complex ends: Cascading crises and risks Extinction is complicated. Each of the five mass extinction events throughout the phanerozoic history of Earth has involved a complex of different factors including oxygenation, volcanic eruptions, asteroid strikes, and food web cascades. One of the few common imprints is climatic change. Global warming likely played a central role in each mass extinction event, perhaps even the Late Ordovician (previously assumed to be a cooling event).31 Fast-forward to human history: while we have no account of Homo sapiens going extinct, we do have a record of states, empires, and
kingdoms crumbling,32 as well as the extinction of other hominid species.33 It is always a confluence of vulnerabilities, exposures, responses, and hazards—and one that frequently has the fingerprint of climatic change.34 The science of climate change and other global ecological threats has progressed considerably since 2004. Perhaps the greatest shift in the field has been away from thinking about a list of individual ecological hazards, towards thinking about how systems transform and fail. We are slowly realising that, like mass extinction events and societal collapses, ecological catastrophe will not be a simple affair. Instead, these ‘Anthropocene risks’ involve human-driven processes that interact with interconnected global socio-ecological processes and have complex, cross-scale relationships. The study of such risks necessitates a new approach to governance that includes an appreciation of justice, inequality, and the agents driving us towards disaster.35 Global ecological threats are increasingly thought of as a study of complex systems. Earth Systems science is evolving as a discipline and is increasingly thought of as a set of
interconnected ‘planetary boundaries’.36 Climate is only one of these boundaries and is accompanied by stratospheric ozone depletion, biosphere integrity, novel entities, ocean acidification, freshwater use, land system change, biochemical flows, and atmospheric aerosol loading. Each boundary is linked to a different planetary sub-system that could be pushed into instability by human pressures. The study of regime shifts in smaller ecosystems—such as pollinator communities,37 and coral reefs38—takes a similar approach. These are matters of systemic risk:39 systems can change rapidly into a new state (like a vibrant coral reef transforming into an algae-dominated environment) based not just on single hazards, but the structure of the system, internal feedbacks, and sets of interacting stressors. This systemic view is not just restricted to ecology, but has also become commonplace in studying financial crashes and societal crises more broadly.40 Such a lens has not only highlighted concern over potential ‘tipping points’ in the Earth System,41 but also the chance of irreversible changes. For instance, relatively small levels of warming
locking the world into far higher temperatures and a ‘Hothouse Earth’ trajectory.42 Similarly, irreversible loss of the West Antarctic ice sheet will likely occur at approximately 2°C and the current ice configuration will not be regained even if we lower temperatures back to present levels. Risk comes not just from the potential changes in the Earth, but also from human responses. The IPCC, in its sixth assessment report, has explicitly recognised this, defining risk not only in terms of impact, but also responses. This is a new, state-of-the-art complex risk assessment: a consideration of hazards, vulnerabilities, exposures, and responses.43 Alongside these determinants of risk, we need to better understand how risks could cascade, including across sectors, countries, and even systems. The most obvious and dramatic example of a response risk is geoengineering: large-scale interventions into the Earth System to mitigate the effects of climate change. Carbon dioxide removal (or ‘negative’ emissions) through direct air capture of greenhouse gases, afforestation, or reforestation would be the lowest risk option, but appears unlikely. It would require
a herculean effort to develop and deploy the technologies and infrastructure needed for large-scale negative emissions within decades. Instead, the lowest-cost and most likely option is also the riskiest: stratospheric aerosol injection (SAI). SAI involves injecting particles into the atmosphere to reflect sunlight. One recent risk assessment of SAI suggested that the largest threat comes from ‘latent risk’: abrupt warming that would accompany the deactivation of the SAI system. Currently there are no clear mechanisms for the direct ecological impacts to be catastrophic, although these cannot be ruled out due to the nature of the Earth System. SAI would provide several stressors to the global system, including through changing disease patterns and precipitation, as well as the potential for political conflict, but these are all understudied. The largest contributor to risk from SAI is that another catastrophe—whether it be nuclear war, a solar flare, or mass pandemic—would knock out the system, leading to warming that would otherwise take decades, rushing in within years. Hence, SAI shifts the risk distribution. The median-case scenarios are
potentially less severe than the impacts of climate change. But the worst case is intensified. SAI, if it is used to cover significant amounts of warming, would constitute a planetary sword of Damocles.44 Large amounts of warming and monumental Earth-engineering may not be needed to trigger catastrophe. Historically, minor climatic perturbations and droughts appear to have contributed to the dissolution of dozens of empires and kingdoms, ranging from the Bronze Age world system to the Khmer Empire, Western Roman Empire, and Assyrian Empire.45 Yet many proved resilient to similar stresses. For instance, the Mayan city-state of Caracol experienced two similar droughts during its lifespan, one of which it navigated with few signs of breakdown, and the other which coincided with a rapid and enduring crisis. The largest difference appears not to be the severity of the drought, but that Caracol was riven by warfare and inequality when it hit the second time.46

<<PARAGRAPH BREAKS RESUME>>

Risk cascades still largely exist under a fog of uncertainty. Studies currently suggest that climate change can worsen and trigger conflicts under conditions such as weak governance and ethnic divisions,47 although we do not know how this relationship could morph under higher temperatures. Similarly, temperature does seem to have an innate and often non-linear relationship with economic growth48 and even population spread and density. It has been suggested that humans, much like other species, have a fundamental climatic niche—that is, a specific climate envelope of approximately 13°C (mean annual average temperature) within which the majority of the human population and urban areas have developed over millennia.49 Perhaps the best study to date on risk cascades and feedbacks used 41 studies to empirically sketch the links between climate change, food insecurity, and societal collapse (population loss through conflict, mortality, and emigration).50 Other researchers in global catastrophic risk have also begun putting forward frameworks for more complex risk assessments,51 including for climate change52 and international governance.53 For now, far greater attention and research is needed on these systemic effects, such as climate triggering conflict, political change, or even financial crises. Indeed, understanding ‘societal fragility’ is a key part of the Climate Endgame research agenda, alongside exploring long-term extreme Earth System states, modelling mass mortality and morbidity, and undertaking integrated climate catastrophe assessments, which include climate change alongside a host of other catastrophic threats and vulnerabilities.30

An existential end?

Could global environmental collapse cause human extinction? This leads us to the central question: could combined ecological crises cause this to be humanity’s final century? Few have been bold enough to directly broach the question. There have been many prophetic warnings, especially within the collapse literature, but no truly comprehensive scientific assessment. Questions of catastrophe are not directly addressed by any of the relevant international scientific institutions, such as the Intergovernmental Panel on Climate Change (IPCC) or the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES).

Many individual papers have mentioned the catastrophic potential of climate change. Peer-reviewed academic studies have referred to global warming as an “existential threat”,5 “beyond catastrophic” (for above 5°C),54 and “an indisputable global catastrophe” (for above 6°C).55 While the impacts of climate change alone seem capable of causing a global catastrophic risk, the authors never spell out how the world would fall from such impacts to mass mortality. Importantly, the gloomy terms are never defined, leaving it unknown whether the authors believe that certain levels of warming could plausibly lead to human extinction. These are not studies or proofs of existential risk from climate change, but rather indications of a lack of shared terminology.

<<TEXT CONDENSED, NONE OMITTED>>


In lieu of sustained scientific attention, the most poignant examinations have come from popular books. Mark Lynas in Our Final Warning concludes, based on a large-scale review of the existing scientific literature, that 4°C could threaten a global collapse, and 5–6+°C could unravel into human extinction.56 David Wallace-Wells in The Uninhabitable Earth guesses that, in contrast to the title, the Earth will not become uninhabitable, and humans will survive foreseeable levels of warming.57 Toby Ord in The Precipice suggests a 1 in 1000 chance of climate change resulting in an existential catastrophe.58 William MacAskill in What We Owe the Future suggests that “it’s hard to see how even this could lead directly to civilizational collapse”.59 The assessments by existential risk scholars—Ord and MacAskill—have been the least convincing thus far. Ord uses an unworkable, ambiguous definition of existential risk.60 He defines an existential risk as one that “threatens the destruction of humanity’s longterm potential”. However, what our potential is depends on one’s values. Ord suggests that we minimise existential risks first and then determine
“our potential” through a “Long Reflection”. This would essentially be a centuries-long worldwide philosophical conversation. This strategy creates a paradox: we are supposed to minimise risks to a concept that we cannot define until after we have reduced those risks. It is difficult—if not impossible— to assess climate change using this definition, as Ord doesn’t explicitly state his values, nor what “our potential” is. His analysis misses much of the most recent science and does not sufficiently consider ‘indirect’ impacts. Moreover, the chapter does not cogently answer the question of whether climate change will result in human extinction. Instead, after roughly estimating the direct impacts, Ord concludes that they will not make the entirety of Earth uninhabitable. This is an entirely different question to the likelihood of climate change causing human extinction. Ord’s use of a precise numerical figure is also largely baseless. As noted earlier, even groups of the best super-forecasters making predictions on clearly defined questions have little accuracy after 12 months.61 MacAskill’s analysis is also riddled with problems. Like Ord, he suffers
from definitional problems. He defines ‘civilisational collapse’ as society losing the ability to create most industrial and post-industrial technologies.62 This has little relation to more common definitions of societal collapse. It also assumes that we know the full range of potential industrial and post-industrial technologies. Worse still, like with Ord’s analysis, it replaces the question of whether climate change will cause civilisational collapse with an easier one: will climate change make large-scale agriculture on Earth impossible? MacAskill concludes no. Once again, this is a different question. In short, the coverage of climate change by the most prominent existential risk scholars has been simplistic and disappointing. While brave, the conclusions of Wallace-Wells and Lynas are ultimately individual guesses with multiple shortcomings. Wallace-Wells is unclear about how he reaches his conclusion. Lynas relies on geological studies and the analogous example of the End-Permian Extinction. His more pessimistic assessment appears the most compelling. It has the most thorough grounding in the literature and, in the face of deep uncertainty, relies
on the most reliable and relevant geological precedents. This is astute, given that studies suggest that mass extinction events work by a threshold effect for temperature or carbon that we look likely to exceed. One analysis from 2021 found that warming of 5.2°C would likely result in a mass extinction event, even without considering the other anthropogenic impacts on the Earth.63 Another study suggested that the threshold for carbon release to result in a mass extinction event would be crossed by most IPCC scenarios by the end of the century (assuming a 50% uncertainty range, we may have already crossed this precipice).64 Yet, these investigations suffer from the same problem, one that plagues the entire study of global catastrophe and human extinction: a lack of proven or reasonable tools and methods for discerning when a crisis could spiral into global calamity. Few attempts have been made, with the notable exception of the societal collapse and climate review conducted by Richards et al., which does attempt to cautiously trace out some pathways from impacts to conflict and mass mortality.65 Notably, these deal only with
climate change and not the broader, reinforcing web of ecological crises, which has received less attention. The short answer is that we do not know whether climate change or anthropogenic ecological disruption could spiral into human extinction. However, this is true for all the suspected causes of human extinction. Climate and ecological crises do appear to have one of the most concerning profiles, given their range of impacts, as well as their role in past mass-extinction events and periods of historical turmoil. There are enough reasons to take this question of human extinction from ecological breakdown seriously. For now, while uncertainty remains, it seems improbable that human actions could extinguish the biosphere. Another mass-extinction event is plausible, but complete annihilation of the biological realm is likely not. Barring science fiction, the only semi-plausible direct route for human activities to terminate all biological life is the triggering of a runaway greenhouse effect. Lynas has suggested that such a scenario is possible, if there are hidden, extreme positive feedback loops in the climate system, an enormous, profligate
use of fossil fuels, and increasing solar radiation.66 Some basic modelling of the climate system has suggested that a runaway greenhouse effect is plausible.67 This is further supported by recent modelling of potential cloud feedbacks leading to a moist greenhouse.68 However, these studies are based on high-level models with many assumptions. The current scientific consensus is that any hellish mechanism—which could lead to a furnace Earth, complete with evaporated oceans—is highly unlikely. In 2009, the IPCC reported, in its 31st meeting, that a “runaway greenhouse effect” analogous to Venus appears to have virtually no chance of being induced by anthropogenic activities.69 Whether this view continues to hold, given the new modelling outcomes, is unclear. For now, while extinguishing the entire web of life seems far less likely than causing human extinction, it is an outcome that cannot be entirely ruled out. If humans were to go extinct, it is likely that global ecological collapse would be one of a series of drivers. Imagine a world where, in 2075, we have reached 4°C of warming. The climate system was more sensitive than
expected, and new energy-hungry machine learning algorithms led to higher-than-expected energy demand. After a category 6 hurricane hits New York City, NATO (led by the US) deploys a global stratospheric aerosol injection (SAI) system. This enflames international tensions and stokes domestic unrest in societies already awash with disinformation driven by deep-fakes and other high-level machine learning applications. A nuclear war breaks out and the ensuing nuclear winter knocks out the SAI system. The few billion survivors emerge from nuclear winter to be faced by soaring temperatures as the Earth warms by 4.5°C in the space of decades. Sources of sustenance beyond agriculture, such as marine fish stocks, have been significantly affected by transgressing other planetary boundaries such as ocean acidification, biosphere integrity, and biogeochemical flows. The rapid changes in temperature cause significant changes in wildlife distribution, triggering new zoonotic pandemics. Simultaneously, the unplanned emergency evacuation of one biosafety level 4 (BS4) facility just prior to the nuclear conflict led to the release of a modified
version of the previously defeated smallpox virus. The survivors are ingenious and resilient but fail to recapture the right industrial technologies required to put an SAI system back online. Many have intentionally turned away from industrial technologies after the fall. Those that try are faced with the problem of energy return on investment: easily accessed fossil-fuel reserves have already been depleted and the leftovers are too costly to use at scale. After a long fight, the final sapien takes her [their] last breath. She is a Māori woman, living on the outskirts of modern-day Dunedin (New Zealand). Her body, riddled with the scars of an altered smallpox strain and signs of malnourishment, finally gives out. Humanity is extinguished. This is one speculative and indicative example of an extinction scenario. Yet it touches on an important point. That is, asking the question of ‘is climate change or ecological breakdown an existential risk?’ is ultimately simplistically misleading. No single hazard is an existential risk. In the scenario outlined above, a global society marked by high levels of equality, international cooperation, and adaptive technology
could have potentially weathered the same ecological conditions. Whether our combined global environmental crises could spiral into extinction depends on human responses and wider trends and vulnerabilities (such as inequality). Climate change and planetary boundaries challenge the traditional, simplistic approach of thinking of existential risk as a simple set of disconnected hazards. Indeed, no single hazard is likely to result directly in human extinction. The search for one single event to kill us all will lead us to science fiction.70 We should instead think of the overall level of risk that arises from any particular socio-economic system (such as the current fossil-fuel-driven, globalised, capitalist economy). Answering the question of whether climate change is an existential risk is a futile inquiry until we develop reasonable definitions of existential risk, a topic we turn to next. Limits to growth as an existential saviour and threat Can we grow into catastrophe, collapse, or even human extinction? There is a rising scholarly debate over whether continued economic growth is compatible with living on Earth—or even desirable. This debate dates
back to at least the 1970s with the publication of the Club of Rome’s Limits to Growth report.71 The report relied on a computer-based systems model, which was (at the time) state-of-the-art. The model attempted the ambitious task of modelling the global economy. Repeated runs of the model led to a chilling observation: any simulation with continued, unabated population and economic growth eventually led to a global collapse in industrial output and population. A study conducted some 30 years later ran the model again with updated data, finding that it fitted trends over the last three decades remarkably well.72 The Limits to Growth thesis has been a source of heated debate. Proponents of the ‘degrowth’ approach argue that, to date, no country has decoupled material consumption from economic growth,73 that limiting warming to 1.5°C or 2°C will require contractions in energy demand (and likely economic activity) which are incredibly challenging to achieve alongside continued economic growth, that infinite growth is impossible on a finite planet,74 and that growth brings neither happiness nor human flourishing.75 Critics
argue that degrowth—even if combined with redistribution—will condemn the world to low living standards,76 that absolute decoupling between emissions and economic activity is already proving possible,77 and that the limits to growth will lie well beyond Earth due to the inexhaustible resource of human ingenuity. The debate is likely unresolvable: no amount of empirical evidence can falsify the potential power of future innovation and invention. Similarly, no amount of evidence can verify the Limits to Growth trajectory until we are amidst a collapse. Strangely, even if the notion of Limits to Growth is incorrect, the very idea of it could be an existential risk according to the traditional definition. This is due to the traditional definition being odd and idiosyncratic. The canonical definition of existential risk labels it as a risk that will “annihilate Earth-originating intelligent life or permanently and drastically curtail its potential”.78 The definition was later refined and specified to mean any threat that prevents the stable attainment of ‘technological maturity’79—that being the maximum, feasible control over the environment (including the
entire universe) and level of economic productivity. Technological maturity is not usually envisioned as an Earth-bound enterprise, but an endeavour of space colonisation by a post-human species.80 Thus, an existential risk is anything that threatens this techno-utopian future, including a technological or economic plateau. Under this classical definition, the idea of Limits to Growth is an existential risk: if it is correct then continued growth trends could result in catastrophe, as indicated by the modelling study. Yet, regardless of whether the thesis is true or not, if we act to limit human activities and stay within planetary boundaries, we would also face an existential risk under the canonical definition by not reaching a techno-utopian future. This says much more about the flaws and problems of these definitions of existential risk than it does about the desirability of limiting economic growth or the validity of the limits to growth idea. If we are going to have a mature, scientific field then we need better definitions. We should start by splitting out questions of existential ethics (what humanity’s potential is, and the value of different long-term
futures) and extinction ethics (the goodness or badness of human extinction) from the study of global catastrophic and extinction risk.81 Existential risk cannot be tied to one idiosyncratic view of the future nor such vagaries as ‘our potential’. We also need to have a more refined concept of risk. Risk is not a single hazard like a biologically engineered pandemic. It is the likelihood of an adverse outcome, given exposure to certain conditions. For instance, we should think of extinction risk as the overall likelihood of humans going extinct in a particular period, and extinction threats as major contributors to this overall level of risk. The 2022 Climate Endgame paper puts forward a set of definitions reflecting this way of thinking, and a suggested full spectrum of calamity from global decimation risk through to human extinction.30 Hope in the heat: Responsibility and responses Responsibility: Tragedy of the elite, not the public The responsibility for most ecological crises is concentrated. From the lens of national emissions, just ten historical emitters account for over 75% of cumulative international emissions.82 For extraction, just six countries and
one region of 18 countries account for over three quarters of fossil-fuel reserves.83 Similarly, there is growing evidence that material consumption and consumption norms for wider society are driven by a narrow super-affluent elite.84 The influence of the wealthiest is not just in norms, but also direct carbon inequality. Recent research from Oxfam suggests that the richest 1% of individuals globally emit more than double that of the poorest half of humanity. From 1990–2015, the cumulative emissions share of the richest 1% and 10% of the world were 15% and 52% respectively. The skewed distribution of responsibility exists in areas outside of emissions.85 One recent analysis suggests that the corporate financing of the deforestation of the Amazonian Basin is enabled by a handful of key investment firms.86


The lack of policy responses is also a concentrated affair. For climate change, a collection of organisations and individuals
funded by
the fossil-fuel industry has deliberately undermined public trust in climate science and strangled the policy
response. For decades, the fossil-fuel industry has funded scientists and firms—and even set up fake community groups—to
muddy the science of climate change. These are the well-funded and well-documented ‘Merchants of Doubt’.87 This was
combined with the suppression of in-house climate research from several fossil-fuel giants.88 Through other actions, such as lobbying and political subterfuge, the fossil-fuel industry has played a central role in delaying and distorting
efforts to reduce emissions over the past three decades.89 Exxon, through the International Petroleum Industry Environmental
Conservation Association (IPIECA), has coordinated efforts across the industry to both discredit the science and stop international climate policy
since the 1980s.90 Neither emissions nor the lack of a policy response can be easily tied to the global public. The idea that ‘we are all to blame’
was, instead, part of an intentional rhetorical strategy from ExxonMobil and others to shift responsibility to consumers.91 The threat is not
humanity writ large. Rather, it is from a small, powerful band who overwhelmingly profit from the global machinery of extraction. It is largely a
matter of public risks and private benefits.
Why is responsibility important? Does identifying, or targeting, the culprits behind ecological devastation bring us closer to solutions? Yes, of course it does. Across different risks and risk determinants (hazards, vulnerabilities, exposures, and responses), there are often common drivers.92 Striking these common roots is a far more effective long-term solution than attempting to
grapple with the symptoms. This is not just true for climate change. For all anthropogenic catastrophic hazards, the responsibility is concentrated, and the powerful producers (the ‘Agents of Doom’) of these threats have played a starring role in thwarting societal responses.93 Ironically, these actors also tend to disproportionately benefit from the execution of emergency powers
during crises.94 Addressing risk will ultimately mean dealing with and curtailing the political power of these actors. This should be a source of hope. The concentrated nature of responsibility means interventions should be easier to target and implement. It also means that reducing catastrophic risks could have the co-benefit of creating a more equal world.

The co-benefits of avoiding global ecological catastrophe

Global catastrophe is rarely a matter for optimism. For anthropogenic hazards, such as advanced algorithmic systems and synthetic biology, the hyped benefits are disconnected from their risk mitigation. They are dual use, and a common view is that we will either self-capitulate with them or achieve technological salvation. However, there may be many co-benefits from not
developing certain technologies. For example, avoiding the rapid development and deployment of AI systems would not just avert fears over reaching unaligned superintelligence, but also nearer-term concerns over surveillance and disinformation. However, this is rarely discussed and is usually dismissed as being impossible or not worth the loss of the potentially beneficial
applications.

Ecological risks represent a different matter altogether. They are an area where risk mitigation does not just involve building a safer world, but also one with greater welfare and health. This is the increasingly convincing story told by the ‘co-benefits’ literature. It is an area of study that has swelled since the publication of Our Final Century. The message from most studies is that the
mitigation of environmental problems—most notably climate change—yields many benefits, including improved health, economic performance, employment, and energy security.95 Once these benefits are accounted for, the economics fundamentally shift: avoiding climate change is likely to result in net economic benefit, regardless of the warming averted. The same calculus
applies to ecosystem services. Estimates of global ecosystem services place their value at equal to or greater than double global GDP—for instance, approximately $125 trillion in 2011,96 a finding that should be entirely unsurprising given that all economic activity is dependent on a functioning Earth System.

Most actions to cut emissions are ‘no-regrets’ options. This is uncontroversial and well known for measures such as energy efficiency.97 What is less widely known, but increasingly clear, is that this holds for a much greater suite of actions, including vehicle electrification and renewable energy. Overall, decarbonization already appears cheap, and the projected costs tend to fall with
each new assessment due to the plummeting price of renewable energy.98 When the co-benefits and co-harms are included in an economic analysis, then optimal climate policy—which could be compatible with 2°C or 1.5°C, depending on our risk adversity and how we value human health—becomes an automatic net benefit.99 There are other potential trade-offs that we must be
cognisant of, including the loss of marginalised workers in the fossil-fuel sector, disproportionate impacts on indigenous communities from resource extraction, and the potential for resource exhaustion. This has led to calls for a just transition.100 This is an admirable and necessary approach. Nonetheless, the potential downsides of decarbonisation are still far less disturbing and costly
than fossil-fuel extraction.

The net benefit of mitigation is largely due to the dark, externalised costs of fossil fuels, most notably on human health. According to one estimate, in 2012, particulate matter from the combustion of fossil fuels caused approximately 10.2 million excess deaths. In 2018, such deaths account for approximately 18% of global deaths.101 This is only mortality. The cost is even higher
when lost productivity and sickness are considered. These overall health costs are enormous. Even in the US, the health costs of coal-fired power are likely 0.8–5.6 times the value added to the economy.102 Globally, the health effects of fossil fuels could justify a carbon price of $50–380.103

There are also a range of other potential advantages that are rarely included in naïve cost-benefit calculations. Chief among these is
avoiding the geopolitical quagmire caused by fossil-fuel supply. Securing oil supply has been a suspected cause of
many military interventions in the Middle East, including the Iraq War.104 These have had dramatic knock-on
effects politically and socially, whether it be contributing to the rise of ISIS or potentially triggering new wars . Even without these
costly and corrosive excursions, the price of securing oil is high. The US alone spends a minimum of $81 billion on protecting its oil supply
chain.105 Decarbonisation will bring about its own set of geopolitical challenges, including the potential of new races for—and conflict over—precious Earth metals and minerals that will fuel the transition to renewable energy, but these will likely be far less toxic
and dangerous than that of fossil fuels.

Fusion avoids extractive downsides of fission and prevents meltdowns.


Baker ’22 [David R; December 13; Energy and clean tech reporter at Bloomberg; Bloomberg, “Fusion Is
Nuclear Power Without the Meltdowns and Radioactive Waste,”
https://www.bloomberg.com/news/articles/2022-12-13/what-is-fusion-power-and-why-is-it-better-
than-nuclear?embedded-checkout=true]

Abundant fuel. No danger of a meltdown. No lingering radioactive waste.


Nuclear fusion, the process US researchers successfully demonstrated this month, has big potential advantages over the nuclear power plants
operating today, which run on an entirely different principle. If perfected into a form that can run in a power plant — not just an advanced
government lab — fusion could offer a clean power source that avoids many of the pitfalls that have dogged
nuclear energy for decades.
All existing nuclear plants use fission — splitting atoms apart — rather than fusion, which involves fusing them together. Fission
plants are fueled by uranium pellets that are crammed inside long metal rods. The uranium must be mined and refined,
and after its use, it stays radioactive for thousands of years. Unless it’s reprocessed, that waste must be carefully stored and
monitored, decade after decade.

Fusion, however, uses as its fuel two isotopes of hydrogen, the universe’s most abundant element. One of the isotopes,
deuterium, is readily available in seawater. The other, tritium, can be made by exposing lithium — the same metal used in batteries — to
neutrons. Fusion merges those two hydrogen isotopes into helium, with no long-lived waste. Neutrons from the
reaction will, over time, render the reactor materials themselves radioactive, but with a far shorter half-life than the waste from a fission plant.

Meltdowns cause extinction.


Huff ‘14 [Ethan A.; Staff writer, 8/12/2014, “Nuclear power + grid down event = global extinction for
humanity,” http://www.naturalnews.com/046429_nuclear_power_electric_grid_global_extinction.html]

If you think the Fukushima situation is bad, consider the fact that the United States is vulnerable to the exact same meltdown situation, except at 124 separate nuclear reactors throughout the country. If anything should happen to our nation's poorly protected electric power grid, these reactors have a high likelihood of failure, say experts, a catastrophic scenario that would most likely lead to the destruction of all life on our planet, including humans.

Though they obviously generate power themselves, nuclear power plants also rely on an extensive system of power backups that ensure the constant flow of cooling water to reactor cores. In the event of an electromagnetic pulse (EMP), for instance, diesel-powered backup generators are designed to immediately engage, ensuring that fuel rods and reactor cores don't overheat and melt, causing unmitigated destruction.

But most of these generators were only designed to operate for a maximum period of about 24 hours or less, meaning they are exceptionally temporary in nature. In a real emergency situation, such as one that might be caused by a systematic attack on the power grid, it could take days or even weeks to bring control systems back online. At this point, all those backup generators would have already run out of fuel, leaving nuclear reactors everywhere prone to meltdowns.

Fusion eliminates scarcity, preventing resource wars and opening peacebuilding to rejuvenate cooperation.
Carayannis ’22 [Elias, John Draper, Balwant Bhaneja; December; Professor of Science, Technology,
Innovation and Entrepreneurship at George Washington University, PhD in management of technology;
Nonkilling Economics and Business Research Committee, Center for Global Nonkilling and Visiting
Professor in Public Administration with the University of Nottingham Asia Research Institute; Center for
Global Nonkilling; Peace and Conflict Studies, “Fusion Energy for Peacebuilding: A Trinity Test-Level
Critical Juncture,” Vol. 29, No. 1]

Given these developments, as with the original Baruch Plan, the fusion burning plasma breakthrough creates the opportunity for a new nuclear normative order, a new Baruch Plan, this time based on fusion energy while likewise being oriented towards perpetual peace. Path dependence indicates, as with the Baruch Plan and the Montreal Protocol, this
would be a hybridized approach, an innovative specialized policy framework relying upon the external legitimacy of the IAEA, followed by the
U.N., and supported by the IAEA. It would be tasked with accelerating the development of fusion energy; ensuring co-development by the
Global South in pursuit of a “Future Fusion Economy” that is both competitive with, and complementary to, renewables; and applying it to the
grand challenges of climate change, energy for all, and peace, via its accelerated commercialization (Carayannis et al., 2020a; Carayannis et al.,
2020b; Carayannis et al., 2022).

If such a framework can be designed and realized, developing fusion energy will mitigate against conflict. As with the original Baruch Plan, incentivization is critical. Beginning with the Global South, G77 co-development of fusion energy through funding around 6-10 competing public- and private-sector cost-sharing DEMO projects up to the sum of around 30 billion dollars over two decades via their sovereign wealth funds lessens the risk of fusion energy’s accelerated arrival destabilizing their economies. It achieves this via the Global Commission to Accelerate Fusion Energy providing them with a stake in
the new fusion-based Future Fusion Economy through co-ownership of core fusion energy patents, insuring them against fusion competing with
fossil fuels (see Suggested ToR for Global Commission to Accelerate Fusion Energy and Technical Annexes, available at the OSF Storage data
repository: https://osf.io/hqzak).

For the West, several of their most advanced public- and private-sector fusion DEMO-phase projects, like the U.K. Atomic Energy Authority
Culham Centre for Fusion Energy’s Spherical Tokamak for Energy Production project, or TAE Technologies’ project in the U.S., could be
effectively funded to continuously innovate and engineer fusion reactors. Moreover, co-development grows the market more rapidly as Global
South countries not only have a financial stake in sales but also have sufficient knowledge to build and operate their own fusion reactors. This
agreement would benefit both the Global South and the mainly Western global fusion innovation ecosystem, like the U.K.’s South West Nuclear
Hub. G77 co-ownership of patents would also benefit the global innovation ecosystem, as third-party countries would be less likely to reverse
engineer and sell technology to Global South countries that those same countries co-owned.

Finally, in that the global commission would establish ownership of fusion IP and implement a robust sanctions mechanism for breaches of
patents, a regime will be established whereby core patents held by Western companies could be securely licensed to China. Sino-U.S. relations
should then improve as a new baseline for technological cooperation is developed and implemented, a return to the pathfinding element of
fusion as a clean energy technology and basis for science diplomacy (Claessens, 2020). Revisiting the example of the Spratly Islands, the
accelerated arrival and commercialization of fusion power in the 2030s-2040s to contribute to transitioning from fossil fuels (National
Academies, 2021) would mean a railgun-powered military conflict over the islands would lack political utility. The most dangerous period
between the deployment of railgun weapon systems in the 2020s and the burning plasma in the 2030s-2040s, when military planners begin
contemplating a fusion-powered railgun arms race, would be governed by work towards the new nuclear order.

In situating NKGPS within the QHFIE framework, we have resurrected the U.S. goal, embodied in the U.N. and in the Baruch Plan, as well as in
the Atoms for Peace program and in the ITER project, of a demilitarized world with access to inexpensive energy (Carayannis & Draper, 2021).
The fusion energy critical juncture will introduce a genuine scientific paradigm shift (Kuhn, 1970), a term
typically overused in the literature but appropriate here as conflict over fossil fuel resources could, within this century,
subside. Given so much U.S. foreign policy is geared towards a culture of war in large part due to the securitization of fossil fuel energy (Marsella, 2011), much U.S. domestic and foreign policy could then shift from a killing-prone nature to a killing-avoiding one within the unfolding fan of nonkilling alternatives. This could result in demilitarizing other societies. Demilitarizing would mean increased funding for public infrastructure and
services, enabling the U.S. to revisit welfare reforms abandoned during the rise of its military industrial complex (Hooks & McQueen, 2010).
Further, demilitarizing does not present an existential threat to the U.S. military-industrial-congressional complex (MICC) (LeLoup, 2008). The
MICC can re-purpose itself for a post-fusion world, towards domestic and foreign aid to coordinate a global Fusion for Peace program to
address energy for all and climate change, to ensure planetary defense (National Science and Technology Council, 2018), and to conduct space
exploration (Dawson, 2017).

In terms of Paige’s funnel of killing, preventing fusion-powered weaponry primarily requires action at the level of the structural reinforcement
zone, where socioeconomic arguments, institutions, and material means predispose and support a discourse of killing (Evans Pim, 2012, p. 116,
citing Paige, 2009, p. 76). Motlagh (2012, pp. 103–5) states that images of perpetual peace and weapon-free zones matter, as do actions like
removing economic support for lethality and protecting human rights. In the U.S., the basic Kantian concept of perpetual peace (Kant, 2003; see
Terminski, 2010) translated into President Roosevelt’s human security paradigm, as embodied in the 1941 State of the Union address (the Four
Freedoms Speech; see Kennedy, 1999) and then eventually into the Universal Declaration of Human Rights, adopted by the U.N. General
Assembly on December 10, 1948 as Resolution 217, in its 183rd session. Following Motlagh, we emphasize protecting the environment and the
ecological responsibility of humanity to manage the planet’s climate responsibly in the Anthropocene Era. Consequently, addressing
climate change via our hybridized specialist fusion governance instrument, the global commission, also serves as an
inspiration for peacebuilding.
A strategic North-South partnership on developing fusion energy that re-engages the U.S. and China in science and energy diplomacy should
also stimulate negotiations to use fusion energy for solely peaceful purposes. At the time of the original Baruch
Plan, the U.S. and the U.S.S.R, divided by ideological differences, lacked a common language for negotiations. We suggest that negotiations via
the Global Commission for Urgent Action on Fusion Energy would start to create that common language. They would lead to a new Baruch Plan via a technical report and business prospectus that employ the NKGPS peacebuilding, life-affirming paradigm of nonkilling, as a science-based philosophy of survival through cooperation that advocates pursuing mutually beneficial goals to overcome deadly antagonisms. This is possible because NKGPS
specifically emphasizes that “science provides knowledge for liberation from lethality” and advocates humanity adopting multiple peace-
bringing big science projects (Paige, 1996, p. 9).

In other regards, the North-South innovation diplomacy required to rapidly develop and direct fusion energy for peaceful ends would essentially revisit the same basic philosophical arguments regarding realizing perpetual peace that were triggered
by the Trinity Test critical juncture, provoking the U.N. normative global governance regime. Once again, a completely novel nuclear energy
source will emerge that could be militarized. Once again, there will be a momentous opportunity for peacebuilding,
involving the U.S. and the West, the Global South, and China. And once again, the U.S. will be challenged to provide global leadership. Its incentive will be the possibility of revitalizing the flagging Washington consensus-based approach to global development (Löfflman, 2019), fueled by a Fusion for Peace program through a Universal Global Peace Treaty, a successor to the world’s first Global Ceasefire, called as a response to Covid-19 (Gifkins & Docherty, 2020). A UGPT could rejuvenate the U.N. System in permitting humanity the opportunity to live without fear, or at least with less fear, while utilizing fusion power to help address climate change and achieve the U.N. Sustainable Development Goal of energy for all, while reaching for other goals, like the colonization of space,
facilitated by fusion drives (United States Department of Energy, 2021).
1AC---Competition---ADV
Advantage two is competition.
Russian dominance of uranium exports is the backbone of broader energy hostage-keeping.
Bearak ‘23 [Max; reporter for The New York Times covering energy politics, global climate
negotiations and new approaches to reducing greenhouse gas emissions---Carleton College; 6-14-2023;
"The U.S. Is Paying Billions to Russia’s Nuclear Agency. Here’s Why” New York Times;
https://www.nytimes.com/2023/06/14/climate/enriched-uranium-nuclear-russia-ohio.html; ‫]بالل‬
It is one of the most significant remaining flows of money from the United States to Russia, and it continues despite strenuous efforts among
U.S. allies to sever economic ties with Moscow. The enriched uranium payments are made to subsidiaries of Rosatom, which in turn is closely
intertwined with Russia’s military apparatus.

The United States’ reliance on nuclear power is primed to grow as the country aims to decrease reliance
on fossil fuels. But no American-owned company enriches uranium. The United States once dominated
the market, until a swirl of historical factors, including an enriched-uranium-buying deal between Russia and the
United States designed to promote Russia’s peaceful nuclear program after the Soviet Union’s collapse, enabled
Russia to corner half the global market. The United States ceased enriching uranium entirely.
The United States and Europe have largely stopped buying Russian fossil fuels as punishment for the Ukraine invasion. But building a new
enriched uranium supply chain will take years — and significantly more government funding than currently allocated.

That the vast facility in Piketon, Ohio, stands nearly empty more than a year into Russia’s war in Ukraine is a testament to the difficulty.

Roughly a third of enriched uranium used in the United States is now imported from Russia, the world’s cheapest producer. Most of the rest is imported from Europe. A final, smaller portion is produced by a British-Dutch-German
consortium operating in the United States. Nearly a dozen countries around the world depend on Russia for more than
half their enriched uranium.
The company that operates the Ohio plant says it could take more than a decade for it to produce quantities that rivaled Rosatom. The Russian
nuclear agency, which produces both low-enriched and weapons-grade fuel for Russia’s civilian and military purposes, is also responsible in
Ukraine for commandeering the Zaporizhzhia nuclear power plant, Europe’s biggest, sparking fears that a battle over it could cause leaks of
radioactive material or even a larger meltdown.

“We cannot be held hostage by nations that don’t have our values, but that’s what has happened,” said Senator Joe
Manchin III, the West Virginia Democrat who leads the Senate’s energy committee. Mr. Manchin is the sponsor of a bill to rebuild American
enrichment capacity that would promote federal subsidies for an industry the United States privatized in the late 1990s.

Nuclear-power vulnerability

The reliance also leaves current and future nuclear plants in the United States vulnerable to a Russian
shutdown of enriched uranium sales, which analysts say is a conceivable strategy for President Vladimir V. Putin, who often
wields energy as a geopolitical tool.

Russia weaponizes its stranglehold for revisionism, encouraging Iranian adventurism.


Galushchenko ‘24 [German; Ukrainian lawyer currently serving as Minister of Energy of Ukraine---
member of the American Society of International Law. Arbitrator from Ukraine of the International
Centre for Settlement of Investment Disputes in Washington, D.C. He also served as an ICC arbitrator.; 3-
20-2024; "Want to help Ukraine? Decouple from Russia’s nuclear industry"; Hill;
https://thehill.com/opinion/international/4530325-want-to-help-ukraine-decouple-from-and-sanction-
russias-nuclear-industry/; ‫]بالل‬

A powerful weapon in Vladimir Putin’s war machine is Rosatom, Russia’s state-owned nuclear energy
giant. It has long sought to generate dependence among U.S. allies and supported the nuclear ambitions of American adversaries, especially Iran.

A hearing in the U.S. House Foreign Affairs Committee earlier this week laid bare the growing threat, with Europe Subcommittee Chairman
Thomas Kean (R-N.J.) calling Rosatom “one of the most nefarious tools of Russian malign influence.”

Rosatom is deploying what is nominally civilian nuclear energy as a weapon of war in Ukraine at the Zaporizhzhia
nuclear plant. Meanwhile, in the Middle East, it supports Iran’s nuclear program in a manner that emboldens
Tehran’s adventurism in the region, including Iran’s backing of Hamas against Israel.

It is critical that the U.S. and its allies recognize the reality that they too remain vulnerable to the Kremlin’s
weaponization of energy. This dependence is not simply through Russian oil and gas, but also results from Rosatom’s
extensive operations around the globe and, at times, the West’s own continued reliance on Rosatom’s civilian
nuclear energy infrastructure.

The U.S. and European Union are spending billions of dollars to fight Putin and starve his war machine. But
the Western nuclear industry often pumps money right back into Russia through its ongoing
cooperation with Rosatom and its subsidiaries, such as AtomEnergoProm, the eighth largest contributor to Russia’s state
budget.

Rosatom builds more nuclear reactors globally than any other entity. In the absence of sanctions against the Russian nuclear industry,
Moscow uses Rosatom to circumvent existing U.S. and EU sanctions, thus reducing their impact and
fueling Kremlin propaganda that sanctions don’t work.

As Kean noted in his opening remarks, Russia’s GDP grew three percent in 2023, which exceeded the growth of all G7
economies.

Despite successes in finding alternative sources of oil and gas and developing renewables, the
world is still too reliant on Russian
civilian nuclear energy. But it doesn’t have to be. In fact, under the right conditions, the transition away from
Russian nuclear can occur more quickly than the transition away from Russian oil and gas.

Russia leads the world’s civilian nuclear energy market, accounting for more than half of all global
uranium enrichment. Rosatom alone accounts for 36 percent of the world market share in uranium enrichment and supplies reactor
fuel to 78 reactors across 15 countries, including several EU members and the U.S. With Russia’s stranglehold on the world’s
nuclear energy supply, it retains a powerful energy weapon it can use against the West and throughout the
world.

However, investments in the revitalization of the U.S. nuclear industry can ensure decoupling from
Russian civilian nuclear energy supply chains over the long-term. The immediate focus should be on replacing Russian
nuclear fuel, spare parts, maintenance and equipment supply chains with Western ones.
That draws in global powers---nuke war.
Dolzikova and Savill ’24 [Darya Dolzikova is a research fellow in proliferation and nuclear policy at
the Royal United Services Institute in London; Matthew Savill is the director of military sciences at the
Royal United Services Institute in London; 4/26/2024; Bulletin of the Atomic Scientists; “Why Iran may
accelerate its nuclear program, and Israel may be tempted to attack it,”
https://thebulletin.org/2024/04/why-iran-may-accelerate-its-nuclear-program-and-israel-may-be-
tempted-to-attack-it/]//LASA-AB
Israel’s strike was carried out against an Iranian military site located in close proximity to the Isfahan Nuclear Technology Center,
which hosts nuclear research reactors, a uranium conversion plant, and a fuel production plant, among other facilities. Although the attack did
not target Iran’s nuclear facilities directly, earlier reports suggested that Israel was considering such attacks. The
Iranian leadership
has, in turn, threatened to reconsider its nuclear policy and to advance its program should nuclear sites be
attacked.

These events highlight the threat from regional escalation dynamics posed by Iran’s near-threshold nuclear capability, which grants Iran the perception of a certain degree of deterrence—at least against direct US retaliation—while also serving as an
understandably tempting target for Israeli attack. As tensions between Israel and Iran have moved away from their
traditional proxy nature and manifested as direct strikes against each other’s territories, the urgency of finding a timely and non-military
solution to the Iranian nuclear issue has increased.

A tempting target. While the current assessment is that Iran does not possess nuclear weapons, the Islamic Republic maintains a
very advanced nuclear program, allowing it to develop a nuclear weapons capability relatively rapidly, should it decide to do so. Iran’s “near-threshold” capability did not deter Israel from undertaking its recent attack. But Iran’s nuclear program is a tempting target for an attack that could have potentially destabilizing ramifications: The
program is advanced enough to pose a credible risk of rapid weaponization and at a stage when it could
still be significantly degraded, albeit at an extremely high cost.
Iran views its nuclear program as a deterrent against direct US strikes on or invasion of its territory, acting as an insurance policy of sorts against
invasion following erroneous Western accusations over its nuclear program, ala Iraq in 2003. That’s to say, during an attempted invasion, Iran
could quickly produce nuclear weapons. This capability allows Iran’s leadership to engage in destabilizing activities in the region with a
(perceived) limited likelihood of retaliation against its own territory. Concerns over escalation and a potential Iranian push toward
weaponization of its nuclear program may have been one of multiple considerations that contributed to the US refusal to take part in Israeli
retaliatory action following Iran’s April 13 strikes on Israel.

Israel sees the Iranian nuclear program as an existential threat and has long sought its elimination. For
this reason, reports that Israel might have been preparing
to target Iranian nuclear sites as retaliation for Iran’s
strikes against its territory came as little surprise. Israel’s attack on military installations near Iranian nuclear facilities—and
against an air defense system that Iran has deployed to protect its nuclear sites—appears to have been calibrated precisely to make the point
that Israel has the capability to directly attack heavily-protected nuclear sites deep inside Iran. Some commentators have speculated that
subsequent strikes on Iranian nuclear sites may still be desirable or necessary.

<<FIGURE OMITTED>>

In this context, Iran’s nuclear sites will continue to present a tempting target for Israel in any further
escalation of the conflict between the two. Moreover, Israel may also conclude that its own undeclared nuclear capability has
failed to act as a deterrent against two major assaults on its territory. The attacks by Hamas on October 7 and Iran on April 13 probably added
to Israel’s sense of strategic vulnerability, although that perception may have been partly alleviated by the largely successful defense against
Iran’s attempted drone and missile strikes.

Israel has historically targeted Iran’s nuclear program through relatively limited sabotage in the form of cyber-attacks, assassinations of
scientists, and bombs placed at Iranian nuclear facilities. This strategy has allowed Israel to repeatedly roll the clock back on Iran’s nuclear
progress while maintaining some level of credible deniability and avoiding further military escalation, therefore largely remaining within the
“rules” established by Israel and Iran in conducting their shadow war. Now, with
both countries openly striking each other’s
territory, Israel may see this as an opportunity—or feel compelled—to target Iran’s nuclear facilities directly.

A range of bad options. The possibility of Iranian weaponization and Israeli attacks on Iran’s nuclear sites
could lead to a serious escalation spiral and, potentially, a wider military conflict in the region.

Should Iran anticipate that Israel is preparing to carry out strikes against its nuclear sites, it may decide to rush toward producing a nuclear weapon before Israel has the time to inflict any significant damage on its ability to do
so quickly. In turn, expecting an anticipatory push toward weaponization by Tehran, Jerusalem may be
incentivized to carry out strikes to pre-empt Iran from acquiring a nuclear weapon. The disparity in timelines
here favors Israel and creates risk for Iran: The former could attempt a strike in a short period—maybe days or weeks—whereas it would
probably take Iran several months to a year from the point of decision to have a viable weapon, although estimates remain uncertain. Yet,
through the advanced state of its nuclear program, Iran may be able to make significant advances toward a deployable nuclear weapon before
the International Atomic Energy Agency (IAEA)—or indeed, Israeli intelligence—catches on to developments, which would limit the time Israeli
planners would have to mount a pre-emptive response.

Tehran may make the decision to build nuclear weapons in response to a limited Israeli strike on its
nuclear facilities. The Iranian nuclear complex is too dispersed, key facilities too hardened, and nuclear expertise too consolidated to be
eliminated through limited military strikes. Iran’s uranium enrichment facilities at Natanz and Fordow, where Iran produces the fissile material
needed to produce nuclear weapons, are either fully (in the case of the enrichment facility at Fordow) or partially (at Natanz) underground and
are heavily defended. Any Israeli strike that would cause damage to other Iranian nuclear sites—such as its centrifuge production or uranium
conversion facilities, or even the not-yet-operational Khonab heavy water research reactor—would set the program back but would ultimately
leave Iran with the ability to keep ramping up its uranium enrichment, potentially moving toward the production of weapons-grade uranium
(enriched to 90 percent uranium 235). Any
work Iran may be currently conducting to weaponize its nuclear
technology—even as the US intelligence community assesses it is not doing so—would probably be performed in dispersed and
undisclosed locations, making military targeting very challenging.

Following past instances of Israeli sabotage against the Iranian nuclear program, Tehran has doubled down—rebuilding damaged sites,
hardening facilities, and ramping up its nuclear activity. The same is likely to be true should Iranian facilities be targeted directly this time, only
to a greater degree. The shift from a proxy conflict between Iran and Israel to a direct engagement will only increase the value Iran places on its
nuclear program as a deterrent against further direct attack on its territory and US military intervention. Should Iran assess that its regional
proxies and its missile and drone capabilities have been insufficient to deter Israel from conducting direct strikes against its strategically
significant nuclear program, Tehran may see the actual weaponization of its nuclear program as the only option left that can guarantee the
security of the Iranian regime.

Unfortunately, an Israeli attack on non-nuclear Iranian assets may lead the Iranian leadership to reach a similar conclusion. As others have
discussed elsewhere, since the Hamas attack on Israel on October 7, Iran’s weaknesses in deterring aggression against its assets in the region
and capitalizing on the ongoing instability to advance its own security priorities have become apparent. Such weaknesses may be increasing the
perceived strategic value of its nuclear program to Iranian leadership.

Short of developing a full nuclear weapons capability, Iran may first respond by enriching uranium to weapons-grade levels. While weapons-
grade uranium alone is not enough to produce a nuclear weapon, it would be a decisive step in that direction. Iran may also retaliate for
further Israeli attacks by withdrawing from the Nuclear Non-Proliferation Treaty (NPT). A withdrawal would be
followed by the exclusion of IAEA inspectors from the country. Although Iran has significantly restricted inspector access in recent years, the
IAEA continues to monitor and report on key aspects of the Iranian nuclear program. An Iranian withdrawal from the NPT would leave the
international community with no visibility of developments in the program apart from national intelligence collection or satellite imagery.

Such uncertainty—and a formal reneging of Iran’s commitment under the NPT to forego a nuclear weapons capability—risk seriously exacerbating regional instability. An Iranian withdrawal from the NPT may also incentivize nuclear proliferation in
the region, with Saudi Arabia having previously threatened to acquire nuclear weapons if Iran does.

All or nothing. The counterproductive effect of a limited strike on Iran’s nuclear program could lead Israel to
consider a large-scale military operation to set the program back as decisively as possible. This option, however, would
almost certainly result in an all-out, highly destructive war between Iran and Israel, probably dragging other regional factions, the United States, and possibly others into the conflict.

Putin uses uranium to hold the US hostage and threaten energy shocks.
Johnson ’24 [Keith and Amy Mackinnon; April 4; Reporter at Foreign Policy covering geoeconomics
and energy; National security and intelligence reporter at Foreign Policy, International Masters in
Russian, Central and East European Studies at University of Glasgow; Foreign Policy, “U.S. Reactors Still
Run on Russian Uranium,” https://foreignpolicy.com/2024/04/04/us-nuclear-reactors-russian-uranium/]

“Currently, Russia and China dominate the market, and Russia has proven its desire to weaponize energy for
geopolitical gain,” Walter said. Together with the huge expansion planned for civil nuclear power, “it’s a combination that is driving this
push.”

The United States relies on Russia for about one-quarter of the enriched uranium to power its 93 nuclear reactors. It’s a relationship that was cemented by the Megatons to Megawatts program created after the fall of the Soviet Union in which
decommissioned nuclear weapons were reconstituted to provide fuel for U.S. reactors and strengthen diplomatic ties between the erstwhile
Cold War rivals. “It was thought to be a very strategically valuable way to ensure that these capabilities weren’t being used for nefarious
purposes,” Crozat said.

But in the wake of Russia’s invasion of Ukraine and ongoing Western efforts to decouple from reliance on Russia for any key materials, calls are
growing louder on Capitol Hill to sever that decades-old dependence. The House of Representatives passed legislation last year that would ban
imports of Russian uranium by 2028, a call echoed by the U.S. energy secretary, though it has not yet been taken up by the Senate.

“One of the most urgent security threats America faces right now is our dangerous reliance on Russia’s
supply of nuclear fuels for our nuclear fleet,” said Rep. Cathy McMorris Rodgers, the chair of the House Energy Committee,
discussing the bill ahead of its passage. “We’ve seen how [Russian President Vladimir] Putin has weaponized Europe’s reliance on Russian
natural gas. There’s no reason to believe Russia wouldn’t do the same with our nuclear fuel supply, if Putin
saw an opportunity.”

Unlike with natural gas, though, Moscow has not so far tried to use its dominant position in uranium supply as a cudgel. Rosatom, Russia’s
state-owned nuclear behemoth, continues to supply utilities in the United States and across Europe and in other countries under long-term
contracts, even increasing its export revenue during the war. Rosatom, even more than its gas-producing counterpart Gazprom, is keen to
preserve its reputation as a reliable supplier, said Dmitry Gorchakov, a nuclear advisor at Bellona, a Norway-based environmental nonprofit.

Rosatom has played a leading role in Russia’s nuclear energy diplomacy around the world, acting as a one-stop shop for construction, fuel, and
waste disposal. Russia was named as the supplier in half of all international agreements to build and sustain nuclear power plants between
2000 and 2015, according to a study published in Nature Energy, providing Moscow with other avenues of influence.

But Putin could still order the firm to curtail supplies, which would cause a serious short-term supply crunch for utilities in Europe and the United States, Gorchakov said, as well as raise already-high prices for
enriched uranium. Such a move could be even more palatable for the Kremlin than the weaponization
of gas exports because of the relatively small dollar amounts involved: Rosatom only makes about $3 billion a year from uranium fuel sales
abroad, versus the $200-odd billion it made in oil and gas sales before the war wrecked its main export market.

One sign that nuclear operators everywhere are getting nervous about a sudden Russian fuel embargo was the
huge uptick in imports of Russian fuel by Eastern European countries last year. Those countries all run Soviet-era reactors that require special
fuel assemblies usually made by Rosatom and its subsidiaries.

“Now we see it as a trend—almost all those countries are buying much more fuel. Our hypothesis is that they are trying to make stockpiles”
against a possible Russian supply cut, Gorchakov said. While Westinghouse, the U.S. nuclear company, has made great strides in providing some
fuel assemblies for Soviet-era reactors, notably in Ukraine, it is still working to be able to reliably fuel the majority of the older-generation
reactors that run power grids in countries such as the Czech Republic, Hungary, Slovakia, and Finland.
The big question is whether the United States and other Western countries can increase their production of enriched uranium, even the old-school variety used in present-day reactors, quickly enough to provide supply assurances against any Russian interruption.

Russian energy confrontations cause great power war.


Danylyuk ’19 [Oleksandr; 2019; Former Ukrainian finance minister and former secretary of the National
Security and Defense Council; Military Times; “How gas can lead to war (but not where you might
think),” https://www.militarytimes.com/opinion/commentary/2019/11/19/how-gas-can-lead-to-war-
but-not-where-you-might-think/]

Despite all efforts made by the U.S. and a number of European countries, next year Russia
can receive an unprecedented
instrument of pressure on Europe, which can be used not only to intensify a hybrid war, but also to open the way to a
full-scale European military conflict that can affect not only Ukraine, but also those countries in Eastern Europe that are NATO
members. In fact, the completion of the Nord Stream 2 is the biggest security challenge in Europe since the 1960s and puts the
world at risk of a new global conflict.
This is by no means an exaggeration. All Russian gas pipelines that were built and are being built in recent years, including Nord Stream 2, have
no economic sense. Thanks to these gas pipelines, Russia does not increase the volume of gas it supplies to the EU, but reduces the use of
existing gas pipelines passing through the territories of such European countries as Ukraine, Belarus, Poland, Slovakia, Czech Republic, Romania,
Hungary, Slovenia, Austria and Croatia. Using
new gas pipelines, Russia is able to move to economic pressure on these
countries without worrying more about dependence on gas transit through their territories. The very opportunity to exert this economic pressure and stop gas supplies to the above countries was the real
purpose of the Nord Stream-2 and other bypass pipelines.

Economic coercion is an inherent part of modern hybrid warfare, and the Russian Federation reached
unsurpassed skill in its application. Gazprom, the largest gas supplier in Europe, controlled by the Russian government, is the
flagship among the economic means used by the Russian Federation to achieve its geopolitical goals, including election interference and regime
changes in Eastern and Central European countries. Since 1991, Russia has had nearly 60 major gas conflicts with other
nations. Only 11 of them had no explicit political component. In 40 cases, Russia has imposed a suspension of gas supplies to other countries.
In addition to the above countries, Azerbaijan, Armenia and Georgia were also victims of the Russian gas war. Russia used different forms of gas
blackmail for each of these countries. In the case of Georgia, Russia even resorted to direct military aggression in
2008. One of the motives behind this aggression was to counteract the completion of the Nabucco gas pipeline, which would reduce the
European Union’s dependence on Russian gas. The object of direct Russian attacks during the Russo-Georgian war was also the BTC oil pipeline,
which also reduced European dependence on Russian oil.

As we can see, in its energy policy Russia is not limited to hybrid methods, but is able to move to full-scale
military interventions. From this point of view, it is important to realize that Russia’s current dependence on gas
transit through the territories of the aforementioned countries of Eastern and Central Europe restrains Russia not
only from the energy blackmail of these countries but also from military aggression against them. Obviously, such
aggression against these countries is impossible, first of all, due to the inevitable cessation of Russian gas
supplies to Western Europe in such a case. After starting the Nord Stream 2, Russia gets rid of these
limitations and will certainly move to a more aggressive policy towards the countries of Eastern and
Central Europe, including those countries that are already members of NATO. Such an aggressive policy may involve not only economic coercion, but also provocation of military conflicts both through the use of
proxies and with the direct involvement of Russian regular troops. Thus, refusing to start this pipeline is not only a
defense of the economic interests of these countries, but also of their territorial integrity.

Fusion solves---it ends uranium reliance.


Ferrell ‘20 [Matt; Creative Director with over 15 years of management experience that excels at
leading teams, mentoring, and helping to establish consistent design standards---Currently a YouTube
creator specializing in sustainable and renewable technologies ; 4-28-2020; "The truth about nuclear
fusion power – it’s coming"; Undecided with Matt Ferrell - Exploring how technology impacts our lives;
https://undecidedmf.com/episodes/the-truth-about-nuclear-fusion-power/; ‫]بالل‬

Fusion is the opposite of fission, and it’s what’s happening in our sun and all stars in the universe. Fusion is when two atoms slam
together to form a heavier atom, like two hydrogen atoms fusing together to form one helium atom. The power released in this
process is several times greater than the power released from fission.

Fission reactions don’t occur normally in nature, but as I mentioned before, fusion occurs in stars. One big benefit of fusion over fission is that
it doesn’t produce highly radioactive fission products. It’s also safer because of how the nuclear chain
reaction behaves. In fission, the chain reaction of splitting atoms can get out of control, which will either cause an
explosion or a reactor to melt down and release massive amounts of radioactive particles … for years and decades into the
future. While nuclear reactors are very safe in practice, there have been several notable examples of what can happen if things go wrong. All
you have to do is look at Three Mile Island in 19792, Chernobyl in 19863, or Fukushima in 20114. Fusion reactions, however, are very
different. It takes a massive amount of heat to create plasma in the center of a fusion reactor. Plasma is superheated matter that gets so hot it
rips away electrons from the atoms forming an ionized gas.5 The freed nuclei, which are positively charged and usually repel from each other,
start bouncing around at great speeds and can fuse together, which releases energy in the process.6 If a fusion reaction becomes unstable
or unbalanced, it naturally slows
down, dropping the temperature until it stops. The worst case scenario is
damage to the fusion reactor and immediate surroundings, but very little else.7

But the safety benefits aren’t the only reasons we’re working to build fusion reactors. Instead of using something like uranium or
plutonium as a fuel source, fusion reactors can use deuterium and tritium. When heated to 150 million degrees and
slammed into each other, they produce helium and a neutron. Both deuterium and tritium can
be extracted from seawater and
lithium. There’s enough fusion fuel on earth to power our planet for millions of years.

Revitalizing domestic nuclear energy production is key to maintaining non-proliferation norms.
Ahn ’23 [Alan, Josh Freed, Ryan Norman, and Rowen Price; Deputy Director for Nuclear for Third Way’s
Climate and Energy Program. Alan earned his B.S. in International Political Economy from Georgetown
University and his Master of Arts in Law and Diplomacy (MALD) from the Fletcher School at Tufts
University; led Third Way’s clean energy and climate advocacy efforts, former Chief Policy Advisor for
Representative Rob Andrews, former Deputy Chief of Staff for Representative Diana DeGette, Master's
from the University of Maryland, College Park and his Bachelor's degree from Brandeis University; Policy
Advisor for Third Way's Climate and Energy Program; Policy Advisor for Nuclear Energy at Third Wave;
Third Way, “Nuclear Fuel is a National Security Imperative,” https://www.thirdway.org/memo/nuclear-
fuel-is-a-national-security-imperative]

Building out our nuclear fuel infrastructure is a pressing and urgent issue given our significant
dependence on uranium fuel from Russia. Russia’s use of energy exports as a geopolitical weapon is
well-documented—the country has historically used the threat of supply cutoffs to influence client
states and make them more pliant to Russian interests. Continued reliance on Russian nuclear fuel is,
as a result, a threat to the sovereignty of the US and democracies around the world.
Energy security is itself a national security issue, but the national security implications of nuclear fuel go well beyond that. Third Way is
currently collaborating with partners and subject matter experts on a white paper, exploring the diverse ramifications that a reliable and strong
nuclear fuel supply chain has for our defense, national security, geopolitical, and nonproliferation interests.

Here are Third Way’s initial takes on the white paper effort thus far:

Nuclear fuel is…

…typically framed as a domestic energy and commercial issue. However, a healthy nuclear fuel sector (or lack thereof)
has profound impacts on our foreign policy and national security goals.
Not Just an Investment for Security of Supply, but an Investment for our Security

Exporting American technology is crucial to our civil nuclear leadership internationally, and the global
deployment of nuclear energy will be vital to meeting our climate goals. But allowing uranium
enrichment technology to spread widely would significantly increase nuclear security and weapons
proliferation risks.

Strengthening international confidence in reliable nuclear fuel supply is not just an energy security
issue, it serves as a linchpin to US policies in countering the spread of nuclear weapons and the
means to produce weapons-usable material. As the US works to reduce its reliance on Russian fuel, the buildout of
nuclear fuel infrastructure should be sufficiently flexible to meet not only domestic needs, but also the
needs of our allies and partners around the world.

Accordingly, nuclear fuel exports both generate revenue and strengthen international peace and
stability by mitigating proliferation risks. A substantial down payment into the nuclear fuel supply
chain is, therefore, more than an investment in our economic and industrial objectives. It’s a true
investment in maintaining and strengthening US national security.
Increasingly Vital to our Global Leadership and Presence in Nuclear Energy

US competitiveness in the global civil nuclear market is as much a national security priority as it is a commercial one. Our international
leadership and presence in nuclear energy is essential so that we can set the highest standards on
nuclear safety, security, and nonproliferation around the world.
The federal government has already invested billions of dollars into the development and deployment of innovative US advanced reactor
technologies, including significant forward funding for the DOE Advanced Reactor Demonstration Program (ARDP) Pathway 1 demonstrations
(X-energy’s Xe-100 and TerraPower’s Natrium) in the Bipartisan Infrastructure Law. Both Pathway 1 projects, as well as many other designs
supported by ARDP, will require high-assay low-enriched uranium (HALEU) as fuel–commercial supply of which is currently dominated by
Russia.

The innovative features of these advanced reactors represent a potential competitive edge for the US in the international reactor market. But
reactor designs with uncertainties around fuel supply will clearly be less competitive overseas, and the general lack of commercial HALEU
infrastructure outside of Russia currently puts US advanced reactor technology at a disadvantage. Case in point: of the six technologies recently
selected for the Great British Nuclear Small Modular Reactor (SMR) competition, not a single HALEU-fueled reactor design was chosen.
While US global leadership in nuclear energy is broadly important to our national security, at a more
granular level, new and emerging competitors are now offering technologies, components, and
materials that are outside the purview of US export controls—meaning that such transactions do not
require US export licenses, bilateral nuclear cooperation agreements with the US, etc.
Under these circumstances, a robust nuclear fuel supply chain not only supports the competitiveness of American reactors, but through
enabling the export of competitive US nuclear fuel services and products, can help generate new avenues to reestablish our market presence
and authority over international nuclear trade and commerce. This would be a win-win for both our commercial and national security interests.

Crucial to Leveling the Playing Field Against Russia and China

Our competition against Russian and Chinese state-owned nuclear enterprises has particularly
heightened national security implications for many reasons. Both Russia and China have a track record of
weaponizing energy exports to project geopolitical influence, and there are major concerns about
their capacity and willingness to serve as responsible stewards of international civil nuclear norms and
practices—Russia’s unprecedented actions around the Zaporizhzhia Nuclear Power Station in Ukraine provide a very recent and terrifying
example of Russian recklessness and negligence around civilian nuclear facilities and infrastructure.

Compounding these worries is the fact that both of these countries have been highly aggressive in engaging and
courting international nuclear energy markets. Third Way and Energy for Growth Hub recently released a map of US, Russian,
and Chinese nuclear cooperation agreements, showing that both Russia and China lead the US in the number of “hard” MOUs—bilateral
agreements with sales of actual hardware, materials, or services attached. Russia and China had hard MOUs with 45 and 13 different nations,
respectively, while the US only concluded 12 such agreements.

In large part, Russia and China’s competitive edge comes from their all-in-one export packages. These often include
training, financing, construction, operation, decommissioning, and other concessions, with fuel
supply often a part of these comprehensive offers. For some countries, there is significant allure in the fuel
provisions, ensuring a steady and reliable supply of nuclear fuel, often for the entire lifespan of the
reactor. Russia provides official commitments guaranteeing fuel for the life of exported reactors, sometimes with additional agreements to
take back spent fuel. Similarly, China is supplying all of the fuel needs for the reactors it exported to Pakistan, and while the primary focus now
is addressing growing domestic demand, their ambitious efforts to expand fuel cycle infrastructure mean that they will be in a position to offer
comparable assurances to international customers in the future. These types of offers on fuel supply can be particularly attractive for new
nuclear states and embarking nations, as they simplify the process of starting nuclear energy programs.

Building out nuclear fuel production capacity at home is an important step towards evening the odds.

A robust domestic nuclear fuel sector can support critical defense capabilities and the readiness of our
strategic forces.
Unobligated uranium—meaning uranium fuel produced domestically using only US technology—is required for critical defense applications,
some of which (including the production of tritium and naval reactor fuel) have a direct bearing on the readiness of US strategic forces. A US
commercial nuclear fuel supply chain can be a source of unobligated material, thereby supporting US strategic readiness and, by extension, the
credibility and reliability of America’s extended defense and deterrence commitments abroad.

Why We Must Act

Recent debates around the possibility of US-Saudi civil nuclear cooperation have broadly highlighted both the importance of and challenges in
implementing effective barriers to the proliferation of sensitive technologies, including uranium enrichment. With geopolitical turmoil and
conflicts emerging in various parts of the world, it is more important than ever for the United States to fully embrace its role as a force for
global peace and stability. A part of this charge is to ensure that nuclear technology and materials are not misused and remain under effective
and rigorous control throughout the world.
Our leverage in upholding and negotiating—both bilaterally and multilaterally—the highest global
standards for security and nonproliferation, including effective measures to contain the spread of
enrichment and other sensitive technologies, hinges on the different factors outlined above:
Our ability to assure countries of reliable nuclear fuel supply;

Our commercial presence and competitiveness in the global nuclear market, particularly against China and Russia;

And the strength and credibility of our extended deterrence commitments to our international friends and allies.

Without nuclear fuel, the proverbial rug will be pulled out from under all of these, threatening to
collapse our international leverage and influence on these matters like a deck of cards.

Proliferation cascades cause nuke war.


Brown ’22 [Gerald; Works as an analyst for the Department of Defense’s Nuclear Enterprise where he
aids in nuclear strategy research and conducts exercises to verify nuclear surety policy, and holds a
Master of Art in International Relations; Global Security Review, “Conflict And Competition: Limited
Nuclear Warfare And The New Face Of Deterrence,” https://globalsecurityreview.com/conflict-
competition-limited-nuclear-warfare-new-face-deterrence/]

Nuclear weapon use in a limited manner may be a serious threat, and the proliferation of nuclear weapons and
the changing state of the world into a multipolar nuclear order may encourage this. Despite tensions between the U.S.
and USSR, they were ultimately able to manage this competition in a bipolar nuclear world; this competition for advantage and security ended
with the eventual collapse of the USSR. The security dilemma ran its course without the use of nuclear weapons, and the U.S. rose to become
the hegemon of a unipolar world. However, in a multipolar nuclear world, the challenges faced previously are
significantly exacerbated. Currently, the nine known nuclear-weapon states are the United States, the United Kingdom, Russia,
China, France, Israel, Pakistan, India, and North Korea.[14] Strategies that worked in a bipolar world may not be as effective
in the modern landscape, thus preventing the failure of deterrence—and the subsequent use of a nuclear
weapon—may be more challenging than before.
The most recent nuclear state, North Korea, is one of the most troubling in the current group of nuclear states. North Korea is one of the
world’s poorest states, facing harsh sanctions and isolation from much of the international community. Yet, despite the hardships, poverty, and
poor economy of this autocratic state, it managed to defy the nonproliferation regime and create a fully operational nuclear arsenal.[15]
Pyongyang is not bashful about its willingness to use its weaponry either, stating that it will use its weapons to “reduce the U.S. mainland to
ashes and darkness.”[16] Such a clear security threat may increase proliferation elsewhere in response. Allison calls this the “nuclear
cascade,” and suggests that if a state as weak and isolated as North Korea can defy the non-proliferation
regime, other states are likely to follow suit.[17] If the United States is incapable of preventing such a clear security threat, why
would Tokyo and Seoul rely on Washington to defend them in the face of a nuclear threat? Japan already has the capability to build nuclear
weapons, possessing well-developed uranium enrichment and missile programs that could allow Japan to rapidly create a credible nuclear
weapons program to defend itself and its national interests without the United States. According to The Council on Foreign Relations, there are
thirty states that have the technological ability to quickly build nuclear weapons.[18]
While Pyongyang claims offensive intentions, it is incredibly unlikely to attempt to use its nuclear forces offensively against the United States.
Doing so would be an act of suicide, the disparity between U.S. and North Korean forces is far too great. Instead, these weapons were more
than likely obtained for defensive purposes. Pyongyang may not be able to destroy the United States, but it can ensure its own sovereignty.
Forcibly trying to topple the Kim regime could escalate into the use of nuclear force if Pyongyang got desperate, and a strike designed to
eradicate their nuclear weapons would again invoke this “use it or lose it” mentality. While Pyongyang may not be able to destroy the U.S. with
its capabilities, it can undeniably cause immense harm to the US. It could cause even greater harm to smaller, closer countries such as U.S. allies
Japan and South Korea. Knowledge of this is a strong deterrent against U.S. intervention, allowing Pyongyang to carry on less cautiously without
fearing foreign intervention. The creation of this deterrent may have effectively ensured the sovereignty of the Kim regime for the time being,
and they are unlikely to relinquish this guarantee. The establishment of this deterrent highlights some of the challenges in the modern nuclear
era. North Korea’s outright defiance of the nonproliferation regime sends a signal that other states can build a nuclear
capability as well and that such a force may be an effective way to guarantee their sovereignty against the Western world.[19]

Proliferation to autocratic states is a cause for concern, primarily because they are considerably less stable than democratic states and may be more willing to utilize a nuclear weapon. The inherently volatile nature of these regimes poses a significant challenge. North Korea has a very poor and impoverished populace, held under authoritarian rule. Regimes such
as these are not known for their longevity and stability. The threat of regime change and revolt from within is a realistic
consideration with autocratic states. If this occurred, it could result in the loss of a nuclear weapon, or their domestic
use to quell a rebellion.[20] It could also escalate into conflict as Chinese and U.S. forces both seek to secure their nuclear
assets and end up in conflict with each other. China would certainly not accept U.S. forces along the Yalu river, and both would want to
immediately seek to ensure the stability of Pyongyang’s nuclear assets.

Autocratic states could also safely assume that Western powers would prefer it if they were a democratic government friendly to the West.
With the international liberal orders push for global democracy, autocratic rulers are likely to fear Western interference. After Pyongyang’s
recent success, a nuclear weapons capability may appear to be an effective way to prevent Western interference and ensure its sovereignty.

With smaller autocratic states, the constant external and internal threats to the stability of their regimes breed paranoia and volatility. Leading government officials tend to be promoted based on loyalty rather than competence, and disagreement or discontent with the dictator may be punished harshly, stifling progress and ingenuity. These
regimes also tend to have strong military leadership directing the country. Pakistan is notable in this regard, where the military
maintains significant control over the government and has a history of instigating a military coup when they dislike civilian leadership. Pakistan
has had four separate military coups since its creation, with military dictators constantly consolidating their power into the executive branch.
[21] Military leadership is far more likely to see nuclear weapons use as a viable option, which increases the instability of nuclear autocratic regimes even further. Civilian leadership has arguably been a key factor in preventing
nuclear use thus far. Military officers often possess a different mindset and attitude on the subject than civilian leadership due to their career
path. During the Cold War, there were numerous instances where the Joint Chiefs of Staff were far more willing to utilize nuclear weapons in a
preventive war and were reined in by U.S. civilian leadership.[22]

Throughout the Cold War, there were numerous false alarms; equipment detected missile launches that did not exist, drills were confused for real launches, and communication cut-offs and the “fog of war” nearly led to nuclear use.[23] If faced with similar threats, it is less likely that an autocratic state will respond in such a level-headed manner. With shorter-range nuclear weapons, this could be exacerbated. These states are less likely to have a robust, survivable nuclear arsenal. If a state’s nuclear arsenal is threatened, it is likely to take action to ensure
its survival or use. Without having the same geographic separation that the U.S. and USSR did, several states today rely on shorter-range
weapons, like short-range missiles and multi-role fighter/bomber aircraft. Whether these weapons systems carry nuclear or conventional
payloads may be unknown; being forced to make a rapid decision to respond to a potential threat may push a state over the edge to ensure its
security.[24]

Particularly concerning, at least in regard to stability, is the smaller size and the heightened vulnerability of
many arsenals compared to other states. The multipolar nuclear order lacks the same levels of parity both quantitatively and
qualitatively that were present in the Cold War. The number of weapons between states varies significantly. While exact
numbers are typically classified, experts have estimated a range varying from approximately 20 warheads in North Korea, to around 6,000 for
both the U.S. and Russia.[25] Destroying all the nuclear weapons in North Korea is significantly easier to do than
performing the same action against the U.S. or Russia, and this may be especially true with an even newer autocratic state
that develops a brand-new nuclear capability. The parity dilemma further extends to conventional capabilities. A state with inferior conventional capabilities, such as North Korea compared to the U.S. or Pakistan compared to India, may feel pressured into utilizing, or at least threatening to use, its nuclear capabilities to make up for its inferiority. If a
nuclear-armed state lacks an effective conventional response option and is faced with a crisis that threatens its security, it may decide to
escalate with a limited nuclear strike to preserve its integrity and security.
Plan---1AC
The United States federal government should strengthen its protection of nuclear
fusion patents.
Solvency---1AC
Finally, solvency.
Patent protection solves---it increases private investment whilst avoiding trade
secrets, facilitating universal development.
Wooley and Baker ‘22 [Russell; part of the Chemistry team at Carpmaels. He has a diverse portfolio
of work across a broad range of core chemistry technologies, with a particular focus on energy storage
and ophthalmic devices -- MChem (University of Oxford) PhD Materials Science (Imperial College
London) Chartered Patent Attorney European Patent Attorney European Design Attorney CertIP
Certificate in Intellectual Property Law UPC Representative – Senior Associate; Adam; Associate -- works
within the Engineering and Tech teams at Carpmaels & Ransford -- MEng Mechanical Engineering &
Aeronautics (Durham University) Registered Patent Attorney European Patent Attorney CertIP
Certificate in Intellectual Property Law UPC Representative; March 21 2022; "Patenting fusion";
Carpmaels and Ransford; https://www.carpmaels.com/patenting-fusion/; ‫]بالل‬

Once consigned to the realms of science fiction or dismissed as being perpetually 20 years away from fruition, there is a growing hope that fusion power may be on the path towards commercialisation. In a field traditionally dominated by large, state-funded efforts, a number of entities in the private sector are taking different approaches to surpassing the “breakeven” point, where a useful amount of energy can be extracted from a fusion reactor compared to the significant amounts of energy required to initiate and sustain the reaction. Achieving this goal brings the promise of clean and green power, free from reliance on the uncontrollable natural phenomena required by traditional renewable power sources (e.g. wind, sunlight, etc.), and free from the unwanted radioactive waste resulting from traditional nuclear power generation. Given the universal need for clean and green power, if and when the fusion code is cracked, the whole world will want in.

Due to these advantages, achieving this goal naturally also brings the promise of commercial opportunities.
Indeed, a recent article in Nature notes the significant amount of funding obtained by the private fusion sector, said to sum to more than two
billion dollars according to an October 2021 survey by the Fusion Industry Association. In the three months following the survey, this total has
more than doubled thanks to billion-dollar backing for both Helion Energy and Commonwealth Fusion Systems (CFS), The Engineer reports.
Why the sudden cash injection to the private sector? A recent flurry of technological breakthroughs is likely
to be playing a part; the Joint European Torus (JET) made national headlines in February 2022 by comfortably
beating its previous record for production of sustained fusion energy.

JET’s big brother, the much larger ITER reactor, is slated to begin testing in 2025. ITER (a breath of acronym-free fresh air – ITER
translates as “the way” in Latin) is designed in a similar fashion to JET, and aims to be the first fusion device to produce net energy. The recent success of JET makes this claim sound all the more reasonable. The US is not far behind; its National Ignition Facility
(NIF) at the Lawrence Livermore National Laboratory produced a sizeable 1.3 million joules of fusion energy in August
2021.

Although JET, ITER, and NIF are state-funded, the public sector does not have a monopoly on the buzz
around fusion. The private sector has expanded in recent years, with the main players including: TAE Technologies,
Helion Energy, Commonwealth Fusion Systems, General Fusion, and Tokamak Energy. As the CEO of Commonwealth Fusion Systems has noted,
“Companies are starting to build things at the level of what governments can build”.

Due to the level of investment and opportunity for profit, it is not surprising that a
large number of patent applications are
filed in the field of fusion power. Analysis of patent families owned by the main players listed above shows a significant
increase in the number of filings over the last 5-10 years. Tokamak Energy has largely led the way, building a large
portfolio in a short space of time. Understandably, given where much of the research and development in this field takes place, there is a bias
for all of these applicants towards filings in the US and Europe.
Patent families owned by selected fusion companies grouped by filing date. A decrease in the number of families in the last 1½ years is expected due to the lag between filing and publication. Data and grouping of patent families obtained from Orbit Intelligence.

The International Patent Classification (IPC) provides another way of reviewing patent filing trends, where code G21B covers fusion reactors. Analysis of PCT applications classified under this code filed in the last 20 years shows a
slight increase in the number of recent filings, likely reflecting the expansion of the private sector noted above (e.g. the peak of filings in 2018 coincides with the peak of Tokamak Energy’s filing strategy). Other busy applicants in
this field include academic or non-profit institutions – from the US, the Lawrence Livermore National Laboratory and the University of California, and from France the Alternative Energies and Atomic Energy Commission (CEA /
Commissariat à l’énergie atomique et aux énergies alternatives). This likely reflects that fusion power has traditionally relied on state funding, e.g. the ITER and NIF experiments noted above.

PCT applications classified under IPC code G21B. A decrease in the number of families in the last 1½ years is expected due to the lag between filing and publication. Data obtained from Orbit Intelligence.

Delving deeper into the IPC, it is of note that there is a dedicated code for inventions in the field of “cold fusion”. Cold fusion is a proposed form of fusion said to occur at much lower temperatures than those typically used in fusion
reactors. There was a surge of interest in this concept in the late 1980s but it is now typically viewed with scepticism by the scientific community. IPC code G21B 3/00 covers cold fusion, described as “low-temperature nuclear
fusion reactors, e.g. alleged cold fusion reactors”. It may well be that the use of the word “alleged” indicates a degree of suspicion from the writers of the IPC over whether cold fusion is possible. Indeed, other uses of the term
“alleged” in the IPC are for fields firmly within the realms of science fiction, namely, various forms of perpetual motion machines (F03B 17/04, F03G 7/10, H02K 53/00, H02N 11/00, and lastly H02K 53/00 “alleged dynamo-electric
perpetua mobilia”), and also alchemy (G21G 5/00: “alleged conversion of chemical elements by chemical reaction”). Accordingly, it could be that the classification of an invention under code G21B 3/00 as relating to cold fusion
might cast doubt on whether the invention can be repeated, perhaps leading to objections of an insufficient disclosure of the invention or even a lack of industrial applicability. However, analysis of patent applications classified
under code G21B 3/00 shows that this does not necessarily signal the end of the road, as some such applications have been granted, although there are differences in approach depending on the patent jurisdiction.

Although there is a noticeable upward trend in the number of patent filings in the fusion field from the private sector, the total number
of filings remains relatively low when compared to many other fields also boasting investment, public
interest, and potential for growth. A field such as fusion is unlikely ever to reach the number of filings
seen in sectors such as medical technology or consumer electronics, where the number of units sold and relative
similarity between competing products demand a bristling patent arsenal. That said, for only one of fusion’s private-sector
pioneers regularly to have double-digit filing years seems amiss. Why might this be?
It is possible that the value of a fusion-related patent is somewhat limited by the number of potential infringers, and the divergence of approaches being taken. While all of the entities discussed in this article, private or state-
funded, are similar enough to be grouped under the umbrella-term “fusion power”, the techniques each is investigating are, in some cases, quite different. Take JET/ITER and NIF, for example, as the two leading state-funded
projects of Europe and the US, respectively: one aims to confine nuclear plasma with magnets; the other directs the world’s most energetic laser at plastic spheres. How relevant is one’s innovation to the other?

Another possibility is that the patent term of 20 years is somewhat off-putting for this industry. The lead-time on new technology making its way into research facilities is long enough: ITER’s design phase was completed in 1998,
but consistent Deuterium-Tritium Operation is planned to begin as late as 2035. Even if the private sector halved this timescale, a fusion patent might provide only one year of useful protection. Perhaps amassing a large portfolio
which relies on net energy fusion being cracked and commercialised within 20 years of making your filings is less attractive than funnelling more investment into R&D. Significant delays between filing and commercialisation are not
unique to fusion; in the pharmaceutical space, for example, clinical trials and regulatory approval can take up an appreciable amount of the 20-year patent term. Pharmaceuticals, however, can benefit from specific additional
protection in the form of Supplementary Protection Certificates (SPCs), which extend patent term by a maximum of 5 years beyond the original 20. Given the paramount importance of solving the global energy crisis, an equivalent
term extension available to fusion innovators does not seem unreasonable, although none appears to be calling for it at present.

With all of that said, a convincing case for filing fusion patents can still, and should, be made. It can certainly be argued that the
potential
value of strong protection in this field far outweighs the risk that commercialisation of the invention
takes time to materialise. Indeed, if fusion power is now actually 20 years away, rather than being perpetually 20 years away,
patent families filed now and covering core technologies may prove to be exceptionally valuable in their
final few years of term, precisely when the market is clamouring for this new-fangled fusion power.

On divergences in approach, while this is true for now, one method will be first to achieve net energy, and it is not unreasonable to expect a
significant narrowing of the breadth of approaches when this comes to pass. Moreover, notwithstanding divergent approaches, private fusion
entities should appreciate that they are not alone, and be prepared to use their patent portfolios to secure further investment ahead of their
competitors. Indeed, although private investment at the present stage may be sufficient to bring fusion power close to commercialisation,
actually achieving commercialisation, i.e. getting to the stage of building fusion reactors at a scale to replace fossil-fuel power stations, may well
require significant state investment. A patent monopoly on core technology gives governments a reason to select
one supplier over another rather than base a decision solely on price. Technological breakthroughs may be decisive for winners and losers, but only if they are well protected.
Also important to note is that innovation in the fusion sector may have relevance outside the construction and operation of fusion reactors.
Parallels can be drawn between fusion and astronautical engineering in this regard. NASA is famous for commercialising its space-intended, but
accidentally-otherwise-useful technology, particularly via licensing. The full list of such technologies is as extensive and varied as it is highly
profitable: LASIK technology; scratch resistant lenses, freeze drying processes; enriched baby foods; and aircraft de-icing, to name but a few.
While the demands of space travel may require invention in a broader range of fields than fusion (we’re not holding our breath for ITER’s line of
baby foods), fusion is pushing the envelope in vacuum generation, superconducting magnets, heat resistant coatings, and control systems. A
NASA-style trickle down of this cutting-edge technology to adjacent commercial sectors, ready to implement the technology immediately,
should be possible and lucrative. A brief review of applications owned by the five private entities above suggests that such a strategy is at least
being considered; a recent Tokamak Energy Ltd application is pursuing a superconducting electromagnet “in particular, but not exclusively” for
use in tokamak plasma chambers, and independent claim 1 to the electromagnet makes no mention of a tokamak. Patents based on fusion
research but not limited to fusion may provide valuable additional income streams.
There appears, therefore, to be value in pursuing patent protection for at least some of the inventions
generated on the road to achieving workable nuclear fusion. Moreover, given the complexity of fusion
reactors, it is likely that innovators will amass a significant amount of “know-how” in the finer details of what is required to make their individual designs work. Where patent protection is not pursued, it would be advisable for innovators to put in place robust policies for identifying and protecting such know-how as trade secrets. For more information on trade secrets see our article here, the first in a series of articles on trade secrets and confidential
information. Complementing trade-secrets with patent protection for key concepts and technology with application outside fusion may well be
the optimal strategy. Moreover, private fusion companies should pay close attention to their competitor’s filing strategies, and be prepared to
take action against patents which may affect their freedom to operate in this lucrative sector.

Fusion patents are key---R&D costs and gestation lags mean only exclusivity solves.
Lo and Whyte ’24 [Andrew Wen-Chuan Lo is the Charles E. and Susan T. Harris Professor of Finance
at the MIT Sloan School of Management; Dennis G. Whyte is the Hitachi America Professor of
Engineering at MIT, a professor in the MIT Department of Nuclear Science and Engineering, and former
Director of the MIT Plasma Science & Fusion Center; 2/27/2024; SSRN Online Journal; “What Fusion
Energy Can Learn From Biotechnology,” https://dx.doi.org/10.2139/ssrn.4779516]//LASA-AB

Both fusion and biotech face complex IP landscapes. Protecting the development of innovation through
patents and IP rights management is crucial to both fusion energy and biotechnology to attract investment and
maintain an economic competitive advantage. The two fields also share common challenges in the commercialization of IP developed at universities, private research facilities, and government national laboratories. Both industries place a strong emphasis on strategic patents and licensing agreements to protect their IP and create
revenue streams, which is vital to their substantial R&D investments. However, collaboration between
nonprofit institutions and for-profit entities can be problematic because of the obvious differences in
their objectives and constraints. Therefore, bottlenecks often occur when negotiating commercialization
rights with academia.

Moreover, the gestation lags involved in fusion and biotech investments typically run a decade or longer before any cash payouts are possible, and these lags reduce the overall value of IP because a significant fraction of the 20-year patent life must
be devoted to R&D before any positive cash flows are realized. This phenomenon gives rise to “patent cliffs” for biopharma
companies—steep declines in revenues when a blockbuster patent expires—something that fusion companies will eventually need to address
(see Section 5).

The importance of IP also underscores the requirement in both fields for highly specialized skills and
knowledge. Competition to attract highly skilled and specialized professionals such as physicists, biologists, chemists, and
data scientists is an issue in both industries, particularly during a growth stage, when a scarcity of talent becomes a rate-
limiting factor, and it becomes challenging to find and retain talent. Fusion is presently in this stage.

Stable policy is key to certainty---that underpins investment.


Windridge ’23 [Melanie; April 13; PhD plasma physicist and science communicator best known for her
book Aurora: In Search of the Northern Lights and her educational work on fusion energy with the
Institute of Physics and the Ogden Trust; Forbes, “Investors Hold The Key To Fusion And Our Clean
Energy Future,” https://www.forbes.com/sites/melaniewindridge/2023/04/13/investors-hold-the-key-
to-fusion-and-our-clean-energy-future/]
Some consider that the financing risk is the biggest risk to fusion, so investors are critical to success.
Getting in on the action

Investors also have the chance to win big on fusion, a market that Bloomberg has predicted could reach $40 trillion.

Why is fusion so attractive? As John Bromley says, “Renewables will certainly be a large and important part of a decarbonised economy, but we
will also require dispatchable zero carbon energy sources to end fossil fuel reliance. Fusion energy holds the potential to achieve and sustain a
significant reduction in global emissions.”

There’s no doubt that funding fusion is challenging, involving high upfront costs, long timescales and high uncertainty. Yet investment in fusion has been increasing.
Just last month, Breakthrough Energy Ventures (Bill Gates’ investment firm seeking to finance, launch, and scale companies that will eliminate
greenhouse gas emissions throughout the global economy—an investor in Commonwealth Fusion Systems and Zap Energy) invested in another
fusion company, Type One Energy.

Behind the scenes, more conventional investors, like pension funds, insurance companies and sovereign wealth funds, have quietly been
investing in fusion. The mainstreaming of fusion among capital-providers has begun.

What investors need

Yet getting into fusion investment requires a steep learning curve. Fusion is a big and complex subject.

Increasingly investors, investment banks or other financial players are enquiring wanting to learn about fusion, taking that first step into getting
familiar with a new industry.

Financing fusion is so critical to the mission that advocates of fusion should be asking how we can accelerate this mainstreaming of fusion and
draw new capital to the table.

Investors need access to opportunity, they need insight from industry insiders and existing investors, they need community and relationships.
This is why events that bring all these things together can be so important.

But investors also need government support and certainty. That’s one reason why the U.K. is currently in a strong
position for fusion energy development, because they have outlined their plans for a regulatory framework for fusion while other countries are
still in discussion.

It goes further than technology regulation, however. Policy and incentives will be required in the financial services industry to drive the effective reallocation of capital.


Michelle Scrimgeour, Chief Executive of Legal & General Investment Management, gave evidence to a 2022 U.K. parliamentary inquiry entitled
‘The financial sector and the U.K.’s net zero transition’.

Scrimgeour said: “A successful transition to a decarbonised economy, consistent with less than 1.5 degrees warming, will require a substantial
change in capital allocation. Several trillion dollars a year of incremental capital will need to be invested into low carbon energy, energy
infrastructure and energy efficiency. For this capital allocation to occur, a financial services industry that is aligned with net zero outcomes will
be crucial. Equally, this requires global policy action at international governmental level, particularly on an effective
regulatory structure to price carbon and other greenhouse gases.”

So while investors hold the key to the success of fusion and our clean energy future, it’s not just down to investors—government policy will be crucial in enabling investors to drive the change.
Extra---1AC
Decarbonization’s impossible with current energy sources---it’s try-or-die for fusion.
Smil ’22 [Vaclav; Distinguished Professor Emeritus at the University of Manitoba, Ph.D. in geography
from the College of Earth and Mineral Sciences of Pennsylvania State University, and the author of over
forty books on topics including energy, environmental and population change, food production and
nutrition, technical innovation, risk assessment, and public policy; How the World Really Works: The
Science Behind How We Got Here and Where We're Going, “Understanding Energy: Fuels and
Electricity,” Ch. 1]

Notice the key qualifying adjective: the target is not total decarbonization but “net zero” or carbon neutrality. This definition allows for continued emissions to be compensated by (as yet non-existent!) large-scale removal of CO2
from the atmosphere and its permanent storage underground, or by such temporary measures as the mass-scale planting of trees. [71] By
2020, setting net-zero goals for years ending in five or zero has become a me-too game: more than 100 nations
have joined the lineup, ranging from Norway in 2030 and Finland in 2035 to the entire European Union, as well as Canada, Japan, and South
Africa, in 2050, and China (the world’s largest consumer of fossil fuels) in 2060.[72] Given the fact that annual CO2 emissions from fossil fuel
combustion surpassed 37 billion tons in 2019, the net-zero goal by 2050 will call for an energy transition unprecedented in both pace and scale.
A closer look at its key components reveals the magnitude of the challenges.
Decarbonization of electricity generation can make the fastest progress, because installation costs per unit of solar or wind capacity can now
compete with the least expensive fossil-fueled choices, and some countries have already transformed their generation to a considerable
degree. Among large economies, Germany is the most notable example: since the year 2000, it has boosted its wind and solar capacity 10-fold
and raised the share of renewables (wind, solar, and hydro) from 11 percent to 40 percent of total generation. Intermittency of wind and solar
electricity poses no problems as long as these new renewables supply relatively small shares of the total demand, or as long as any shortfalls
can be made up by imports.

As a result, many countries now produce up to 15 percent of all electricity from intermittent sources without any major adjustments, and
Denmark shows how a relatively small and well-interconnected market can go far higher. [73] In 2019, 45 percent of its electricity came from
wind generation, and this exceptionally high share can be sustained without any massive domestic reserve capacities, because any shortfalls
can be readily made up by imports from Sweden (hydro and nuclear electricity) and Germany (electricity coming from many sources). Germany
could not do the same: its demand is more than 20 times the Danish total, and the country must maintain a sufficient reserve capacity that
could be activated when new renewables are dormant. [74] In 2019, Germany generated 577 terawatt-hours of electricity, less than 5 percent
more than in 2000—but its installed generating capacity expanded by about 73 percent (from 121 to about 209 gigawatts). The reason for this
discrepancy is obvious.

In 2020, two decades after the beginning of Energiewende, its deliberately accelerated energy transition, Germany still had to keep most of its
fossil-fired capacity (89 percent of it, actually) in order to meet demand on cloudy and calm days. After all, in gloomy Germany, photovoltaic generation works on average only 11–12 percent of the time, and the combustion of fossil
fuels still produced nearly half (48 percent) of all electricity in 2020. Moreover, as its share of wind generation has
increased, its construction of new highvoltage lines to transmit this electricity from the windy north to the southern regions of high demand has
fallen behind. And in the US, where much larger transmission projects would be needed to move wind electricity from the Great Plains and
solar electricity from the Southwest to high-demand coastal areas, hardly any long-standing plans to build these links have been realized. [75]

As challenging as such arrangements are, they rely on technically mature (and still improving) solutions—that is, on more efficient PV cells, large onshore and offshore wind turbines, and high-voltage (including long-distance direct current) transmission. If costs, permitting processes, and not-in-my-backyard sentiments were no obstacles, these techniques could be deployed fairly rapidly and economically. Moreover, the problems of intermittency of solar
and wind generation could be resolved by renewed reliance on nuclear electricity generation. A nuclear renaissance would be particularly
helpful if we cannot develop better ways of large-scale electricity storage soon.

We need very large (multi-gigawatt-hour) storage for big cities and megacities, but so far the only viable option
to serve them is pumped hydro storage (PHS): it uses cheaper nighttime electricity to pump water from a low-lying reservoir to
high-lying storage, and its discharge provides instantly available generation. [76] With renewably generated electricity, the pumping could be
done whenever surplus solar or wind capacity is available, but obviously PHS can work only in places with suitable
elevation differences and the operation consumes about a quarter of generated electricity for the
uphill pumping of water. Other energy storages, such as batteries, compressed air, and supercapacitors, still have capacities orders of magnitude lower than needed by large cities, even for a single day’s worth of storage. [77]
In contrast, modern nuclear reactors, if properly built and carefully run, offer safe, long-lasting, and highly reliable ways of electricity
generation; as already noted, they are able to operate more than 90 percent of the time, and their lifespan can exceed 40 years. Still, the
future of nuclear generation remains uncertain. Only China, India, and South Korea are committed to further expansion of
their capacities. In the West, the combination of high capital costs, major construction delays, and the availability of less expensive choices (natural gas in the US, wind and solar in Europe) has made new fission capacities unattractive. Moreover, America’s new small, modular, and inherently safe reactors (first proposed during the 1980s) have yet to be commercialized, and Germany, with its decision to abandon all nuclear generation by 2022, is only the most
obvious example of Europe’s widely shared, deep anti-nuclear sentiment (for the assessment of real nuclear generation risks, see chapter 5).

But this may not last: even the European Union now recognizes that it could not come close to its extraordinarily ambitious decarbonization
target without nuclear reactors. Its 2050 net-zero emissions scenarios set aside the decades-long stagnation and neglect of the nuclear
industry, and envisage up to 20 percent of all energy consumption coming from nuclear fission. [78] Notice that this refers to total primary
energy consumption, not just to electricity. Electricity is only 18 percent of total final global energy consumption, and the
decarbonization of more than 80 percent of final energy uses—by industries, households, commerce, and transportation—will be even more challenging than the decarbonization of electricity generation. Expanded electricity generation can be used for space heating and by many industrial processes now relying on fossil fuels, but the course of decarbonizing modern long-distance transportation remains unclear.
