Advanced Nuclear Energy: A Sustainable Solution

The document advocates for advanced nuclear energy as a sustainable and safer alternative to fossil fuels and traditional renewables, highlighting its potential for long-term energy generation and lower environmental impact. It argues that advanced nuclear technology produces significantly less waste that decays faster than legacy nuclear waste, and it emphasizes the need to maintain and modernize existing nuclear plants to meet future energy demands. The paper also discusses the advantages of small modular reactors and thorium reactors, suggesting that advanced nuclear can provide a reliable energy source for thousands of years.


1AC

Nuclear Energy---1AC
The Advantage is NUCLEAR ENERGY.
Both fossil fuels and traditional renewable energy are unsustainable because of
intermittency, mining, and environmental externalities. Only modernized nuclear
power solves.
Rehm ’23 [Thomas; March 2023; Ph.D. in chemical engineering from Northwestern University;
Current Opinion in Chemical Engineering, “Advanced Nuclear Energy: The Safest and Most Renewable
Clean Energy,” vol. 39]
Although legacy nuclear energy has been the safest form of electricity generation, it has been demonized as unsafe since the 1960s. The three well-known nuclear accidents, Three Mile Island, Chernobyl, and Fukushima, were legacy nuclear designs. Even with the best safety record of all types of electricity generation, it is time to move away from legacy nuclear to reap the benefits of a truly renewable source of safe, clean energy: advanced nuclear. Solar and wind cannot hold a renewable candle to the vast renewable potential of advanced nuclear energy. The transition to carbon-neutral energy can best be made with advanced nuclear, in safety, waste minimization, true renewability for thousands of years, process heat for manufacturing, and a viable means of replacing our chemical manufacturing dependence on fossil fuels. Some of my colleagues tell me, "There are
few opportunities for chemical engineers in nuclear”. I disagree. Opportunities include design and operation of high-temperature (550–750 °C) plants involving
molten salts, liquid metal, and helium; application of this high-temperature capability for industrial process heating; recycling legacy nuclear ‘waste’ to provide fuel
for advanced reactors; integration of the hydrogen economy into nuclear plant design and operation; improvement in moving pebble-bed advanced reactor technology;
mining improvements for uranium and thorium, including mining uranium from seawater; molten salt storage systems for improving load following functionality and
to provide process heat functionality; resolving corrosion challenges in molten salt reactors; and retrofitting existing oil-and-gas-based refineries to operate as nuclear
biorefineries.
Introduction
Renewables are considered by many to be the solution to global warming. Yes, they can contribute. However, without advanced nuclear energy, we will not solve global warming.


Nuclear energy is much safer than solar and wind renewables and has a lower life cycle carbon footprint.
The disadvantage of nuclear is its long-lived nuclear waste. To decay to a nominal background level, legacy spent-nuclear fuel requires tens of thousands of years.
This paper argues for advanced
nuclear, whose much smaller amount of nuclear waste (about 1% of legacy) will
decay to background levels in about 400 years [1].
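To see why a roughly 400-year horizon is plausible once the long-lived transuranics are burned rather than discarded, here is a minimal decay sketch (my own illustration, not taken from Rehm): the dominant medium-lived fission products, Cs-137 and Sr-90, have half-lives near 30 years, so their activity halves with every additional 30 years of storage.

```python
# Minimal sketch (not from the article): exponential decay of representative
# isotopes, assuming published half-lives. It illustrates why waste dominated
# by fission products approaches background in centuries, while legacy waste
# containing Pu-239 needs a far longer stewardship horizon.

def remaining_fraction(years: float, half_life_years: float) -> float:
    """Fraction of the initial activity left after `years` of decay."""
    return 0.5 ** (years / half_life_years)

isotopes = {
    "Cs-137 (fission product)": 30.1,
    "Sr-90 (fission product)": 28.8,
    "Pu-239 (transuranic in legacy spent fuel)": 24_100,
}

for name, half_life in isotopes.items():
    frac = remaining_fraction(400, half_life)
    print(f"{name}: {frac:.1e} of initial activity left after 400 years")
# The fission products drop below one ten-thousandth of their starting activity
# within about 400 years (13+ half-lives); Pu-239 is essentially undiminished.
```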
It is important to note that legacy nuclear waste has minimal risk associated with it [2]:
There is not much of it. All the nuclear fuel waste generated thus far, on the planet, could be stacked onto one football field to a height of about 30 feet.
It is a solid, in small pellets contained in metal rods, stored inside ultrastrong 50-ton containers.
With advanced nuclear, this minimal risk is reduced further (Table 1).
Table 1. Vital statistics for renewables (solar/wind) versus nuclear (legacy/advanced).
Life cycle carbon emissions (g-CO2-eq/kWh): solar 41–48; wind 14; legacy nuclear 12; advanced nuclear, no data yet but probably less than legacy nuclear.
Industry fatalities per TWe-year: solar 0.245; wind 1.78–8.5; legacy nuclear < 0.01; advanced nuclear, no data yet but probably less than legacy nuclear.
Capacity factor (fraction of nameplate power capacity actually produced): solar 10–25%; wind 30–50%; legacy nuclear 90%; advanced nuclear, no data yet but probably better than legacy nuclear.
Waste: solar/wind, "We risk creating new environmental and economic burdens in the future," Peter Wright, EPA, 2020; legacy nuclear, safely stored, no environmental or economic burdens; advanced nuclear, waste will be less than 1% of legacy nuclear and about 1% as long-lived as legacy nuclear.
Planetary mineral proven reserves: solar/wind, less than 200 years; legacy nuclear, about 90 years; advanced nuclear, several thousands of years.
<<<References Omitted>>>
Solar and wind are not really renewable on a 200-year timescale. Neither is legacy nuclear via current land-based uranium mining. Arguments have also been made that legacy nuclear is not safe.
Advanced nuclear technology is far safer than legacy nuclear. The current widespread fear of nuclear, which is based on legacy nuclear accidents, is unwarranted [11]:
There were no deaths and no negative health effects from the trivial radiation release from the Three Mile Island accident.
At Fukushima, one power plant worker involved in recovery died. There were no deaths or incidents of medical harm to any member of the public from radiation
exposure. Four years later, thorough medical examinations were given to evacuees from the designated exclusion zone and many concerned residents from outside the
exclusion zone. Nearly a third of a million children who were age 18 or less at the time of the accident were screened for thyroid issues. The rate of thyroid anomalies
in Fukushima children was less than in other children in Japan.
Chernobyl was a poorly designed light water boiling reactor susceptible to thermal runaways. Reported fatalities ranged from 50 at Chernobyl to several thousand
offsite [12]. Following the Chernobyl accident, other Russian RBMK designs were modified to improve safety. Some RBMK plants have been shut down but not all.
No nuclear plants have been built with this design outside of the Russian federation [13].
Advanced nuclear technology
There is only one naturally occurring fissile isotope, Uranium-235 (U235). In a nuclear reactor, U235 atoms split and produce heat.
There are two naturally occurring fertile isotopes, Uranium-238 (U238) and Thorium-232 (Th232). They will not split until they first transmute to fissile isotopes via
neutron capture, which does not occur to a significant extent in a legacy reactor. In an advanced reactor, U238 and Th232 transmute to fissile isotopes Pu239 and
U233, which then split to produce heat.
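The breeding chains behind that sentence are standard nuclear physics; the sketch below (my own summary, with approximate published half-lives, not data from the article) lays out the two capture-and-decay sequences the paragraph describes.

```python
# Sketch of the two fertile-to-fissile chains described above (approximate
# published half-lives; illustrative only, not data from the article).

U238_CHAIN = [
    ("U-238 + n", "U-239",  "~23.5 min"),   # neutron capture
    ("U-239",     "Np-239", "~2.36 days"),  # beta decay
    ("Np-239",    "Pu-239", "~24,100 yr"),  # beta decay; Pu-239 is fissile
]

TH232_CHAIN = [
    ("Th-232 + n", "Th-233", "~22 min"),     # neutron capture
    ("Th-233",     "Pa-233", "~27 days"),    # beta decay
    ("Pa-233",     "U-233",  "~159,000 yr"), # beta decay; U-233 is fissile
]

def show(title: str, chain: list) -> None:
    print(title)
    for parent, product, product_half_life in chain:
        print(f"  {parent:>11} -> {product:<7} (product half-life {product_half_life})")

show("U-238 breeding chain:", U238_CHAIN)
show("Th-232 breeding chain:", TH232_CHAIN)
```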
As with any new technology, there are challenges associated with Th232, including the production of a small amount of U232 contaminant in the transmutation of
Th232. The decay of U232 produces very penetrating gamma rays. These gamma rays are hard to shield, requiring more expensive spent fuel handling and/or
reprocessing [14], [15].
Key distinctions between legacy technology and advanced technology:
A legacy reactor is designed at about 2000 psig. An advanced reactor is designed at near-ambient pressure as the normal boiling points of heat-transfer media exceed
1200 °C. A huge gain in safety.
A legacy reactor operates at about 290 °C. An advanced reactor operates at 550–750 °C, creating a much larger temperature difference with the environment, which allows more options for passive decay-heat removal systems (see the sketch after this list for a rough sense of the scale). A huge gain in safety.
Advanced reactors operating at 550–750 °C allow for industrial process heating [16], [17].
Most legacy fuel is U238, which becomes nuclear 'waste' in legacy reactors. In advanced reactors, U238 transmutes to Pu239 that 'burns' along with U235, resulting in a waste volume of less than 1% of that from legacy reactors. A huge gain in safety.
‘Waste’ from legacy reactors requires tens of thousands of years to decay. Waste from advanced reactors requires about 400 years. A huge gain in safety.
Advanced reactors operate with a negative temperature coefficient. They are not susceptible to thermal runaway such as what happened at Chernobyl. Advanced
reactors passively shut down if the reactor temperature gets too high. A huge gain in safety.
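As a rough feel for the passive decay-heat point above, here is a simplified sketch of my own (assuming an idealized blackbody surface rejecting heat to a 30 °C environment; real systems also rely on natural convection and specific engineered heat paths):

```python
# Simplified sketch (my assumptions, not the article's analysis): radiative
# heat rejection scales with the fourth power of absolute temperature, so a
# 550-750 degC advanced reactor has far more passive margin than a ~290 degC
# legacy reactor. Idealized blackbody surface, 30 degC surroundings assumed.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_flux_w_per_m2(surface_c: float, ambient_c: float = 30.0) -> float:
    """Net radiative flux from an ideal surface at surface_c to the surroundings."""
    ts, ta = surface_c + 273.15, ambient_c + 273.15
    return SIGMA * (ts**4 - ta**4)

legacy = radiative_flux_w_per_m2(290.0)
advanced = radiative_flux_w_per_m2(650.0)  # mid-range of 550-750 degC

print(f"Legacy (~290 degC):   {legacy/1000:.1f} kW per square metre")
print(f"Advanced (~650 degC): {advanced/1000:.1f} kW per square metre")
print(f"Ratio: about {advanced/legacy:.0f}x more passive radiative rejection")
```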
Are solar and wind renewable?
Solar and wind have renewability problems due to planetary mineral resource limits [8], the social impact of mining those resources [18], and mineral recycling challenges at end of equipment useful life [19].
We are disingenuous if we do not look at least 200 years into the future. Psychologists say that a human is sincerely concerned with his/her children, grandchildren,
and great-grandchildren. Beyond those three generations, most humans do not care what happens to distant progeny. I may have great-grandchildren born as far out as
my 100th birthday in 2050, and they might live 100 years. Our action plan must be at least 200 years out.
Owing to the variability of solar and wind, 100% ‘renewable’ plans depend on energy storage. Battery storage is the hope. Lucas Bergkamp, Editor of Road to EU
Climate Neutrality by 2050, does not hold that view. “You cannot, in the near future at least, have an energy storage system that will allow you to power a whole
country through batteries,” he told EURACTIV, saying another energy source, such as nuclear or fossil fuels, will be needed to provide baseload electricity [20].
Solar is challenged in far northern latitudes. Earth’s ‘center of population’ latitude is 25.92 degrees north. At that latitude, solar irradiation on the winter solstice is
reduced by 35%. One-fourth of Earth’s population lives above 36.37 degrees north latitude, corresponding to a reduction of solar irradiation by 50% on the winter
solstice [21], [22]. Sunlit hours also drop with increasing latitude. As the earth heats up, the population will undoubtedly shift farther north, exacerbating the solar
energy ‘solution’. Fewer people will live in areas that are good for solar generation of electricity. A nuclear plant can operate anywhere.
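The geometry behind those latitude figures can be sketched with a noon-sun calculation (my own simplification, ignoring day length, atmosphere, and panel tilt, so it will not reproduce the paper's 35% and 50% numbers exactly, but it shows the trend):

```python
# Simplified sketch (my own, not the paper's method): the winter-solstice
# reduction in noon horizontal irradiance relative to the equinox, for the
# latitudes cited above. Ignores day length, atmosphere, and panel tilt.

import math

WINTER_DECLINATION = -23.44  # solar declination on the northern winter solstice, degrees

def noon_elevation(lat_deg: float, declination_deg: float) -> float:
    """Solar elevation angle at solar noon for a northern latitude, degrees."""
    return 90.0 - lat_deg + declination_deg

def winter_fraction(lat_deg: float) -> float:
    """Winter-solstice noon irradiance as a fraction of equinox noon irradiance."""
    winter = math.sin(math.radians(noon_elevation(lat_deg, WINTER_DECLINATION)))
    equinox = math.sin(math.radians(noon_elevation(lat_deg, 0.0)))
    return winter / equinox

for lat in (25.92, 36.37, 50.0):
    frac = winter_fraction(lat)
    print(f"Latitude {lat:5.2f} N: ~{1 - frac:.0%} less noon irradiance at the winter solstice")
```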
But is not nuclear too costly? Nuclear is a better choice than solar and wind on both a land requirement basis and a consumer cost basis [20].
Overly optimistic views of solar and wind, coupled with an unfounded fear of nuclear, are leading many to shutter legacy nuclear plants. The United States currently
has 296 GW of nuclear energy capacity. If we do not keep these plants open through license renewals, we will lose 50 GW of nuclear capacity by 2030, 150 GW by
2040, and nearly all of it by 2050 [23]. We must not shut down legacy nuclear plants.
Advanced nuclear energy is truly renewable
Although proven uranium reserves only give legacy technology 90 more years, advanced technology is good for thousands of years.
Uranium ore contains 100 times more U238 than U235. Advanced nuclear can theoretically provide 9000 years of renewable energy from those reserves at today's energy demand, and that is not taking into account the legacy nuclear 'waste' now safely stored, which can become fuel for advanced reactors.
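The arithmetic behind the 9000-year figure is straightforward (a quick check of my own using the paper's round numbers):

```python
# Quick check (my arithmetic) of the "90 years -> ~9,000 years" claim above.
# Assumes legacy reactors use essentially only the U-235 in mined uranium,
# while advanced reactors can eventually fission the U-238 as well.

legacy_years = 90          # proven reserves at today's demand, legacy technology
u238_per_u235 = 100        # the paper's round ratio; natural uranium is ~99.3% U-238

print(f"Advanced-reactor horizon: ~{legacy_years * u238_per_u235:,} years")  # ~9,000
```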


Advanced technology can be commercially viable in the United States by the 2030s. In 2001, nine nations formed the Generation IV International Forum (GIF) [24] to develop sustainable, economic, safe, reliable, proliferation-resistant, and physically protected nuclear reactors. Founding GIF
countries include Argentina, Brazil, Canada, France, Japan, the Republic of Korea, the Republic of South Africa, the United Kingdom, and the United States.
Subsequently, Switzerland (2002), Euratom (2003), the People’s Republic of China (2006), the Russian Federation (2006), and Australia (2016) became members.
There are now two operational advanced reactors, one in Russia and one in China. The Russian BN-800 reactor, a sodium-cooled fast breeder reactor, in Zarechny, was connected to the grid in December 2015 [25]. China’s high-temperature pebble-bed modular reactor HTR-PM was connected to the grid in December 2021 [26]. The United States is lagging, but work is underway to catch
up. US Department of Energy (DOE) made two awards to build advanced reactors by 2027 [27], one to TerraPower [28] on Natrium technology, and the other to X-energy [29].
TerraPower is working to commercialize three technologies, the molten chloride reactor, the Traveling Wave Reactor, and the Natrium reactor that is a sodium fast reactor incorporating the design features of the Traveling Wave Reactor. Natrium is its premier technology, which will be demonstrated at the Naughton coal-fired power plant in Kemmerer, Wyoming, which is retiring in
2025.
X-energy has a moving pebble-bed high-temperature gas reactor (HTGR) design that will have 220 000 graphite pebbles with TRIstructural-ISOtropic (TRISO) particle fuel. They expect a 60-year operational life [30].
How about thorium? Per the Thorium Energy Alliance [31]:
Because liquid fluoride thorium reactors (LFTRs) burn virtually all of their fuel, 83% of the waste products are safe within ten years and the remaining 17% become safe after 300 years.
LFTRs can be used to burn legacy nuclear ‘waste’.
Thorium is the 36th most plentiful element in the earth’s crust. It is four times as common as uranium.
LFTRs have no refueling outages. They are continually refueled and continually remove waste product.
LFTRs can be factory-produced, allowing for lower costs and shorter commissioning timetables.
Kirk Sorensen, Founder of Flibe Energy, details the many LFTR benefits in a short video, those benefits being safety, a 100-fold decrease in waste generated, and immense quantities of thorium on the planet [1].
China is a leader in thorium. Experts say that China could be the first to commercialize thorium technology. An experimental thorium reactor in Wuwei could result in an operational 300 + MW thorium reactor by 2030 [32].
Centrus Energy Corporation and Clean Core Thorium Energy are working on an advanced nuclear fuel that combines thorium with high-assay, low-enriched uranium (HALEU). The new fuel is called Advanced Nuclear Energy for Enriched Life (ANEEL). ANEEL has less than 20% U238, compared with more than 94% in LWR fuel. HALEU is itself enriched to between 5% and 20%
U235 [33].
In May 2022, the Norway-based marine group Ulstein announced a concept design for cruise ships using a thorium Molten Salt Reactor [34].
There are significant advantages of molten salt reactors and hot-salt storage. A molten salt reactor has its nuclear fuel dissolved in fluoride salt or chloride salt. The Terrapower/Southern fast reactor is molten chloride. The Moltex design is a molten chloride fast reactor. Kairos Power has a fluoride salt reactor design. Flibe’s salt is Li2BeF4 salt that has a melting point of 459 °C and a
normal boiling point of 1430 °C [35]. Molten salt allows a design with a secondary hot-salt loop with storage and a separate ‘power block’ cycle to produce power-on-demand from the stored hot salt, allowing load following functionality as delivery of heat is controlled by a hot-salt pump. As the power block is not coupled to the reactor, it can be built to non-nuclear standards. The salt in
the loop is a nitrate salt in common use in concentrated solar power plants [36].
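To give a feel for how a hot-salt loop supports load following, here is a back-of-envelope storage estimate (my own sketch with assumed, typical values for nitrate solar salt; none of these figures come from the cited designs):

```python
# Back-of-envelope sketch (assumed values, not from the cited designs) of the
# sensible heat banked in a nitrate "solar salt" storage loop like the one
# described above for power-on-demand operation.

cp_kj_per_kg_k = 1.5      # approximate specific heat of nitrate solar salt
delta_t_k = 200.0         # assumed hot-tank minus cold-tank temperature swing
salt_tonnes = 10_000      # assumed storage inventory

stored_gj = salt_tonnes * 1000 * cp_kj_per_kg_k * delta_t_k / 1e6
stored_mwh_thermal = stored_gj / 3.6
dispatch_mwh_electric = stored_mwh_thermal * 0.40  # assumed 40% power-block efficiency

print(f"Stored heat: ~{stored_mwh_thermal:,.0f} MWh(th)")
print(f"Dispatchable electricity at 40% efficiency: ~{dispatch_mwh_electric:,.0f} MWh(e)")
# Roughly 330 MWh(e) here, on the order of 100 MWe delivered for a few hours
# of evening peak, decoupled from the reactor itself.
```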
Although molten salt reactors hold much promise, there are challenges, the most significant being the risk of corrosion in reactor structural materials from high-temperature molten salt. A considerable amount of development work remains to be done on salt redox potential measurement and control tools in order to limit the corrosion rate [37].
Advantages of small modular reactors

Small modular reactors (SMRs) are nuclear plants of 300 MWe or less. SMRs have several advantages over large nuclear plants.
Onsite construction of large nuclear plants can take many years and have associated cost overruns. SMRs will be factory-built and moved to the site by truck, railcar,
or barge. SMRs employ economies of mass production rather than economies of scale.
Advanced technology SMRs are now operational. Two Russian SMRs (Pressurized Water Reactor, type KLT-40S) have been in operation in Port Pevek, Russia, since December of 2019 [25], [38], and two Chinese helium-cooled pebble-bed reactors (HTR-PM) were connected to the grid in December 2021 [26]. Advanced technology SMRs could be deployed in very large numbers in the next decade, in time to meet 2050 net-zero goals.
The emergency planning zone around a large legacy plant is typically 10 miles. The DOE ‘strongly supports’ an NRC proposal to apply risk-based emergency
preparedness requirements for SMRs, which will significantly reduce the size of this zone [39].
Gen III+ water-moderated/cooled SMRs have a design advantage over legacy technology. Even though the reactor is designed at 2000 psig, its small size allows for
ease of containment manufacture compared with a large legacy reactor. For example, each 3 m-wide NuScale reactor nestles into its own 4.6 m-wide steel
containment vessel [40].
Argentina, China, and the Russian Federation are on course to begin commissioning SMRs. North America will likely be the next to deploy an SMR, as the United States is looking at 2027 for the start of operations. Canada also expects to start up demonstration reactors in 2027 [41].
The HTGR is a moving ‘pebble bed’ SMR design with an exit temperature of 750 °C or higher, ideal for providing industrial process heat. The Japanese HTGR Development Program achieved 50 days of continuous 950 °C operation in a test reactor in 2010 [42].
A major DOE/Industrial program is underway to commercialize HTGRs for industrial applications, being led by Dow Chemical [17]. Each pebble is TRISO coated and is about the size of a tennis ball. Each pebble is its own nuclear reactor, the outer shell being its containment vessel. Each reactor is cooled down by passive natural circulation, usually a gas such as helium, making it
impossible for an accident such as Fukushima to occur. TRISO fuels are fabricated by BWX Technologies Nuclear Operations Group (Lynchburg, Virginia) [43].
In 2017, Neutron Bytes reported that China is an HTGR leader. China is actively promoting its HTGR technology in Saudi Arabia, South Africa, the United Arab Emirates, and Indonesia [44]. China is building an HTGR demonstration plant that will have two reactor modules. The objective is a full-scale power plant with eighteen 210 MWe units. In March 2020, China announced that
the reactors, steam generator, and hot gas ductwork of the demonstration plant were connected [45].
Penultimate Power in the United Kingdom has partnered with the Japanese Atomic Energy Agency to develop plans for HTGR technology to be operational in Britain by 2029 [46].
Establishment of safety regulations associated with SMRs is not yet resolved. However, the Nuclear Regulatory Commission is working to resolve policy questions such as SMR siting, offsite emergency planning, and security and safeguards [47], [48], which are strongly supported by the U.S. Department of Energy [39].
Uranium and thorium reserves
At current global nuclear capacities, we have about 90 years of proven uranium reserves. There also is uranium in our oceans at a concentration of 3.3 micrograms per liter, which could provide tens of thousands of years of uranium [49]. Scientists from Oak Ridge National Laboratory have demonstrated a material that can adsorb 6 g of uranium for every kilogram of adsorbent material
[50]. Rainfall runoff from land masses will renewably replenish any uranium we remove from seawater.
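An order-of-magnitude check on the seawater claim (my own arithmetic, with an assumed ocean volume and an assumed current global consumption rate; neither number is from the article):

```python
# Order-of-magnitude check (assumed ocean volume and consumption, not from the
# article) of how long seawater uranium at 3.3 micrograms per liter could last.

ocean_volume_liters = 1.3e21     # assumed total ocean volume, ~1.3 billion km^3
uranium_ug_per_liter = 3.3       # concentration cited above
annual_use_tonnes = 60_000       # assumed current global reactor consumption

total_tonnes = ocean_volume_liters * uranium_ug_per_liter / 1e12  # 1 tonne = 1e12 micrograms
years = total_tonnes / annual_use_tonnes

print(f"Uranium dissolved in the oceans: ~{total_tonnes:.1e} tonnes")
print(f"Years of supply at today's use:  ~{years:,.0f}")
# Several billion tonnes, i.e. tens of thousands of years even before counting
# advanced reactors' roughly hundredfold better fuel utilization.
```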
There are about 6.4 million tonnes of thorium reserves. India leads with 846 000 tonnes, and has made utilization of thorium a major goal in its nuclear power program. The United States has the 3rd largest reserves at 595 000 tonnes. Russia and China lag considerably behind the United States with 155 000 tonnes and 100 000 tonnes, respectively. The United States can be a leader in
thorium advanced reactor technology [51].
Replacing crude oil: large-scale nuclear biorefineries
Crude oil and natural gas provide about 85% of the feedstocks in chemical manufacturing [52]. More than 500 million tonnes of oil-equivalent feedstock are consumed each year to make nearly 1 billion tonnes of chemical products [53].
If we are to decarbonize, we must figure out how to end our dependency on fossil fuels for chemical feedstocks. This can be done. A nuclear biorefining initiative is being led by MIT and INL. Highlights of this effort [54]:
Replacing liquid hydrocarbons will be extremely difficult. In the United States, about 18 million barrels/day of petroleum products are produced.
To achieve the scale of biorefining capacity necessary to replace fossil fuels, three technologies are needed:
Consolidation of biomass into energy-dense, anaerobically stored, economically shippable commodities that are available year-round.
Biorefineries at the scale of 250 000 barrels per day.
Nuclear energy providing electricity, process heat, and hydrogen to the biorefinery.
In the United States, one billion tons of biomass will be required each year to match current consumption of refining products. The DOE and the USDA estimate that 1.4 billion tons could be produced, but that number could be tripled if (a) we pay farmers more; (b) we use 10% of semi-arid lands to plant opuntia (prickly pear cactus); (c) we use double-cropping extensively; (d) we
integrate food/feed/fuel production; (e) we improve pasture/energy crop productivity; and (f) we rehabilitate saline, retired, and degraded lands.
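A quick scale check of the figures above (my own arithmetic, not from the MIT/INL materials):

```python
# Rough scale check (my arithmetic, not from the MIT/INL initiative) of the
# build-out implied by the figures above.

us_petroleum_bbl_per_day = 18_000_000   # current US petroleum products
plant_capacity_bbl_per_day = 250_000    # target biorefinery scale named above
biomass_needed_tons = 1.0e9             # tons/year to match current refining output
biomass_available_tons = 1.4e9          # DOE/USDA estimate of producible biomass

plants = us_petroleum_bbl_per_day / plant_capacity_bbl_per_day
print(f"Nuclear biorefineries needed: ~{plants:.0f}")                       # ~72 plants
print(f"Estimated biomass supply vs need: {biomass_available_tons / biomass_needed_tons:.1f}x")
```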
Nuclear biofuels could be deployed at scale in 20 years.
Conclusions

We must transition to carbon-neutral energy. Solar and wind are not truly renewable. Advanced
nuclear is far more renewable with
promises of many thousands of years of clean energy. It is also the safest form of electricity generation.
Industry fatalities per TWe-year are less than 0.01 for legacy nuclear energy, one to three orders of magnitude lower than solar or wind. Most of those legacy fatalities
were from plants designed with high-pressure Generation-II technology. Generation-III technology is safer, having its primary focus on safety. Generation-IV
advanced nuclear technology is safer yet. Fear of catastrophic nuclear accidents is driving us away from the best chance we have of solving global warming. That fear
is unfounded.
Some say that we must reduce energy consumption! They say we must reduce the planet's population, which will help reduce energy consumption. Wishful thinking will not get us out of this mess. About 1–3 billion people on our planet either have no electricity or have very meager/unreliable electricity. They will burn oil and gas, dung, or anything they can get their hands on, to produce energy. We must figure out a way to replace fossil fuels for firm baseload. Energy demand will likely double during this century, regardless of wishful thinking. Advanced nuclear energy is the only viable option for rapidly replacing fossil fuels as firm baseload. Do not be swayed by the argument that nuclear cannot possibly ramp up in time to accomplish this objective. We can achieve major increases in nuclear energy capacity by 2040 if we put our minds and money to it.

In particular, Integral Fast Reactors (IFRs) are nuclear power plants that use existing nuclear waste as their primary fuel, requiring no mining and generating no long-lived byproducts. The technology is feasible but blocked by antiquated regulations that prevent timely licensing and commercialization.
Blees ’24 [Tom; 2024; president of the Science Council for Global Initiatives, member of the selection
committee for the Global Energy Prize, considered Russia's equivalent of the Nobel Prize for energy
research, and a consultant and advisor on energy technologies on the local, state, national, and
international levels; SCGI, “Retaking the Global Lead in Nuclear Technology,”
https://www.thesciencecouncil.com/advisors/active-advisers/tom-blees-president/348-retaking-the-global-
lead-in-nuclear-technology-2]
By 1994, this team had accomplished what they'd set out to do. The pride in their accomplishment was palpable, for they'd literally solved humanity's energy problem. The integral fast reactor (IFR) technology they'd developed would be able to use unwanted nuclear weapons material, spent nuclear fuel (deplorably tagged with the misnomer "nuclear waste"), and even the vast stockpiles of depleted uranium for fuel, and leave no long-lived radioactive byproducts to bedevil future generations. So much potential fuel was already out of the ground that such reactors could power the planet with carbon-free energy for a thousand years without any further mining or enrichment. It seemed almost too good to be true.
Alas, it seems that it was, but not through any fault of the technology. Pure misguided politics killed and essentially buried the project just as it was in its final demonstration phase, and it languished virtually unknown for over a decade. Russia still had a fast reactor running reliably (as it does to this day), but it lacked the key features that made the IFR so economical and fail-safe. After that brutal disappointment in 1994, groundbreaking nuclear development at the national labs pretty much ground to a halt.
But the need for nuclear power only increased, a realization that dawned on countless millions of people concerned
about both climate change and global development. France and Sweden had demonstrated that entire countries could
convert their generating systems to nuclear in a mere decade, and soon young nuclear engineering graduates and some of those who’d
worked on the older and very promising projects began forming startup companies and creating reactor designs that built upon previous work to
create nuclear power plants that promised to be walk-away safe and economical.
Unfortunately, by this time the regulatory regime that had developed at the Nuclear Regulatory Commission had morphed into
a virtually insurmountable obstacle to evolutionary development. Long-established companies like Westinghouse and GE found themselves
having to spend upwards of a billion dollars and waiting a decade or more just to get approval to build reactors that were merely evolutionary
modifications of existing light-water reactor concepts. Those (including GE) that wanted to build different types of reactors like the fast reactor
patterned after the IFR faced hurdles even greater than that, for such designs had never been approved and it’s doubtful that the NRC even has
sufficient qualified personnel to adequately put them through the certification process.
Since the national labs are no longer developing new designs and a lot of startups are taking on that role, the actual building and deployment is essentially moribund. How can a small startup face a billion-dollar (or more) hurdle in hopes of selling a product that's never been built and tested yet? The license and certification system requires approval before anybody can even turn a wrench. The new reactor designs have to be built on paper and in computers. In the old days, at the national labs that gave birth to the nuclear power era, the system could be described as test-then-license. Today, that's been turned on its head, to a license-then-test system.
Unless we can turn this system around and return to a test-then-license development model, countries like China and Russia will be the future
leaders of nuclear innovation. Those command economies make a test-then-license development model possible, and the advantages to leading
the world in this important field are obvious. As they and other nations pass the USA in advanced reactor development, they’ll also be the leading
voices when it comes to international oversight regimes that will apply to the dozens of countries where they’ll be selling their reactors.
But there is a way to turn the situation around in the USA, and it could happen quickly. It would take very little legislative change, if any. Mostly
it requires a different vision, and a recognition that trusting our eminently qualified national laboratories is as critical today as it was at the dawn
of the nuclear age.
Let’s illustrate such a vision with a hypothetical case. We’ll take a reactor designer called Newclear as an example, though it could be any of a
number of reactor startups. Newclear wants to build 100MWe small modular reactors to be clustered as required, with a 10-unit (1000MWe)
configuration being most frequently mentioned. How might that be done with a new developmental policy?
In this scenario, instead of submitting their design to the NRC for review, Newclear submits it to a national laboratory. Idaho National Laboratory is the logical candidate, both for its variety of experts but also for
its remoteness and large area, characteristics that led to it being chosen in the first place for nuclear reactor development. Newclear
ponies up a certain amount to cover the initial costs to INL, say five million dollars. Their plan is to build a full-scale 100MWe reactor module.
INL establishes an oversight group (OSG) that will monitor the project from start to finish. The group brings in
nuclear engineers, fuel specialists, chemists, materials scientists, and other needed specialists from outside
if there aren’t suitable experts already at INL who are available to lend their talents to the OSG. The group would also include at least one person
from the NRC, so that when the design is eventually submitted for licensing they will have been there since Day One and will have a greater
understanding of both the reactor design and the process that led to its construction.
The first duty of the OSG, which will work closely with Newclear’s people for the duration of the project, is to evaluate the design in detail to
determine, primarily, its safety qualifications. The economics of Newclear’s reactor is beside the point as far as INL is concerned. If the prototype
reactor can be built and started up safely in the estimation of the OSG, they get a green light.
Now the site for the prototype is determined. It will be built along the perimeter of the INL property, within the security zone,
but close to the edge since later on it will be excised from the lab’s property if the project is successful (more on that later). Alternatively, a
sufficiently large annex just outside INL’s land can be chosen and temporarily incorporated into the lab’s security perimeter for the duration of
the project.
At this point Newclear gets to work. Access roads are built if necessary, and construction commences, all at the expense of Newclear. The OSG
monitors all aspects of the project and evaluates and approves the construction in phases as it progresses. Newclear and the OSG will have
already settled upon a stepwise process for construction, each step being approved by the OSG before the next is started. This process continues
until the reactor is fully built and ready for pre-fission testing.
Such testing and the subsequent fuel loading and stepwise fission testing of the reactor mirrors the development process that
served perfectly well through the entire development phase from the dawn of nuclear power until the last major project, the
aforementioned Integral Fast Reactor (IFR), was terminated in 1994. During all those years of impressive developments, our nation trusted the judgement of its national lab scientists and engineers. The logic of this new approach is to copy and renew that old approach. If we could trust these people to develop, build, and test new reactor designs when they were figuring it all out with slide rules,
how much more capable will they be today with powerful computer simulations to assist them, and with years of training that built upon the work
of the early pioneers? The argument upon which this entire proposal hinges is this: We trusted them then. We can—and should—trust them now.
So we return to our Newclear project: Obviously if the reactor design is found faulty at any point by the OSG, the stepwise process will be
stopped and Newclear will have to alter the design to the satisfaction of the OSG until it can be restarted. Thus the bugs get worked out until the
reactor project is either abandoned or until it’s successfully operational at full power.
One fly in the ointment is where that electricity goes. Though the EBR-II (the IFR project’s reactor) distributed power to an outside utility, there
has been at least a resistance to national labs selling power to the outside. Though I don’t believe there are any statutes forbidding it, it would be
good to clarify as part of this restructuring of the reactor development process a clear permission for the lab to contract with outside utilities to
sell power from such projects.
It’s been pointed out that the local electricity demand in the INL area is probably too small to deal with anything but a fairly small reactor, and
that for SMRs over 50 or 100 MWe the more logical sites would be Oak Ridge or Savannah River National Laboratory. Whichever site is judged
to be optimal, the same procedures would apply.
Newclear will be responsible for negotiating a power purchase agreement with a local utility, with whatever line construction and other
infrastructure will be necessary to connect their reactor to the grid. A reasonable expectation would be that Newclear would sell electricity to the
utility at half the wholesale rate, enabling the utility to earn a decent profit as they distribute and sell it.
The money that would be taken in from the utility would go to INL, not Newclear. At this point, the reactor will be up and running, and Newclear
would only then apply to the NRC for certification to allow them to build the reactors and sell them commercially. Obviously it would be
far easier for the NRC to certify such a design since there would already be a reactor up and running, with all the data
from the entire process in hand. Meanwhile, during the entire certification process (which can reasonably be expected to proceed far
faster than it does under the current system), electricity would be generated and the DOE would be collecting money for funding the
national labs. While it wouldn’t be a huge amount of money, it should easily cover all the costs of the OSG’s work and other expenses that have
been incurred by the lab for the duration of the project. Remember, Newclear ponied up some money at the outset (we’d used five million dollars
as a reasonable estimation), so the lab should be in the black for the entire project.
Once the NRC certifies the design to allow Newclear to sell their reactors commercially, the last phase of the project is to excise the reactor site
from INL. Since Newclear could decide to simply leave the reactor running for several decades, it should be outside the lab’s perimeter to avoid
the complication of its employees and visitors having to be admitted to the lab’s property. This is why the reactor is to be built along the
perimeter. Once this perimeter issue has been cleared up, Newclear is the proud owner of a functioning reactor, which they can either choose to
operate or to sell to another owner. Since in many cases the reactors being developed like this would be modular, they could also opt to move it to
a commercial site if that would be considered practical.
But there’s one more aspect of such a developmental framework that could be a boon to our national labs. For reactors developed in this manner,
a royalty would be levied for every reactor of that type to be built in the future. It could be a very modest one, similar to the spent fuel tax
attached to nuclear-generated electricity. A tiny royalty of one tenth of one cent per kWh wouldn’t impact any potential reactor buyer, but would
bring in a constantly increasing income to the DOE to fund our national laboratories. Ten Newclear clusters with their proposed configuration of
1,000MWe each, operating at a 90% capacity factor (a realistic expectation since our current fleet runs at a bit better average than that) would
earn nearly eighty million dollars per year in royalties for the national labs, likely for 60-100 years. Obviously the royalty rate could be higher or
lower, to be determined by the legislation that would enable these changes in reactor development and licensing.
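The royalty arithmetic in that paragraph checks out; here is the calculation spelled out (using only the figures Blees gives):

```python
# Verification of the royalty estimate above, using the figures given in the text.

royalty_dollars_per_kwh = 0.001   # one tenth of one cent per kWh
clusters = 10                     # ten Newclear clusters
cluster_mwe = 1_000               # 1,000 MWe each
capacity_factor = 0.90
hours_per_year = 8_760

kwh_per_year = clusters * cluster_mwe * 1_000 * capacity_factor * hours_per_year
royalties = kwh_per_year * royalty_dollars_per_kwh
print(f"Annual royalties: ~${royalties/1e6:.0f} million")  # ~$79 million per year
```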
The return to a test-then-license system would not only allow for rapid development and testing of advanced
reactor designs, but it would make the NRC’s job a lot easier. And even if this didn’t lead to a reduction in the cost of
NRC certification (it’s hard to believe it wouldn’t), a company with an operational reactor that was already
cleared at all levels by our national laboratory would be able to find investors to come up with the billion dollars
because licensing would be all but a fait accompli.
Without a transformative approach that sweeps away the upside-down license-then-test system that we have now, the
considerable efforts being made both within and without the government to expedite the process will very likely make barely a
dent in the problem. The nearly impossible position of private companies hoping to get a new reactor built and tested will consign the USA to
the back of the technology race, and as that happens our country will also lose its leverage in designing international protocols to reduce
proliferation risk and improve safety. We can reverse the nation’s slide into near irrelevance quickly, safely, and decisively without any risk to
the public from the process of developing transformative new reactor designs. But nibbling at the edges of the existing system like we're doing
now won’t get us there. We have to recognize the problem clearly that has evolved over the last few decades into an unworkable and
counterproductive impasse. It’s time we trust our national labs again.
Scenario One is CLIMATE CHANGE.

Current energy sources are hopelessly inadequate. Non-nuclear renewables would require five times more resources than exist on Earth AND traditional nuclear power is 100 times less effective than the plan.
Snyder ’23 [Van; March 16; spent 53 years as a mathematician and engineer at the Caltech Jet
Propulsion Laboratory, MS in Applied Mathematics and System Engineering, spent seventeen years as an
adjunct associate professor; “Five Myths About Nuclear Power,” https://substack.com/home/post/p-
108860660?utm_campaign=post&utm_medium=web]
IFR-type reactors extract 99.99% of the energy immanent in mined uranium but today's reactors extract only 0.6%. The price of uranium would contribute the same amount to the delivered electricity price from IFR-type reactors if it were to increase
167 fold. Uranium could be economically extracted from lower quality ores, or from seawater, where there is estimated to be at least a thousand
times more than could be extracted from land. Another low-quality ore is coal-fired power plant waste, which contains nineteen times more
energy in the form of uranium and thorium than was extracted by burning the coal. Thorium, four times more common than uranium, can be
converted to fissile fuel by neutron transmutation in a fast-spectrum reactor.
Nuclear fission is an effectively inexhaustible source of energy.
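The two utilization figures imply the 167-fold price headroom directly (a quick check of my own):

```python
# Quick check (my arithmetic) of the utilization figures quoted above.

ifr_energy_fraction = 0.9999   # energy extracted from mined uranium, IFR-type reactors
lwr_energy_fraction = 0.006    # energy extracted by today's once-through reactors

ratio = ifr_energy_fraction / lwr_energy_fraction
print(f"Energy per tonne of mined uranium: ~{ratio:.0f}x more in an IFR")
# ~167x, which is why uranium prices could rise 167-fold before fuel contributed
# the same share of the delivered electricity price as it does today.
```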
It is possible to breed about 5% more fuel from uranium than is consumed, but only about 1% more from thorium. If the goal is to deploy a fleet
of new breeder reactors fueled only by recycled fuel, thorium should not be used before sufficient reactors are in service.
The first two goals of the IFR project were safety and waste mitigation. The third was fuel economy.
The system problem
Most energy discussions focus only on components — wind turbines and solar panels.
Electricity production and distribution is a system problem, not simply a component problem.
In Burden of Proof: A comprehensive review of the feasibility of 100% renewable-electricity systems, Renewable
and Sustainable Energy Reviews 76, Elsevier (2017), pp 1122-1133, Ben Heard et al described an analysis of 24 studies that claimed to explain
how to construct and operate regional, national, or continental-scale electricity systems. None of the studies described systems that were physically feasible. Heard et al concluded there was no point to study economic viability.
A more serious system problem is that the Earth does not have sufficient materials to build the "technology units" that the
International Energy Agency (IEA) demands be built to provide all energy from renewable sources. To stay out of the weeds,
here is just one problem: Five times more copper is needed than is known to exist on the Earth in forms that can be
recovered.

Dramatic expansions of modernized nuclear power are the only feasible path to
abate climate change.
Stein ’22 [Adam, Jonah Messinger, Dr. Seaver Wang, Juzel Lloyd, Jameson McBride, and Rani
Franovich; July 6; Director of the Nuclear Energy Innovation program at the Breakthrough Institute,
published by the Electric Power Research Institute, presented to the Nuclear Regulatory Commission, and
contributed to many high-profile projects, including the first-ever license application for an advanced
nuclear reactor in the U.S., Ph.D. and M.S. in Engineering and Public Policy from Carnegie Mellon
University where his research focused on changing the paradigm for emergency preparedness and
response for nuclear facilities; a non-resident Senior Energy Analyst at the Breakthrough Institute, Ph.D.
student at the Cavendish Laboratory of Physics at the University of Cambridge, was a Visiting Scientist
and ThinkSwiss Scholar at ETH Zürich, Master’s in Energy and Bachelor’s in Physics from the
University of Illinois at Urbana-Champaign; Breakthrough Institute Co-Director of the Climate and
Energy team, PhD in Earth and Ocean Sciences from Duke University as well as a BA in Earth Sciences
from the University of Pennsylvania; climate and energy analyst at Breakthrough, Bachelor of Science in
Mechanical Engineering at Howard University; graduate student in Technology and Policy at MIT, and a
researcher at the MIT Energy Initiative. He studies the political economy of decarbonization, with a focus
on US energy and technology policy published in the New York Times, the Los Angeles Times,
Greentech Media, and the Columbia Political Review; Master of Science in Industrial and Systems
Engineering from Virginia Tech; Breakthrough Institute, “Advancing Nuclear Energy,”
https://thebreakthrough.org/articles/advancing-nuclear-energy-report]
The Biden Administration has sought to restore America’s leadership in the global fight against climate change by investing in clean energy. The
results illuminate the potential contribution of advanced nuclear power to meeting the Biden Administration’s climate goals.
Upon taking office, President Biden rejoined the Paris Agreement, which seeks to limit the average global temperature rise by
2100 to 1.5 to 2 degrees Celsius above pre-industrial levels. Research published by the Intergovernmental Panel on Climate Change suggests
that an unprecedented increase in global nuclear generation may be required, with global nuclear
generation increasing to up to 500 percent of current levels across modeled scenarios, to reach ambitious
climate targets like 1.5 C at low cost. President Biden has also announced a policy goal of reaching 100% clean electricity in the United States
by 2035. Nuclear already accounts for 48 percent of clean electricity generation in the United States at present, and
provides a valuable firm source of power to complement the increasing share of variable renewables on the grid. Meeting the administration’s
ambitious climate and energy targets will require continued existing nuclear power plant operation, as well as
advanced nuclear reactor deployment.
The modeling results, produced with Vibrant Clean Energy, suggest that commercializing advanced nuclear technology could
result in rapid growth of clean nuclear generation that would help to meet the administration’s climate goals. The
contribution of advanced nuclear to the United States electricity sector in 2050 across the scenarios is summarized in Table 7-1.
In the optimistic Low-Cost High-Learning scenario, the least-cost pathway to meeting a 2050 net-zero power sector target in the United States would have nuclear power provide approximately 50 percent of the entire US electricity demand, up from 19 percent today.
The majority of this nuclear generation would come from advanced reactors, with the deployment of 469 GWe of advanced nuclear power by 2050. Nuclear energy is able to provide this high share of generation with only 21 percent of the capacity in the electricity system, due to the high capacity factors of nuclear plants relative to other clean sources. Additionally, this growth comes in spite of a steady decline in generation from existing traditional nuclear plants, which declines by 80 percent by 2050 in the Low-Cost High-Learning scenario.
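A rough sanity check on those headline numbers (my own arithmetic with an assumed capacity factor; this is not the Vibrant Clean Energy model output):

```python
# Rough sanity check (assumed capacity factor; not the model's output) of the
# 469 GWe and ~50%-of-demand figures quoted above.

advanced_capacity_gwe = 469
capacity_factor = 0.90            # assumed, consistent with the existing fleet
hours_per_year = 8_760
todays_us_generation_twh = 4_100  # assumed present-day US total, ~4,100 TWh/yr

generation_twh = advanced_capacity_gwe * capacity_factor * hours_per_year / 1_000
print(f"Advanced nuclear output: ~{generation_twh:,.0f} TWh/yr")
print(f"Relative to today's total US generation: ~{generation_twh / todays_us_generation_twh:.0%}")
# ~3,700 TWh/yr, close to today's entire US generation, and consistent with
# roughly half of a substantially larger, more electrified 2050 demand.
```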
The results illustrate the potential importance of advanced nuclear power relative to solar and wind. In the Low-
Cost High-Learning scenario, nuclear generation exceeds solar generation by 75 percent and exceeds wind generation by 50
percent in 2050. This suggests that the market size for advanced reactors could substantially exceed the projected large markets
for solar and wind power in the course of achieving a future low-cost net-zero power sector. However, finance and policy
support would be necessary to achieve the low costs and high learning rates implied by this optimistic scenario.
In these modeling results, 20 to 50 GWe of advanced nuclear capacity is deployed by 2035. The contribution of advanced nuclear to the United
States electricity sector in 2035 is summarized in Table 7-2. Across the scenarios, advanced nuclear power contributes 3 to 8% of
US generation by 2035, with all nuclear generation providing 15 to 19% of US generation that year. In 2035, the percentage of total
generation from the sum of conventional and advanced nuclear power plants across all scenarios is comparable to generation from wind or solar.
Table 7-2: Nuclear shares of total US generation and capacity in 2035 (least-cost optimized for 2050 net-zero power sector target).
By 2035, the United States achieves around a 60% total reduction of direct power sector CO2 emissions relative to 2020 fossil CO2 emissions
across all four of the scenarios. This corresponds to 2035 power sector CO2 emissions of around 700 million metric tons of CO2 (Mt CO2),
compared with 2020 emissions of 1,750 Mt CO2. In the model scenarios, power sector emissions fall by 90% relative to 2020 levels by 2045
(175 Mt CO2 in 2045), before the power grid achieves essentially full decarbonization in 2050 (Figure 7-1). Note that the current US grid has
already achieved some decarbonization relative to 2010 power sector fossil emissions of 2,400 Mt CO2.
The scenarios used in this report were constructed around a 2050 net-zero power sector target rather than the Biden Administration’s 2035 goal
for a zero-emission power sector, which means that these results may understate the potential contribution of advanced nuclear technology in
reaching a binding 2035 net-zero target. Reaching a 2035 net-zero target would require substantially more policy and financial support. Across
the scenarios, around 70% of the United States generation comes from clean sources in 2035.

Warming causes extinction, and emissions reduction prevents uncontrollable and self-perpetuating warming.
Spratt ’12-2 [David citing Jason Eric Box, Merritt Turetsky, Katharine Hayhoe, Gavin Schmidt, Johan
Rockström, Stefan Rahmstorf, Tim Lenton, and Jim Skea; December 2; Research Director at the
Breakthrough National Center for Climate Restoration, Founder of the Climate Action Center, two-
decades experience in climate modeling and advocacy, BS in Economics from Australian National
University; Professor of Glaciology at Ohio State University, PhD, University of Colorado; Professor of
Ecology and Evolutionary Biology at University of Colorado, PhD, University of Alberta; Professor of
Atmospheric Science at Texas Tech University, PhD, University of Illinois; Director of the Goddard
Institute at NASA, PhD, University College London; Professor of Environmental Science at Stockholm
University, PhD, Stockholm University; Professor of Ocean Physics at Potsdam University, PhD, Victoria
University; Professor of Earth Science at University of Exeter, PhD, University of East Anglia; Chair of
the IPCC, Professor of Sustainable Energy at Imperial College London, PhD, Cambridge University;
Breakthrough National Center for Climate Restoration, “Collision Course: 3-degrees of Warming and
Humanity’s Future,”
https://www.breakthroughonline.org.au/_files/ugd/148cb0_085aaeb2f1a1481789014b8e895ad23b.pdf]
The big picture is that important elements of the global climate system, such as polar ice-sheets, are reaching their tipping
points decades to centuries faster than was previously projected. Many events in the climate system are beyond climate models’
projections; that is, current models are not capturing all the risks.30 They overlook or downplay the impacts of non-linear and difficult-to-predict processes such as the loss of ice sheet mass, ocean heat drawdown, rising sea levels, upticks in extreme events, carbon stores losing integrity, and more. Examples include:
— Prof. Jason Box says that for Greenland there is “a more rapid response of the ice than is currently encoded in
climate models that project sea-level rise… we cannot yet rely on ice sheets models for credible sea level projections”.31
— CO2 and methane release from deep permafrost are not routinely included in climate models.32 Prof. Merritt Turetsky says that: "Permafrost is thawing much more quickly than models have predicted, with unknown consequences for greenhouse-gas release."33
— Nor have models accounted well for the slowing of the Atlantic Meridional Overturning Circulation (AMOC).34
— Models have been unable to reproduce the frequency and intensity of persistent summer weather extremes of recent years.35
— Australia has experienced bushfires of an intensity not projected to happen until the 2090s, forcing a change in the fire intensity rating system.
But in 2024 scientists warned again that Australian bushfires could still be more intense and extensive than current predictions.36
— And observed changes in temperature extremes in parts of Australia during 2011–2020 tracked much higher than the projected changes for that period, and already are tracking at the changes projected for the 2030 (2021–2040) period.
— In 2023, the rapid warming of the North Atlantic was beyond model expectations (greater than four standard deviations), with conditions
similar to those scientists expect to be the average at 3°C of warming.37 By August 2024, the El Niño was fading but the mean Northern mid-
latitude SST kept warming. Likewise, in 2023 the rapid retreat of Antarctic sea-ice was astounding and five standard deviations beyond the
mean.38
Reflecting on the extreme events of 2023, Prof. Katharine Hayhoe told The Guardian that: "We have strongly suspected for a while that our projections are underestimating extremes, a suspicion that recent extremes have proven likely to be true… We are truly in uncharted territory in terms of the history of human civilisation on this planet."39 In a similar vein, NASA's Gavin Schmidt points to the
problem of trying to understand the future based on the recent past when the changes are now rapid and systemic: “The system is changing in a
way where what happened in the past is no longer a good guide to what’s going to happen in the future.”40
03 System Tipping Points Tumble Abruptly
Climate change has arrived, with severe impacts emerging at lower temperatures than expected. The distribution has shifted; historic tail risks are now expected. Climate risks are complex, interconnected and could threaten the basis of our society and
economy. A systems approach is required. Climate Scorpion, March 2024 41
There is now clear evidence that a number of crucial system-level tipping points have been reached, in some cases decades to centuries earlier
than had been projected. Seven of nine sustainable planetary boundaries have already been exceeded.
Tipping Points
A tipping point is a threshold beyond which large change is initiated in a system and becomes self-perpetuating, and the change is often abrupt
and irreversible on long timescales.42 Passing these thresholds may constitute an ecological point of no return, after which it may be practically
impossible to return the climate to pre-industrial (Holocene) stability. Tipping points may interact to form tipping cascades that act to further
accelerate the rate of warming and climate impacts (see Section 7).
Australian researchers warned in February 2024 that the effects of tipping points on the global climate “are generally not currently accounted for
in projections based on climate models. This means that effects of tipping points are also not included in national climate projections and impact
assessments for Australia and may represent significant risks on top of the changes that are generally included.”43
Prof. Johan Rockström says that the following Earth system elements "are likely to cross tipping points already" at 1.5°C of warming:44 The Greenland Ice Sheet; the West Antarctic Ice Sheet (WAIS); abrupt thawing of permafrost; loss of all tropical coral reef systems; and collapse of the Labrador Current, one element of the Atlantic Meridional Overturning Circulation (AMOC).
This analysis comes from the Global Tipping Points report, which in late 2023 warned that five important natural thresholds already risk being
crossed, and three more may be reached if the world heats to 1.5°C above pre-industrial temperatures. Such tipping points "can trigger devastating domino effects, including the loss of whole ecosystems and capacity to grow staple crops, with societal impacts including mass displacement, political instability and financial collapse".45
They include:
— The Greenland Ice Sheet likely reached its tipping point 20 years ago.46
— The West Antarctic glaciers have passed a tipping point;47 and the Paris Agreement temperature target of 1.5°C is sufficient to drive the runaway retreat of WAIS.48 In May 2024, scientists warned that Thwaites Glacier, nicknamed the "Doomsday Glacier", is near collapse.49
— Parts of East Antarctica might be similarly unstable.50 Denman Glacier has been identified as susceptible to collapse of its ice shelf and inundation of the glacier itself, which sits on a retrograde base below sea level.51
— Summer Arctic sea-ice, where three-quarters by volume has already been lost,52 and is in a death spiral.53
— Arctic permafrost, which is now a net source of major greenhouse gases.54
— Canada's boreal forests are one of Earth's largest terrestrial carbon storehouses. Long a reliable "sink" for carbon, the forests since 2001 have become instead an increasing carbon "source", and passed their tipping point. In the 2020s, Canada's forests have raised the country's total emissions by 50%.55
— Tropical forests are also nearing critical temperature thresholds.56 The forest systems are oscillating to non-forest ecosystems in eastern, southern and central Amazonia.57 The Amazon has become a net carbon source during recent climate extremes and the south-eastern Amazon was a net land carbon source over the period 2010–2020.58 And the South American monsoon is heading towards a "critical destabilisation point" or tipping point likely to cause Amazon dieback.59
— Tropical coral reef systems. Researchers warn that “warming of 1.5°C relative to pre-industrial levels will be catastrophic for coral reefs”
worldwide.60 The IPCC reported that nearly all tropical reefs will become extinct even if global warming is kept to 1.5°C.
The permafrost and forest changes represent fundamental changes in the carbon cycle, in which systems that have had a major role in absorbing
carbon from the atmosphere and storing it, flip to becoming a source of carbon to the atmosphere. If the land-based carbon stores become sources rather than sinks, that drives up the rate of warming. In October 2024, The Guardian reported on preliminary research findings showing the amount of carbon absorbed by the land sinks had temporarily collapsed in 2023: "The final result was that forest, plants and soil – as a net category – absorbed almost no carbon."61
And the biggest story of 2024 is the non-trivial and unacceptable risk of Atlantic Meridional Overturning Circulation collapsing by mid-century,
which would be “a going-out-of-business scenario for European agriculture”.62 A July 2023 study estimated “ a collapse of the AMOC
to occur around mid-century under the current scenario of future emissions”, with a 95% probability of it occurring between 2025
and 2095.63 And a paper in publication estimates the probability of an AMOC collapse before the year 2050 to be 59±17%.64 A full
breakdown of AMOC could happen within a few decades, says AMOC specialist Stefan Rahmstorf.65 AMOC collapse would result in the West Africa and South Asia monsoons becoming unreliable, a one-metre sea level rise on both sides of the North Atlantic, Australia becoming warmer and more prone to flooding, a flip of the wet and dry seasons in the Amazon, and as much as half of the world's viable area for growing corn and wheat could dry out. "In simple terms [it] would be a combined food and
water security crisis on a global scale.”66
<<TEXT CONDENSED, NONE OMITTED>>
Planetary Boundaries In 2009, a group of eminent researchers identified a framework of “planetary boundaries” that define “a safe operating space for humanity”.67 If we cross these limits, they said, abrupt or irreversible environmental changes can occur with serious consequences for humankind. The nine planetary boundaries identified are: climate change; change in biosphere integrity (biodiversity loss and species extinction); stratospheric ozone depletion; ocean acidification; biogeochemical flows (phosphorus and nitrogen cycles); land-system change (deforestation);
freshwater use; atmospheric aerosol loading (microscopic particles in the atmosphere that affect climate and living organisms); and introduction of novel entities. The boundary for atmospheric CO2 was said to be no more than 350 ppm, because transgressing this boundary “will increase the risk of irreversible climate change, such as the loss of major ice sheets, accelerated sea-level rise and abrupt shifts in forest and agricultural systems”. The current CO2 level is greater than 420 ppm. An update in September 2023 described as “the first scientific health check for the entire
planet” found that six out of nine planetary boundaries had been broken because of humancaused pollution and destruction of the natural world.68 And in September 2024 researchers announced that a seventh boundary — ocean acidification — is on the brink of being breached.69 In May 2023, an assessment of “Safe and just Earth System Boundaries” (ESBs) identified eight global and regional ESBs and found that “seven of eight globally quantified safe and just ESBs and at least two regional safe and just ESBs in over half of global land area are already exceeded”.70 In
October 2023, twelve authors, including those who had led the planetary boundaries work, published “The 2023 state of the climate report: Entering uncharted territory”, and warned that at 2.6°C warming we face “potential collapse of natural and socioeconomic systems in such a world where we will face unbearable heat, frequent extreme weather events, food and fresh water shortages, rising seas, more emerging diseases, and increased social unrest and geopolitical conflict. Massive suffering due to climate change is already here, and we have now exceeded many safe and just
Earth system boundaries, imperilling stability and life-support systems.”71 04 The World is not Decarbonizing It appears the green recovery following COVID-19 that many had hoped for has largely failed to materialize. Instead, carbon emissions have continued soaring, and fossil fuels remain dominant. The 2023 state of the climate report: Entering uncharted territory, October 202372 Annual human-caused greenhouse gas emissions continue to increase. In absolute terms, decarbonisation has not occurred because lower emissions from electricity use are being offset by growth
in other areas of energy use, and there is likely to be a slow decline in total emissions up to 2050. Emissions The world’s energy-related CO2 emissions in 2023 increased to a new record high,73 despite clean energy growth. Whilst wind and solar power climbed by 13% in 2023, that did not match the world’s growing consumption of primary energy, which rose 2% for the year.74 Much of that increased demand came from AI, cloud computing and crypto industries. In summary, the world has not yet started to decarbonise in absolute terms because lower emissions from
electricity use are being offset by growth in other areas of energy use. So, for the time being at least, we are not experiencing an energy transition: what humanity is doing is adding energy from renewable sources to the growing amount of energy it derives from fossil fuels.75 Australia is a good example, where reductions in the electricity sector are cancelled out by rises in other sectors (see Figure 2). [[FIGURE 2 OMITTED]] United States crude oil production officially hit a record 13.4 million barrels per day in August 2024, and since 2008 has skyrocketed 350%. The US is
now the world’s largest oil producer, exceeding Russia’s output by ~35% and Saudi Arabia by ~38%.76 And the US plans to more than double gas exports. Replacing coal with gas is no help if that gas is exported as LNG, with a new study finding that exported gas is 33% worse in terms of planet-heating emissions over a 20-year period compared with coal.77 And coal use up to 2030 may be higher than previously anticipated. The STEPS scenario in the International Energy Agency’s World Energy Outlook 2024 has electricity demand rising faster than renewables output,
resulting in coal demand in 2030 being 300 million tonnes of coal equivalent (Mtce) higher than in their 2023 report.78 The share of fossil fuels in global energy consumption over the last 25 years has decreased from 86% in 1997 to 82% in 2022. And the IEA calculates that the CO2 intensity of power fell 6% in the thirty years from 1990 to 2021.79 Projected Emissions At some point, perhaps soon, yearly global emissions will peak, plateau and slowly decrease, depending on the rate of increase in renewable energy construction and innovation, the rate of economic growth, the
fate of fossil fuel financial dis/incentives such as subsidies and carbon taxes, and several other factors. When that will happen, and the likely rate of fall, is contentious, as are the projections of future fossil fuel demand by the industry itself, and from the International Energy Agency (IEA). The UN Environment Program (UNEP) and IEA both project emissions will drop only 10–20% by 2050, with all major oil/gas nations planning to expand production. The UNEP Production Gap report finds on current plans emissions may be as high in 2050 as today (Figure 3).80 The IEA
says that stated policies will result in oil and gas production in 2050 as high as 2020, with coal halved.81 The OECD projects that a world economy more than twice the size of today will need 80% more energy in 2050 and, without new policy action, the global energy mix in 2050 will not differ significantly from today, with the share of fossil energy at about 85%, renewables including biofuels just over 10%, and the balance nuclear.82 But others say this is too pessimistic: “While the fossil fuel industry still argues there will be strong market demand for oil and gas in 2050, this
ignores all the evidence of past disruptions that superior technologies don’t take market share, they take whole markets.”83 And the IEA also says that global oil demand growth is slowing sharply due to surging electric vehicle sales.84 05 Petrostates & Big Oil are on the Offensive Unexpected strong demand for oil has stiffened the industry’s opposition to government and activist demands to phase out fossil fuel development. Policymakers also have shifted their focus to energy supply security and affordability since Russia invaded Ukraine and during the latest conflict in the
Middle East. Marianna Parraga and Arathy Somasekhar, 19 March 202485 Contrary to global policymakers’ stated collective intent, petrostates — including Australia — and big oil have signalled their intention to abandon mitigation commitments and continue to expand production in the coming decades. The largest fossil-fuel-producing states around the world plan to keep on expanding production, whilst major fossil fuel companies are backtracking on their climate pledges.86 A report by Global Energy Monitor has concluded that the world’s fossil-fuel producers are on
track to nearly quadruple the amount of extracted oil and gas from newly approved projects by the end of this decade, with the US leading the way in a surge of activity.87 This is just one indication that petrostates and big oil have no intention of reducing production, but the opposite. This is well illustrated in the UNEP Production Gap report from 2023 (see Figure 4). As a result, current government plans worldwide will likely result in emissions in 2050 almost as high as they are today.88 Petrostates The intentions of the world’s five largest fossil fuel producers are clear — and
civilisation-threatening — as reported by the UN:89 — In China, oil production is projected to be flat to 2050, but gas will increase more than 60% from 2020 to 2050, while coal use will remain high till 2030 then decline sharply. — In the United States, oil production will grow and then remain at record levels to 2050, and gas is projected to continuously and significantly increase to 2050; whilst coal will drop by half. LNG export capacity is on track to more than double between 2024 and 2028 if projects currently under construction begin operations as planned. — Projections
for Russia are available only to 2035, with coal and gas production projected to increase significantly, while oil remains flat. — In Saudi Arabia, oil production is projected to grow by 26-47% by 2050, with gas up 40% between 2019 and 2050. Together they make up almost half of the Saudi economy. — And in Australia, one of the world’s top two liquified natural gas and coal exporters, gas production is projected to stay above the current level for the next 15 years, with coal remaining high over the same period, above 450 million metric tons annually. The world’s largest oil
producers are (in order) the United States, Saudi Arabia, Russia, China, Canada, Iraq, Iran, United Arab Emirates, Brazil and Kuwait. And the world’s largest gas producers (in order) are the United States, Russia, Iran, China, Canada, Qatar, Australia, Norway, Saudi Arabia, and Algeria. Of those 15 states, seven are theocratic states or one-party dictatorships, where there is no democratic space to challenge state policy; and in the remainder the fossil fuel industry wields enormous political power. — Those 15 nations include three of the four top arms manufacturers, two of the
top three arms exporters and the top three arms importers (India, Saudi Arabia and Qatar). Fossil fuels fund militarisation. — A petrostate chaired the 2023 UN climate policy-making conference COP28 (UAE); another COP29 (Azerbaijan). Azerbaijan appointed a state oil company veteran as COP29 president. — It is likely that a number of states would fiscally collapse without fossil fuel income. States where the fossil fuel sector is greater than 20% of GDP are shown in Table 1. [[TABLE 1 OMITTED]] Big Oil From 2016 to 2022, fifty-seven entities including nation-states,
state-owned firms and investor-owned companies produced 80% of the world’s CO2 emissions from fossil fuels and cement production.90 The three biggest companies were all state-owned: oil firm Saudi Aramco, Russia’s energy giant Gazprom and state-owned producer Coal India. Big oil says it should expand or maintain oil and gas production, and has largely abandoned any net-zero-2050 commitments that may have been made: — Saudi Aramco CEO Amin Nasser said in March 2024 that the world should give up on the idea of phasing out oil and gas: “We should abandon
the fantasy of phasing out oil and gas, and instead invest in them adequately.”91 — Meg O’Neill, CEO of Woodside Energy, rejected what she called simplistic views that the transition to cleaner fuels can “happen at an unrealistic pace”.92 — Exxon Mobil forecasts global oil demand in 2050 will be the same — or even slightly higher — than current levels, driven by growth in industrial uses such as plastic production and heavy-duty transportation. Exxon’s forecasts are similar to other recent projections, including by OPEC and Enbridge.93 — In March 2024, Shell — the
world’s second-largest oil and gas company and largest LNG producer — announced it was watering down its climate targets, with chief executive Wael Sawan saying it was “perilous” for Shell to set 2035 emission reduction targets because “there is too much uncertainty at the moment in the energy transition trajectory”.94 Big Money Likewise, for big business CEOs, giving priority to sustainability and climate has declined sharply over the last year,95 and they are more concerned about inflation, artificial intelligence and geopolitics. Big finance is backing away from taking a
“white knight” role in leading the energy transition, now saying that the first priority is delivering profits to shareholders: “Expecting banks collectively to rapidly reallocate their portfolios may not be compatible with maintaining a profitable, diversified business model.”96 Writing for Foreign Affairs, Meghan L. O’Sullivan and Jason Bordof describe how: “BlackRock CEO Larry Fink championed ‘energy pragmatism’ in his most recent annual letter, and a few weeks later, a JPMorgan Chase report called for a ‘reality check’ about the transition away from fossil fuels. In April,
Haitham al-Ghais, the secretary-general of OPEC, wrote that the energy transition would require ‘realistic policies’ that acknowledge rising demand for oil and gas.”97 Australia Australia received one of the lowest scores for “climate action” – ranking fourth-last out of the 168 countries — that were scored in the 2024 Sustainable Development Report published by the Sustainable Development Solutions Network (SDSN). Australia was only ahead of Qatar, Brunei and the United Arab Emirates for climate action.98 In Australia since the 2022 election, the federal government has
approved seven new coal projects, approved the drilling of 116 new coal seam gas wells, defended in court the right of the coal industry not to consider the climate impact of opening new fossil fuel projects, and passed legislation designed to expedite the expansion of the gas industry, according to the Australia Institute.99 The federal government has approved new gas exploration permits in waters off South Australia, Victoria and Tasmania, along with carbon export permits to encourage CCS, a technology not proven at scale.100 And government subsidies to fossil fuel
producers jumped by 31% to $14.5 billion over the last year.101 06 Warming is Accelerating Towards 3°C or More We are potentially headed towards 3°C of global warming by 2100 if we carry on with the policies we have at the moment. Prof. Jim Skea, IPCC Chair, 6 October 2024102 The failure to reduce emissions fast and the intention of petrostates and big oil to continue expanding production puts Earth on a path to 3°C of warming or more, given the political inertia and the inertia of the energy system. Factors influencing future warming

Climate-warming greenhouse gas emissions have not yet peaked, but will likely do so soon as the economic advantages of renewable power generation become even more obvious, and ageing coal-fired generators in the electricity sector reach their use-by date. However, as discussed in sections 4 and 5 above, oil and gas production are likely to remain strong — and may increase — until mid-century. On present indications, the emissions decline over the next three decades will be slow.
This pattern is completely at odds with policy-makers’ stated intention of holding warming to 1.5–2°C. In 2017, a “carbon law”
was articulated by a group of leading scientists who demonstrated that for a two-in-three chance of holding warming to 2°C, emissions would
need to be halved every decade from 2020 to 2050; CO2 emissions from land use reduced to zero by 2050; and carbon drawdown capacity of five
gigatonnes of CO2 per year be established by 2050.103 Clearly, given the current state of climate policymaking, we are not within cooee of
holding warming to 2°C, with annual emissions likely rising between 2020 and 2030, rather than halving.
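To make the carbon-law arithmetic above concrete, here is a minimal sketch of the halving-per-decade schedule. The 2020 starting value of roughly 36 Gt CO2 per year is an assumed round number for illustration, not a figure taken from the text.

```python
# Rough sketch of the "carbon law" trajectory described above: halving
# fossil-fuel CO2 emissions every decade from 2020 to 2050. The 2020
# starting value (~36 Gt CO2/yr) is an illustrative assumption.

start_year, start_emissions = 2020, 36.0  # Gt CO2 per year (assumed)

for decade in range(4):  # 2020, 2030, 2040, 2050
    year = start_year + 10 * decade
    emissions = start_emissions / (2 ** decade)  # halved each decade
    print(f"{year}: {emissions:5.1f} Gt CO2/yr")

# Output: 2020: 36.0, 2030: 18.0, 2040: 9.0, 2050: 4.5, compared with
# annual emissions that are in fact still rising in the 2020s.
```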
Future emissions are a primary determinant of the warming path over the medium to longer term; but so are other factors including accelerated
sea-ice loss decreasing Earth’s reflectivity, and accelerating emissions from carbon stores (boreal and tropical forest fires, reduced ocean
efficiency, and permafrost feedbacks, for example), though these are not systematically incorporated into model projections of future warming.
Another factor is the Earth’s Energy Imbalance (EEI), which has grown consistently over the last two decades: a positive EEI “confirms the lag
of the climate system in responding to forcing and implies that additional global warming will take place even without further forcing
change”.104 One of the drivers of increasing EEI has been a reduction in sulfate aerosol emissions, which are a by-product of burning fossil
fuels, and have a strong cooling impact of 0.5–1°C, but are short-lived in the atmosphere. Aerosols have been “masking” some of the warming so
far.105
Declining coal use and clean air policies reduce the aerosol impact. This is our “Faustian bargain”:106 as fossil fuel use declines, so will aerosol
emissions which have been offsetting some warming, so that for the next two decades lower emissions will have little impact on the warming
trend.107 One example: A 5% annual reduction in emissions of a single greenhouse gas, from 2020 and based on a middle-road-emissions path,
has no statistically significant effect on warming for more than two decades, as compared to a no-mitigation pathway.108
Warming Projections
A clear majority of scientists expect warming of more than 3°C, and 82% expect to see catastrophic impacts of climate change in their lifetime, according to a 2021 survey by the journal Nature.109 And a survey of 380 IPCC scientists by The Guardian in 2024 found 80% foreseeing at least 2.5°C of global heating, and half 3°C or more.110 Many of the scientists envisage a "semi-dystopian" future, with famines, conflicts and mass migration, driven by heatwaves, wildfires, floods and storms of an intensity and frequency far beyond those that have already struck.
The Climate Scoreboard shows the progress that the national plans submitted to the UN climate negotiations will make in mitigating climate
change. Their analysis (at September 2024) shows that the national contributions to date, with no further progress post-pledge period, result in
expected warming in 2100 of 3.2°C (with a range of uncertainty of 1.9 – 4.4°C).111 Of course, nations may make further commitments, but on
the other hand many are not on the path to achieving those commitments they have already made. Similarly, the 2023 IPCC Synthesis Report said
that implemented policies result in projected emissions that lead to warming of 3.2°C, with a range of 2.2°C to 3.5°C.112
A 2021 climate risk assessment by the pre-eminent UK international affairs think-tank Chatham House focused on a RCP4.5 scenario (which
may be too conservative on the future emissions path) and the impacts that are likely to be locked in for the period 2040–50 unless
emissions drastically decline before 2030 (which they are not!). The scenario had a mean temperature rise of 2.7°C and a “plausible
worst-case scenario" (10% chance) of 3.5°C or more. The report added that this could be an underestimate if tipping points are reached sooner than
the orthodox science suggests.113
Emphasizing this point, a rating system to evaluate the plausibility of climate model simulations in the IPCC’s latest report shows that models
that lead to potentially catastrophic warming “are plausible and should be taken seriously”.114
There are big questions about the size of the aerosol forcing, and the related issue of how sensitive the climate is to changes in greenhouse gases,
which remain an issue of scientific contention. New climate history research published in December 2023, based on a study of the last 66 million
years, concluded that global temperature may be more sensitive to CO2 levels than current models estimate.115 It showed that the last time CO2
levels were as high as today was around 14 million years ago, which is longer than previous estimates, and that climate sensitivity — the amount
of warming resulting from a doubling of atmospheric CO2 — may be between 5°C and 8°C, compared to the IPCC orthodoxy of 1.5–4.5°C.
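A rough way to see why this sensitivity range matters: the sketch below applies the standard logarithmic CO2-warming relationship to today's roughly 420 ppm against a roughly 280 ppm pre-industrial baseline. It ignores non-CO2 forcings, aerosols and ocean lag, so it is an illustration of the sensitivity question under those assumptions, not a projection.

```python
# Back-of-envelope equilibrium warming implied by a given climate
# sensitivity S (warming per doubling of CO2), using dT = S * log2(C / C0).
# CO2-only, no aerosols or ocean lag: an illustration, not a forecast.

import math

C0, C = 280.0, 420.0  # pre-industrial and present-day CO2, ppm (assumed)

for S in (1.5, 3.0, 4.5, 5.0, 8.0):  # degrees C per doubling
    dT = S * math.log(C / C0, 2)
    print(f"sensitivity {S:3.1f} C/doubling -> equilibrium warming ~{dT:3.1f} C")

# log2(420/280) is about 0.585, so a 3 C sensitivity implies ~1.8 C at
# equilibrium from CO2 alone, while 5-8 C implies ~2.9-4.7 C.
```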
07 The Physical Risks are Cascading and Systemic
The evidence from tipping points alone suggests that we are in a state of planetary emergency: both the risk and urgency of the situation are acute
[…] If damaging tipping cascades can occur and a global tipping point cannot be ruled out, then this is an existential threat to civilization. Prof.
Tim Lenton and colleagues, “Climate tipping points — too risky to bet against”116
The physical risks are non-linear, cascading, systemic and largely irreversible on human time frames. Climate models do not adequately represent
all processes and are likely to underestimate the risks.
Many elements of the climate system exhibit tipping points or thresholds at which a small change causes a larger, more critical change to be
initiated, taking that system from one state to a discretely different state far less conducive to human survival and prosperity (see section 3
above). For example, a polar ice sheet may reach a temperature threshold beyond which continuing and accelerated ice mass loss occurs, even
without any additional rise in temperature, and such a change may take decades to centuries to be fully realized before a new (ice-free) system
stability occurs.
Such threshold changes may be abrupt and irreversible on relevant time frames. And once the threshold is passed, returning conditions to pre-
threshold conditions may not restore the system. This is known as hysteresis, or bifurcation of a system, where it may be more difficult, or
impossible, for a system to return to its previous state. Put more simply: the path from A to B is not the same as the path from B to A. Ice
sheets are a good example.
A chilling 2015 report on Thresholds and closing windows: Risks of irreversible cryosphere climate change warned that the Paris commitments
will not prevent the Earth “crossing into the zone of irreversible thresholds” in polar and mountain glacier regions, and that crossing these
boundaries may “result in processes that cannot be halted unless temperatures return to levels below pre-industrial” (emphasis added).117 For
example, the tipping point for the most vulnerable West Antarctic glaciers is probably between 0.5°C and 1°C, but cooling the planet back to that
range would not create the conditions for their re-establishment.
And in 2024, researchers again warned of “overconfidence in climate overshoot”, that is, exceeding a temperature target for decades, on the
assumption that negative emissions technology will be able to later reduce the heating and restore conditions as if there had been no overshoot.
Extinctions caused by overshoot are one example. They show that “global and regional climate change and associated risks after an overshoot are
different from a world that avoids it… the possibility that global warming could be reversed many decades into the future might be of limited
relevance for adaptation planning today [because] temperature reversal could be undercut by strong Earth-system feedbacks resulting in high
near-term and continuous long-term warming… Only rapid near-term emission reductions are effective in reducing
climate risks."118
Changes in one element of the climate system may also trigger an unforeseen chain or cascade of events in which one event in a system has a
negative effect on other related components. For example, the mutual interaction of individual climate tipping points and/or abrupt, non-linear
changes, may lead to more profound changes to the system as a whole, and interactions
between these climate systems could
lower the critical temperature thresholds at which each tipping point is passed.119
Together, tipping point thresholds, non-linear change and cascading events represent systemic risks, that is, the risk of a breakdown of an entire
system rather than simply the failure of individual parts:
More frequent and intense extreme weather and climate-related events, as well as changes in average climate conditions, are expected to continue
to damage infrastructure, ecosystems, and social systems that provide essential benefits to communities…Extreme weather and climate-
related impacts on one system can result in increased risks or failures in other critical systems, including water resources, food production and distribution, energy and transportation, public health, international trade, and national security. The full extent of climate change risks to interconnected systems, many of which span regional and national boundaries, is
often greater than the sum of risks to individual sectors.120
Systemic change means climate elements can tip from one state to an entirely different one with a sudden shock that may permanently alter the
way the planet works.121 In the physical interactions among the Greenland and West Antarctic ice sheets, the Atlantic Meridional Overturning
Circulation and the Amazon rainforest, the polar sheets are often the initiators of cascade events,122 with Greenland and West Antarctica at risk
of passing their tipping points within the 1.5°C–2°C Paris range (and there is evidence they have already done so).
Such changes are not adequately incorporated into climate models: “Change can come about abruptly and even catastrophically… Predictive
models are the lifeblood of climate science, and the foundation upon which political responses to the climate and ecological crisis are often based.
But their ability to predict such large-scale disruptive events is severely limited… The IPCC's estimates of how much CO2 we
can still emit to be on the safe side explicitly leave out many known large-scale disruptions or tipping points because of
insufficient understanding or because models cannot capture them.”123
Researchers have also investigated how changes in forest degradation and monsoon circulation are interlinked: “It turns out that forest loss
caused by direct deforestation, droughts, and fires might vastly contribute to a changing climate in South America and could drive the
coupled Amazon rainforest/South American monsoon circulation system past a tipping point [and] suggest an upcoming regime
shift of the Amazon ecosystem.”124
If cascades coalesce, there is the possibility of "Hothouse Earth". In 2018, a group of eminent scientists explored the potential for self-reinforcing positive feedbacks in major elements of the climate system and their mutual interaction to drive the Earth System climate to a point of no return, whereby further warming would become self-sustaining (that is, without further human perturbations), and prevent temperature stabilization, driving the system to what they termed a "Hothouse Earth".125 In plain terms, humans would lose control and lack the capacity to stop cascading warming, and the researchers warned that "we are in a climate emergency… this is an existential threat to civilisation".126 This planetary threshold could exist at a temperature rise as low as 2°C, possibly even in the 1.5°C–2°C range.127 In other words, we may have already arrived at this point, but
conclusive evidence of this moment requires hindsight.

A convergence of cross-disciplinary models recognizes the potential for cataclysmic impacts.
Undheim ’24 [Trond; April 2024; Ph.D. and Research Scholar at Stanford University; Progress in
Disaster Science, “An Interdisciplinary Review of Systemic Risk Factors Leading Up to Existential
Risks,” vol. 22]
Climate change has relatively recently been “upgraded” from catastrophic to existential risk [136,137], a
worrisome development particularly because the effects are most severe in the Global South. The reason is the
cascading effects that are being mapped by specialists yet are still poorly understood in wider circles. The overall impacts of
climate change and wider ecological risks (including biodiversity collapse) are not well understood. [[138], [139], [140], [141], [142]].
The emerging findings from the Intergovernmental Panel on Climate Change (IPCC)'s work is one key reason but not the
only one. The so-called “integrated assessment models” (IAMs), scenarios that attempt to “harmonize assumptions about the
future across disciplines”, currently dominate climate change scenario methodology [33]. IAMs comprise a
wide range of models, all quantitative in nature, but with various levels of abstraction, complexity, and computer calculation
dependence [143]. Climate change as such could become a trigger for other catastrophic risks [136]. It could increase
international conflict [144] (possibly leading to world wars) or might exacerbate infectious disease spread (zoonosis) [[145],
[146], [147]]. Climate migration is already significant and will get worse; [[148], [149], [150], [151], [152]] this could
exacerbate vulnerabilities and cause multiple, indirect stresses, including health insecurity. [153] There is the distinct possibility of economic damage, loss of land, and water and food insecurity. Extreme temperature or humidity could hamper workers and damage or destroy crops. Climate change could irrevocably undermine humanity's ability to recover from another cataclysm, such as nuclear war [136].
The planetary boundaries framework [29] presents another and wider lens on the risk of ecological collapse. The focus and worry here goes far beyond the carbon cycle and encompasses land use change, freshwater availability, ocean acidification, atmospheric aerosol loading, stratospheric ozone depletion, biosphere integrity, and most recently "novel entities" (anthropogenic chemicals and products such as plastic and PFAs) [32].
Scenario Two is ELECTRICITY.
IFRs turn waste to energy and phase out fossil fuels.
Blees ’12 [Tom; May 31; president of the Science Council for Global Initiatives, member of the
selection committee for the Global Energy Prize, considered Russia's equivalent of the Nobel Prize for
energy research, and a consultant and advisor on energy technologies on the local, state, national, and
international levels; BraveNewClimate, “Roads Not Taken (yet),”
https://bravenewclimate.com/2012/05/31/roads-not-taken-yet/]
France had already clearly demonstrated just how quickly a country could convert its electricity infrastructure
to nuclear power. In the Seventies that nation’s leaders realized the precariousness of relying on fossil fuel imports and decided that
nuclear power was a far more stable option. The reactors that they decided to build were relatively primitive compared to today’s new designs,
requiring much on-site fabrication. Yet in little more than a decade France had replaced nearly all its non-hydro generation with
nuclear, without even breaking a sweat. Today France has some of the cleanest air and lowest electricity rates in Europe, and
electricity is their fourth largest export.
How much easier would it be if a country were to embark on a similar path with the opportunity to use
modular systems that could be mass-produced in factories and shipped to the power plant sites ? This is the
promise of the PRISM system (and its somewhat larger offspring, the S-PRISM, which will hereinafter be simply referred to as PRISM).
In my book Prescription for the Planet, I analyze the economic impact that France experienced during their nuclear transition and demonstrate
that a developed nation could effect a similar conversion to nuclear power using PRISM systems within about a decade.
This is truly disruptive technology in the most wide-ranging sense. Half a century of nuclear power has left many countries not
only with substantial amounts of spent fuel, mischaracterized as "nuclear waste", but with vastly larger amounts of depleted uranium, the leftovers from the process of uranium enrichment that must be employed to fuel the light-water reactors (LWR) that comprise the vast majority of today's nuclear power stations. All of this, plus material from decommissioned nuclear weapons, can be used to fuel PRISMs. Not only is the fuel already available, but its cost is less than free, for those in possession of these vast inventories would
happily pay to get rid of them. (It should be noted here that there are some superb new LWR designs being built or planned that utilize
both passive safety and modular design. The IFR solves the problem of what to do with their spent fuel. Both LWRs and IFRs can be used to
speed the transition to nuclear and away from fossil fuels.)
Since PRISM reactors, like other nuclear power plants, operate very well at full power around the clock, imagine the impact on a
nation that opts for an all-out conversion to IFRs. Today's electricity generating systems are sufficient to meet peak demand, fueled mostly by coal and natural gas (except in France and a couple other countries that rely primarily on nuclear and hydro). Now think about what it would imply if those fossil fuel sources were replaced by IFRs. Since the fuel is free they could run at full power all the time.
But peak electricity demand is 2-3 times average demand. This means that the system would have an excess of up to twice as much power as
what’s needed to provide for the entire grid.
This state of affairs would completely overturn the energy status quo. Liquid
fuels could be generated with the excess power
in a number of different ways. Hydrogen derived from electrolysis of water could be combined with
nitrogen from the air to produce ammonia, which is not only a widely-used fertilizer but can also be used
as a liquid fuel for automobiles or trucks. And of course the deployment of electric vehicles would be far more
desirable once their electricity wasn’t being generated by dirty coal.
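To put rough numbers on the excess-capacity argument above, here is a minimal sketch. The grid size, the peak-to-average ratio chosen from the 2–3x range quoted, and the roughly 10 kWh of electricity per kilogram of ammonia (electrolytic hydrogen plus synthesis) are assumed round figures for illustration, not numbers from Blees.

```python
# Illustrative arithmetic for the claim above: if a reactor fleet is sized
# to meet peak demand and runs at full power around the clock, the energy
# above average demand is "excess" and could be diverted to fuels such as
# ammonia. All inputs below are assumed round numbers.

average_demand_gw = 500.0          # assumed average grid load, GW
peak_to_average = 2.0              # card: peak demand is 2-3x average
fleet_capacity_gw = average_demand_gw * peak_to_average

hours_per_year = 8760
excess_twh = (fleet_capacity_gw - average_demand_gw) * hours_per_year / 1000.0

kwh_per_kg_nh3 = 10.0              # assumed: electrolysis plus Haber-Bosch
ammonia_mt = excess_twh * 1e9 / kwh_per_kg_nh3 / 1e9  # million tonnes

print(f"Excess generation: ~{excess_twh:,.0f} TWh/yr")
print(f"Potential ammonia output: ~{ammonia_mt:,.0f} million tonnes/yr")
```

With these assumptions the excess comes to several thousand terawatt-hours a year, or a few hundred million tonnes of ammonia, which is the scale of output the card is gesturing at.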
After a decade-long conversion to IFRs, the fossil fuel industries would soon be on their way out. Coal would be first, the direct victim of the conversion. But natural gas and oil wouldn't have much time left either. And let's not forget that desalination projects (and the energy to move the freshwater to wherever it's needed) would be possible on hitherto unimagined scales, enabling semi-arid and even arid regions to bloom. Had the IFR road been taken in 1994, we would be well along on this path, and greenhouse gas emissions would be diminishing rapidly and on their way to a negligible level. Instead, emissions are rising precipitously, methane is bubbling out of the tundra, and the prospects for a future of severe weather, population dislocation from rising sea levels, and even runaway greenhouse effects threaten our very survival.
Nuclear energy ensures widespread access to reliable electricity supply---anti-
nuclear campaigns are idealistic purity politics that condemn the world to energy
poverty.
Walters ’13 [David; November 26; B.S. in Ocean Engineering, Deputy Sector Lead at the DOE; The
Breakthrough Institute, “The Socialist Case for Nuclear Energy,”
https://thebreakthrough.org/issues/energy/the-socialist-case-for-nuclear-energy]
Title: The Socialist Case for Nuclear Energy
No, it means this: it means the right to generate and use electricity. The right to the ubiquitous light switch we take for
granted in the West. It means electric light available day and night, whenever an individual decides he or she wants to read, whenever a student
wants to study. It
means at least a small refrigerator where leftovers can be chilled without spoiling. It means a laptop
computer and access to the Internet. It means, perhaps, a small television. It means some air conditioning, perhaps only in one
room, so children don’t suffer diseases brought on by increasing temperatures in our world. It means an electric hotplate or stovetop
so the 30,000 women and children who die every year in India from cooking with charcoal indoors can live. That is what energy means, and that
is why we need more of it, a lot more, and why it has to be carbon free. This is what it will take to make all of Africa, India, and most of South
Asia “developed.”
The antinuclear movement condemns billions of people to decades’ more energy starvation because of
misplaced liberal guilt over greenhouse gas emissions. Rather than coming up with truly better ways to produce
energy, this movement wants us to down-gear and "use less." This is why antinuclear idealism should be
characterized as a reactionary response to the climate crisis, and it explains why Socialists who adapt to the Green
ideology have lost their bearings.
Small Modular Reactors (SMRs) of 30 MW to 300 MW can be built in mass production lines, lowering their cost. They
can be set up in rural parts of any country and have an electricity grid literally built around them that could, eventually, be connected to and form
a national grid that would enhance development and raise people’s standard of living. There are only two things holding this up:
Imperialism, which has been breaking up countries, fomenting civil wars, and destroying the national economies of these countries.
The antinuclear/Green movement, which views any development in general as harming the planet, and sees nuclear energy as particularly evil.
For Socialists to recapture the true vision of Marxism and the early revolutionaries of the 19th and 20th centuries means learning the lessons of
the downside of the development of the productive forces such as climate change. It
means using science and technology to alleviate and reverse the environmental damage capitalism has created, so we can live up to Marx's vision of a new
society:
“Society will take all forces of production and means of commerce, as well as the exchange and distribution of products, out of the hands of
private capitalists and will manage them in accordance with a plan based on the availability of resources and the needs of the whole society. In
this way, most important of all, the evil consequences which are now associated with the conduct of big industry will be abolished."

Unstable electricity access immiserates billions and causes catastrophe.


Hughes ’18 [Mike; December 10; head of Schneider Electric’s digital commerce operations globally;
Forbes, “Why Access to Energy Should Be a Basic Human Right,”
https://www.forbes.com/sites/mikehughes1/2018/12/10/why-access-to-energy-should-be-a-basic-human-
right/?sh=91fc6b545f2e]
According to the World Bank, nearly one billion people, mainly in Sub-Saharan Africa and Asia, do not have access to
electricity.
Along with making daily tasks near impossible – such as refrigerating food or working or studying after
dark – a lack of energy in the form of electricity fundamentally restricts the development of many countries.
Access to electricity reduces poverty, increases opportunity, and improves health, productivity and living standards. It powers devices that make daily living more efficient, freeing up the population's time to grow the economy in other more creative ways.
Electricity is an essential part of innovation, progress and life. No access represents more than just a loss of potential. It stands against the moral principles that govern our decisions at every level. And it goes against the basic principles outlined in the Declaration of Human Rights promoting equality and the right to an adequate standard of living to promote health and well-being.
The Business Solution
Many areas of the developing world lack access to electricity because they do not have the infrastructure to support it. This involves the physical infrastructure for the
generation, transmission and supply of energy, as well as the human skills and expertise needed to maintain and improve the system. Government spending is often
overstretched, sadly leaving many citizens quite literally in the dark.
With developing nations struggling, businesses must take an active role in electricity initiatives. There are three core areas of development required: the design and
creation of electricity distribution systems, investment in small energy businesses and agencies, and training and education in the electricity sector.
There is a massive opportunity to use renewable technologies, particularly solar, to bring electricity to rural communities. In Kenya for example, the country's
electrification authority is already working with businesses, including Schneider Electric, to implement off-grid solar systems. Schneider Electric has been involved in
a project to power 128 elementary schools. The systems provide electricity to classrooms 24 hours a day. This is a lifeline for some of these communities, given that
just over half of Kenya’s population has access to electricity.
In terms of investment, Energy for Africa is a foundation being set up by the former French minister of State, Jean-Louis Borloo. In collaboration with African heads
of state and governments, they are creating a pan-African energy agency that will focus investment specifically on growing access to energy, focusing largely on
renewables. Businesses can get involved and support the foundation and work it is doing.
The regional skills gap is another huge barrier to ubiquitous electricity. In Africa, the AAI (Africa America Institute) reports that just 2% of university graduates are
engineers. In order to grow the talent pool and meet demand, partnerships with educational institutions are vital.
At Schneider Electric, we have committed to training one million people and 10,000 teachers by 2025 as part of our "Access to Energy" programme. As part of the
programme, we recently partnered with Eastlands College of Technology (ECT) to equip their electricity and solar lab in Nairobi.
The partnership will see the creation of a three-year qualifying Diploma in Electrical and Electronics Engineering and Solar Installations. In addition, ECT will start a
micro-entrepreneurship module, raising awareness around entrepreneurship and equipping the trainees with skills to start their own enterprises.
The training strategy is in line with the Government of Kenya’s Big 4 Agenda and Kenya Vision 2030. The College now reaches out to unemployed and untrained
youth, middle-level labourers as well as micro-entrepreneurs between 18 and 35 years old in the greater Nairobi area.
There are several other governments and countries where similar schemes are being established or could be established to help bring sustainable energy to
communities if businesses step forward to help share technology, knowledge and skills.
Policy solutions
Energy for Africa believes that policymakers have an obligation to drive energy access.
If we look at global and national policies, in Africa and Asia in particular, electricity is discussed but has not received backing at the official level. The region needs
more than just pledges. They need concrete projects, urgently. Currently, there are a number of initiatives from a number of different funds, but very few are set up
with a dedicated target of bringing energy to every region.
This year an agency was unanimously approved by both the African Union, representing governments, and the pan-African parliament, representing citizens. The
agency will be headed and run by Africans, for Africans. Businesses can help not only with financing but also with implementing the energy projects that will be
funded by the agency.
Energy serves as a leveller for societies. While it gives a huge advantage to those who enjoy it, it provides the potential for
everyone to exercise power and improve their lives. This is why energy should be adopted as the next internationally-recognised human right. Fortunately, with the right policy and business solutions in place, we may not be very far from a world where electricity becomes a standard facet of life across the globe, not just in the developed world.
Plan---1AC
The United States federal government should adopt a market-based instrument that
provides financial support and incentives for the development of advanced nuclear
reactors, including relaxing pre-testing licensing requirements.
Solvency---1AC
Finally, SOLVENCY.

The plan's market-based instrument catalyzes cost-effective reactor development and widespread commercialization.
Stein ’22 [Adam, Jonah Messinger, Dr. Seaver Wang, Juzel Lloyd, Jameson McBride, and Rani
Franovich; July 6; Director of the Nuclear Energy Innovation program at the Breakthrough Institute,
published by the Electric Power Research Institute, presented to the Nuclear Regulatory Commission, and
contributed to many high-profile projects, including the first-ever license application for an advanced
nuclear reactor in the U.S., Ph.D. and M.S. in Engineering and Public Policy from Carnegie Mellon
University where his research focused on changing the paradigm for emergency preparedness and
response for nuclear facilities; a non-resident Senior Energy Analyst at the Breakthrough Institute, Ph.D.
student at the Cavendish Laboratory of Physics at the University of Cambridge, was a Visiting Scientist
and ThinkSwiss Scholar at ETH Zürich, Master’s in Energy and Bachelor’s in Physics from the
University of Illinois at Urbana-Champaign; Breakthrough Institute Co-Director of the Climate and
Energy team, PhD in Earth and Ocean Sciences from Duke University as well as a BA in Earth Sciences
from the University of Pennsylvania; climate and energy analyst at Breakthrough, Bachelor of Science in
Mechanical Engineering at Howard University; graduate student in Technology and Policy at MIT, and a
researcher at the MIT Energy Initiative. He studies the political economy of decarbonization, with a focus
on US energy and technology policy published in the New York Times, the Los Angeles Times,
Greentech Media, and the Columbia Political Review; Master of Science in Industrial and Systems
Engineering from Virginia Tech; Breakthrough Institute, “Advancing Nuclear Energy: Evaluating
Deployment, Investment, and Impact in America's Clean Energy Future,”
https://thebreakthrough.org/articles/advancing-nuclear-energy-report]
9. MECHANISMS FOR ADVANCED NUCLEAR PUBLIC POLICY SUPPORT
Policymakers possess numerous financial and non-financial opportunities to support the successful deployment of advanced nuclear power
plants at scale. Policy mechanisms for financial support can help lower costs and reduce the financial risk
associated with early projects while encouraging the growth of a robust industry that includes not only reactor developers but also
upstream manufacturers and suppliers. Meanwhile, non-financial policy support can help facilitate power plant siting, train a
skilled workforce, formalize management strategies for spent fuel, and improve the efficiency with which the United States advanced
nuclear industry can secure customers internationally. Proactive public policy support across this broad range of issue areas will
prove crucial for positioning the United States advantageously as a technology leader in advanced nuclear energy.
9.1 Direct Financial Support Mechanisms
Federal Loan Guarantees
Upfront capital investments will comprise much of the cost of advanced nuclear projects.
Due to the higher financial risk associated with backing emerging advanced nuclear reactor deployments, financiers will
likely expect higher interest rates for lent capital. Higher interest rates thus add to the cost of early deployment of advanced nuclear
technologies.
To encourage capital investment into US advanced nuclear projects and to reduce project costs, federal programs like those administered by the
US DOE’s Loan Programs Office (LPO) can guarantee repayment of loans for advanced nuclear projects, both reducing financial risks for
investors and allowing project developers to secure capital at lower interest rates. Such federal loan guarantees can thus play a highly influential
role in accelerating the domestic development of an advanced nuclear sector.
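A minimal sketch of the interest-rate effect described above, using a standard capital recovery factor. The overnight cost, amortization period, capacity factor, and the two financing rates are illustrative assumptions, not program figures.

```python
# Levelized capital cost per kWh at two costs of capital, via a capital
# recovery factor. All input values are assumptions chosen for illustration.

def crf(rate: float, years: int) -> float:
    """Capital recovery factor: annual payment per dollar of capital."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

overnight_cost = 4000.0   # $/kW (assumed)
lifetime_years = 30       # amortization period (assumed)
capacity_factor = 0.90    # assumed, typical for nuclear

kwh_per_kw_year = 8760 * capacity_factor

for rate in (0.04, 0.09):  # e.g. with vs. without a federal loan guarantee
    annual_payment = overnight_cost * crf(rate, lifetime_years)   # $/kW-yr
    cents_per_kwh = annual_payment / kwh_per_kw_year * 100
    print(f"{rate:.0%} cost of capital -> ~{cents_per_kwh:.1f} cents/kWh capital charge")

# Roughly 2.9 cents/kWh at 4% versus 4.9 cents/kWh at 9%, which is the
# kind of spread a loan guarantee is meant to narrow.
```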
At the national level, the DOE LPO seeks to provide directed public support for energy innovation. The DOE LPO currently possesses the
capacity to issue up to $40 billion in loans and loan guarantees to support a wide range of groundbreaking energy and energy infrastructure
initiatives, with up to $10.9 billion in loan guarantees available for promising nuclear energy projects.177
This support has historically been extended to conventional nuclear projects such as the construction of Units 3 and 4 at the Vogtle Electric
Generating Plant in Waynesboro, Georgia.178
Demonstration and Cost Share
Publicly funded technology demonstration programs remain a primary driver to assist innovative and transformative
research to reach commercial scale.179 Over the last 80 years, the Department of Energy and the world-leading system of US national
laboratories have directly driven not only the development but also the demonstration of many new energy technologies
nationwide. Demonstration programs represent a critical step in the innovation process by bridging the research and development
process and full-scale commercialization of a technology. Public-private partnerships for demonstration projects reduce the burden on
the government to solely demonstrate the technology. By participating in project cost-sharing, the government facilitates
“buying down” financial risk, thereby reducing overall FOAK costs. Such demand-pull innovation
policies have a demonstrated track record of success in commercializing innovative technologies in a variety
of sectors.180
The DOE’s recent opening of a new Office of Clean Energy Demonstrations (OCED) emphasizes the value of this public sector role in driving
early deployment for emerging technologies.181 The OCED will seek to support a range of important technologies, such as carbon capture, clean
hydrogen, grid infrastructure upgrades, and advanced nuclear demonstration. The recently passed Bipartisan Infrastructure Law specifically
designated $2.5 billion in OCED funding to support two advanced nuclear reactor demonstration projects through the Advanced Reactor
Demonstration Program (ARDP).182
The ARDP is a more established but still recent program launched in 2020 to provide public support and help advanced nuclear developers secure
and build their first projects.183 The ARDP currently supports 10 projects, with two full scale demonstration projects. One demonstration project
will deploy four of X-Energy’s 80 MWe Xe-100 high-temperature, gas-cooled small reactors at the Columbia Generating Station in Washington
state, currently home to an existing conven- tional nuclear power plant.184 The other demonstration project will involve building TerraPower’s
Natrium 345 MWe sodium-cooled fast reactor, with an initial reactor slated for construction in Kemmerer, Wyoming at the site of the existing
Naughton Coal Plant. Others ARDP projects include the Kairos KP-X/Hermes 50 MWe test reactor intended for construction at the Oak Ridge
National Laboratory in Tennessee.185 Other deployment projects include the six-unit NuScale SMR project at INL186 and GE-Hitachi’s BWRX-
300 design, at the Clinch River site in Roane County, Tennessee.187
Tax Credits
Tax credits for renewable electricity generation are a well-established policy mechanism for encouraging the
greater deployment of new domestic wind and solar capacity.188 Power produced by conventional and advanced nuclear
reactors provides the same climate and air pollution benefits as other sources of clean energy.189, 190 To promote wider adoption
of clean electricity from a diverse array of sources, optimally-designed clean energy tax credits should be
available on a technology-neutral basis. A future low-carbon electricity grid will rely upon an array of technologies, so nuclear power plants, geothermal facilities, and hydroelectric dams should similarly benefit from federal tax incentives intended to accelerate national clean power generation. Such federal tax incentives will further improve the economics of new advanced nuclear
projects.
Tax credits require the entity to have a sufficient tax burden for the credit to offset. Small organizations pursuing projects with no existing revenue,
therefore no tax liability, often have to partner with another organization, which in turn takes some of the tax credit for the service. One option to
avoid this issue is a direct payment of the tax credit to the entity. A proposed direct pay mechanism allows a taxpayer to treat tax credits that it
has earned as an overpayment of taxes, allowing the tax credit to be received as a direct payment of cash in the form of a refund.
Subsidies
It is our view that technology-neutral subsidies are best employed to promote the accelerated early deployment of
innovative clean energy technologies in a fair and efficient manner.
As a promising set of clean energy sources that offer unique strategic and economic advantages for the United States, domestic advanced
nuclear energy projects are strongly in the national interest and possess a good case for inclusion in any technology-neutral
clean energy subsidy program.
In the long term, inefficient subsidies may discourage innovation and further improvements in efficiency, so such policies might be reconsidered
in the future once these technologies have become more established.
Tax Incentives for the Development of Advanced Nuclear and HALEU Supply Chains
Successful commercial deployment of advanced reactors will also depend upon the expansion of robust upstream supply chains, including factory
manufacturing capabilities, production of specialized alloys, and components of the nuclear fuel cycle (uranium mining and milling
capacity, fuel fabrication, and spent fuel and high-level waste processing and reprocessing facilities). Establishing sufficient capacity across these industries will support the commercial-scale buildout of advanced reactor designs, reducing technology costs due to faster rates of technological learning in the factory, improvements in the cost and availability of components, and more affordable HALEU fuel inputs.
Tax incentives can spur growth in these upstream and downstream sectors, promoting the development of additional advanced nuclear supply
chain capacity beyond that the market alone would produce. Such policy measures would yield benefits not just during the early stage of
advanced nuclear deployment, but well beyond as the industry enters successive stages of maturity and scale.
9.2 Supporting Policies and Programs
Support for Environmental Impact Studies and Pre-Qualification of Proposed Sites
Another means of helpful policy support for advanced nuclear would involve federal spending to identify promising sites for near-term
deployment and conduct environmental impact studies (EIS) to assess potential candidate locations.
Such efforts would reduce costs for early deployment, first by obviating the need for the developer to fund the EIS, and second by reducing
financial risks associated with potential rejection of an EIS. Additionally, pre-qualification of desirable sites for advanced nuclear reactors would
accelerate the power plant planning timeline, further reducing costs and minimizing potential delays. Performing EISs in advance is thus an
affordable policy measure that can meaningfully increase the efficiency of advanced nuclear deployment at no cost to environmental oversight.
Federal Procurement of and Pre-Orders for Advanced Nuclear Designs
Upstream supply chains for the domestic advanced nuclear industry will require a sufficient level of customer demand to become more
established. The federal government is one of the largest single electricity purchasers. Federal procurement of electricity through
contract or direct purchase of advanced nuclear projects will provide a strong market signal to industry, incentivizing the
production of HALEU fuel, reactor components, and associated services.
The public sector could drive early pre-orders of reactor projects by siting a number of small reactors or microreactors at military installations,
government facilities and infrastructure, public universities, and other locations in service of state and national needs. Such public sector projects
could help support the Biden administration’s Executive Order on Catalyzing Clean Energy Industries and Jobs Through Federal Sustainability,
which directs the federal government to procure 100 percent of all electricity for federal operations using clean electricity, with at least 50 percent
produced from 24/7 carbon-free generation.191

IFRs are cost-effective but comparatively underfinanced relative to intermittent renewables. Shelving counterproductive regulatory requirements unlocks rapid reductions in cost.
Snyder ’23 [Van; March 16; spent 53 years as a mathematician and engineer at the Caltech Jet
Propulsion Laboratory, MS in Applied Mathematics and System Engineering, spent seventeen years as an
adjunct associate professor; “Five Myths About Nuclear Power,” https://substack.com/home/post/p-108860660?utm_campaign=post&utm_medium=web]
It's too expensive (no it isn't)
A 2009 MIT study concluded that nuclear power plants could be built for $4 per watt , and produce electricity for 6¢
per kWh. Reactors under construction in Finland and Sweden cost about $7.50 per watt; ones in China cost $1.50 per watt. Delays due to
lawsuits, difficulty certifying a new reactor, and licensing in an ever-changing regulatory environment add significant
cost, especially interest on capital . It would be helpful if the Nuclear Regulatory Commission were to adopt the French system of
licensing reactor designs, instead of individual reactors.
The operating cost of a reactor is quite low because fuel cost is low. Using $30/lb for uranium ore and 4.5% enrichment, the contribution of the
cost of uranium to the price of electricity is 0.116¢ per kilowatt hour (kWh). This was the origin of Lewis Strauss’s infamous '“too cheap to
meter” quip, which ignored all other costs. Reducing oxide to metal, enriching the concentration of the fissile isotope (U-235) from the natural
state of 0.7%, to 5%, and fabricating fuel assemblies, increase the fuel price to 0.5¢ per kilowatt hour. Economic details are explained in Chapter
13 of Plentiful Energy.
The lowest-cost electricity in California, 5¢ per kWh, is produced by the Diablo Canyon Nuclear Generating Station. Fixed cost amortization
over the life of the facility contributes 74%, or 3.7¢ per kWh. Labor and other non-fuel recurring costs are 0.8¢ per kWh. The average California
delivered electricity price is 30¢ per kWh.
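The Diablo Canyon breakdown is easy to sanity-check. The sketch below (plain Python, using only the figures quoted in this card) shows that the fixed-cost, labor, and fuel components sum to the quoted 5¢ per kWh, with the residual matching the roughly 0.5¢-per-kWh fuel cost cited above; it is an illustrative cross-check, not part of the original evidence.

# Cross-check of the Diablo Canyon cost breakdown quoted above.
# All inputs are the card's own figures; the residual is read as the fuel component.
total_price = 5.0                          # cents per kWh, Diablo Canyon delivered cost
fixed = total_price * 0.74                 # 74% fixed-cost amortization -> about 3.7 cents/kWh
labor_other = 0.8                          # cents per kWh, non-fuel recurring costs
fuel = total_price - fixed - labor_other   # residual -> about 0.5 cents/kWh
print(f"fixed {fixed:.1f} + labor/other {labor_other:.1f} + fuel {fuel:.1f} = {fixed + labor_other + fuel:.1f} cents/kWh")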
The 3.3 GWe Palo Verde nuclear generating station in Arizona was constructed for $1.79 per watt. Its delivered price for electricity is 4.3¢ per
kWh. It is the most profitable electric utility in the U.S.
Waste disposal is incorrectly cited as a social cost not internalized in the pricing structure. Since 1981, utilities had been paying 0.1¢ per kWh
into the Federal Nuclear Waste Disposal Fund for this purpose, until a Federal court ruled in 2013 they no longer needed to pay because the
Department of Energy had reneged on its legal responsibility to take custody of spent fuel . It was included in
the rate customers paid. The fund now stands at $43 billion. Nuclear power is the only industry that fully internalizes all
costs !
Another factor sometimes cited is subsidies. Federal subsidies for light-water reactors are larger than subsidies for gas or hydro generation,
but substantially less than for wind or solar photovoltaic (PV).

State and local subsidies vary. The additional California solar PV subsidy is 40% of the Federal subsidy.
The first full-scale instance of any new system is always expensive , but both construction and operating
costs always decrease with experience . A 300 MWe IFR -type reactor could be built for less than $8 per watt.
A GE/Hitachi consortium estimates they could build 380 MWe modular instances called S-PRISM (Super Power Reactor Innovative
Small Modular) for less than $2 per watt, if they were to have a stream of orders that is sufficiently secure to justify a factory to
construct essentially identical ones, instead of building each one, subtly different from any other, on site.
In Conceptual Design of a Pilot-Scale Pyroprocessing Facility, Argonne National Laboratory and Merrick & Company proposed a forty hectare
$398 million pilot-scale pyroelectric refining facility to process 100 tonnes per year of any type of spent fuel, a small fraction of the cost of a
PUREX facility. Operating cost would be 0.05¢/kWh. Because utilities paid into the Federal Nuclear Waste Disposal Fund, and because Yucca
Mountain has been canceled, this facility and similar larger-scale facilities ought to be constructed using those funds, not funded as part of the
construction of new reactors, and not from the general fund of the Federal treasury — but the Nuclear Waste Disposal Act prohibits using the
funds for reprocessing.
If the goal of modernizing the energy sector is to reduce or eliminate carbon dioxide (CO2) emissions, comparison to fossil fuels is irrelevant.
Several scientists calculated that the only renewable source that can in principle provide all current energy usage is solar. Wind cannot provide
more than about 15% of current total energy usage, which will surely increase (and wind won't). Conservation and all other schemes, alone or
together, are inadequate to close the gap between wind supply and energy demand.
Solar PV panels cost about $3 per peak installed watt of label capacity . Setting aside their inability to destroy spent
nuclear fuel, it seems attention ought to focus on them instead of new designs of nuclear reactors. The amount of electricity produced in a year,
divided by the amount that would be produced if the system ran continuously at full label power output, is the capacity factor. The Department of
Energy reported that the 2018 national average capacity factor for solar PV was 25%. Nuclear generating stations averaged 92.5%. With a 25%
capacity factor, the
cost of a solar panel, at $3 per peak watt, is $12 per average watt, about six times the expected cost of
S-PRISM modules .
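Snyder's "per average watt" comparison is just nameplate cost divided by capacity factor. A minimal Python sketch of that arithmetic, using only figures quoted in this card ($3 per peak watt and 25% capacity factor for solar, $1.79 per watt for Palo Verde, the projected sub-$2 per watt for S-PRISM, and the 92.5% nuclear capacity factor), reproduces the roughly six-to-one ratio claimed above; note the S-PRISM figure is a vendor projection, not a realized cost.

# Capacity-factor-adjusted capital cost: dollars per *average* delivered watt.
def cost_per_average_watt(cost_per_peak_watt, capacity_factor):
    # Divide nameplate (peak) cost by capacity factor to get cost per average watt.
    return cost_per_peak_watt / capacity_factor

solar_pv   = cost_per_average_watt(3.00, 0.25)    # $12.00 per average watt
palo_verde = cost_per_average_watt(1.79, 0.925)   # about $1.94 per average watt
s_prism    = cost_per_average_watt(2.00, 0.925)   # about $2.16 per average watt (projected)

print(f"solar PV ${solar_pv:.2f}, Palo Verde ${palo_verde:.2f}, S-PRISM ${s_prism:.2f} per average watt")
print(f"solar PV vs. S-PRISM: {solar_pv / s_prism:.1f}x")   # about 5.6x, i.e. "about six times"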
Solar panels last about 25 years, but must operate more than four years to repay the energy invested in their fabrication, deployment, and
recycling. The capital cost of $12 per average watt, amortized over twenty-five years at 5%, deducting the four-plus year energy payback period,
is $26.61 per watt of average capacity.
The capital cost for solar PV panels does not include operating and maintenance costs, electricity storage, significant grid changes necessary to
exploit diffuse sources, and recycling.
Several independent studies have determined that renewable sources would need 390-800 watt-hours' storage per average watt of demand to
provide firm power, for which the industry definition is 99.97% availability. In Adequate Storage for Renewable Energy is Not Possible, using
twelve years of data for California, I calculated that more than 2,800 watt-hours’ storage per watt of average demand would be necessary. Using
five years of nationwide data, more than 800 watt-hours’ storage per watt of average demand is necessary. The May 2020 price for Tesla
PowerWall 2 batteries was $0.543, not including installation. The warranty period is ten years. For 800 watt-hours, the total cost would be more
than 3.8 times total USA GDP every year, for batteries alone, or “only” $49,000 per month for each of America’s 128 million households.
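The scale of that battery figure can be reproduced approximately with the arithmetic below. The card does not state the average-demand value it assumes, so the sketch plugs in a hypothetical 1.8 TW of average demand (roughly a largely electrified U.S. energy system) and a nominal $21 trillion GDP; those two inputs are assumptions, not figures from the card, and the outputs, which land near the quoted 3.8-times-GDP and $49,000-per-household numbers, scale proportionally with them.

# Rough reproduction of the battery-storage cost claim. The 800 Wh-per-average-watt
# requirement, the $0.543/Wh PowerWall price, the 10-year warranty, and the 128 million
# households are the card's figures; average demand and GDP are assumptions made explicit here.
storage_per_watt = 800          # Wh of storage per watt of average demand (card's figure)
battery_price    = 0.543        # dollars per Wh (Tesla PowerWall 2, May 2020)
warranty_years   = 10           # batteries replaced every warranty period
avg_demand_w     = 1.8e12       # ASSUMPTION: ~1.8 TW average demand for an electrified economy
us_gdp           = 21e12        # ASSUMPTION: nominal U.S. GDP in dollars per year
households       = 128e6        # U.S. households (card's figure)

capital = avg_demand_w * storage_per_watt * battery_price   # total battery purchase cost
annual  = capital / warranty_years                          # annualized over the warranty life
print(f"annual battery cost: ${annual / 1e12:.0f} trillion, {annual / us_gdp:.1f}x GDP")
print(f"per household per month: ${annual / households / 12:,.0f}")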
These amounts of storage will be entirely inadequate the next time Mount Tambora erupts and produces
another “year without a summer” such as 1816. This or something similar will happen again. The only
question is “when?”
These sorts of calculations never appear in arguments that renewable electricity is less expensive than nuclear power.

Integral reactors are the safest method for generating energy AND eliminate
nuclear waste---overwhelming statistical and comparative evidence.
Snyder ’23 [Van; March 16; spent 53 years as a mathematician and engineer at the Caltech Jet
Propulsion Laboratory, MS in Applied Mathematics and System Engineering, spent seventeen years as an
adjunct associate professor; “Five Myths About Nuclear Power,” https://substack.com/home/post/p-108860660?utm_campaign=post&utm_medium=web]
Popular discussions about nuclear power eventually get around to at least one of five objections: It's not safe; no one knows what to do about
waste; it's too expensive; it leads to nuclear weapons proliferation; or there isn't enough uranium. All of these objections are
baseless .
Revised to incorporate valuable comments from Dr. Yoon Il Chang, Tom Blees, and Eric Loewen.
It's not safe (yes it is)
The Paul Scherrer Institut in Villigen PSI, Switzerland is a frequent European Community consultant concerning safety. Their collection of more than 33,000 records of accidents related to electricity production shows that nuclear power is the safest-ever way to make electricity, by a very wide margin. Only 46 fatalities are directly attributed to municipal nuclear power in its entire six-decade worldwide history, all due to one accident, ironically a botched safety test at Chernobyl.
GRAPH OMITTED [caption: Bigger means safer. There have been only 46 fatalities in the entire six-decade worldwide history of municipal nuclear power, all due to one accident at Chernobyl.]
The Chernobyl reactor was the Hindenburg of nuclear reactor designs. It is irrelevant because another one with similar
defects will never be built . It is described here only because of the fear it engendered.
The Soviet Union’s safety culture was different from the west’s, and reactors were not licensed because there were no licensing criteria. The
Chernobyl reactor was a scaled-up version of one purpose-built to produce plutonium for weapons. As it got hotter, or if the cooling water boiled,
the fission reaction ran faster, so it got hotter faster. The operators lost control of it, ironically because they bypassed the inadequate shutdown
mechanisms during a safety test. The ensuing calamity was a steam explosion and a graphite fire, that dispersed several tonnes (1000
kilograms/tonne) of spent fuel near the reactor, and radioactive dust throughout the region.
The BORAX test reactor in Idaho was intentionally destroyed by prompt criticality, but licensed reactors are designed so that it is
impossible to cause a nuclear explosion , either by mistake or intentionally.
The United Nations Scientific Committee for the Effects of Atomic Radiation (UNSCEAR) reported to the General Assembly in 2008 that 134
plant operators and emergency responders at Chernobyl were exposed to sufficient radiation to develop acute radiation syndrome, which caused
28 deaths. Two others died from injuries not caused by radiation (falling debris), and one from coronary thrombosis. The report noted there is “no scientific means to determine whether a particular cancer in a particular individual was or was not caused by radiation,” and there is “no scientific evidence of increases in overall cancer incidence or mortality rates or in rates of non-malignant disorders that could be related to radiation exposure.” Nonetheless, the report speculated that fifteen excess cases of fatal juvenile thyroid
cancer, compared to earlier decades, out of 6,000 cases reported between 1991 and 2005, might have been caused by the accident.
In the most affected areas (Ukraine, Belarus, and southwestern Russia) the average additional radiation dose to the general public over the period
1986-2005 was about nine millisieverts (mSv). Residents “need not live in fear of serious health consequences,” according to the report.
UNSCEAR reported in October 2013 that “Japanese people receive an effective dose of radiation from normally occurring sources of, on
average, about 2.1 mSv annually and a total of about 170 mSv over their lifetimes…. No radiation-related deaths or acute diseases have been
observed among the workers or general public exposed to radiation from the accident…. For adults in Fukushima Prefecture, the Committee
estimates [the increase in] average lifetime effective dose to be of the order of 10 mSv or less… discernible increase in cancer incidence in this
population that could be attributed to radiation exposure from the accident is not expected.”
The dose from one abdominal and pelvic CT scan with and without contrast is about 30 mSv. The annual dose on the Tibetan plateau is 13-20
mSv. The annual dose on Guarapiri Beach in Brazil is 1154 mSv.
Although radioactive materials from a municipal nuclear power plant outside the Soviet Union have never caused an injury, illness, or fatality,
Argonne National Laboratory (ANL) and Idaho National Laboratory (INL) believed they could develop an even safer design. During a meeting
on 26 April 1944 (see Appendix B of ANL report 49168), Enrico Fermi described an energy system consisting of fast-neutron reactors with fuel
reprocessing. A concept that led to the design of Experimental Breeder Reactor II, or EBR-II was described during that meeting by Leo Szilard.
Leonard J. Koch became the project manager, and filled in many details during subsequent years. EBR-II was a 20 megawatt (MWe) reactor near
Arco, Idaho. The Integral Fast Reactor, or IFR project, led by Charles E. Till and Yoon Il Chang, using concepts developed at EBR-II, had a goal
to tie up absolutely all loose ends related to nuclear power (Although Koch had retired from ANL before the official start of IFR, his vision for
the experimental program to develop it is clear in a memorandum dated September 2, 1953). An important new component was the fuel
processing system that uses pyroelectric refining, instead of the melt refining method that had been used in the early years (See Chapter 8 of
Plentiful Energy).
Detailed nuclear, thermodynamic, and mechanical calculations, and computer simulations, had shown that the EBR-II design was inherently safe. To demonstrate this, in 1986 Pete Planchon conducted a demonstration for an invited
international audience. Automatic safety interlocks were turned off. Coolant circulation was turned off (the cause of the
destruction of the Three Mile Island reactor). Core temperature rapidly increased from 1010 degrees (Fahrenheit) to 1430 degrees. Liquid sodium
coolant boils at 1621 degrees. Within seven minutes the core was below operating temperature, without action by operators, computers, valves,
pumps, auxiliary power, or any moving parts. The
operators were not injured. The reactor was not damaged . There
was no release of radioactive material. The reactor was restarted with coolant circulation restored, but the steam generator
disconnected. The same scenario recurred. Two months later, the operators at Chernobyl repeated the second experiment, using a very different
reactor, with tragic consequences.
The demonstrated safety of EBR-II depends only upon immutable laws of physics and thermodynamics,
and the geometry and materials of the reactor core.
No one knows what to do about waste (yes we do)
Pressurized light-water power reactors produce about twenty tonnes of spent fuel per gigawatt year (GWe-year) of electricity. Spent fuel is
dangerously radiotoxic for 300,000 years, an apparently intractable problem.
Examining the composition of spent fuel leads to a different conclusion. A tonne of spent fuel from contemporary light-water reactors (LWR)
consists of about 52 kilograms of fission products and 948 kilograms of uranium and metals with greater atomic number, i.e., unused fuel.

Fission products are less radiotoxic than uranium in nature after 300 years, not 300,000 years.
Fission products are produced at the rate of about one tonne per GWe-year. Custody of fission products separated from uranium and transuranic
metals would be much simpler than for spent fuel taken as a whole. After ten years' storage, two elements, strontium and caesium, produce 99.4%
of radiotoxicity, but constitute only 9.26% of the mass of fission products — five kilograms per tonne of spent fuel, or about 92 kilograms per
GWe-year. 47 kilograms per tonne, or 900 kilograms per GWe-year, are low-level waste, less radiotoxic than uranium in nature, and much
simpler custody is adequate.
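The mass balance in the paragraphs above can be restated as a short calculation. The sketch below uses only the card's numbers (20 tonnes of spent fuel per GWe-year, 52 kg of fission products per tonne, strontium plus caesium at 9.26% of fission-product mass) and reproduces the roughly one-tonne, five-kilogram, and 47-kilogram figures; the card's "about 92 kilograms per GWe-year" corresponds to starting from one tonne of fission products rather than the full 20 tonnes of spent fuel, which gives about 96 kg.

# Mass balance for light-water reactor spent fuel, using the figures quoted in this card.
spent_fuel_per_gwe_yr = 20.0    # tonnes of spent fuel per GWe-year
fp_per_tonne          = 52.0    # kg of fission products per tonne of spent fuel
sr_cs_fraction        = 0.0926  # Sr + Cs share of fission-product mass (99.4% of radiotoxicity)

fp_per_gwe_yr   = spent_fuel_per_gwe_yr * fp_per_tonne     # ~1,040 kg: "about one tonne per GWe-year"
sr_cs_per_tonne = fp_per_tonne * sr_cs_fraction            # ~4.8 kg: the card's "five kilograms per tonne"
sr_cs_per_gwe   = fp_per_gwe_yr * sr_cs_fraction           # ~96 kg per GWe-year
low_level_per_tonne = fp_per_tonne - sr_cs_per_tonne       # ~47 kg of low-level fission products per tonne

print(f"fission products: {fp_per_gwe_yr:.0f} kg per GWe-year")
print(f"Sr + Cs: {sr_cs_per_tonne:.1f} kg per tonne of spent fuel, {sr_cs_per_gwe:.0f} kg per GWe-year")
print(f"low-level fission products: {low_level_per_tonne:.0f} kg per tonne of spent fuel")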

The PUREX (Plutonium-URanium EXtraction) process separates chemically pure but isotopically mixed
plutonium and uranium from spent fuel , leaving transuranic metals with the fission products. It therefore
does not reduce the duration of custody . Process fluids are good moderators that efficiently adjust the average speed of neutrons
into the range that causes fission. The concentration of spent fuel must be kept very low to avoid criticality
accidents. A facility to process hundreds of tonnes per year occupies several thousand hectares, has several kilometers of pipes and large
numbers of pumps, mixing devices, and other components, and is very expensive.
An alternative method to separate fission products from unused fuel, based upon well-known metal
electrorefining methods, was developed as part of the IFR project . There is no moderator in the device, so larger
amounts can be processed in a much smaller space. The pyroelectric refining device consists of an anode, composed of spent
fuel, and a cathode, immersed in an electrolyte of molten lithium/potassium chloride salts.
Because of different chemical potentials, when a carefully controlled voltage is applied, uranium, transuranic metals, and active fission products
such as strontium and caesium diffuse from the anode into the electrolyte and are carried through it by the electric potential gradient. Nearly pure
fuel is deposited at the cathode. Active fission products remain in the electrolyte. Noble metals such as rhodium and palladium remain in the
anode.
Fission products can be separated from the electrolyte by absorption into zeolite (similar to the active material of a water softener), thereby cleansing the electrolyte for further use. Contaminated zeolite can be mixed with powdered glass, and compressed and sintered into an
impervious insoluble ceramic, ideal for storage, called sodalite. The metals remaining in the anode basket are disposed separately. This leaves the
actinides.
Actinides in spent fuel consist mostly of uranium, but include significant amounts of plutonium and heavier transuranics. Odd-numbered isotopes
of uranium and plutonium can be fuel in existing reactors, but even-numbered isotopes do not fission, and heavier transuranics do not fission
efficiently, in light-water reactors. Using recycling, about 30% of the original fuel can be converted to fission products and electricity in an LWR,
before the only slightly-smaller problem is right back where it started: A substance needing 300,000 years' custody.
If the average speed of neutrons is higher, as in IFR, all uranium and transuranic isotopes are fissionable or can be transmuted to fissionable
isotopes by neutron absorption; they are fuel, not waste. Rather than 20 tonnes of intractable waste per GWe-year, a fast-neutron reactor produces
one tonne of substances that require much simpler custody. Spent fuel, the substance currently and mistakenly called “nuclear waste,” of which
the world is desperately eager to be rid, is effectively destroyed — and nothing else other than hideously expensive laboratory-scale toys can do
it.
Fast-spectrum metal-cooled reactors should be built instead of light-water reactors because of their demonstrated
safety , and their ability to destroy nuclear waste .

A U.S. first move solves by letting other countries copy the design AND it’s the best
starting point because we’ve sunk capital and testing into reactor technology.
Brook ’11 [Barry, Tom Blees, and others; February 24; Australian Laureate Professor and Chair of
Environmental Sustainability at the University of Tasmania in the Faculty of Science, Engineering and
Technology, formerly an ARC Future Fellow in the School of Earth and Environmental Sciences at the
University of Adelaide, Australia, where he held the Sir Hubert Wilkins Chair of Climate Change from
2007 to 2014, and was also Director of Climate Science at the Environment Institute; President of the
Science Council for Global Initiatives, member of the selection committee for the Global Energy Prize,
considered Russia's equivalent of the Nobel Prize for energy research, and a consultant and advisor on
energy technologies on the local, state, national, and international levels; Conference Paper from the 91st
American Meteorology Society Annual Meeting, “Advanced nuclear power systems to mitigate climate
change (Part III),” http://bravenewclimate.com/2011/02/24/advanced-nuclear-power-systems-to-mitigate-climate-change]
There are many compelling reasons to pursue the rapid demonstration of a full-scale IFR , as a lead-in to
a subsequent global deployment of this technology within a relatively short time frame . Certainly the urgency of
climate change can be a potent tool in winning over environmentalists to this idea. Yet political expediency—due to widespread skepticism of
anthropogenic causes for climate change—suggests that the arguments for rolling out IFRs can be effectively tailored to their audience. Energy
security —especially with favorable economics—is a primary interest of every nation . The impressive safety
features of new nuclear power plant designs should encourage a rapid uptick in construction without
concern for the spent fuel they will produce, for all of it will quickly be used up once IFRs begin to be deployed. It is certainly
manageable until that time. Burying spent fuel in non-retrievable geologic depositories should be avoided, since it represents a valuable clean
energy resource that can last for centuries even if used on a grand scale.
Many countries are now beginning to pursue fast reactor technology without the cooperation of the United States, laboriously (and expensively) re-learning the lessons of what does and doesn’t work. If this
continues, we will see a variety of different fast reactor designs, some of which will be less safe than
others. Why are we forcing other nations to reinvent the wheel? Since the USA invested years of effort and billions of dollars to develop what is arguably the world’s safest and most efficient fast reactor system in the IFR, and since several nations have asked us to share this technology with them (Russia, China,
South Korea, Japan, India ), there is a golden opportunity here to develop a common goal—a standardized
design, and a framework for international control of fast reactor technology and the fissile material that fuels them. This opportunity
should be a top priority in the coming decade, if we are serious about replacing fossil fuels worldwide
with sufficient pace to effectively mitigate climate change and other environmental and geopolitical crises of the 21st
century.

Independently, U.S. action on nuclear power is modeled globally.


Stein ’22 [Adam; July 6; Ph.D. and M.S. in Engineering and Public Policy from Carnegie Mellon
University; The Breakthrough Institute, “Advancing Nuclear Energy: Evaluating Deployment,
Investment, and Impact in America's Clean Energy Future,”
https://thebreakthrough.org/articles/advancing-nuclear-energy-report]
Advanced nuclear power presents a once-in-a-generation opportunity that has languished for decades. Yet the world is increasingly interested in the energy security, economic, and climate benefits of reliable and clean nuclear energy, and deployment efforts are accelerating. If the United States does not also act soon to seize this opening, the probability is high that the American advanced nuclear industry will be left behind. On the other hand, if America can execute a proactive, planned advanced nuclear deployment strategy, then the United States can play a leading role in introducing a revolutionary new energy technology to the world at a critical moment in the global clean energy transition.

Prioritize particularity. The complexity of global problems means you can’t evaluate nuclear power in isolation, only as compared to achievable alternatives---that makes nuclear energy the least-bad option, by far.
Boemeke ’22 [Isabelle and Jules Terpak; June 2; nuclear activist who organized protests against the
closure of Diablo Canyon nuclear plant and was called a Next Generation Leader in Times Magazine;
contributing advice columnist at the Washington Post; Forward, “Why nuclear energy is key to
combatting climate change,” Spotify]
Isabelle Boemeke: Obviously, the biggest one is the how dangerous it is. The normal perception is that it is the most dangerous way to make electricity.
That’s what people think. They automatically think of Chernobyl, of Fukushima, Three Mile Island, even though there weren’t any deaths related to it. It was an event
that impacted the community. You said you grew up there you grew up listening you know those negative things so that's where people's minds automatically go to
and that's normal. That's a normal human bias. It is called actually the availability heuristic which means whenever, um, which it's
kind of similar to how people fear airplanes. Even though airplanes are some of the safest forms of transportation

however, when an airplane goes down you know you see it on the news , it makes the headlines it sounds scary

and so people irrationally fear airplanes more than they fear driving their cars even though they're like the likelihood of
them dying from a car crash is much much higher. Yeah um so it's a very. It's a very common bias and of course like I said if your only

exposure to a technology is the negative things it's the bad headlines . It makes sense to think of it in
those terms however we thankfully have data that tells us what's actually true in the world. When you
look at the data around forms of energy, you know ways to make electricity, nuclear power is consistently one of the
safest ones. It’s actually comparable it's very very minimally higher than solar, wind, or hydro and of course much much lower than fossil fuels. So again that
was one of the biggest things I wanted to address was this myth that it's actually that it's dangerous or that is the most

dangerous way to make electricity. A curious thing, an interesting thing is that we’ve had much larger hydropower dam
accidents than we’ve had nuclear power accidents. There was a dam in China that collapsed and flooded a bunch of small villages that went down stream from it
and it killed hundreds of thousands of people. Yet, it didn’t make the headlines, we don’t have a Netflix TV show about it, you

know we don’t have like a HBO special on this dam in China. So there’s something that’s very intriguing and interesting . It’s
culturally sexy in a way to talk about nuclear power. It’s a weird interesting phenomenon. But that was one of the main things.
Isabelle Boemeke: The second thing I really wanted to address was the waste perception.
Jules Terpak: Yeah that’s a big one.
Isabelle Boemeke: It’s a big one. People always bring up waste . I’m ashamed to say I’ve been working on a video about waste for two years.
Jules Terpak: Not ashamed you got to get that messaging right.
Isabelle Boemeke: Yes, and the format and all of it. I’ve been playing a lot with different formats. But you know, the waste is also something that took me a while to
come around because I kept thinking, “but wait this sounds really bad.” I think where I landed with it is, you can’t analyze a technology in
isolation , right. You can’t just be like “is nuclear power bad? ” Well, compared to what? And in what
way? Every technology that we create has some downside , right. Renewables are great for somethings, they have
downsides . Fossil fuel have allowed our civilization to get to where it is today. You know, we’ve had tremendous improvement of quality of life for people
across the world. A high energy life is the same as a high-quality life. Fossil fuels have enabled that. However, it comes with all these problems. So, it’s all a
matter of A, comparing the technologies with what we currently have and being able to analyzing context .
So, the waste issue is, okay, we have, you know, this incredible technology that makes a lot of electricity without

any carbon emissions . But you have a small amount of very radioactive waste that comes out of it.
Jules Terpak: Yep.
Isabelle Boemeke: The question is, is that bad. And of course, it sounds bad because it’s so radioactive that if you and I came and touched it when it comes out of a
reactor we would die. But, how is it handled? Has anyone ever been injured from it? How could somebody be injured from it? When you start looking to those things,
you start realizing that the
fear is more of like this hypothetical , apocalyptic , end of the world scenario where
civilization loses the ability to communicate and then all of the sudden you come across this cast, then
you open them, and you touch the thing, and you eat it , and then you die . In reality, what it is, is this ceramic
pellet that go into reactor to make electricity. After they are done, they come out. They are like heavy, you know, it’s something that’s like super technologically
advanced. The people who work at the power plant take it out of the reactor, put it in a pool to cool for like five years. And then take it out of the pool and put it in like
this giant concrete cask where it’s like bolted and weighs tons. You can’t just like roll up to a nuclear power plant and grab one of those things. This
is like
serious heavy , concrete shielding casks that are so good at blocking radiation that you and I can literally
stand right next to it and touch it and be totally fine . We would not be, radiation would not come out of it. And by the way, the
nuclear energy industry is the only energy generating industry that is actually responsible for its waste. So,
when you compare to fossil fuels, it’s just spewing into the air, like killing nine million people a year from
air pollution, plus causing climate change which we don’t even know how many people it’s going to kill, but just affecting the whole world.
So you know, what should I choose? This thing that’s literally killing millions of people and causing climate change or this thing that sounds scary but it
actually has never hurt anybody or the environment and is being very well managed. That’s how I ultimately landed on it.
Isabelle Boemeke: Yep, um and you know there’s a like a deeper conversation about what we do with it long term, but we don’t need to get into those details if you
don’t want to.
Jules Terpak & Isabelle Boemeke: Oh no no, so what I found interesting, you brought up quality of life and when I first came across you and it was like a few months
later, I had gone to your website and you know how you have some um kind of basic breakdown videos and the one was a I always get this Youtube name wrong, it’s
such a good channel, it’s like I don’t know how to do that you know. I want to create a petition to change the name because I’m like this cannot be good for you guys.
Like yeah, so all that, like so I have such a good channel, but I just cannot like pitch you to other people but the video was talking about you know, because you know
again for me climate change is super overwhelming, but it kind of brought it down to four different things. The first two are you know the hard it’s hard because it’s
human nature at this point and population size, and population growth. And population growth doesn’t like economic growth then the two other things of the four
factors are things that you know we can change and nuclear energy can have a big part of it as energy intensity and emissions per energy unit produced. And I think a
lot of people get kind of mixed up quality of life and kind of standard of living. So standard of living you know deals with the access we have to things. You know
humans aren’t going to take a step back when it comes to innovation. That’s just like in our nature, but more comfort yes. It’s like this thing now, like you know you
don’t hate on your wind or solar. Or any of that stuff. You know that that stuff that there’s already a positive sentiment. Towards so that’s why you’re focusing on
nuclear. And I don’t think people realize how much we are innovating. You know it’s not going to stop against. It’s in our human nature and like we have to fight
when it comes to fossil fuels and everything like that we have to fight the problems that arose from you know human tech with today’s tech. That’s where nuclear
comes in. So I’d love to hear about that like balance of you know us still being able to be human focused on innovation because you’re really big on you know
pushing against all the negative sentiments towards tech. And you know the balance of how nuclear is there to help that.
Isabelle Boemeke: Well I think there’s
a very common belief in the environmental community or in the climate
change community that the way to solve climate change is basically to dismantle our society, reduce energy
consumption , and go back to a way of living that consumed less energy. I think it’s attractive to some
people, I think good luck convincing seven billion people to do that, and I also think it’s a bit of a privileged out
of touch position that usually comes from people who grew up in developed countries who don’t understand
what a low energy life looks like. They have a fantasy in their mind that it’s going to be this beautiful
utopia and you know solar panels…
Jules Terpak: Rolling in the grass
Isabelle Boemeke: Yeah rolling in the grass and whatever, and it’s not. It’s a lot of hard work, and it’s not having proper heating in your home when it’s freezing outside, so you have to sit in front of a fireplace and inhale particulate matter. And it’s not having an air conditioner during the summer and basically not being able to sleep because it’s too hot.

Reject evidence that doesn’t assume new reactors---the nuclear renaissance unlocks
industry revolutions in safety, nuclear waste, and efficiency. It’s comparatively
better than every alternative.
Jayanti ’23 [Suriya; December 4; LL.M. in Energy & Environmental Law from KU Leuven, Ph.D. in
Energy, Environment, and Natural Resources Law, energy policy expert, former U.S. diplomat; Time,
“Nuclear Power Is the Only Solution,” https://time.com/6342343/nuclear-energy-climate-change/]
Wedged between energy crises and climate change natural disasters, there is no longer the luxury of choice. The industry has responded by
seeking to develop new technology that can assuage public concerns about safety . Some are designing
micro reactors or SMRs. Others are working with new materials or techniques, such as replacing water in
cooling systems with molten salt , or using boiling water instead of pressurized water to make the NPP more
efficient . Still others are working on new safety systems, or fuel fabrication innovations, or new
approaches to storage of nuclear materials. In the U.S., top tier research outfits like the Electric Power Research Institute are finding their
expertise in demand all round the world, creating something resembling nuclear diplomacy. The U.S., U.K, Canada, and South Korea are leading the pack on
investment in nuclear.
The nuclear industry has been riding high on a wave of enthusiasm for a few years. In recognition of the cost
savings of “going nuclear,” smart companies are already making plans to transition to nuclear power. This
includes Microsoft, which announced in September that it will use nuclear plants to power its artificial intelligence operations. With electrification the foundation of
any coherent energy transition plan and grids struggling to balance themselves with an abundance of non-dispatchable renewables, nuclear is increasingly
acknowledged to be the solution. Just as apex science fiction writer Isaac Asimov fantasized in his 1940-50s Foundation books, nuclear energy may save humanity.
And yet, recent headlines have revealed some major setbacks. Small modular nuclear reactor (SMR) company NuScale, once lauded as the leading SMR developer
and despite receiving almost $2 billion in U.S. government support, has cancelled its flagship project due to rising costs and mismanagement. It is now facing investor
lawsuits for fraud. TerraPower, Bill Gates’ SMR company, was delayed several years by the Russian invasion of Ukraine—Russia was the only country that produced
the nuclear fuel needed for TerraPower’s SMR design. X-Energy has walked back its plans to go public. The U.K.’s Rolls Royce SMR is plagued by financial
problems. France’s EDF is posting record low power outputs and financial status reports. Others are also delayed, struggling, or facing bankruptcy.
Setbacks are normal for new technologies and emerging markets, but for nuclear power such bumps in the road have outsized potential to disrupt because many
people are still hesitant or downright hostile to nuclear power. The Chornobyl, Fukushima Daiichi, and Three Mile Island catastrophes loom large in the imagination.
“Meltdown” itself has entered idiom to mean falling apart rapidly and irrationally and beyond control. The world’s preoccupation with Russia’s attacks on Ukraine’s
Zaporizhzhye nuclear power plant (NPP), the largest in Europe, shows how gripped we can be by nuclear disasters. In keeping, a March 2023 Gallup poll found that
although support for nuclear is increasing slowly, 44% of Americans still somewhat or strongly oppose it, down from 54% in 2016. Similar polls in Switzerland and
the U.K. peg support for nuclear at just 49% and 24%, respectively. In Germany, despite still being in the middle of an energy crisis and desperate for additional
power sources, 50% of people under 34 want nuclear power eradicated.
With the exception of France, which is 69% nuclear, many of the developed world’s leading economies and governments have been too scared of nuclear power to
allow it to flourish. Germany was so spooked by Fukushima it completely phased out its nuclear power program, finally turning off its last three (of an original 17)
reactors on April 15, 2023. Belgium and Switzerland decided not to build new plants and to phase out those existing, although the 2021-2023 energy crisis has forced
a reconsideration. In the U.S. the trigger was the March 28, 1979 partial meltdown of Three Mile Island in Pennsylvania. No one died or even suffered negative health
effects, in the aftermath dozens of planned NPPs were cancelled and almost nothing has been built in decades.
Unfortunately, unencumbered by popular opinions against nuclear, the Western world’s great geostrategic rivals are years if not decades ahead. There are sixty nuclear
projects in various stages of construction around the world, and 22 of them are in China; and 22 use Russian technology, and 18 use Chinese technology, or
technology China stole from other countries and rebranded. Some European countries, notably Hungary and Serbia, and some NATO countries, such as Turkey, are
planning new NPPs using Russian designs and supply chains. Ironically, and tragically, even all four of Ukraine’s NPPs are Russian VVER models, entirely reliant
until quite recently on Russian fuel. And Russia controls much of nuclear supply chains.
The Western world ended up so far behind because of fear. Governments around the world are now struggling to catch up, slowed by still-high public opposition rates
and regulatory regimes that institutionalized fear of nuclear into licensing and permitting processes. In countries that never had nuclear power, such as Poland and
Egypt, opposition is not baked into law, and so they can paradoxically move faster than some countries with longstanding nuclear programs.
In the U.S. the opposite is true; it keeps tripping over the fear-based regulatory regimes that govern its nuclear industry. Tasked by Congress in the 2019 Nuclear
Energy Innovation and Modernization Act with liberalizing the licensing process to foster innovation and accelerate the commercialization of nuclear power, the U.S.
Nuclear Regulatory Commission (NRC) in 2022 released draft rules and processes for consideration of new nuclear technologies that managed to take all the worst
and most burdensome aspects of existing rules and, instead of reducing them, added some new hurdles and standards, some of which nuclear engineers say are
scientifically impossible to meet. The draft is twice as long (1252 pages) as the one it was supposed to simplify. Many requirements, both old and new, shouldn’t
apply to SMRs and other advanced nuclear designs. The result was decried by experts and companies as a complete failure that will continue to hobble the industry for
decades, adding further time and expenses to the already billion-dollar licensing process. The Nuclear Energy Institute, an industry trade group, said the proposal will
“increase complexity and regulatory burden without any increase in safety and reduce predictability and flexibility.”
Meanwhile the U.S. is trying to export this same cumbersome nuclear regulatory regime, including to Saudi Arabia. Calling it the “gold standard” of nuclear
regulation, the U.S. has refused to allow Saudi, much like it did with the United Arab Emirates, to use U.S. nuclear technology unless the Kingdom also adopts
prescribed U.S. safety regulations. What a surprise that Saudi is actively considering Russian nuclear technologies instead.
Yet, the scientific reality is that the rising generation of nuclear innovation doesn’t need to be subjected to a crippling approval process—it is safe. The risk profile of new reactors and other technologies in development is very low. This is especially true relative to the risks of climate change fallout or, for example, the health risks of burning fossil fuels or inhaling combustion engine exhaust. And the waste from a new nuclear plant is far less problematic than that of spent solar panels, for example. The nuclear renaissance is not just more nuclear power, it’s also better, cleaner, safer, more efficient.
SMRs, for example, are much safer than full sized NPPs. They have outputs of 50-300 MW depending on design, compared to
800-1600 MW for traditional NPPs. Microreactors , “pocket nukes” with 1-50 MW outputs, are even more resilient because simple

physics means they are harder to damage and so it’s less likely that an accident could result in a
radioactive release . Whereas seismic activity is a grave concern for large NPPs like Fukushima, smaller technologies soon to be built do not require the
seismic cushions that were needed under previous plants to protect them from even the smallest earthquakes. Micros and SMRs can also be
manufactured in a factory —that’s what the “modular” in small modular reactor means — allowing for standardization and
systematized security measures, as well as sealed transport. And smaller amounts of radioactive fuel in smaller reactors mean less
that could go wrong even in the case of an accident.
One persistent concern opponents of nuclear power often voice is the risk of reactor cooling systems failing, but this not an issue with the new generation of nuclear
designs. Fukushima Daiichi NPP’s water-based cooling system stopped when a tsunami disabled the electricity source powering the circulation. This is the same risk
Ukraine’s Zaporizhzhye NPP is facing thanks to Russia bombing the dam that held the water that kept the plant’s water cooling system operating. In emerging
advanced reactor technologies, however, this vulnerability is eliminated entirely. Many of the new designs have entirely reconsidered systems with passive safety
features that maintain cooling without reliance on external power. Others use water in innovative ways. GE Hitachi’s BWRX-300 SMR is designed for the water
inside to boil, creating its own convection that in turn powers its own cooling circulation. This eliminates the need for an extensive circulation system of pipes and
keeps all potentially contaminated water inside the plant. Some also use materials other than water, such as molten salts.
Another common objection to nuclear power is the disposal of radioactive material. But new technological innovations are mostly
eradicating the need to store spent fuel at all. New fuels have a lower enrichment level , which is less
radioactive and thus safer . And there’s no such thing as nuclear waste unless the material is wasted .
Canada’s Moltex, for example, is developing a fuel recycling “waste to stable salt” technology that repurposes spent fuel into new fuel,
reducing waste by over 75% and cutting its radioactive half life to approximately 300 years, down from thousands. Moltex is also designing an SMR, the Stable
Salt Reactor-Wasteburner, to run on the recycled fuel , which will cut down the transport of radioactive materials.

Other technologies are reducing risk in parallel .


Nuclear energy will never be absolutely, perfectly, guaranteeably safe because nothing is. Wind turbines
can fall over, and they can kill birds and negatively impact marine life. Solar panels produce significant volumes of toxic waste, and they take up space that
impedes whatever is trying to live under them. Both wind and solar rely on minerals and manufacturing mostly controlled by China, and neither is entirely reliable as a power source. They’re also not dispatchable at times of peak electricity demand. Hydropower only works with abundant water, and droughts are eviscerating rivers across the world. Coal is killing our children and our planet. So is oil. So is natural gas. Geothermal, biofuels, hydrogen, et cetera—these aren’t able to satisfy even a fraction of the demand for energy.

Use consequentialism. Critiques of nuclear power are politically hollow if the neg
can’t identify a material alternative that should supplant nuclear energy. The
injustice of fossil fuels and renewables is an advantage to nuclear power.
Howard ’20 [Don; January 2020; Professor of Philosophy, University of Notre Dame Department of
Philosophy, former director and a Fellow of the University of Notre Dame's Reilly Center for Science,
Technology, and Values; Notre Dame Journal on Emerging Technologies, “The Moral Imperative of
Green Nuclear Energy Production,” vol. 1]
VII. The Moral Perspective
We must totally decarbonize our energy economy within at most a few decades if we are to avert a climate
catastrophe that will kill billions of people and render large parts of our planet uninhabitable . We have seen that this
cannot be done with solar and wind power alone. Nuclear power is the only technologically feasible solution . It is comparatively
safe and shovel ready. Putting irrational fears aside, what could be the objection to a crash program to replace fossil fuels in electricity generation by nuclear power?
Critics of nuc lear power have raised a number of ethical objections to nuclear power. But before we review them,
let us recall that the decision to be made is not about nuc lear power alone . Instead, the decision is a
comparative one. The ethical issues concern a program of rapidly expanding nuclear power in comparison with other courses of action of
which there are really only two:
Do nothing , which dooms the planet.
Continue on the path called for in the Paris Agreement by expanding renewables as rapidly as possible, reducing energy
consumption, and developing new approaches to CO2 capture and sequestration , all the while hoping for
some miraculous technological breakthrough . This approach will mitigate the impacts of climate change, but will still leave
our children and grandchildren with a planet dramatically different from the world we know, a planet far less

hospitable , where those who survive will lead seriously diminished lives .
There are no other alternatives.
The more serious ethical objections to expanded nuclear power fall into two categories, social and environmental justice and obligations to future generations. Many
of those who criticize nuclear power on grounds of social and environmental justice base their arguments on serious but commonplace errors about the health and
environmental risks and the comparative safety of nuclear power, errors that have already been addressed in this paper, as well as misunderstandings of the science
and engineering of nuclear power. As an example, consider the Unitarian Universalist Association's 1976 Social Justice Statement on nuclear power, which starts from
three premises:
The high probability that many people would be exposed to low-level radiation, an eventuality which is known to cause birth defects and to result in deaths from
cancer, usually after time delays of years.
The highly probable exposure of future generations to the lethal effects of the long-lived radioactive waste products inevitably produced by nuclear power plants.
Uncertainty in regard to the likelihood of catastrophic release of radioactive materials into the environment, because estimates of "nuclear safety" published by the
nuclear power industry and its government sponsors neglect the dangers resulting from defects in construction and the lack of an adequate program for training
nuclear power plant inspectors and operators.86
Where do those probability estimates come from? Whence the uncertainty in those estimates of the likelihood of catastrophic releases of radioactive materials? No
documentation is given and none could be given, because there are no such high probabilities, and the suggestion that risk analyses ignore possible construction
defects and deficient training is hard to credit.
Or consider a recent paper on "Emerging Environmental Justice Issues in Nuclear Power and Radioactive Contamination" from the International journal of
Environmental Research and Public Health, where the authors, one a sociologist from the University of Texas Rio Grande Valley, the other a sociologist from Arizona
State, write:
Commercial nuclear power has also proved repeatedly to lack the "absolute safety" guaranteed by industry proponents (see [3]). As a series of catastrophic reactor
accidents has shown, the commercial uses of nuclear fission materials to generate electricity are not without potentially severe multiscale risks.... This potential has
been variously displayed at Chernobyl in Russia (1986), Three Mile Island (TMI) in U.S. (1979), and most recently at Fukushima in Japan (2011). Indeed, on the 30th
anniversary of the Chernobyl disaster (2016), no technology yet exists to handle the melted highly radioactive 2000-ton core of the failed reactor, a core that will be
lethal to humans for thousands of years [5].87
When one follows the citations, the first leads to an article in the Bulletin of the Atomic Scientists that discusses the myth of "absolute safety" only in the context of the
Fukushima accident, the point being about the practice inJapan of overselling the safety of nuclear power to allay concerns deriving from Japan's experience with
nuclear weapons in World War II, a practice that, the authors argue, contributed to a lack of preparedness.88 The paper did not generalize beyond the unique situation
in Japan, because there is no global myth of "absolute safety."89 The second leads to an article at McClatchy on the thirtieth anniversary of the Chernobyl accident that
includes claims such as that the "death toll estimates run from hundreds to millions," when the actual death toll as determined by the United Nations Scientific
Committee on the Effects of Atomic Radiation is, as was mentioned earlier, 58.90 And what is this supposed series of catastrophic reactor accidents? Chernobyl is the
only accident that qualifies as a "catastrophe," and there only 58 people died. At Three Mile Island, there was no significant radiation release, and no one died or even
got sick as a result of the Fukushima accident, aside from those who died because of the hasty and questionable evacuation.
Still, there are legitimate and serious questions of social and environmental justice concerning nuclear power. A good place to turn for more sober and well-grounded
scholarship on the topic is the 2015 collection, The Ethics of Nuclear Energy.91 Foremost among the significant social and environmental justice concerns about
nuclear power are issues such as siting decisions, exemplified by government and industry's callous disregard of the interests of local communities in deciding to
locate the Yucca Mountain storage facility on land sacred to the Shoshone and Paiute peoples or by regulatory agencies neither seeking nor respecting public input, an
all-too-common occurrence.92 It goes without saying that the interests and opinions of all who are potentially affected by a decision on the location of a nuclear power
plant or nuclear waste depository must be respected.
But there is nothing unique to the nuclear power industry in this regard. Exactly the same kind of insensitivity and
indifference is encountered in every industry , as evidenced by disasters like the 2008 coal ash spill at the Tennessee Valley Authority's
Kingston Steam Plant that unleashed more than a billion gallons of highly toxic fly ash slurry, destroying homes, farms, and the lives of some 30 clean-up workers.93
Any form of large-scale power generation will raise the same questions. If anything, the problem could be much worse with solar and wind, for, as we saw above,
the land use needs for dramatically expanded solar and wind power are immense . While they do not
produce waste on the scale of a coal-fired power plant, there are toxic materials in those solar panels ,
and significant toxic waste is generated in their manufacture . Currently, we do not reprocess and recycle
that waste, instead mostly just shipping it to less developed [ other ] parts of the world and hoping, against the
evidence, that it will be dealt with safely.94 A big problem with wind farms, aside from the huge amount of land they require, is their
aesthetic impact on the environment. Owners of the land where towers are sited are compensated usually with annual payments, either a fixed rent or a fixed rate
based on the power produced. Neighbors are not compensated. There are also growing expressions of concern about impacts on wildlife.95
While nuclear and other forms of power production are on par regarding social and environmental justice issues such as siting and waste disposal, one might argue
that nuclear poses unique challenges of intergenerational justice, for the obvious reason that nuclear waste can remain a hazard for 100,000 years or more. When we
start burying nuclear waste at the Onkalo facility in Finland, we will be imposing risks and obligations on generations to come. Can we even imagine the task of
securing and monitoring these sites for, say, 5,000 years? It seems at first glance as if it were the height of arrogance and the callous disregard of the interests of
unborn generations to do this.
But consider again that we cannot pose the question of intergenerational justice about nuclear power alone. These are also comparative questions. Viewed through that
lens, nuclear power might appear more respectful of the rights and interests of future generations. Nuclear waste, for all of the risks it poses, decays
over time. The toxic stew in the oceans of fly ash slurry adjacent to coal-fired power plants all around the world
will remain toxic forever . This is a crucial difference between many forms of chemical toxicity and radiological toxicity. The same is true
of the chemical toxins associated with the production and disposal of solar panels . One recent study claims to show
that photovoltaic solar panels produce three hundred times more waste (measured by volume), per unit energy
generated, than does nuclear.96 While some chemical toxins are eventually broken down naturally, by sunlight, water, bacteria, and other processes,
many of the worst, including heavy metals, are not. Lead today is lead 1,000,000 years from now. They remain toxic forever . And, yet, we virtually never hear of this toxic burden being represented as a problem of intergenerational justice. Why not?

Mining is inevitable, BUT fossil-fuel extraction and renewables are worse.


Wang ’24 [Seaver; May 23; PhD in Earth and Ocean Sciences from Duke University, Co-Director of
the Climate and Energy Team at the Breakthrough Institute; The Breakthrough Institute, “It's Settled,
More Nuclear Energy Means Less Mining," https://thebreakthrough.org/issues/energy/its-settled-more-
nuclear-energy-means-less-mining]
After the clean tech mining arguments
I do have some good news—both for wind and solar advocates and for all pragmatic environmentalists. First, blanket
demonization of
mining as a concept is misguided to begin with, given mining’s necessity to modern society and for
global human development. Second, the extractive intensity of any clean energy system will likely prove
considerably better than our existing fossil-based energy system, given the gargantuan scale of coal
mining globally. Third, the mining impacts of solar , wind , or nuclear energy are flexible , not fixed, and
can improve significantly as technology and practices advance.
GRAPH OMITTED
To the pragmatic technological optimist, the moderately
higher mining footprint of solar and wind energy is just one
of many variables in the calculus and politics of achieving a better future, one in which society will probably employ
both renewable and nuclear technologies together.
Rather, the harshest rebuke from this data-driven discussion of clean energy mineral needs falls upon entrenched, incoherent environmental
dogmas. International green groups and decorated sustainability scholars cannot claim to worship at the altar of
empirical science while invoking the material footprint of nuclear energy to classify it “unsustainable”. Nor are calls
from degrowth, ecosocialist, or traditional environmentalists to limit extraction through unrealistic and
convoluted social measures logically reconcilable with their continued rejection of nuclear power’s
potential to
alleviate the mining impacts of the clean energy transition.
2AC
Case
Nuclear Energy---AT: Waste/Safety---2AC
Modernized reactors eliminate the potential for radioactive release, don’t require
enrichment, and solve storage problems---that’s Jayanti.
<<FOR REFERENCE>>
Yet, the scientific reality is that the rising generation of nuclear innovation doesn’t need to be subjected to a crippling approval process—it is
safe. The risk profile of new reactors and other technologies in development is very low . This is
especially true relative to the risks of climate change fallout or, for example, the health risks of burning
fossil fuels or inhaling combustion engine exhaust. And the waste from a new nuclear plant is far less problematic
than that of spent solar panels , for example. The nuclear renaissance is not just more nuclear power, it’s also
better, cleaner , safer , more efficient .
SMRs, for example, are much safer than full sized NPPs. They have outputs of 50-300 MW depending on design, compared to 800-1600 MW for
traditional NPPs. Microreactors , “pocket nukes” with 1-50 MW outputs, are even more resilient because simple physics
means they are harder to damage and so it’s less likely that an accident could result in a radioactive
release . Whereas seismic activity is a grave concern for large NPPs like Fukushima, smaller technologies soon to be built do not require the
seismic cushions that were needed under previous plants to protect them from even the smallest earthquakes. Micros and SMRs can also
be manufactured in a factory —that’s what the “modular” in small modular reactor means — allowing for standardization
and systematized security measures, as well as sealed transport. And smaller amounts of radioactive fuel in smaller
reactors mean less that could go wrong even in the case of an accident.
One persistent concern opponents of nuclear power often voice is the risk of reactor cooling systems failing, but this is not an issue with the new
generation of nuclear designs. Fukushima Daiichi NPP’s water-based cooling system stopped when a tsunami disabled the electricity source
powering the circulation. This is the same risk Ukraine’s Zaporizhzhye NPP is facing thanks to Russia bombing the dam that held the water that
kept the plant’s water cooling system operating. In emerging advanced reactor technologies, however, this vulnerability is eliminated entirely.
Many of the new designs have entirely reconsidered systems with passive safety features that maintain cooling without reliance on external
power. Others use water in innovative ways. GE Hitachi’s BWRX-300 SMR is designed for the water inside to boil, creating its own convection
that in turn powers its own cooling circulation. This eliminates the need for an extensive circulation system of pipes and keeps all potentially
contaminated water inside the plant. Some also use materials other than water, such as molten salts.
Another common objection to nuclear power is the disposal of radioactive material. But new technological innovations are mostly
eradicating the need to store spent fuel at all. New fuels have a lower enrichment level , which is less
radioactive and thus safer . And there’s no such thing as nuclear waste unless the material is wasted .
Canada’s Moltex, for example, is developing a fuel recycling “waste to stable salt” technology that repurposes spent fuel into
new fuel, reducing waste by over 75% and cutting its radioactive half life to approximately 300 years, down from thousands. Moltex is also
designing an SMR, the Stable Salt Reactor-Wasteburner, to run on the recycled fuel , which will cut down the transport of
radioactive materials. Other technologies are reducing risk in parallel .

2. It links harder to all other forms of energy.


Sunkara ’21 [Bhaskar; June 21; American political writer, founding editor of Jacobin; The Guardian,
“If we want to fight the climate crisis, we must embrace nuclear power,”
https://www.theguardian.com/commentisfree/2021/jun/21/fight-climate-crisis-clean-energy-nuclear-
power]
Elsewhere around the world, even where we’ve been investing in renewable technology, without nuclear or the right geography that allows hydroelectricity, we’ve had no choice but to rely on fossil fuels to fill the gap.
So why, given the stakes of global warming, is there still so much hostility to nuclear power?
Some of the paranoia is no doubt rooted in cold war-era associations of peaceful nuclear power with dangerous nuclear weaponry. We can and should separate these
two, just like we are able to separate nuclear bombs from nuclear medicine. And we should also push back against popular narratives around Chernobyl and other
disasters that simply aren’t replicable with modern technology. Advanced reactors and many existing ones are designed with passive safety systems – they don’t need
active intervention by humans or a computer to deactivate in case of emergencies. Instead these plants use natural forces such as gravity to disable them, while
maintaining active monitoring as a backup. As the science journalist Leigh Phillips puts it, “it is no more physically possible for them to melt down than it is for balls
to spontaneously roll up hills.”
There are some legitimate concerns about nuclear waste, but the public perception is driven by outdated information. The amount of waste produced by plants has been reduced dramatically , and most of what remains can be
recycled to generate more electricity. These worries are not particularly unique to nuclear, either. Renewable
energy produces waste of its own – solar, for example, requires heavy metals like cadmium, lead and arsenic, which
unlike nuclear waste don’t lose their toxicity over time. As an article in Science points out: “Current electric vehicle batteries are really not designed to be recycled”
and could pose public health problems as battery cells decay in landfills.

AND waste will be 1% the usual amount and decay to background in about 400 years---that's Rehm.


<<FOR REFERENCE>>
Introduction
Renewables are considered by many to be the solution to global warming. Yes, they can contribute. However, without advanced
nuclear energy , we will not solve global warming .
Nuclear energy is much safer than solar and wind renewables and has a lower life cycle carbon footprint.
The disadvantage of nuclear is its long-lived nuclear waste. To decay to a nominal background level, legacy spent-nuclear fuel requires tens of
thousands of years. This paper argues for advanced
nuclear, whose much smaller amount of nuclear waste (about 1%
of legacy) will decay to background levels in about 400 years [1].

3. It’s the safest form of energy, proven by careful analysis of over 33,000 records---
that’s Snyder.
<<FOR REFERENCE>>
Popular discussions about nuclear power eventually get around to at least one of five objections: It's not safe; no one knows what to do about
waste; it's too expensive; it leads to nuclear weapons proliferation; or there isn't enough uranium. All of these objections are
baseless .
Revised to incorporate valuable comments from Dr. Yoon Il Chang, Tom Blees, and Eric Loewen.
It's not safe (yes it is)
The Paul Scherrer Institut in Villigen PSI, Switzerland is a frequent European Community consultant concerning safety. Their collection of more than 33,000 records of accidents related to electricity production shows that nuclear power is the safest-ever way to make electricity , by a very wide margin . Only 46 fatalities are directly attributed to municipal nuclear power in its entire six-decade worldwide history, all due to one accident, ironically a botched safety test at Chernobyl.
GRAPH OMITTED [caption: Bigger means safer. There have been only 46 fatalities in the entire six-decade worldwide history of municipal nuclear power, all due to one accident at Chernobyl]
The Chernobyl reactor was the Hindenburg of nuclear reactor designs. It is irrelevant because another one with similar
defects will never be built . It is described here only because of the fear it engendered.
The Soviet Union’s safety culture was different from the west’s, and reactors were not licensed because there were no licensing criteria. The
Chernobyl reactor was a scaled-up version of one purpose-built to produce plutonium for weapons. As it got hotter, or if the cooling water boiled,
the fission reaction ran faster, so it got hotter faster. The operators lost control of it, ironically because they bypassed the inadequate shutdown
mechanisms during a safety test. The ensuing calamity was a steam explosion and a graphite fire, that dispersed several tonnes (1000
kilograms/tonne) of spent fuel near the reactor, and radioactive dust throughout the region.
The BORAX test reactor in Idaho was intentionally destroyed by prompt criticality, but licensed reactors are designed so that it is
impossible to cause a nuclear explosion , either by mistake or intentionally.
Solvency---Particularity---2AC
Analyzing energy in isolation provides an alibi for fossil fuel---that’s Boemeke.
<<FOR REFERENCE>>
Isabelle Boemeke: Yes, and the format and all of it. I’ve been playing a lot with different formats. But you know, the waste is also something that
took me a while to come around because I kept thinking, “but wait this sounds really bad.” I think where I landed with it is, you can’t
analyze a technology in isolation , right. You can’t just be like “is nuclear power bad? ” Well, compared to
what? And in what way? Every technology that we create has some downside , right. Renewables are great for
somethings, they have downsides . Fossil fuel have allowed our civilization to get to where it is today. You know, we’ve had tremendous
improvement of quality of life for people across the world. A high energy life is the same as a high-quality life. Fossil fuels have enabled that.
However, it comes with all these problems. So, it’s
all a matter of A, comparing the technologies with what we
currently have and being able to analyze context . So, the waste issue is, okay, we have, you know, this
incredible technology that makes a lot of electricity without any carbon emissions . But you have a small amount
of very radioactive waste that comes out of it.
Jules Terpak: Yep.
Isabelle Boemeke: The question is, is that bad. And of course, it sounds bad because it’s so radioactive that if you and I came and touched it when
it comes out of a reactor we would die. But, how is it handled? Has anyone ever been injured from it? How could somebody be injured from it?
When you start looking to those things, you start realizing that the fear is more of like this hypothetical , apocalyptic , end of
the world scenario where civilization loses the ability to communicate and then all of the sudden you
come across these casks, then you open them, and you touch the thing, and you eat it , and then you die . In reality, what it is, is these ceramic pellets that go into a reactor to make electricity. After they are done, they come out. They are like heavy, you
know, it’s something that’s like super technologically advanced. The people who work at the power plant take it out of the reactor, put it in a pool
to cool for like five years. And then take it out of the pool and put it in like this giant concrete cask where it’s like bolted and weighs tons. You
can’t just like roll up to a nuclear power plant and grab one of those things. This
is like serious heavy , concrete shielding
casks that are so good at blocking radiation that you and I can literally stand right next to it and touch it
and be totally fine . We would not be, radiation would not come out of it. And by the way, the nuclear energy industry is the
only energy generating industry that is actually responsible for its waste. So, when you compare to fossil
fuels, it’s just spewing into the air, like killing nine million people a year from air pollution, plus causing
climate change which we don’t even know how many people it’s going to kill, but just affecting the whole world. So you know, what
should I choose? This thing that’s literally killing millions of people and causing climate change or this thing that sounds scary but it
actually has never hurt anybody or the environment and is being very well managed. That’s how I ultimately landed on it.

AND other sources can’t satisfy even a fraction of demand.


<<FOR REFERENCE>>
Nuclear energy will never be absolutely , perfectly, guaranteeably safe because nothing is. Wind turbines
can fall over, and they can kill birds and negatively impact marine life. Solar panels produce significant volumes of toxic waste, and they take
up space that impedes whatever is trying to live under them. Both wind and solar rely on minerals and manufacturing mostly controlled by China,
and neither is entirely reliable as a power source. They’re also not dispatchable at times of peak electricity demand. Hydropower only works with abundant water, and droughts are eviscerating rivers across the world. Coal is killing our children and our planet . So is oil .
So is natural gas . Geothermal , biofuels , hydrogen , et cetera—these aren’t able to satisfy even a fraction of
the demand for energy.
Humanism good
10. Humanism turn---Universal frames of humanity are good despite their historical
framings.
Davis, 23—Postdoctoral Fellow in the Department of African American Studies at Saint Louis
University (Benjamin, “On Conceptual Sufficiency: Humanity in Du Bois’s Black Reconstruction and
John Brown,” Philosophy and Global Affairs 3:1, 2023, pp. 120–148, dml)
And in his final chapter, he writes in poetic terms about a future where “we are going . . . with regard to all social problems, to be able to use human experience for the
guidance of mankind” (quotes in this paragraph from 1935, xix, 8, 16, 325, 382–383, 722). From these few but representative examples—Du Bois employs “human”
so much in Black Reconstruction that you can almost open a page at random and find it20—we see the different ways that Du Bois invokes the concept of humanity
not simply as a universal already in existence so much as to call for universalizing practice: invoking “ humanity ” can put a historical event
in the perspective of our species; it can ask us to see someone who looks different from ourselves as part of the same species, and
thus worthy of fair treatment and possessing rights ; it can be used negatively, as a standard to condemn social structures that violate human rights and, specifically, to call for the abolition of ( racial ) capitalism ; it can be used positively,
as an aspirational standard, the reaching of which would require new social, economic, and political organization (e.g., an expanded system of public
education, a universal right to vote); it can be used to describe our shared fallibility and vulnerability while
challenging undemocratic assumptions and rejecting any form of organizing a polity that maintains ignorance and poverty; and it can
remind the reader that there are moral duties we’ve always owed one another.
Writing from Atlanta in the 1930s about Reconstruction, Du Bois was more than aware of insidious uses of "human" and " humanity " that maintained hierarchies over time. Many of his writings, from The Souls of Black Folk to The World and Africa, could be read as tracking the fallacies that are part and parcel of any racialized hierarchization of our species. Nevertheless , he did not see violence and oppression as " intrinsic " to, or domination a sure result of, the operation of "the human ." Rather, he maintained the
concept in Black Reconstruction, leveraging it alongside the focus on class and labor he gained from his 1933 turn to Karl Marx, who also maintained a sense of
species-being.21 Given these methodological moves, and in light of recent criticisms of “the human,” one might then ask: Why would Du Bois maintain a term that he
knew, firsthand and historically, contributed to hierarchy, violence, and terror? I suggest that one answer can be found in Du Bois’s biography of John Brown.
In his short Preface to John Brown, Du Bois notes that the only reason he is conducting yet another study of Brown is to put emphasis on a different element of his life
(1909, xxv). In the final chapter, Du Bois cites both a letter of Brown’s saying, “I cannot believe that anything I have done, suffered, or may yet suffer, will be lost to
the cause of God or of humanity,” and another letter Brown wrote to his younger children, saying, “I feel just as content to die for God’s eternal truth and for suffering
humanity on the scaffold as in any other way” (1909, 187–188, 188). Du Bois then summarizes Brown’s conceptual influences:
Was John Brown simply an episode, or was he an eternal truth? And if a truth, how speaks that truth to-day? John Brown loves his neighbor as himself. He could not
endure therefore to see his neighbor, poor, unfortunate or oppressed. This natural sympathy was strengthened by a saturation in Hebrew religion which stressed the
personal responsibility of every human soul to a just God. To this religion of equality and sympathy with misfortune, was added the strong influence of the social
doctrines of the French Revolution with its emphasis on freedom and power in political life. And on all this was built John Brown’s own inchoate but growing belief
in a more just and a more equal distribution of property. From this he concluded,—and acted on that conclusion—that all men are created free and equal, and that the
cost of liberty is less than the price of repression. (1909, 190)22
In making his moral judgements, Brown was influenced by a variety of sources, his faith probably the most important. But then again, as Ted Smith has noted,
perhaps it is “[b]ecause commentators have not seen him as fighting for ‘his own’ people” that “his religious motives come into sharper focus”; in other words, too
often commentators miss the fact that “Brown . . . sincerely regarded enslaved people as his sisters and brothers” (2014, 8, 161). Du Bois saw in Brown someone who
embraced “suffering humanity” as his own people. One lesson Du Bois drew from his study of Brown—and that, I am suggesting, he carried into Black
Reconstruction—is that a
belief in shared humanity over and against using race to create divisions of labor and
distributions of property was, for Brown, sufficient not only to conclude that the state-sanctioned violence of his
time needed to be challenged , but also to act personally on that conclusion. “Humanity” was also sufficient for those Brown
inspired, especially the multi-racial group that carried out the illegal raid with him—“not men of culture or great education,” Du Bois says of the group, but
“intellectually bold and inquiring” and “skeptical of the world’s social conventions” (1909, 144). For Du Bois, in matters of “vast human import” there is a “great
parting of the ways—the one way wrong, the other right, in some vast and eternal sense” (1909, 172). Regarding the matter of slavery, Du Bois says, “John Brown
was right” (1909, 172). One concept that led Brown to the right position was humanity.
I have argued that Du Bois’s maintenance of “the human” illustrates Hall’s idea of politics without guarantees. For those who concluded that their society was unjust
and acted upon it by raiding Harper’s Ferry at the cost of their lives, “humanity” was what Hall called “that term.” On my reading, one reason Du Bois maintained a
vocabulary of “human,” “humanity,” and “human rights” in Black Reconstruction is that he knew, from his earlier study of John Brown’s raid, how a
notion of
humanity inspired Brown and how it could inspire others . My sense is that some contemporary philosophers who
maintain a radical humanism do so based on a similar historical understanding of how the concept has
functioned: Paul Gilroy’s call for a “ planetary humanism ” informed by Black Atlantic political struggles—including the rights
claims of David Walker , Frederick Douglass , and Ida B. Wells —comes to mind, as does Lewis Gordon’s maintenance
of a humanism based on his reading of Frantz Fanon , as does Sylvia Wynter’s foundational reconstruction of the
human that draws on Césaire, Fanon , and Michel Foucault (see Gilroy 2000, Gordon 2015, Scott 2000). Such constructive or re-
imaginative practices of reading and conceptualization are, in my view, examples worth
following today, because our movements
still very much need tools to make judgments and demands as we strive to build more just societies.
Is “Humanity” a Sufficient Concept Today?
There remain great partings of the ways. We, as U.S. citizens, permit the imprisonment of some in Angola State Prison; permit the flooding of Lakota land via the
damming of the Missouri River and continue to threaten that land with the building of the Dakota Access Pipeline (DAPL), through which oil now flows; and permit
access to healthcare to be tied to employment and a market economy such that many do not have access to health insurance or cannot afford needed (and available)
medication. In this context, I suggest that the
constellation of concepts that Du Bois invoked in Black Reconstruction— human ,
human rights, and humanity —remains sufficient both for explaining how partial or non-universal social structures violate fundamental decolonial values and for inviting actors into struggles to make those structures more
universal in providing healthcare, education, access to voting, a right to move across borders, and so on.23
In merely one part of Article 25 of the Universal Declaration of Human Rights—“Everyone has the right to a standard of living adequate for the health and well-being
of himself and of his family”—we have a tool that, if gendered and otherwise flawed, nevertheless allows us to condemn the social arrangements of Angola, DAPL,
and non-universal health care, all of which actively prevent health and well-being in myriad well-documented ways. That the language of human rights is
already institutionalized imperfectly or part of a severely limited Declaration that Du Bois himself condemned for not addressing colonization,
does not mean that such a tool is ipso facto oppressive , and therefore worth leaving behind.24 It is the argument of this
essay that we should only abandon a concept if it has been articulated in a way that prevents social actors from using it to read their realities or from legibly inviting others into political struggle—that is, if even
imaginative uses of it no longer work toward our goals in a specific situation.25 For this reason, analyzing the limits of a concept alongside
how the concept functions does remain of critical importance —as long as that analysis is tied to strategy ,
experimentation , and constructive practice , what Hall called “the building of human alliances.”26
Recognizing conceptual sufficiency is important for political theory because it entails changes to the kind of
inquiry that follows from such recognition. It points critical theory in a direction different from spinning around
in aporias . That new direction would include (1) clarifying and making more accessible what the concept implies and (2) inquiring into the barriers that prevent
actors from carrying out its implications.
Regarding (1) clarification, we can consider the
claim—in the face of a differentially borne climate emergency and amid
an ongoing global pandemic— that the earth as one “ country ” that humanity shares 27 is a better
understanding of the reality of our species than divisions of planet earth and humanity into nation-states and races
and thereby into different civic and ascriptive statuses.28 If we believe this humanist understanding is correct, then there remains work to be
done to clarify its implications. For instance, if a country currently allows its citizens to come and go as they please, then it should also allow all people to travel freely
across its current borders. If a country provides healthcare and education for all its citizens, then it should provide both to whoever lives within its borders. And thus
over time these categories and borders might loosen and dissolve. In other words, what
such a “ planetary humanism ” would look like in
practice would include eliminating in policy all hierarchies of personhood tied to civic status (Gilroy 2000, 2).
Further, it would mean that currently predominant conceptualizations of obligation , particularly around debt —around who owes whom and why—would need to shift to reckon with historical and ongoing (colonial) hierarchies of humanity. A
clear example of the need to re-frame obligation lies in the fact that Western financial bodies are willing to forgive some of Ukraine’s debt in the face of war but not
that of Barbados in the face of climate emergency exacerbated by war.29 These are simple examples of my larger point: the
work to be done
regarding the concept of humanity includes theorists’ clarifying its implications for broader publics, using
it to raise political consciousness , and explaining how its partial (national, gendered, racialized, etc.) use is always a
false use. As Césaire taught us, any “true” use of “humanity” must be a universalizing one (1950, 73).
Regarding (2) inquiring into the barriers that prevent people from acting upon their understanding of a concept, there are moments when the social problem is not that
we have an insufficient concept, but that we are not willing to take the risks required in living it out. Sometimes the fault is not in our stars or in our concepts. The
example of John Brown is striking precisely because most of us—while agreeing both that slavery still exists and that it violates our most deeply held principles (see
Gordon 2020)—do not give up our lives fighting against it. Brown thus serves as a “touchstone” that tests not just how we understand violence, but also the strength
of our will (see Smith 2014, 15–39). In other words, sometimes the social problem is not a theoretical or conceptual problem, but rather one of political commitment.
Once we recognize that the problem is not with the concept, then we are able to ask one another the questions that are crucial to transforming the structures in which
we live, the kinds of critical questions that Du Bois put on the table in his reading of John Brown, and that Du Bois carried forward into Black Reconstruction.30 If we agree that a prison or a pipeline or a for-profit school violates how others around us should be treated
as part of humanity, or violates their human rights, or violates our sense of dignity or justice , then what risks are we
willing to take?31 What comforts are we willing to give up?32 From what are we willing to divest ?33 What
public spaces are we willing to reclaim?34 How will we connect these risks, refusals, divestments, and
reclamations to ongoing work across borders?35 How can we remind one another that “it may be important to pause over the awareness of
dissatisfaction just where it starts to seem futile” (Terada 2009, 200)? What kinds of kin structures , and ways of taking care of one
another, would not only affirm our dissatisfaction with the present order of things, but also allow us to take more
risks and make needed international connections (see TallBear 2018; see also Gumbs , Martens, and Williams 2016)? How can
forms of resistance become "more accessible to people with varying debilities, capacities, and disabilities" (Puar 2017, xiii)? What forms of agency can we outline, given that many of us experience our situations as ones of tremendous constraint (bills to pay, debts
racking up, kids to care for) and extremely limited possibility (climate change, police violence, low wages
outside of the corporate world)? How can our mobilizations not only make demands on those in power, but also build power in a way that makes a world of decolonized land , as well as of universal healthcare and
education (and so on), a more widely desired and thus more feasible option?36 Along the way, how can we see, and explain for
wide audiences, what Wynter calls “the performative enactment of all our roles, of our role allocations as, in our contemporary Western/Westernized case, in terms of,
inter alia, gender, race, class/underclass, and, across them all, sexual orientation” (McKittrick 2015, 33)? If we understand that all our roles are “praxes . . . rather than
nouns,” then how, concretely, do we practice differently—how do we perform our humanity as an affirmation of a different future, no matter the success of each of
our mobilizations and demands (McKittrick 2015, 33)? Could raising these questions in community and conversation itself be part of re-describing “humanity” and
thus thinking alongside Wynter’s task-setting lines that
the struggle of our new millennium will be one between the ongoing imperative of securing the well-being of our present ethnoclass (i.e., Western bourgeois)
conception of the human, Man, which overrepresents itself as if it were the human itself, and that of securing the well-being, and therefore the full cognitive and
behavioral autonomy of the human species itself/ourselves? (2003, 260)
Wouldn’t this be, as Wynter reads Glissant, “a specific mode of uprising, one which calls into question , rising up
against, our present mode of being, of subjectivity, of Self” (Wynter 1989, 640)?
Conclusion: Humanity and Political Commitment
I wrote this essay in the summer of 2022, just after the U.S. Supreme Court overturned Roe v. Wade. I was thinking hard about the kind of political responsibility
Lewis Gordon says should be “borne by every member of society, for the actions of their government” (2022, 158). I was also then visiting my ageing parents in rural
Minnesota. When I would "head into town," as we rural people say, I would come across an anti-choice billboard with a photo of a fetus alongside text in all caps:
“ALL HUMANS HAVE HUMAN RIGHTS.” I am not, then, unaware of how both “the human” and “human rights” are deployed against women’s rights.
Our post-Roe moment recalls Hall’s conclusion to his 1979 essay, “The Great Moving Right Show,” where he notes that conservative
politicians have
"change[d] the nature of the terrain itself on which struggles of different kinds are taking place" (186). But "[t]hat," he says, "is exactly the terrain on which the forces of opposition must organize , if we are to transform it" (186). If you
want to move a concept, “to rearticulate it another way,” Hall says in his penultimate lecture on cultural studies in 1983, “you are going to come across all the grooves
that have articulated it already” (1983, 143). What Hall said there of Jamaica is true of the U.S. today: “[N]o political movement in that society can become popular
without negotiating the religious terrain” (1983, 144).
I have suggested that “humanity” remains a valuable concept for negotiating not just political but also legal, educational, and religious terrains today. That “the
human” and “human rights” have been degraded and continue to be used for anti-egalitarian ends simply illustrates how all concepts travel. With Du Bois and Wynter
regarding “the human,” but drawing a conclusion different from their critique and setting aside of “human rights,” my view is that a
degraded concept
need not become one we abandon outright (q.v. nn. 18, 19, 24). Such (mis-)uses of concepts will continue to happen because politics
never comes with guarantees. One skill of reactionary political programs is their ability to appropriate
radical uses of concepts, with the result that radical actors hesitate to use powerful terms and end up
without a shared vocabulary . But again: that such appropriation happens does not change the fact that we
need useful concepts to leverage “in confrontation with elites , authorities , and opponents ,” as the political scientist
Sidney Tarrow says of successful social movements (2011, 6). We still need, in Alexander Weheliye’s terms, a “ speculative blueprint
for new forms of humanity” (2014, 14; cf. James 2019). Such a blueprint is precisely what Du Bois’s reading of John Brown’s bold, skeptical,
abolitionist, and humanist crew, carried into Black Reconstruction, offers us. Ultimately, Du Bois shifts the question from one of theory to one of will, highlighting the
importance of what Césaire calls “the weapon of unity, the weapon of the anticolonial rallying of all who are willing” (1956, 148). In this way, Du Bois offers an
ethical and political theory in which one key value is courage.37
To think from what is today called Minnesota is not just to think alongside anti-choice billboards, but to think with Indigenous-led organizations, such as Honor the
Earth, whose mission states:
We believe a sustainable world is predicated on transforming economic, social, and political relationships that have been based on systems of conquest toward systems
based on just relationships with each other and with the natural world . . . [W]e are committed to restoring a paradigm that recognizes our collective humanity and our
joint dependence on the Earth. (See Honor)
For Honor the Earth, “ humanity ” is also that term. It is
a term that can explain the violations of the Line 3 pipeline and invite
actors to mobilize against the pipeline. It is a term that, even after oil began flowing through the pipeline, has been used to make
connections between Line 3 and other anti-pipeline struggles up and down the Mississippi River, from Minneapolis to St. Louis to New Orleans. For
Honor the Earth, “ humanity ” is a tool that is part of a larger practice of being human; this re-imagined practice
receptively situates itself within right relations to the Earth .
In cases when we recognize the concept is not the problem, our practices could change in response to the actual problem, which is often one of political commitment . The legal scholar Patricia Monture-Angus makes this point clear regarding how
citizens of so-called Canada often choose to address questions of Indigenous rights:
It boggles my mind to think that all of this constitutional debate, the number of conferences, the amount of federal money and federal energy spent trying to figure out
what Aboriginal people want is merely the struggle to accept that we want to be responsible as Peoples. At the centre of our demands is one simple thing: I want and I
need and I have the right to live as a responsible person in the way that the Creator made me, as a Mohawk woman. That is the only right I need. When I have the right
to live in my territory as a Mohawk woman then I will have justice. (Monture 1994, 230)
As I read Monture, she is not saying that the problem preventing her from living on her land as a Mohawk woman is that the concepts of people, responsibility,
territory, or justice are inadequate. She treats those concepts as sufficient. Because those concepts are not inadequate, the
solution to recognizing Indigenous
rights is not really one of debates , conferences, or increased grant money from non-profit organizations—it is not about performing mere discursive exercises , as if justice were a game .38 The actual social problem is that settlers want to
hold on to their settler states; the actual social solution is “the significant letting go of . . . government power over the lives
of Aboriginal citizens” (1994, 230). What Monture teaches resonates with what Hall diagnosed in his 1966 essay “Political Commitment”—that the work of
politics needs to “connect experience with demands in a meaningful relationship” and “connect awareness
of the nature of the system to aspiration , and aspiration for change to the agencies of change” (94). Read in
dialogue, Hall, Du Bois, and Monture ask us to consider whether the most pressing limitation to decolonial politics in our time might not be that we lack sufficient concepts , but that we lack political commitment —that we lack the will to carry out what we
know is necessary for our species, for the land that teaches us, and for the planet that hosts us. It remains the task of critical theory to make explicit the forms of
agency we have to realize our political commitments—to outline the paths we can still walk together—and in doing so, to imagine and communally construct
alternative forms of human life.

This understanding better honors the history of Black women’s labor.


Harvell, 10—Professor in the theology department at Pennsylvania State University, Abington College
(Valeria, “Afrocentric Humanism and African American Women’s Humanizing Activism,” Journal of
Black Studies, Volume 40, Number 6, July 2010, 1052-1074, language modifications denoted by
brackets, other brackets in original, dml)
Delores Williams (1997) is correct when she asserts that African American women “[struggled] along with Black men and children for the
liberation, survival and positive quality of life for our entire oppressed Black community” (p. 97). But Black women’s liberation-humanization
struggles did not always follow the same pathways as those travelled by Black men: always the same destination, perhaps but, at times, by a
different route. Tiyi Morris (2003) captures what made Black women’s activism distinctive when she notes that it was based on
their “ experiences as women and mothers in their families and communities” and that it “[evolved] from a
perspective that [recognized] the multiple oppressions ” they faced as Black women (p. 49). Through the
simultaneous interaction of gender , race , and class , Black women have acquired what has been termed a
“ multiple consciousness ,” a triad of cognitive processing skills based on experience, from which has
emerged a multifaceted critique of oppression and, in turn, a multidimensional view of liberation and
humanization (Collins, 1990, 1998; Joseph, 1995; King, 1988/1990; Riggs, 1994).
To illustrate, Harding’s (1983) narration of the “Great Tradition of Black Protest” is brimming with examples of how Black men, both in word and deed, defended
such principles as the rights of self-determination and dignity, presumably on behalf of all African Americans. There are other historical studies (e.g., Adler, 1992;
Austin, 2006; Carby, 1998; Collins, 1992; Dubey, 1994; Gaines, 1996; Giddings, 1984; Jordan-Zachery, 2007; Ling & Monteith, 1999), however, that indicate that the
men from this “Great Tradition” fought for [Black people’s] rights, needs, and interests, to be sure, but mainly in terms of patriarchal prerogatives alone.
In contrast , African American women declared that the rights of all of human beings were “ sacred and
inviolable ,” regardless of the person’s sex , race , country , or condition (Cooper, 1893/1976, p. 330). Some of the
earliest examples on record of how African Americans, in general, used the “power of Word” to guide and justify their quest for liberation, in
fact, came from religious Black women who denounced efforts to make God’s graces selective. Beginning
in the 19th century ,
there were Black women who, in the midst of fierce opposition and with “ heavenly zeal ” (Elizabeth,
1863/1988, p. 11), preached an egalitarian Gospel , with emphasis on the belief that “the Saviour died for the woman as well as the
man” (Lee, 1836/1986, p. 36). Historian Betty Collier-Thomas’s (1998) analysis of sermons by African American women preachers of the 19th
and 20th centuries highlights the fact that some of these women attempted to demonstrate that neither the Gospel nor the Holy Spirit showed any
race or gender partiality.
Relying on “a plethora of supernatural abilities as sources of their spiritual and social empowerment,” Gayle Tate (2003, p. 172) claims that these
early 19th century Black women evangelists went forth “with the dual mission of converting souls and spearheading a new social consciousness”
(p. 177). Through their “proselytizing and protest in the larger society,” these Black women evangelists challenged both racial and gender
oppression (p. 177), and they were “openly combative in challenging... ecclesiastical sexism” (p. 181).
Throughout the 19th and 20th centuries, Black women continued to challenge those religious institutions where, purportedly, the equality,
uniqueness, and worthiness of each individual were considered sacred (Mitchell & Lewter, 1986, pp. 93, 112).4 In studies of Black Baptist churchwomen during this period, Evelyn Higginbotham (1993, p. 120) reports that they developed an inclusive theology that
stressed equal gender participation. They attempted to apply the “ principle of human equality and dignity
under God ” to encompass both racial and gender concerns (Higginbotham, 1996, p. 150). Moreover, these women
“found scriptural precedents for expanding women’s rights” in both the secular and sacred arenas (Higginbotham, 1993, p.
120). Similar
trends have been found in other denominations as well (e.g., Dodson, 1988; Gilkes, 2001, chaps. 3 to 5), all of which reflect
not only the influence of religion on Black
women’s unique interpretation of the liberation-humanization central theme but also the
specific forms of political activism aimed at transforming one institution—the church.
Black women have well understood that the liberation of the whole person and the humanization of the entire society require the transformation of institutions and the elimination of race and sex discrimination. Writes Rossetta Ross
(1997), they relied
on what they understood as God’s provisions in their lives, the women demonstrated fidelity to God by routinely working hard and taking risks as they sought to change repressive traditions, institutions and social conventions that
generally hampered the well-being of African Americans. (p. 41)
Beyond the mere rhetoric of inclusiveness, however, Black women’s clubs, societies, associations, and auxiliaries usually reflected in fact both a “race conscious and gender-centered” approach to community service. “For club
women, race and women’s rights were one issue” (D. G. White, 1999, p. 68). They insisted on a more holistic interpretation and application of “liberation,” one that included men, women, and children (e.g., Gilkes, 2001;
Higginbotham, 1993; Riggs, 1994; Springer, 1999). As activist Josephine St. Pierre Ruffin, a national leader of the 19th century Black women’s club movement, declared, “Our woman’s movement is a woman’s movement in that it is
led and directed by women for the good of women and men, for the benefit of all humanity [italics added]” (quoted in Carby, 1987, p. 117).
Examples abound which show that Ruffin was expressing a general sentiment felt and acted on by most Black women club participants. For instance, according to author Lillian Williams (1995), the Black women’s clubs operating in
Buffalo, New York, between 1900 and 1940
addressed the subordinate status to which [Black people], especially women and children, had been consigned and so they worked with black men to redress their grievances as [Black people] and women. These women saw
themselves as critical links in a social movement designed to liberate the black community from second-class citizenship. Their participation in community liberation struggles was a means of empowerment for them as individuals. (p.
521, italics added)
Speaking on the roles of the Independent Order of Saint Luke women of Richmond, Virginia—a group spearheaded by Maggie Walker—Elsa Brown (1989/1990) noted that they “challenged notions in the black community about the
proper role of women; they challenged notions in the white community about the proper place of [Black people]” (p. 194). Most importantly, they “strove to create an equalitarian organization” (p. 193), one that emphasized
“collective consciousness and collective responsibility” to the family and community (p. 182).

Among the first and most significant contributions African American women made to the humanistic
motif was their enduring effort to reclaim and reassert its holistic meaning , one that resonated with the
original equalitarian principles and social inclusivity of African humanism . Jacqueline Royster (2000) contends that
Black women were also able to sustain a “ wholeness ”—“a psychic wholeness forged at great cost but in the
interest of the health and prosperity of community for which they felt responsible. Hard-won strengths of
spirit became a well-spring for activism , advocacy , and action ” (p. 113). Inclusivity of vision and practical
activities were their hallmarks of the past and their bountiful legacy for the future.
Analyzing the ethical teachings from several worldwide religions but with a particular emphasis on the meaning of the “ethical practice of
service” as presented in the sacred writings from Africa, Karenga (2008) draws several noteworthy conclusions. He opines that the ethics
of
service means bringing , sustaining , and increasing the good in the world and the “ building and
maintaining of structures ” or frameworks that advance “the practice of service , especially to the poor ,
less powerful , the ill , the aged , the disabled and those disadvantaged by a system of oppression in the
world” (pp. 34-35, 45). No other group in America has had less of the material world but worked harder at
fulfilling these imperatives than African American women. As Fannie Williams (1896-1897/1997a) makes abundantly
clear, Black women were “ bent on making their goodness felt wherever good influence is needed ” (p. 114):
Benevolence [was] the essence of most of the colored women’s organizations. The humane side of their natures has been cultivated to recognize
the duties they owe to the sick, the indigent and ill-fortuned. No church, school, or charitable institution for the special use of colored people has
been allowed to languish or fail when the associated efforts of the women could save it. (F. B. Williams, 1893/1976, pp. 273-274, italics added)
Writing on “The Awakening of [Black] Women” to the “their importance in all things” pertaining to “the social betterment of the Negro race,” F.
B. Williams (1896-1897/1997a) explains the attitudes and beliefs that guided their service to the race:
The supreme thing is the spirit of duty, the enthusiasm for action, and the untrammeled sympathy. By sympathy is not meant that far-away, kid-
gloved and formal something that enables women merely to know of those who need them, but that deeper more spiritual impulse to helpfulness
that will enable them to find delight in working with, rather than for, the unfortunate of their sex. In social reforms we must see but one thing, and
that is the vivid soul of humanity—that divinity which neither rags, dirt nor immoralities can entirely obscure. (p. 114)
F. B. Williams (1900/1997b) also vividly details the
inhumane conditions visited on the poor, the weak, and the most
vulnerable, which, in fact, propelled Black women into action :
The incentive [to organize] in most cases was quite simple and direct . How to help and protect some
defenseless and tempted young women; how to aid some poor boy to complete a much-coveted
education ; how to lengthen the short school term in some improvised school district; how to instruct and interest
deficient mothers in the difficulties of child training are some of the motives that led to the formation of the great majority of
these clubs. These were the first out-reachings of sympathy and fellowship felt by women whose lives had been narrowed by the petty concerns
of the struggle for existence and removed by human cruelty from all the harmonies of freedom, love and aspiration. (p. 121)
After examining a host of Black women’s clubs and voluntary associations that developed at the local, state, and national levels, Gerda Lerner
(1974) summarizes,
The impulse for organizing arose wherever an urgent social need remained unmet [italics added]. Most frequently, women’s clubs were
formed in order to provide kindergartens, nursery schools or day care centers for black children. The virtual absence of social welfare institutions
in many Southern communities, and the frequent exclusion of [Black people] from those that existed, led black women to found orphanages, old
folks’ homes and similar institutions. (p. 159)
What Lerner describes was
characteristic of the type of broad-based humanistic activities in which Black
women frequently engaged during the 19th and 20th centuries, an important part of Black history that has been
well documented for decades . Darlene Clarke Hine (1994) notes,
In virtually every city and rural community in the twentieth-century America there existed an organized grouping of black women, often led by a
cadre of elite educated black middle-class matrons. . . . Almost every black women’s club, regardless of who founded it or the ostensible reason
for its establishment, focused to some extent on alleviating one or more of the many social problems afflicting an increasingly urban,
impoverished, politically powerless, and segregated black population. (p. 15)
Hine (1994) goes on to conclude that the “philanthropic work of nineteenthand early-twentieth-century African American women ensured the
survival of many of the most vulnerable members of the black population” (p. 109).
Although many African America women’s clubs and associations frequently framed their raison d’être around home and
family, they were equally involved waging broader battles for political and economic justice on behalf of the
entire Black community . Among the prominent issues undertaken were economic exploitation , political
disenfranchisement , segregation , lynching , educational inequities , slum landlords , rent gouging ,
deficient sanitation systems , and inadequate health facilities (e.g., A. D. Gordon, 1997; Hine, 1994; Hodges, 2000;
Perkins, 1981; Scott, 1990; Springer, 1999). Any new development or event with the potential of alternately
improving or adversely affecting the quality of life for [Black people] became an area of concern and an
arena of activism for African American women. Paradoxically, these early reform initiatives designed and intended to
help the Black community later proved beneficial and supportive to poor and working-class White families as well.
The wide-ranging , ever-expanding record of activities of the African American woman—to labor both within
the home and without, to educate, to nurture, to empower, transcending adversity and improving, with limited resources, her environment and
community—is unmatched in history . Jualyne Dodson and Cheryl Gilkes (1986) argue convincingly,
The ties that bind the black community together exist primarily because of the vigilant action of black
women . . . . The overwhelming importance and pivotal position of black women in all aspects of community
organization— education, civil rights, organized labor, business, politics, religion, the professions, and club work—earned them the greatest
accolades and the most pernicious stereotypes. . . . [The] black women of the twentieth century were the source of community endurance—“the
something within that holdeth the reins”—enabling black people to resist and to hope. (p. 81)
By combining the communal and humanistic ethos from their African past with their own interpretation of the Scriptures,
African American women were able to fashioned that “ something within ” that fueled their zeal to serve and uplift
the community. Dodson and Gilkes (1986) reinforce this point well when they state,
If anything characterizes the role of black women in religion in America, it is the successful extension of their individual sense of regeneration,
release, redemption, and spiritual liberation to a collective ethos of struggle for and with the entire black community. (p. 81)
African American women saw themselves in relationship to the whole community and directed their
sociopolitical activities accordingly . They created countless organizations and religious associations committed
to the collective survival of Black people , including willy-nilly their families, their children, their men, and their neighbors. In
their role as caretaker of the home, family, church, school, and community, African American women were the ones who
nurtured the race and helped build life itself in the mist of bondage , Jim and Jane Crowism , lynching ,
chronic unemployment , poverty , rape , assaults , insults , and rampant discrimination . Perhaps better than their
brethren, they understood that “if survival were to be on a human and not merely an animal level ” (Harding,
1983, p. 51), then life must be adorned and infused with integrity , love , hope , dreams , and joy , in spite of
the despotic system and cruel conditions they confronted. They labored not only to sustain but also to
embellish life with whatever meager resources they had at their disposal or could wrest from a selfish “land of
plenty.” They also labored mightily to bestow on their families and race hope, dignity, and pride and to preserve and
transmit the same life-affirming values to the next generation.

Voting for their hasty refusal of humanism overlooks the legacy of Black humanist
thought and perpetuates its marginalization.
Hartmann, 24—assistant professor of American studies at Paderborn University (Alexandra,
“Introduction,” The Black Humanist Tradition in Anti-Racist Literature A Fragile Hope, Chapter 1, pp. 3-
5, dml)
The lived humanisms of the West , however, have been humanism at its worst for most of humankind. Tony Davies cautions, " All Humanisms, until now, have been imperial. They speak of the human in the accents and the interests of a class, a sex, a
genome. […] It is almost impossible to think of a crime that has not been committed in the name of humanity” (2008: 141). Western humanisms have historically
never lived up to the humanistic ideal. To the contrary, the development of Enlightenment humanism is inextricably tied to the emergence of antiblack racism in the
modern world and white supremacy in the US. A modern conception of race, Michael Omi and Howard Winant argue, does not take shape until the European arrival
in the Americas (1994: 61).3 Racialization is thus closely linked to the rise of Western humanism.4
Davies’ claim holds true for the dominant tradition of liberal humanism but he does not consider alternative
humanisms lived by those excluded by Western humanism. These are often overlooked because of the
dominance of the Eurocentric tradition. Unlike Western humanism, which has been particularly invested in a privileging of white humanity
over other humans, Black humanism makes no hierarchical difference between humans . It does, however, pay heightened
attention to African Americans and their lived realities. Starting from small , contingent assumptions, it focuses on human

vulnerability and suffering as experienced by Black humanity for generations. A commitment to human well-being
entails wrestling with the existential experiences of living as Black in an antiblack world. Challenging the normative position of whiteness in the Eurocentric tradi-
tion, Black
humanism pushes to see all humans as equal , yet as bound and shaped by the material and
social circumstances of their existence. For Cornel West, humanism in the Black intellectual tradition resists all attempts of hierarchically
ranking people and cultures and instead positions African Americans and their culture among other groups (2002: 87). It thus highlights the equality of all humans
and their cultural expressions while stressing African American uniqueness.
Black humanism, as an existential philosophy, deals with “philosophical questions premised upon concerns of freedom, anguish, responsibility, embodied agency,
sociality, and liberation” (Gordon 1999: 3). However, even
though Black humanism is evidently concerned with similar
questions and concepts as liberal humanism, it calls many of liberal humanism’s central beliefs into
question: where liberal humanism poses a mind/ body dualism that privileges rationality over emotions, Black humanism regards humans as
embodied beings whose corporeality matters greatly , and where the former is optimistic about human
progress by offering a triumphant account of human ability, Black humanism recalls the horrors of the past and present
moment as the backdrop of any progress. As Pinn notes, Black humanism “understands an unchecked anthropology of progress as intimately
connected to the dynamics of white supremacy, sexism, homophobia, environmental destruction” (2015: 25). Finally, where liberal humanism has either privileged
whiteness or furthered colorblindness, Black
humanism has placed the racialized subject at the center of its
consideration and consequently highlights many blind spots that Eurocentric humanism entails.5 For Black humanists, true humanism must be
committed to denouncing the racial status quo. Colorblindness has no room in Black humanism, neither do sexism and other discriminatory
and hierarchical projects.6
Acknowledging the fact that liberal humanism has had many deficits, Edward Said argues in Humanism and Democratic
Criticism that the humanist project must continue nonetheless because “it is the abuse of humanism that

discredits some of humanism’s practitioners without discrediting humanism itself ” (2004: 13). He is further convinced
that it “is possible to be critical of humanism in the name of humanism and that, schooled in its abuses of

Eurocentrism and empire, one could fashion a different kind of humanism” (10–11). For Said, true humanism must serve as
critique of the status quo and constantly aim at human betterment. Thus, it is a hasty mistake to overlook alternative humanisms.

Viewing the past, present, and future of humanity from the perspective of the marginalized can lay the ground for
what Sylvia Wynter envisions as the “ re-enchantment of humanism” (qtd. in Scott 2000: 120).
Black humanism has much to offer for such a re-enchantment. Not only is it a worldview historically found in African American culture but also an ongoing, yet
constantly evolving, discourse on what it means to be human. Norm Allen asserts that “ Black
humanists can continue to make important
contributions to humanist theory by challenging many of the assumptions made by Eurocentric, middle-class
humanists” (2001: 159). Residing halfway between conservative and radical Black outlooks, it presents an alternative to several contemporary positions such as
post-Blackness and Afropessimism, and it anticipates, corrects, and contributes to (critical) posthumanism in many ways. Precisely because it centers around struggle
against racism, Black
humanism is a way to remember the past horrors, criticize the present ones, and warn
about future dangers while remaining convinced that human efforts and commitment to action matter . This
painful certainty is drawn from the (embodied) memories of an oftentimes violent and traumatic past. Especially in times where the trendy discourses of
Afropessimism and posthumanism have begun to take center stage in Black studies and the humanities respectively, this book intentionally refocuses the conversation
to (re)discover a different angle.
Rather than seeking inclusion into the category of the human, Black humanists have sought to transform
the limited definitions of the human. Alexander Weheliye , for example, asserts that “alternative versions of
humanity” can be found in “cultural and political formations outside the world of Man” (2014: 9–10). In this light,
Wynter maintains that what is needed for transformation is liberation from the ideal of Western ‘Man.’ She writes that
“the struggle of our new millennium will be one between the ongoing imperative of securing the well-being of our present ethno- class (i.e., Western bourgeois)

conception of the human, Man, which overrepresents itself as if it were the human itself, and that of securing the well-
being, and therefore the full cognitive and behavioral autonomy of the human species itself/ourselves” (2003: 260). Black humanism, then, is both a historical
worldview in African American culture as well as a critical and theoretical lens for contemporary scholarship of the human.
Voting aff circulates anti-humanist theorizations that preclude any notion of a
common humanity, which is essential for planetary survival.
Brennan and Odeh, 23—professor of comparative literature, cultural studies, and English at the
University of Minnesota; assistant professor of Comparative Literature and Literary Criticism at Al-
Ahliyya Amman University of Jordan (Timothy and Tayseer, “A conversation with Timothy Brennan,”
Journal of Postcolonial Writing, January 25, 2023, DOI: 10.1080/17449855.2022.2163420, dml)
The tragedy of the post-structuralist counter-reformation is that it popularized the idea – widespread in neo-

Orientalist discourses like decoloniality and late subaltern studies – that humanism is the core logic of Eurocentrism. On the
contrary, it is really post- or anti-humanism that is the defining credo of modern capitalism and, by extension, the
swaggering Eurocentrism of the American empire and its Natopolitan handmaidens. What do the would-be masters of
the universe – western heads of state, International Monetary Fund (IMF) directors, Wall Street power brokers, and the army of journalists and think-tank
pundits who serve them – believe in if not the disposability of human life, the unfreedom of social actors, the

insignificance of experiential holism, or the irrelevance of non-instrumental learning, critical thought, higher ideals: the
humanities , in short? One of their principal initiatives is the perfection of artificial intelligence ( AI ), which at its lowest levels
means automation (and therefore mass unemployment) but at its most utopian heights means the dream of a post-human: a programmable
substitute for actual humans that does not complain, makes no demands, and needs no food. Far from these contentions,
humanism, we would have to say, is on the dissident Left. It is, as far as I can see, the only Left there is ; or, put differently, there can be
no genuine Left that is not humanist.
One of the refuges of critiques like Hallaq’s is its deceptive use of the term “ modernity ”, which can, of course, mean different things. He depends on our

hearing in the term (and rightly so) the violence of conquest , the forced conversions, the bumptious posing of Mind and Text over bodies and beliefs,

the supercilious scientific swagger of the western war machine . But what Hallaq really wants is to dispense with modernity in a

quite different sense: the critique of religious thought, the questioning of traditional authority, the creation of

one’s own values out of oneself according to shifting contexts – Enlightenment reason , in short, in its
broadest sense: that is, the revolutionary Enlightenment of Baruch Spinoza, Marx, Vico, Hegel, and Rousseau, not only the
racist rationalisms of Enlightenment figures like David Hume, François-Marie Arouet, known as Voltaire, or Edmund Burke. He does this because of his
own religious convictions, which he then wishes to equate (because they are purportedly non-western) with resistance to western oppression.
Humanism is a non-western philosophy every bit as much as a European one. It belongs to the maverick secularity
of Thales of Miletus and Anaxagoras of Clazomenae, the philological study of Roman law in Marcus Terentius Varro, the preservation of oriental learning in the
Islamic Golden Age (Averroes of Cordoba, Avicenna of Persia), the great rediscovery of Egypt in Neoplatonism, the creation by scholasticism of the first
European universities, the madrasas of the Maghreb and the Levant , and the triumph of reading in the Italian renaissance of Poggio

Bracciolini and Desiderius Erasmus, the great philological sociologies of Ibn Khaldun , and later, in an identical spirit, Vico. I call

“ decoloniality ” (in the work of Walter Mignolo and Aníbal Quijano) and late subaltern studies (Chakrabarty) “ Orientalist ” advisedly. It

is not a gratuitous insult but a sober description . Both believe, as Orientalists do, that the non-western subject is
epistemologically other vis-à-vis the western subject as a matter of ontology . They – we – are
fundamentally incongruent , in their view. This pernicious notion renders the common humanity of the world’s
peoples impossible , and confuses geopolitical conflicts , perpetrated not by all Europeans (or Americans)
but by their wealthy leaders, with existential differences between every individual of the respective
camps.
TAO:
Ecocriticism is a vibrant and inexhaustible discipline of World Literature, but it has been more or less co-opted by high-profile literary acolytes and practitioners.
Despite the vital geopolitical role played by oil within the global system, Amitav Ghosh reminds us that the oil encounter has been overlooked in World Literature
compared, say, with the early spice trade. In a similar fashion, Rob Nixon (2011) maintains that oil has been central to global US imperial anxiety (72). Apart
from Munif’s Cities of Salt and a few other neglected literary works, it seems that Nixon is right when he points out that most of the environmental literary
anthologies of World Literature function “as an offshoot of American studies” (235). How far are these analyses relevant to the canonization of environmental studies
and World Literature?
TB:
In addition to Nixon, I can think of several critics – Graeme MacDonald, Mike Niblett, Sharae Deckard, Imre Szeman – who have focused on the “oil novel” within
the context of World Literature. In the pages of their work, one will find a list of novels that are doing what Ghosh says is not being done. All the same, Nixon is right
that, despite its overfamiliarity, oil is still the prime mover, and one might extend his comment to call it also the proximate cause of the violent redrawing of the
Middle East map by US military brutality and its Israeli client state, as well as the NATO [North Atlantic Treaty Organization] invasion and destruction of Libya (and
the assassination of the defiant Muammar al-Qaddafi), the criminal blockade of Venezuela, and the stoking of Nigeria’s civil war. Simply because an explanation is
obvious does not mean it is untrue.
The biggest danger, though, in the drift of ecocriticism is not so much its depoliticization at the hands of the overly “literary” or its
colonization by American studies, but an apocalyptic anti-humanism that considers all members of our species the insouciant

culprits of planetary decline rather than agribusiness, logging companies, weapons manufacturers,
chemical plants, and – of course – the oil industry. “We” do not destroy the planet, even if we currently drive cars, fly in airplanes, put our laundry
in dryers, or take long showers. Capitalism has every incentive not to address ecological catastrophe, and every motive to exacerbate it. So long as this
criminal operation is allowed to continue, the planet will die . I share the analysis of Harrison Fluss and Landon Frimm (2021) in their
Prometheus and Gaia. Apart from offering a remarkable map of every byway of recent eco debates, it observes that Earth-first Gaians, on the one hand, and techno-
enthusiasts, on the other, who think technology will solve all the problems technology created in the first place, are two sides of the same misanthropic coin.
Humanism, on the other hand – with its sense of having an active , ethical choice – offers a much better
alternative . The only one, in fact.

Radical humanism should be the horizon that guides your decision—voting aff turns
away from it.
Banerjee, 20—assistant professor in the program on Science, Technology, and Society at MIT
(Dwaipayan, “Anthropology’s Reckoning with Radical Humanism,” Anthropology Now, Volume 12,
Issue 3, 2020, pp. 50-55, dml)
If we are turning toward radical humanism , we must be clear that we are turning toward it again , in the wake of its spectacular

failures and its defeat by the postwar liberal settlement. In a now famous essay, Edward Said posed the question of how theories
travel : “What happens to it when, in different circumstances and for new reasons, it is used again and, in still
more different circumstances, again ? What can this tell us about theory itself—its limits, its possibilities, its inherent problem …?”Footnote6 This question

resonates here: radical humanism has recurred as a response to varying sets of circumstances; what does it mean to
resurrect it now? And why resurrect it at all, when we seem to be approaching something like a resolution to the question—a present anthropological reaching toward
the post- or even trans-human?
In the heady years right after decolonization, anticolonial insurgents asked themselves the same question. This curious fact never ceases to amaze me: at
precisely the moment the violence of humanism had been revealed and defeated , those that felt its
violence sought its resurrection . The list is long: C.L.R. James, W.E.B. Dubois, Frantz Fanon, B.R. Ambedkar, Aimé Césaire, Léopold Senghor,
Edward Said, M.N. Roy, to name just a few. Roy was perhaps the first to have self-consciously developed the term and its philosophy. After having founded both the
Mexican Communist Party and instrumental in the development of the Indian Communist Party, late in his life Roy turned away from his communist roots and toward
a theory of radical humanism.
Roy began a trend that has so far been underthought: radical humanism often appeared at the end of its proponents’ lives: as fighter-philosophers (Fanon and Said are
other prominent examples) chose to leave us with this philosophical legacy. One might easily critique radical humanism as a philosophy that seems to appear in old
age, as revolutionary fire gives way to reconciliation, often in view of the failures of socialist projects in the mid-20th century. But it is precisely against this idea of
old age as a moment of settlement that Said militates. He asks, what if we abandon the common notion that old age demands a spirit of resolution and compromise and
instead take the opposite approach: that artistic lateness disavows easy resolutions in favor of intransigence, difficulty and contradiction? To me, it is precisely this
intransigence that gives radical humanism its force : its unrelenting commitment to a humanism rethought
from the point of view of its greatest failure , even as all hope for the concept seems lost . It is this same
intransigence that draws me to Uberoi: to rethink European modernity from the point of view of the colonized ,

trying to rescue Europe from itself even as it hurtles down the path of its own unraveling .
To be more specific about this politics of refusal, let me turn to one of its strongest exponents—Frantz Fanon. Fanon’s work makes clear that this new humanism was
not to a slight tweak to its older form: the psychic violence of colonization did not leave the colonized with any easy entryway to the category of humanness. Put
differently, by the mid-20th century, liberal humanism had already succeeded in its project to dehumanize the colonized; decolonization had come too late. The
victory of liberal humanism was that it had set the terms of what now counted as human—calmness without anger,
rationality freed from belief, progress without reparation. Fanon thus recognized that the struggle had not ended but only just begun. What then did a radical
reconceptualization of humanism look like for Fanon? Radical
humanism was a way to continue the anticolonial project
without simply adopting the false humanisms of contemporaneous socialism (the French and Russian, in this case). Fanon’s decolonial
humanism demanded a permanent revolution , one that would never fall prey to the false belief that the
project of humanism could be resolved . Radical humanism then was not a method but a political horizon
and a revolutionary-critical practice .
To return then to the idea of whether theories appropriate to one place can and should travel, and what they reveal in their traveling. As Gili Kilger puts it, this new
humanism “did not involve an effort to expand black subjectivity into a wider, universal subject, which would have done no more than reinstate this new subject on
the grounds of the old.”Footnote7 Rather, it was predicated on overturning a humanism founded by the relation between reason, and those without the capacity to be
reasonable. Thought through in this sense, the claim that “Black lives matter” asks of us to not fight for an inclusion of Blackness in an
expanded category of human; rather, it asks for a permanent revolution that acknowledges the never-
finished power of racism, nationalism and casteism.
Framework
FW---2AC
3. PREDICTABILITY---The ‘United States federal government’ is the three federal
branches. That’s the agent of the resolution.
U.S. Legal ’16 [U.S. Legal; 2016; Organization offering legal assistance and attorney access; U.S.
Legal, “United States Federal Government Law and Legal Definition,”
https://definitions.uslegal.com/u/united-states-federal-government/]
The United States Federal Government is established by the US Constitution. The Federal Government shares sovereignty over the United
States with the individual governments of the States of US. The Federal government has three branches: i) the legislature , which is
the US Congress, ii) Executive , comprised of the President and Vice president of the US and iii) Judiciary . The US
Constitution prescribes a system of separation of powers and ‘checks and balances’ for the smooth functioning of all the three branches of the
Federal Government. The US Constitution
limits the powers of the Federal Government to the powers assigned to it; all
powers not expressly assigned to the Federal Government are reserved to the States or to the people.

4. Debating the consequences of government action develops detailed advocacy and research skills.
Dybvig and Iverson 00 [Kristin Chisholm Dybvig, Ph.D. in Interpersonal Communication/ Research
Methods from Arizona State University; Joel Iverson, Associate Professor of Communication at the University of
Montana, Ph.D. in Communication from Arizona State University; Debate
Central, “Can Cutting Cards Carve into Our Personal Lives: An Analysis of Debate Research on Personal
Advocacy,” https://debate.uvm.edu/dybvigiverson1000.html]
Mitchell (1998) provides a thorough examination of the pedagogical implication for academic debate. Although Mitchell acknowledges that
debate provides preparation for participation in democracy, limiting debate to a laboratory where students practice their
skill for future participation is criticized. Mitchell contends:
For students and teachers of argumentation, the heightened salience of this question should signal the danger that critical thinking and oral
advocacy skills alone may not be sufficient for citizens to assert their voices in public deliberation. (p. 45)
Mitchell contends that the laboratory style setting creates barriers to other spheres, creates a "sense of detachment" and causes debaters to see
research from the role of spectators. Mitchell further calls for " argumentative agency [which] involves the capacity to
contextualize and employ the skills and strategies of argumentative discourse in fields of social action , especially wider
spheres of public deliberation" (p. 45). Although we agree with Mitchell that debate can be an even greater instrument of empowerment for
students, we are more interested in examining the impact of the intermediary step of research. In each of Mitchell's examples of debaters finding
creative avenues for agency, there had to be a motivation to act. It is our contention that the research conducted for competition is
a major catalyst to propel their action, change their opinions, and to provide a greater depth of understanding of the issues
involved.
The level of research involved in debate creates an in-depth understanding of issues. The level of research conducted during a year of debate is
quite extensive. Goodman (1993) references a Chronicle of Higher Education article that estimated "the level and extent of research required of
the average college debater for each topic is equivalent to the amount of research required for a Master's Thesis" (cited in Mitchell, 1998, p. 55).
With this extensive quantity of research, debaters attain a high level of investigation and (presumably) understanding of a topic. As a result of this
level of understanding, debaters become knowledgeable citizens who are further empowered to make informed opinions and energized to take
action. Research helps to educate students (and coaches) about the state of the world.
Without the guidance of a debate topic , how many students would do in-depth research on female
genital mutilation in Africa, or United Nations sanctions on Iraq ? The competitive nature of policy
debate provides an impetus for students to research the topics that they are going to debate. This in turn fuels students’
awareness of issues that go beyond their front doors . Advocacy flows from this increased awareness. Reading books and articles about
the suffering of people thousands of miles away or right in our own communities drives people to become involved in the community at large.
Research has also focused on how debate prepares us for life in the public sphere. Issues that we discuss in debate have found their way onto the
national policy stage, and training in intercollegiate debate makes us good public advocates . The public sphere is the arena in which
we all must participate to be active citizens. Even after we leave debate, the skills that we have gained should help us to be better advocates and
citizens. Research has looked at how debate impacts education (Matlon and Keele 1984), legal training (Parkinson, Gisler and Pelias 1983,
Nobles 1985) and behavioral traits (McGlone 1974, Colbert 1994). These works illustrate the impact that public debate has on students as they
prepare to enter the public sphere.
The debaters who take active roles such as protesting sanctions were probably not actively engaged in the issue
until their research drew them into the topic. Furthermore, the process of intense research for debate may actually change the
positions debaters hold. Since debaters typically enter into a topic with only cursory (if any) knowledge of the issue, the research process provides
exposure to issues that were previously unknown. Exposure to the literature on a topic can create, reinforce or alter an individual's opinions.
Before learning of the School for the America's, having an opinion of the place is impossible. After hearing about the systematic training of
torturers and oppressors in a debate round and reading the research, an opinion of the "school" was developed. In this manner, exposure to
debate research as the person finding the evidence, hearing it as the opponent in a debate round (or as judge) acts as an
initial spark of awareness on an issue. This process of discovery seems to have a similar impact to watching an investigative news
report.
Mitchell claimed that debate could be more than it was traditionally seen as, that it could be a catalyst to empower people to act in the social
arena. We surmise that there is a step in between the debate and the action. The intermediary step where people are inspired to agency is based on
the research that they do. If students are compelled to act, research is a main factor in compelling them to do so. Even if students are not
compelled to take direct action, research still changes opinions and attitudes.
Research often compels students to take action in the social arena. Debate topics guide students in a direction that allows them to explore what is
going on in the world. Last year the college policy debate topic was,
Resolved: That the United States Federal Government should adopt a policy of constructive engagement, including the immediate
removal of all or nearly all economic sanctions, with the government(s) of one or more of the following nation-states: Cuba, Iran, Iraq, Syria,
North Korea.
This topic spurred quite a bit of activism on the college debate circuit. Many students became actively involved in protesting for the
removal of sanctions from at least one of the topic countries. The college listserve was
used to rally people in support of various
movements to remove sanctions on both Iraq and Cuba. These messages were posted after the research on the topic began. While this topic
did not lend itself to activism beyond rallying the government, other topics have allowed students to take their beliefs outside of the laboratory
and into action.
In addition to creating awareness, the research process can also reinforce or alter opinions. By discovering new information in the research
process, people can question their current assumptions and perhaps formulate a more informed opinion. One example comes from a summer
debate class for children of Migrant workers in North Dakota (Iverson, 1999). The Junior High aged students chose to debate the adoption of
Spanish as an official language in the U.S. Many students expressed their concern that they could not argue effectively against the proposed
change because it was a "truism." They were wholly in favor of Spanish as an official language. After researching the topic throughout their six
week course, many realized much more was involved in adopting an official language and that they did not "speak 'pure' Spanish or English, but
speak a unique dialect and hybrid" (Iverson, p. 3). At the end of the class many students became opposed to adopting Spanish as an official
language, but found other ways Spanish should be integrated into American culture. Without research, these students would have maintained their
opinions and not enhanced their knowledge of the issue. The students who maintained support of Spanish as an official language were better
informed and thus also more capable of articulating support for their beliefs.
The examples of debate and research impacting the opinions and actions of debaters indicate the strong potential for a direct relationship between
debate research and personal advocacy. However, the debate community has not created a new sea of activists immersing this planet in waves of
protest and political action. The level of influence debate research has on people needs further exploration. Also, the process of research needs to
be more fully explored in order to understand if and why researching for the competitive activity of debate generates more interest than research
for other purposes such as classroom projects.
Since parliamentary debate does not involve research into a single topic, it can provide an important reference point for examining the impact of
research in other forms of debate. Based upon limited conversations with competitors and coaches as well as some direct coaching and judging
experience in parliamentary debate, parliamentary forms of debate has not seen an increase in activism on the part of debaters in the United
States. Although some coaches require research in order to find examples and to stay updated on current events, the basic principle of this
research is to have a commonsense level of understanding (Venette, 1998). As the NPDA website explains, "the reader is encouraged to be well-
read in current events, as well as history, philosophy, etc. Remember: the realm of knowledge is that of a 'well-read college student'" (NPDA
Homepage, http://www.bethel.edu/Majors/Communication/npda/faq2.html). The focus of research is breadth, not depth. In fact, in-depth research
into one topic for parliamentary debate would seem to be counterproductive. Every round has a different resolution and for APDA, at least, those
resolutions are generally written so they are open to a wide array of case examples. So, developing too narrow of a focus could be competitively
fatal. However, research is apparently increasing for parliamentary teams as reports of "stock cases" used by teams for numerous rounds have
recently appeared. One coach did state that a perceived "stock case" by one team pushed his debaters to research the topic of AIDS in Africa in
order to be equally knowledgeable in that case. Interestingly, the coach also stated that some of their research in preparation for parliamentary
debate was affecting the opinions and attitudes of the debaters on the team.
Not all debate research appears to generate personal advocacy and challenge peoples' assumptions. Debaters must switch sides, so they must
inevitably debate against various cases. While this may seem to be inconsistent with advocacy, supporting and researching both
sides of an argument actually created stronger advocates . Not only did debaters learn both sides of an argument, so
that they could defend their positions against attack , they also learned the nuances of each position. Learning and the
intricate nature of various policy proposals helps debaters to strengthen their own stance on issues.
Non-Resolutional Procedurals---2AC
Imagining exaggerated extinction impacts and solutions to them challenges fatalism
and numbing.
van Munster, 23—Senior Researcher and Coordinator of Research Area on Peace, Risk and Violence,
Danish Institute for International Studies (Rens, “Nuclear weapons, existentialism, and International
Relations: Anders, Ballard, and the human condition in the age of extinction,” Review of International
Studies, First View, pp. 1-19, DOI: 10.1017/S0260210522000638, dml)
Anders and Ballard agreed with classical Realists that the advent of nuclear weapons marked an epochal change, and both argued that the atomic
bomb disrupted linear or progressive forms of temporality. With the bombing of Hiroshima, Anders argued, the human species had

entered a new and final stage of its existence: ‘On August 6, 1945, the Day of Hiroshima, a New Age began: the age in which at any given moment we
have the power to transform any given place on our planet, and even our planet itself, into a Hiroshima.’Footnote 72 Hiroshima now
signalled a ‘worldwide condition’ from which there was no escape, neither spatially nor temporally. This is ‘“The Last Age”’, he concluded, ‘for there is no

possibility that its “differentia specifica”, the possibility of our self-extinction , can ever end – but by the end
itself.’Footnote 73 Ballard likewise referred to the postwar period as the ‘Pre-Third’, a kind of protracted purgatory where the meaning of human existence was
graspable solely in terms of what would come after: World War Three.
In fact, Ballard considered the idea of the future as anything else but catastrophic deeply anachronous: ‘Probably the first casualty of Hiroshima and Nagasaki was the
concept of the future. I think the future died some time in the fifties. Maybe with the explosion of the hydrogen bomb.’Footnote 74 To Ballard, the
death of
the future also heralded the death of the American Dream and the belief in a better tomorrow . The science-
infused dreams of yesterday had mutated into today's technological nightmares. To Anders, too, a catastrophic future

was ‘already throwing a shadow on the present’ challenging the ubiquitous-but-unwarranted faith in never-ending
progress , a belief that foreclosed any conception of endings.Footnote 75 We go on, he lamented, under an illusion of eternity.Footnote 76 In short, Anders and
Ballard developed new temporal categories to capture human existence as suspended in time, a new condition in which previous distinctions – between present and
future, peace and war, continuity and finality, utopianism and conservatism – had collapsed.
Despite their rendition of postwar being as a being-towards-extinction , neither Anders nor Ballard fully succumbed to
fatalism . Although both towards the end of their lives grew increasingly pessimistic about the conditions for transformation and individual
freedom, their respective framings of the future as the final stop on the road to nothingness remained firmly based on the hope that a

different world was possible. Both men vested such hope in the transformative powers of the individual
imagination . Footnote 77 Ballard, who viewed himself as a detached scientist exploring extreme hypotheses and issuing warnings, insisted that his fiction was
‘affirmative’ and ‘positive’ rather than the work of ‘a decadent, celebrating the pleasures of the evening light’.Footnote 78 He was adamant that his message should
not be mixed up with the décor in which it was delivered. While often set in bleak man-made landscapes, he viewed his fiction as tales of psychic fulfillment,
metamorphosis, and transcendence. While Ballard was sceptical about sociopolitical utopianism and the emancipatory
potential of science and technology, he considered the ‘ inner space ’ as a site of ‘ utopian desire ’, where
individuals could obtain some sense of psychic fulfillment . Anders pursued a similar analytical strategy in his writings, which he
consciously categorised as philosophical exaggerations . Footnote 79 Exaggeration functioned like a microscope , he argued, and

could bring into view dimensions of reality that would otherwise remain hidden . The false reality
imprinted upon the world by the media landscape also framed people's understanding of nuclear weapons and
concealed their true, horrific nature. Time and again, he admonished his readers not to tolerate the application of ‘honest sounding, “keep smiling”
labels’ to the H-bomb.Footnote 80 Exaggeration, Anders suggested, could instill a ‘ courage to fear ’ in human beings, and he

implored his readers to contemplate the real possibility of the end of the world in their mind as a first step

towards preventing it from happening in real life .


State Good---1NC
Law can be remade in ways that challenge violence against Black women—reject
singular histories of the state.
Nash, 19—Professor of Gender, Sexuality, and Feminist Studies at Duke University (Jennifer, “love in
the time of death,” Black Feminism Reimagined: After Intersectionality, Chapter 4, 121-126, dml)
This book began with substantial engagement with intersectionality’s origin stories, examining how the question of where the analytic came
from, who coined it, and who deserves “credit” for its rise and circulation have come to predominate in black feminist scholarship. Curiously,
though, none of these widely circulating origin stories contend with intersectionality’s connections to the juridical, or think deeply about
intersectionality as a legal project. Though this book eschews simple origin stories that presume that intersectionality has a
singular history , in this section, I advocate for remembering intersectionality’s connections to critical race theory, and thus its intimate
relationship with remaking law . I invest in this project because intersectionality has been swept into a larger black feminist
conversation that presumes the violence of the juridical, ignoring both intersectionality’s loving investment in the
juridical and the juridical as a potential site of loving practice . Put differently, in this section, I emphasize intersectionality’s
location in critical race theory, in Left legal projects, to move beyond the now knee-jerk Left (and black feminist) sense that
radical and transgressive projects are necessarily antistate . In place of this now familiar political terrain, I seek to ask
different questions: Is it simply collusion or “ cruel optimism ” for black feminists to seek engagement with the
state ?31 Can we imagine black feminist engagements with the state as taking forms other than seeking
redress and demanding visibility ? Are there ways to imagine black feminist legal engagement that circumvent the uncomfortable
and problematic position of being “at home with the law”? How can black feminists reimagine law as a site for staging
productive intimacies and enacting radical vulnerabilities ?
In its juridical iteration, intersectionality emerged in a moment where critical race theorists offered analytical tools to upend prevailing fictions of
law’s objectivity, to reveal the quotidian nature of racism and sexism, and to
argue for fundamental transformations in legal
pedagogy. Critical race theory, then, was born of a sustained attention to law’s failures, even as it contained —at
times—certain kinds of faith in law’s potentiality and promise . Critical race scholars were a post–Brown v. Board of
Education generation who witnessed the end of the Warren court’s promises of integration and inclusion. They saw affirmative action rolled
back, transformed from a substantive remedy for past and ongoing discrimination to a promise of “diversity” to benefit white students who would
be changed into global citizens ready for corporate employment thanks to their “exposure” to socalled racial difference.32 They witnessed the
ratcheting up of standards for proving employment discrimination from racially disparate effects to discriminatory intent, effectively making it
harder for minoritarian plaintiffs to prevail in discrimination suits. They emphatically asked, then, whether the goal of antiracist legal scholars
should be inclusion in white institutions or whether it should be, for example, the creation of robustly funded and supported black institutions.
They interrogated whether the Warren court’s landmark decision in Brown would have better served its black plaintiffs if it equally funded black
schools, rather than championing desegregation and then mandating integration at “all deliberate speed.” They debated whether affirmative action
should be supported if the only logic to support it is “diversity,” where students of color provide a pedagogical value for white students. Critical
race theory, then, was never an embrace of an ethic of inclusion, or even a form of advocacy for new forms of redress. Instead, it was undergirded
by an investment in revealing that racial progress was the result of “interest convergence” rather than a genuine investment in antisubordination,
and by a fundamental belief that law would look and feel different if it “looked to the bottom.”33
While critical race theorists offered critical interrogations of law’s imagined progress, treating it as evidence of US self-interest
rather than a genuine investment in racial redress, they also routinely offered ways of imagining law otherwise , refashioning
antidiscrimination law, conceptions of evidence, property, and contract. They imagined a form of law that eschewed color blindness and argued
that any legal regime that sought to contend with American racial violence had to be deeply color-conscious to exact meaningful remedies.
They advanced new methods — narrative , parable , allegory , speculative fiction , storytelling —in an effort to
jam the fictions of objectivity and neutrality and to expose that law is itself a racial project, never removed from the racial
regimes it purports to disrupt. In other words, they sought to use their locations in the legal academy and in the legal
profession to radically remake law , to push the boundaries of how legal doctrine could be written , imagined ,
and enacted . They aspired to make law into something unrecognizable and unimaginable , to push at its very
parameters in the pursuit of a “jurisprudence of generosity.”34
My entry point for thinking through law as a site of black feminist love-politics is through the work of Patricia J. Williams. Her book The
Alchemy of Race and Rights is complex in its form and its argument—it is memoir, “diary,” legal treatise, and critical theory at once. Williams
presents herself as professor, consumer, daughter, granddaughter, train rider, and “crazy” black woman exhausted from the ordinary and
spectacular raced and gendered brutalities of American life and the project of teaching law at a historically white law school. The project, then, is
a rumination on the felt life of racial and gendered violence, and a critical analysis of the myriad spaces where this violence unfolds, from the
media onslaught against Tawana Brawley to the experiences of being a black female faculty member at a law school.
Williams’s inquiry, though, is not simply about documenting the ubiquity of racial and gendered violence but also about engaging and describing
the lived experience of racialized and gendered vulnerability, what she terms “spirit murder.” For Williams, “spirit murder” is the psychic and
spiritual wounding that unfolds as a result of racial violence. “Spirit murder” describes the wounds left on the flesh, psyche, and even soul of
those who experience violence and the wounds, often invisible, that haunt perpetrators of violence, including a willingness to accept, and to
render unseen, those who are dispossessed. Williams’s task, then, is to imagine what law could look and feel like if it accounted for “spirit
murder,” a form of violence that she argues includes “cultural obliteration, prostitution, abandonment of the elderly and the homeless, and
genocide. . . . What I call spirit murder—disregard for others whose lives qualitatively depend on our regard—is that it produces a system of
formalized distortions of thought.”35 Williams argues that “we need to elevate spirit murder to the conceptual—if not punitive— level of a
capital moral offense. . . . We need to eradicate its numbing pathology before it wipes out what precious little humanity we have left.”36
Williams’s conception of “ spirit murder ” imagines law’s capacity to remedy forms of violence against the
psyche and soul, a terrain that has been unimaginable to law precisely because of its commitment to remedying only
visible and legible harms, and law’s ability to be mobilized “conceptually”— but not punitively—to respond to violence. In other words, the
endeavor of the text is to imagine a legal project capacious and creative enough to attend to what it has always
ignored: the violence inflicted on the psyche . Williams effectively invites us to imagine how we might feel differently
toward each other, and toward law itself, if we had legal obligations toward mutual regard , if we knew that law
took seriously spirit murder.
If Williams seeks to use law to exceed what it aspires to do, to respond to the “cultural cancer” of spirit murder , her book also
contains a resounding, and even surprising, redemption of rights as a key strategy for reforming law. An embrace of rights might sound like
a deeply conventional strategy, mobilizing law to do what it has long claimed to do on behalf of racialized and gendered minorities:
confer rights. Despite her lengthy engagement with state violence, her exacting critique of how law permits rather than redresses spirit murder,
Williams ends not with an abandonment of the state but with a deep affection for what rights could
accomplish . She writes:
The task is to expand private property rights into a conception of civil rights, into the right to expect civility from
others. . . . Instead, society must give them [rights] away. Unlock them from reification by giving them to slaves. Give them to trees. Give them
to cows. Give them to history. Give them to rivers and rocks. Give to all of society’s objects and untouchables the rights of privacy ,
integrity and self-assertion ; give them distance and respect . Flood them with the animating spirit that
rights mythology fires in this country’s most oppressed psyches, and wash away the shroud of inanimate-object-
status , so that we may say not that we own gold but that a luminous golden spirit owns us.37
If critical legal studies called for the abandonment of investment in rights, treating rights as relatively unsuccessful in securing social change and
as promoting problematic conceptions of individualism, Williams makes a plea for a dramatic expansion of rights and a surprising
reconceptualization of the labor of rights. Rights, she argues, should not be the purview of those who can explicitly and legibly name harm.
Cows, history, and rocks should have rights, including rights to “privacy, integrity and self-assertion.” Rights should not be “reified” but
generously bestowed upon everyone and everything; rights should not be used to shore up ideas of property and ownership, to allow us to claim
that “we own gold,” but instead to ensure a deep spiritual connection between us. In so doing, law
could remake “ society ,”
transforming its investments in rights as something that protects property holders into rights as something
that can ensure our mutual accountability , and reminds us of the “luminous golden spirit [that] owns us” all.
It is easy to read Williams as optimistically rehabilitating rights from the critical legal studies’ critique of rights, and problematically investing in
precisely the doctrinal formulation that has consistently failed minoritarian subjects. In this reading, Williams is imagined as paradoxically
investing in precisely the site of violence she carefully documents with far too little explanation for how rights can circumvent the problems of
racism and sexism she delineates. Yet I read Williams’s visionary account of rights differently. For her, law can be mobilized not to
produce new causes of action , to simply make visible new wounded subjects who can make appeals to
redress, but to imagine new and radical vulnerabilities . As it is currently structured , property deeply
organizes sociality , and law operates to protect property from trespass and theft. Thus, law operates to create categories like
property holder (owner) and trespasser (thief), and to organize the social world around proximities to ownership. Williams uses her
capacious conception of rights to imagine another way of organizing sociality: around vulnerability. Indeed,
Williams asks: How are we bound up with others? What is our responsibility to ensuring the vital “ spirit ” of others, and
to demanding the protection of our own “spirits”? What happens when we harm things that can’t articulate injuries (trees, rocks,
rivers) but can only make that injury visible and oftentimes in ways that we refuse to recognize, or that might even make that injury visible in
another time, in decades or centuries when we are not even here to be accountable? What happens when we take responsibility for our capacity to
wound and for the histories of wounding and violence that have unfolded, often in our names? And what happens when law becomes a
critical tool in making visible mutual vulnerability , in insisting that we recognize that we can “ undo each
other ,” and in demanding that we take seriously our indebtedness to each other? For Williams, then, expanding
rights becomes a strategy for transforming law to be a space that enshrines a vision of interdependence
and shared vulnerability .
I begin my investigation of the possibility of rooting black feminist lovepolitics in law with Williams’s visionary work because it reveals the
potential of black feminist legal scholarship that fundamentally reorients law around ethics of vulnerability. This is work that expresses
a
fundamental faith in law’s capacity to perform different kinds of justice work, even as it recognizes how
law is often mobilized as an agent of inequality and injustice . Like Williams’s radical remaking of rights, Crenshaw’s
conception of intersectionality tugs at the seams of law, working within its confines to radically unleash its transformative capacity. As I
explained earlier in the book, intersectionality is primarily remembered for its now widely circulating accident metaphor, where discrimination is
imagined as traffic flowing through an intersection. It can move in one direction, another direction, or both, and an “accident” can occur on either
street or in the intersection. According to this logic, discrimination can be race-based, gender-based, or race-and-gender-based, yet the possibility
of raced and gendered discrimination is rendered impossible by antidiscrimination law that actively refuses to account for this form of violence.
As Crenshaw notes, “Judicial decisions which premise intersectional relief on a showing that Black women are specifically recognized as a class
are analogous to a doctor’s decision at the scene of an accident to treat an accident victim only if the injury is recognized by medical
insurance.”38 Intersectionality, then, spotlights law’s refusal to see black women’s race- and gender-based injuries.
Many have envisioned intersectionality’s mandate as the insertion of black women into existing antidiscrimination law, as a call for
antidiscrimination law to abandon its race or gender logic and instead embrace a race and gender logic. Yet, as Crenshaw’s second metaphor
reveals, antidiscrimination law is constructed around leaving the multiply marginalized in the proverbial basement. Put differently,
antidiscrimination law itself is constructed around remedying only certain forms of discriminatory activity and is designed to refuse to recognize
and redress discrimination against the most vulnerable. Intersectionality, then, is not a call for inserting black women into a preexisting legal
regime, precisely because that regime is designed to refuse to see black women. Instead, it is a tactic of making visible black women’s status as
witnesses who can name and describe the basement, which is not merely a social location but a space produced by law’s doctrinal failures.
If intersectionality embraces black women’s social location as a juridical starting point , it also advocates for
tailoring law to address injuries in particular ways . In other words, it offers a vision of law that is rooted in
flexibility and customization , in responding to particular lived experience . In her second article on intersectionality,
“Mapping the Margins,” Crenshaw reveals not only that law ignores black women’s experiences of injury but also that intersectionality
compels state interventions that more appropriately respond to black women’s particular experiences of injury.
In the context of domestic violence, for example, Crenshaw shows that meaningful legal intervention requires an attention to race, gender, class,
and immigration status, and thus state intervention might need to take different and multiple forms to produce substantive justice.
Intersectionality, then, requires a commitment to witnessing, to empathic looking, that responds not with the messy bluntness that law so often
deploys in the name of fairness and uniformity. Instead, intersectionality calls for imagining legal action that can be
individualized , intimate , and rooted in lived experience . This work has been expanded by other scholars, especially those
working in the context of domestic violence law, including Linda Mills and Elizabeth Schneider, who have considered how mandatory arrest/no-
drop policies ignore the particular experiences of women of color who may have to weigh their own distrust of the state, the necessity of a
partner’s income to survive, and the potential stigma, shame, or violence of calling law enforcement against a desire for bodily integrity and
safety. As Mills suggests, a vision of legal intervention that is survivor-centered and survivor-guided, that recognizes the differently situatedness
of each subject who engages with the state, is the only way to ensure justice, particularly in the context of intimate life. Similarly, Crenshaw’s
work asks for law to witness violence as it unfolds and to respond contextually, to recognize that uniformity might not be the hallmark of fairness
and equity. Ultimately, Crenshaw’s vision of the demands of intersectionality in the context of violence has underscored the importance of law as
a tool that sees, witnesses, and even willingly inhabits the social locations of the multiply marginalized.
If it
is easy to dismiss Williams’s embrace of rights as overly optimistic in the face of ample description of law’s failures, it is all too
easy to treat Crenshaw as an inclusionist, one who imagines intersectionality as
a strategy that grants black women entry into
the problematic logics of antidiscrimination law . Yet in my reading of intersectionality, Crenshaw’s vision is not one of
including black women in existing legal doctrine, or simply expanding legal doctrine to make space for
black women’s particular experiences of discrimination. Indeed, Crenshaw ends “Demarginalizing the Intersection” with a
personal account that underscores her deep commitment to unsettling inclusionary politics. She describes an experience in which, as a law school
student, she was invited to a prestigious Harvard men’s club, one that was formerly all white, to celebrate the end of first-year exams. Upon her
arrival, her friend—a member of the club—quietly mentioned that he had forgotten to share an important detail: Crenshaw would have to enter
the club through the back door because she was a woman. She and her friends had long assumed that it was their blackness that would bar them
from the club, but it was her womanhood that required her to use the back door if she wanted entry into the club. Crenshaw ruminates on this
experience as emblematic of the importance of intersectional analysis, noting that “this story does reflect a markedly decreased political and
emotional vigilance toward barriers to Black women’s enjoyment of privileges that have been won on the basis of race but continue to be denied
on the basis of sex.”39 Yet what interests me about this account, and how it animates the end of the article, which borrows from Paula Giddings’s
work to conclude “when they enter, we all enter,” is that intersectionality is not a tool Crenshaw uses to advocate access and entry. In other
words, she does not suggest that an intersectional analysis demands her inclusion—and all black women’s inclusion—in a structure constructed
around black women’s exclusion. Instead, the story reveals that battles for entry are always imperfect , exclusionary , and
problematic . To be granted entry to a space because of blackness and to be barred entry to that same space because of womanhood
speaks to the flimsiness of entry as a form of politics, precisely because inclusion always hinges on a system of exclusion, hierarchy, and
valuation. Ultimately, intersectionality reveals
both the limits of juridical projects and the possibility of mobilizing
law to exceed law’s own critical desires. In Crenshaw’s hands, intersectionality invites a legal project that takes seriously black
women’s witnessing (and black women as witnesses, something crucial in a juridical system that continues to disbelieve black women), that
invites an attention to a literal, material space—the intersection, the basement—that black women know, experience, and inhabit.
In this section, I ask what might happen if black feminists treated intersectionality’s legal roots not as an embarrassment but as a crucial site of
the analytic’s transformative potential . Indeed, in reading Crenshaw’s conception of intersectionality alongside Williams’s work
on rights, and in emphasizing intersectionality’s roots in critical race theory, I treat intersectionality as an analytic that radically occupies
law, takes hold of legal doctrine and refuses its conceptions of neutrality and uniformity as performance of justice. It is, then, a
strategy of demanding that law move otherwise , that it center witnessing and vulnerability , that it encourage
forms of relationality and accountability that jettison logics of contract and property . My reading insists that
black feminists refuse well-rehearsed dismissals of intersectionality as an inclusionary project (dismissals that are all the
more possible to rehearse because this is how intersectionality so often circulates in the university) that seeks to insert black
women’s bodies into otherwise problematic structures, and instead advocates treating intersectionality’s juridical project as
the very heart of its radical political agenda. It is intersectionality’s capacity to index vulnerability and witnessing, to imagine legal
doctrine as centering those ethics (even as law might refuse those efforts), that makes intersectionality a space that
resonates deeply with black feminism’s ongoing efforts to construct a political agenda rooted in love.
Risk and Promise
What if we refuse d the lure of negative affects , the tendency to grieve and mourn black feminism and its
analytics? What if we reject ed both the notion that blackness is synonymous with death and the idea that black
feminism is dead or dying? My call for this rejection is not meant as a wholesale rejection of afropessimism, and its
attendant affects of grief , loss , mourning , and despair . Nor is my plea here rooted in a sense that negative affects are per se
problematic; indeed, the work of a host of scholars including Ann Cvetkovich, Heather Love, and Sianne Ngai has been to reclaim negative
affects and to mine these feelings for their productive, world-making potential. Instead, my call is for us to consider why the position of death has
become so alluring in this moment, particularly for black feminists who have made a practice of lamenting the slow and steady demise of our
tradition. This chapter, then, aspires to perform letting go by suggesting another way to feel black feminism, one rooted in love rather than
territoriality and defensiveness. Indeed, I argue that remembering intersectionality’s
juridical orientations , and recovering
them rather than eschewing them (even in a moment where law is treated as the paradigmatic site of
antiblack violence ), might allow black feminists to encounter the broad sweep of our transformative call for love-politics. In so doing, I
emphasize that law might be a space of black women’s survival rather than simply the site of black
women’s wounding . Moreover, I underscore that a space that black women did not author , and that was created
largely with the interest in enshrining black women as property rather than as subjects , might become a site
that allows us to imagine other ways of being and feeling black feminist. As I argue, black feminism’s long-standing
commitment to love-politics, to ethics of mutual vulnerability and witnessing, is echoed by critical race feminist legal
practices , including Williams’s expansive investment in rights and Crenshaw’s engagement with intersectionality as a critique of
inclusionary politics. What both share are demands that law imagine itself otherwise , that it unfold and move in ways
that might seem contrary to its fundamental project. These are demands that law acknowledge the failures and short-
sightedness of inclusion and redress projects, and that law instead imagine its radical work to be an embrace of ideas of
intimacy , proximity , vulnerability , and mutual regard . Reanimating black feminist engagement with law
is particularly important because it upends the long-standing tenet that black women’s freedom comes
exclusively through spaces that we self-authored , and, correlatively, that sites historically constructed to
secure our status as property can never become locations where we stage our liberation . My inquiry shows
otherwise and argues that freedom and radical black feminist politics can be rooted in myriad sites , including
spaces that have been rife with our own subordination . Indeed, my engagement with law seeks to rescue law’s status of death in
black studies, tracing how it can be a location of radical freedom-dreaming and visionary world-making rather
than simply a death-world and the paradigmatic site of antiblackness .
DA
A global renewable transition is inevitable BUT won’t occur fast enough to avoid
warming.
Ahmed 23, D.Phil., International Relations, Sussex University, Director of the Futures Lab at Unitas
Communications (Nafeez Ahmed, March 29, 2023, “America’s Fossil Fuel Economy is Heading for
Collapse – It Signals the End of the Oil Age,” Resilience,
https://www.resilience.org/stories/2023-03-29/americas-fossil-fuel-economy-is-heading-for-collapse-it-signals-the-end-of-the-oil-age/)
6. Clean energy transformation will be critical to stabilise the global economy and restore prosperity
The only viable pathway through this crisis will be to accelerate the clean energy transformation focused on
the deployment of exponentially improving technologies which are already scaling because they are cost-competitive
with fossil fuels – namely, solar, wind and batteries. This will lay the groundwork for other potential applications such as e-fuels or
green ammonia from green hydrogen. This transformation is already underway , and provides the opportunity for the
US and others to produce larger quantities of energy at a fraction of the costs of fossil fuels . In Rethinking
Climate Change, a RethinkX report for which I was contributing editor, we found that even in the absence of appropriate policy-
decisions and major institutional barriers, economic factors will inevitably drive incumbent industries to collapse by
2040 as they are replaced by new solar, wind and battery systems. Unfortunately, while this is far faster than conventional
analysts acknowledge, this is not fast enough to avoid dangerous climate change.
7. Oil demand is going to haemorrhage, because the clean energy transformation is now unstoppable
The data examined by RethinkX implies that oil demand is likely to peak far earlier than incumbent energy agencies
predict, and decline far more rapidly following the peak. The RethinkX report suggests that oil demand will likely peak sometime between
2025 and 2030, followed by an escalating drop out to 2040. It’s critical to recognise that the economic drivers of this approaching
decline in oil demand are not confined to disruptive energy technologies, but include the disruption of the transport and
food systems by electric vehicles, autonomous electric vehicles, precision fermentation and cellular agriculture. This also shines a
light on the knife-edge civilisation is moving into this decade: as the incumbent energy industry declines, bringing with it the economy, there is
a risk that it derails the economic factors currently driving the exponential adoption of clean energy
technologies. Which means that we need to accelerate adoption this decade .
8. High volatile oil prices will be followed by crashing oil prices once demand peaks and declines
In the late 2020s , then, we will likely see oil demand begin to peak . This will be exacerbated by the fact that the
global oil industry is going to become economically unsustainable by around 2030 , when it will begin
consuming a quarter of its own energy just to keep pumping out more oil. Even the Journal of Petroleum Technology published by the Society of
Petroleum Engineers is taking this prospect seriously. As oil demand declines , oil prices will also decline . At this point,
assuming the accuracy of the latest EROI studies, the collapse of the global industry will begin to accelerate because once prices go below a
certain point and with EROI levels already unsustainable, the industry will simply become impossible to sustain economically.
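One way to read the "quarter of its own energy" claim above, assuming (an interpretive gloss added here, not stated in the source) that it refers to the industry's aggregate energy return on investment (EROI):

\[
\text{EROI} \;=\; \frac{\text{energy delivered}}{\text{energy consumed in production}}, \qquad
\frac{\text{energy consumed}}{\text{energy delivered}} = 25\% \;\Longrightarrow\; \text{EROI} \approx 4\!:\!1
\]

That is, an industry that must spend a quarter of its gross energy output just to keep producing is operating at an EROI of roughly 4:1.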
What to do?
A big question that emerges here, of course, is how to accelerate the transformation .
The main task is simple: we need to raise awareness of the fact that the end of the Oil Age is fast approaching and will arrive within the next two
decades. This
inevitable arrival will not in itself mean that we avoid dangerous climate change . But it will
mean that oil and gas assets are stranded – they have been vastly overvalued and therefore investments
in them will never incur the projected returns , resulting in trillions of dollars of losses . This is not simply due
to the prospect of climate policy action, but the reality of unfolding technological disruptions of energy, transport and food, and the internal EROI
dynamics within the industry itself.
But while the immediate implications of this for conventional investments in incumbent industries are dire, the wider implications are mind-
blowing. It means that the most lucrative areas of new investments where the
highest potential for returns can be found will
ultimately not be in the dying fossil fuel industries but in exponentially improving technologies which are on
track to transform our societies for the better.
Fossil fuels hinder African economic growth and development.
Alleyne '13 [Trevor; February; Ph.D., Economics, University of Maryland, Researcher of Economic
Development and Monetary Policy; IMF; "Energy Subsidy Reform in Sub-Saharan Africa,"
https://www.imf.org/external/pubs/ft/dp/2013/afr1302.pdf]
SSA = Sub-Saharan Africa
Energy Subsidies and Economic Efficiency
An important aspect of energy subsidies is their impact on economic efficiency, competitiveness and growth, the environment, and
macroeconomic management. Although some countries have rationalized their use of energy subsidies as a way to
enhance competitiveness and the development of certain economic activities, this section argues that energy subsidies in
their various forms can have a detrimental impact on growth and efficiency by misallocating resources ,
reducing investment , creating significant negative externalities and unintended distortions , and
complicating overall macroeconomic management .
Energy subsidies generate welfare deadweight losses. Figure 13 illustrates the deadweight loss from a fuel subsidy of size s , under the
assumption that the supply of fuel is infinitely elastic, as is likely to be the case for a small economy. The subsidy lowers the market price of fuel
and increases the quantity consumed. Note that the increase in consumer surplus (represented by areas A+B) falls short of the subsidy’s fiscal
cost (A+B+C). The difference (area C) is the deadweight loss of the subsidy.
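To make the welfare geometry described above concrete, here is a minimal worked formalization; it assumes a linear demand curve and perfectly elastic supply at the pre-subsidy price p_0, with the linearity being an illustrative assumption added here rather than something the source specifies. A per-unit subsidy s lowers the consumer price to p_0 - s and raises consumption from q_0 to q_1:

\[
\underbrace{s\,q_1}_{\text{fiscal cost } (A+B+C)}
\;-\;
\underbrace{\Bigl[\,s\,q_0 + \tfrac{1}{2}\,s\,(q_1 - q_0)\Bigr]}_{\text{consumer-surplus gain } (A+B)}
\;=\;
\underbrace{\tfrac{1}{2}\,s\,(q_1 - q_0)}_{\text{deadweight loss } (C)}
\]

Under these assumptions, half of the marginal outlay on subsidy-induced consumption, s(q_1 - q_0), benefits no one at all.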
As shown in Figure 13 , the most significant example of misallocated resources owing to energy subsidies is
overconsumption of energy owing to distorted price signals . The extent of overconsumption depends on the elasticity of
demand, for which cross-country empirical estimates vary widely in the literature. 9 Figure 14 offers some evidence of overconsumption in SSA
countries in which consumer fuel prices fall short of appropriate benchmark levels. The figure suggests that lower fuel taxes (and correspondingly
higher fuel subsidies) are associated with higher per capita energy consumption by the road sector in SSA countries. An additional effect comes
from changes in the nature of energy demand: Burke and Nishitateno (2011) and Beresteanu and Li (2011) find that lower gasoline prices induce
consumers to switch to less fuel-efficient vehicles. The dynamic effects of overconsumption should also be considered: it leads to faster depletion
of nonrenewable resources, necessitating higher prices in the future than would otherwise be the case.
Underpricing and subsidies have negative effects on energy supply through various channels . If the cost of
the subsidy is borne by the energy companies , which are forced to consistently sell below cost (including normal returns on
investment), this will affect the entire supply chain , both in the short and the long term. Low profitability
leads to underinvestment and poor maintenance , and this in turn results in persistent shortages , reduced
quality, and deteriorating infrastructure along the entire energy supply chain. Nigeria’s and Ghana’s dilapidated
petroleum refining infrastructure are examples , as is the huge electricity supply shortage across SSA . While proponents
of energy subsidies argue for the need to lower costs to boost competitiveness, inadequate or unreliable supply of electricity has
forced customers across SSA to invest heavily in self-generation , raising the effective cost above the
subsidized price. In many cases, it is the inadequate supply of electricity rather than its price that weighs most heavily on competitiveness.
Indeed, in countries that have undertaken reforms, evidence from surveys shows that customers are willing to pay higher tariffs if better service
can be guaranteed. In the AFR survey, 28 countries had frequent or significant electricity shortages (such as load shedding or blackouts), while
only 4 had infrequent or insignificant electricity shortages. 10 Figure 15 shows that it
takes longer to get an electricity
connection (a form of rationing) in SSA countries in which electricity firms cannot recover their costs .
Even if the cost of subsidies is borne directly by the government , the problem of undersupply and inefficiency
may not be resolved. First, direct government transfers to refineries and power companies (e.g., to compensate for
underpricing) can lead to soft budget constraints and reduce the incentive for restructuring and efficiency
improvements, including efforts to improve collection rates. Refineries, in particular, tend to be subsidized for a variety of
reasons, including job protection and supply security. However, oil refining is a capital-intensive industry (i.e., the number of jobs at
stake is small). Relying on poorly maintained refineries might actually reduce the security of energy supplies.
Second, subsidizing refineries is also sometimes justified as a way to reduce fuel prices, particularly in oil-exporting countries. However,
the small market size in most SSA countries makes it difficult to achieve scale efficiencies . 11 Third, in oil-
exporting countries there might be a misconception about the true cost of producing refined products: crude oil (their main input) is mistakenly
valued at its actual production cost, rather than at its opportunity cost (i.e., its export value). Using the former creates an incentive to run
inefficient refineries cushioned by the wedge between opportunity and production costs. Finally, subsidizing fuel to lower the cost of thermal power–generating plants may reduce the incentive to explore more economical options for
producing power, including regional power pools.
Deficient power infrastructure and shortages dampen economic growth and weaken competitiveness .
Escribano, Guasch, and Peña (2008) find that in most SSA countries infrastructure accounts for 30–60 percent of the
adverse impact on firm productivity , well ahead of factors like red tape and corruption. Moreover, in half the countries analyzed in
that study, power accounted for 40–80 percent of the infrastructure effect. Kojima, Matthews, and Sexsmith (2010) estimate that potential
efficiency gains in electricity generation and distribution could create savings of more than 1 percentage
point of GDP for at least 18 SSA countries . Calderón (2008) uses simulations based on panel data to show that if the
quantity and quality of power infrastructure in all sub-Saharan African countries were improved to that of a
better performer (such as Mauritius), long-term per capita growth rates would be 2 percentage points higher . The
scarcity of power in sub-Saharan Africa also affects the delivery of social services and the quality of life :
without electricity, clinics cannot safely deliver babies at night or refrigerate essential vaccines. Similarly, lack of illumination restricts
the ability of children to study at night and fosters crime .
The argument in favor of energy subsidies as a way to foster competitiveness and encourage private investment in certain sectors
(e.g., manufacturing) often fails to fully account for the full implications of these policies. Subsidies have to be
financed somehow, by either higher taxes or lower spending (including on infrastructure or human capital). High taxes,
poor infrastructure, and low stocks of human capital reduce a country’s attractiveness to private
investors .
Energy subsidies might crowd out more productive government spending . Figure 16 shows a negative relationship
between fuel subsidies and public spending on health and education. In Nigeria, fuel subsidies exceeded federal capital expenditure by 20 percent
in 2011. Ad
hoc government interventions in energy pricing can result in heightened uncertainty , which
makes business planning more difficult and turns away investors.
Competitive advantages gained through fuel subsidies are likely to be temporary and unsustainable . Because these
subsidies tend to be strongly correlated with world prices, some sectors or industries would benefit from the subsidy in the presence of high world
prices. However, this advantage would disappear as soon as international prices fall . This is what happened to some
(large-scale) resource-based industries (e.g., aluminum, steel) in oil exporting countries during and after the 1970s oil booms (Gelb, 1988). In
addition, even if oil prices were to remain high, those sectors or industries would be vulnerable to the reduction of the subsidies (e.g., because of
fiscal constraints).
Subsidies may invite rent seeking and their removal might become politically difficult. Examples include the
copper-mining industry in Zambia and the aluminum-smelting industries in Cameroon, Ghana, and South Africa, where the offer of
low, subsidized energy prices was meant as a temporary policy to lock in large-scale energy projects. Given the growth in energy demand in these
countries, these arrangements are no longer needed and are extremely costly but are politically difficult to terminate. Given the potential
large benefits from rent seeking, there is also a risk that it encourages corruption that substantially
inflates the fiscal costs of the subsidy, as demonstrated by recent revelations of widespread abuses in
Nigeria’s fuel subsidy regime.
Energy subsidies often misallocate resources to unintended beneficiaries or with unintended consequences. In Burkina
Faso, fuel subsidies appear to exist mainly to sustain the truck transport sector, which is cartelized and less
efficient than rail. Large foreign-owned hotels in the Seychelles and international airlines in Equatorial Guinea seem to be the main
beneficiaries of subsidized fuel. Given that kerosene (typically subsidized for equity reasons) is a perfect substitute for jet fuel, significant
amounts of it get diverted for alternative uses. Kerosene might also be mixed with diesel, for which it is only an imperfect substitute, resulting in
damage to diesel engines. Fuel subsidies may have significant unintended cross-border spillover effects. The existence of large
gasoline subsidies in Nigeria has encouraged widespread smuggling to other countries in West Africa. For
example, it is estimated that official gasoline sales accounted for only 10–15 percent of total sales in Benin in 2011. While the informal fuel trade
between Nigeria and Benin generates a transfer to the consumers of Benin, the
Beninese government lost revenue of about 2
percent of GDP . Energy subsidies might generate perverse labor shifts , given that urban populations tend to be their
main beneficiaries. Reduced labor supply in agriculture could lead to higher food prices —this happened in several oil-
exporting countries during the 1970s boom (Gelb, 1988). Spending on agricultural infrastructure would be much more productive than energy
subsidies in these countries.
Energy subsidies produce negative externalities and have a significant environmental impact. Some of these externalities could
be local: traffic congestion and accidents, road damage, air pollution, and urban sprawl. Their adverse effect on human health and
productivity constrains long-term economic growth . These externalities could also be global—emissions of CO2 and other
greenhouse gases contribute to global climate change. Figure 17 suggests that lower fuel taxes (and correspondingly higher fuel subsidies) are
associated with higher CO2 emissions in SSA countries. 12

CCS is an abysmal failure---capture impossible or minuscule and causes leaks---results in moral hazard that spikes emissions.
Robertson ‘22 [Bruce; September 1; Energy Finance Analyst – Gas/LNG at the Institute for Energy
Economics and Financial Analysis; “Carbon capture has a long history. Of failure”; Bulletin of the
Atomic Scientists; https://thebulletin.org/2022/09/plagued-by-failures-carbon-capture-is-no-climate-solution/#post-heading]
As the climate change movement gained momentum, the oil and gas industry wisely rebranded enhanced oil recovery as a “climate-friendly”
process with a new name: carbon capture utilization and storage. Today, over 70 percent of carbon capture projects are, in fact,
enhanced oil recovery projects used to produce more oil and/or gas , resulting in yet more greenhouse
gas emissions.
The Institute for Energy Economics and Financial Analysis has estimated that most of the total captured carbon throughout
history found its use in enhanced oil recovery—approximately 80–90 percent. Only a small proportion of carbon capture
projects (approximately 10–20 percent) have stored carbon in dedicated geological structures without using it for oil and gas production.
Despite its long history, carbon capture is a problematic technology. A new IEEFA study reviewed the capacity
and performance of 13 flagship projects and found that 10 of the 13 failed or underperformed against their
designed capacities, mostly by large margins.
The natural gas processing sector dominates the application of carbon capture technology. The gas
production process requires the removal of carbon dioxide. While many gas companies now claim to
produce “carbon-neutral” gas or liquefied natural gas, this is little more than marketing hubris.
The “carbon-neutral” tag has been obtained by using carbon capture to capture the 10–15 percent of Scope 1
and Scope 2 emissions (the emissions generated from producing natural gas) during the gas production process or by purchasing
carbon offsets. Yet up to 90 percent of emissions from oil and gas do not occur at production . Instead, these
emissions, called Scope 3 emissions, occur when the product is actually used, that is, burnt. As shown in our study,
capturing Scope 3 emissions, the biggest chunk of emissions created from using the product, is not being
accounted for in these “carbon-neutral” claims.
Apart from the poor performance of carbon capture projects, carbon capture in power plants has shown a track record of
technical failures since 2000. Close to 90 percent of the proposed global carbon capture capacity in the
power sector has failed at the implementation stage or was suspended early .
Even if the carbon dioxide can be injected underground, there is no guarantee that it will stay there and
not leak into the atmosphere.
There are several real-world examples of failure to keep gas underground. The best example is the California Aliso
Canyon gas leak in 2015, the worst man-made greenhouse gas disaster in US history, when 97,000 metric tons of methane leaked into the
atmosphere. While the leak at Aliso Canyon was a methane, not carbon dioxide, leak, depleted oil and gas reservoirs are commonly used to store
captured carbon dioxide. The problems encountered at Aliso Canyon could also be encountered with carbon dioxide at a carbon capture project.
Another failure was the In Salah project in Algeria, a carbon capture project with a total cost of US$2.7 billion. Injection started in 2004 and was
suspended in 2011 due to concerns about the integrity of the seal and suspicious movements of the trapped carbon dioxide under the ground.
The entire efficacy of the carbon capture process has been called into question by the Intergovernmental
Panel on Climate Change. In its special report on Carbon Dioxide Capture and Storage, the IPCC stated:
“CO2 storage is not necessarily permanent. Physical leakage from storage reservoirs is possible via (1)
gradual and long-term release or (2) sudden release of CO2 caused by disruption of the reservoir.”
Carbon capture has been used as a justification for new oil and gas projects . It has a history of poor
performance , only captures a fraction of the total emissions from the lifecycle of oil and gas production
and its long-term efficacy is questionable .
Disclosure
Disclosure---AT Basic Shit
***Demarcating debate norms along racial lines (like “only disclose new affs to black
trans women”) is epistemologically flawed---worse for community formation, reifies
trauma, and actively strengthens hegemonic structures by marginalizing the people
who were never here to defer to in the first place.
Táíwò, 20—assistant professor of philosophy at Georgetown University (Olúfémi, “Being-in-the-Room
Privilege: Elite Capture and Epistemic Deference,” The Philosopher, vol. 108, no. 4, dml)
I think it’s less about the core ideas and more about the prevailing norms that convert them into practice. The call to “listen to the most affected”
or “centre the most marginalized” is ubiquitous in many academic and activist circles. But it’s never sat well with me. In my experience, when
people say they need to “ listen to the most affected ”, it isn’t because they intend to set up Skype calls to
refugee camps or to collaborate with houseless people . Instead, it has more often meant handing
conversational authority and attentional goods to those who most snugly fit into the social categories
associated with these ills – regardless of what they actually do or do not know , or what they have or
have not personally experienced . In the case of my conversation with Helen, my racial category tied me more “authentically” to an
experience that neither of us had had. She was called to defer to me by the rules of the game as we understood it. Even where stakes are high –
where potential researchers are discussing how to understand a social phenomenon, where activists are deciding what to target – these rules often
prevail.
The trap wasn’t that standpoint epistemology was affecting the conversation, but how. Broadly, the norms of putting standpoint epistemology
into practice call for practices of deference : giving offerings , passing the mic , believing . These are good ideas in
many cases, and the norms that ask us to be ready to do them stem from admirable motivations: a desire to increase the social power of
marginalized people identified as sources of knowledge and rightful targets of deferential behaviour. But deferring in this way as a
rule or default political orientation can actually work counter to marginalized groups’ interests,
especially in elite spaces .
Some rooms have outsize power and influence : the Situation Room, the newsroom, the bargaining table, the conference room.
Being in these rooms means being in a position to affect institutions and broader social dynamics by way
of deciding what one is to say and do. Access to these rooms is itself a kind of social advantage , and one
often gained through some prior social advantage. From a societal standpoint, the “ most affected ” by the social injustices we
associate with politically important identities like gender, class, race, and nationality are disproportionately likely to be incarcerated,
underemployed, or part of the 44 percent of the world’s population without internet access – and thus both left out of the rooms of
power and largely ignored by the people in the rooms of power. Individuals who make it past the various
social selection pressures that filter out those social identities associated with these negative outcomes are most likely to be in the room.
That is, they are most likely to be in the room precisely because of ways in which they are systematically
different from (and thus potentially unrepresentative of) the very people they are then asked to represent
in the room.
I suspected that Helen’s offer was a trap. She was not the one who set it, but it threatened to ensnare us both all the same. Broader cultural norms
– the sort set in motion by prefacing statements with “As a Black man…” – cued up a set of standpoint-respecting practices that many of us know
consciously or unconsciously by rote. However, the
forms of deference that often follow are ultimately self-undermining
and only reliably serve “ elite capture ”: the control over political agendas and resources by a group’s
most advantaged people. If we want to use standpoint epistemology to challenge unjust power arrangements, it’s
hard to imagine how we could do worse .
To say what’s wrong with the popular, deferential applications of standpoint epistemology, we need to understand what makes it popular. A
number of cynical answers present themselves: some (especially the more socially advantaged) don’t genuinely want social change – they just
want the appearance of it. Alternatively, deference to figures from oppressed communities is a performance that sanitizes, apologizes for, or
simply distracts from the fact that the deferrer has enough “in the room” privilege for their “lifting up” of a perspective to be of consequence.
I suspect there is some truth to these views, but I am unsatisfied. Many of the people who support and enact these deferential norms are rather like
Helen: motivated by the right reasons, but trusting people they share such rooms with to help them find the proper practical expression of their
joint moral commitments. We don’t need to attribute bad faith to all or even most of those who interpret standpoint epistemology deferentially to
explain the phenomenon, and it’s not even clear it would help. Bad “roommates” aren’t the problem for the same reason that Helen being a good
roommate wasn’t the solution: the problem emerges from how the rooms themselves are constructed and managed.
To return to the initial example with Helen, the issue wasn’t merely that I hadn’t grown up in the kind of low-income, redlined community she
was imagining. The epistemic situation was much worse than this. Many of the facts about me that made my life chances different from those of
the people she was imagining were the very same facts that made me likely to be offered things on their behalf. If I had grown up in such a
community, we probably wouldn’t have been on the phone together.
Many aspects of our social system serve as filtering mechanisms, determining which interactions happen and between whom, and thus which
social patterns people are in a position to observe. For the majority of the 20th century, the U.S. quota system of immigration made legal
immigration with a path to citizenship almost exclusively available to Europeans (earning Hitler’s regard as the obvious “leader in developing
explicitly racist policies of nationality and immigration”). But the 1965 Immigration and Nationality Act opened up immigration possibilities,
with a preference for “skilled labour”.
My parents’ qualification as skilled labourers does much to explain their entry into the country and the subsequent class advantages and monetary
resources (such as wealth) that I was born into. We are not atypical: the Nigerian-American population is one of the country’s most successful
immigrant populations (what no one mentions, of course, is that the 112,000 or so Nigerian-Americans with advanced degrees is utterly dwarfed
by the 82 million Nigerians who live on less than a dollar a day, or how the former fact intersects with the latter). The selectivity of immigration
law helps explain the rates of educational attainment of the Nigerian diasporic community that raised me, which in turn helps explain my entry
into the exclusive Advanced Placement and Honours classes in high school, which in turn helps explain my access to higher education...and so
on, and so on.
It is easy, then, to see how this
deferential form of standpoint epistemology contributes to elite capture at scale . The
rooms of power and influence are at the end of causal chains that have selection effects . As you get
higher and higher forms of education, social experiences narrow – some students are pipelined to PhDs
and others to prisons . Deferential ways of dealing with identity can inherit the distortions caused by
these selection processes.
But it’s equally easy to see locally – in this room, in this academic literature or field, in this conversation – why this deference
seems to make sense. It is often an improvement on the epistemic procedure that preceded it: the person deferred to may well be better
epistemically positioned than the others in the room. It may well be the best we can do while holding fixed most of the
facts about the rooms themselves: what power resides in them, who is admitted .
But these are the last facts we should want to hold fixed. Doing better than the epistemic norms we’ve inherited from a
history of explicit global apartheid is an awfully low bar to set. The facts that explain who ends up in which room shape
our world much more powerfully than the squabbles for comparative prestige between people who have
already made it into the rooms. And when the conversation is about social justice, the mechanisms of the
social system that determine who gets into which room often just are the parts of society we aim to
address. For example, the fact that incarcerated people cannot participate in academic discussions about freedom that physically take place on
campus is intimately related to the fact that they are locked in cages.
Deference epistemology marks itself as a solution to an epistemic and political problem. But not only
does it fail to solve these problems, it adds new ones . One might think questions of justice ought to be
primarily concerned with fixing disparities around health care , working conditions , and basic material
and interpersonal security . Yet conversations about justice have come to be shaped by people who have ever
more specific practical advice about fixing the distribution of attention and conversational power .
Deference practices that serve attention-focused campaigns (e.g. we’ve read too many white men, let’s now read some
people of colour) can fail on their own highly questionable terms: attention to spokespeople from
marginalized groups could, for example, direct attention away from the need to change the social system that
marginalizes them.
Elites from marginalized groups can benefit from this arrangement in ways that are compatible with
social progress. But treating group elites’ interests as necessarily or even presumptively aligned with full
group interests involves a political naiveté we cannot afford. Such treatment of elite interests functions as
a racial Reaganomics : a strategy reliant on fantasies about the exchange rate between the attention
economy and the material economy.
Perhaps the lucky few who get jobs finding the most culturally authentic and cosmetically radical
description of the continuing carnage are really winning one for the culture . Then, after we in the
chattering class get the clout we deserve and secure the bag , its contents will eventually trickle down to
the workers who clean up after our conferences , to slums of the Global South’s megacities , to its
countryside .
But probably not .
A fuller and fairer assessment of what is going on with deference and standpoint epistemology would go beyond technical argument, and contend
with the emotional appeals of this strategy of deference. Those in powerful rooms may be “elites” relative to the larger group they represent, but
this guarantees nothing about how they are treated in the rooms they are in. After all, a person privileged in an absolute sense (a person belonging
to, say, the half of the world that has secure access to “basic needs”) may nevertheless feel themselves to be consistently on the low end of the
power dynamics they actually experience. Deference epistemology responds to real, morally weighty experiences of being put down,
ignored, sidelined, or silenced. It thus has
an important non-epistemic appeal to members of stigmatized or
marginalized groups: it intervenes directly in morally consequential practices of giving attention and
respect.
The social dynamics we experience have an outsize role in developing and refining our political subjectivity, and our sense of ourselves. But
this very strength of standpoint epistemology – its recognition of the importance of perspective – becomes its
weakness when combined with deferential practical norms . Emphasis on the ways we are marginalized
often matches the world as we have experienced it. But, from a structural perspective, the rooms we
never needed to enter (and the explanations of why we can avoid these rooms) might have more to teach
us about the world and our place in it. If so, the deferential approach to standpoint epistemology actually
prevents “ centring ” or even hearing from the most marginalized; it focuses us on the interaction of the
rooms we occupy , rather than calling us to account for the interactions we don’t experience . This fact about
who is in the room, combined with the fact that speaking for others generates its own set of important problems (particularly when they are not
there to advocate for themselves), eliminates pressures that might otherwise trouble the centrality of our own suffering – and of the suffering of
the marginalized people that do happen to make it into rooms with us.
The dangers with this feature of deference politics are grave , as are the risks for those outside of the most
powerful rooms. For those who are deferred to , it can supercharge group-undermining norms . In Conflict is
Not Abuse, Sarah Schulman makes a provocative observation about the psychological effects of both trauma and felt superiority:
while these often come about for different reasons and have very different moral statuses, they result in similar
behavioural patterns. Chief among these are misrepresenting the stakes of conflict (often by overstating
harm ) or representing others’ independence as a hostile threat (such as failures to “ centre ” the right
topics or people ). These behaviours, whatever their causal history, have corrosive effects on individuals
who perform them as well as the groups around them, especially when a community’s norms magnify or
multiply these behaviours rather than constraining or metabolizing them.
For those who defer , the habit can supercharge moral cowardice . The norms provide social cover for the
abdication of responsibility : it displaces onto individual heroes , a hero class, or a mythicized past the work that is
ours to do now in the present. Their perspective may be clearer on this or that specific matter , but their
overall point of view isn’t any less particular or constrained by history than ours. More importantly, deference
places the accountability that is all of ours to bear onto select people – and, more often than not, a hyper-
sanitized and thoroughly fictional caricature of them.
The same tactics of deference that insulate us from criticism also insulate us from connection and transformation .
They prevent us from engaging empathetically and authentically with the struggles of other people –
prerequisites of coalitional politics. As identities become more and more fine-grained and disagreements sharper ,
we come to realize that “ coalitional politics ” (understood as struggle across difference ) is, simply, politics .
Thus, the deferential orientation, like that fragmentation of political collectivity it enables, is ultimately
anti-political .
Deference rather than interdependence may soothe short-term psychological wounds . But it does so at a
steep cost : it can undermine the epistemic goals that motivate the project, and it entrenches a politics
unbefitting of anyone fighting for freedom rather than for privilege , for collective liberation rather than
mere parochial advantage .
How would a constructive approach to putting standpoint epistemology into practice differ from a deferential approach? A constructive
approach would focus on the pursuit of specific goals or end results rather than avoiding “ complicity ” in
injustice or adhering to moral principles . It would be concerned primarily with building institutions and
cultivating practices of information-gathering rather than helping . It would focus on accountability rather than
conformity. It
would calibrate itself directly to the task of redistributing social resources and power rather
than to intermediary goals cashed out in terms of pedestals or symbolism . It would focus on building and rebuilding
rooms, not regulating traffic within and between them – it would be a world-making project: aimed at building and rebuilding actual structures of
social connection and movement, rather than mere critique of the ones we already have.
The water crisis in Flint, Michigan presents a clear example of both the possibilities and limitations of refining our epistemic politics in this way.
Michigan’s Department of Environmental Quality (MDEQ), a government body tasked with the support of “healthy communities”, with a team
of fifty trained scientists at its disposal, was complicit in covering up the scale and gravity of the public health crisis from the beginning of the
crisis in 2014 until it garnered national attention in 2015.
The MDEQ, speaking from a position of epistemic and political authority, defended the status quo in Flint. They claimed that “Flint water is safe
to drink”, and were cited in Flint Mayor Dayne Walling’s statement aiming to “dispel myths and promote the truth about the Flint River” during
the April 2014 transition to the Flint River water source. That transition was spearheaded under the tenure of the city’s emergency manager
Darnell Earley (an African-American, like many of the city residents he helped to poison). After the American Civil Liberties Union (ACLU)
circulated a leaked internal memo from the federal Environmental Protection Agency (EPA) in July of 2014 expressing concern about lead in
Flint water, the MDEQ produced a doctored report that put the overall measure of lead levels within federally mandated levels by mysteriously
failing to count two contaminated samples.
The reaction from residents was immediate. The month after the switch in water source, residents reported that their tap water was discoloured
and gave off an alarming odour. They didn’t need their oppression to be “celebrated”, “centred”, or narrated in the newest academic parlance.
They didn’t need someone to understand what it felt like to be poisoned. What they needed was the lead out of their water. So they got to work.
The first step was to develop epistemic authority. To achieve this they built a new room: one that put Flint residents and activists in active
collaboration with scientists who had the laboratories that could run the relevant tests and prove the MDEQ’s report to be fraudulent. Flint
residents’ outcry recruited scientists to their cause and led a “citizen science” campaign, further raising the alarm about the water quality and
distributing sample kits to neighbours to submit for testing. In this stage, the alliance of residents and scientists won, and the poisoning of the
children of Flint emerged as a national scandal.
But this was not enough. The second step – cleaning the water – required more than state acknowledgement: it required apportioning labour and
resources to fix the water and address the continuing health concerns. What Flint residents received, initially, was a mix of platitudes and
mockery from the ruling elite (some of this personally committed by a President that shared a racial identity with many of them). This year,
however, it looks as though the tireless activism of Flint residents and their expanding list of teammates has won additional and more meaningful
victories: the ongoing campaign is pushing the replacements of the problematic service lines to their final stage and is forcing the state of
Michigan to agree to a settlement of $600 million for affected families.
This outcome is in no way a wholesale victory: not only will attorney fees cut a substantial portion of payouts, but the settlement cannot undo the
damage that was caused to the residents. A constructive epistemology cannot guarantee full victory over an oppressive system by itself. No
epistemic orientation can by itself undo the various power asymmetries between the people and the imperial state system. But it can help make
the game a little more competitive – and deference epistemology isn’t even playing.
The biggest threats to social justice attention and informational economies are not the absence of yet
more jargon to describe, ever more precisely or incisively, the epistemic, attentional, or interpersonal afflictions of
the disempowered. The biggest threats are the erosion of the practical and material bases for popular
power over knowledge production and distribution, particularly that which could aid effective political action and
constrain or eliminate predation by elites. The capture and corruption of these bases by well-positioned
elites, especially tech corporations, goes on unabated and largely unchallenged , including: the corporate monopolization of
local news, the ongoing destruction and looting of the journalistic profession, the interference of corporations and governments in key democratic
processes, and the domination of elite interests in the production of knowledge by research universities and the circulation of the output of these
distorted processes by established media organizations.
Confronting these threats requires leaving some rooms – and building new ones.
The constructive approach to standpoint epistemology is demanding. It asks that we swim upstream : to be
accountable and responsive to people who aren’t yet in the room , to build the kinds of rooms we could
sit in together , rather than merely judiciously navigating the rooms history has built for us. But this weighty demand is par for the course
when it comes to the politics of knowledge: the American philosopher Sandra Harding famously pointed out that standpoint epistemology,
properly understood, demands more rigour from science and knowledge production processes generally, not less.
But one important topic stands unaddressed. The deferential approach to standpoint epistemology often comes packaged with
concern and attention to the importance of lived experience . Among these, traumatic experiences are
especially foregrounded.
At this juncture, scholarly analysis and argument fail me. The remainder of what I have to say skews more towards conviction than contention.
But the life of books has taught me that conviction has just as much to teach, however differently posed or processed, and so I press on.
I take concerns about trauma especially seriously . I grew up in the United States, a nation structured by
settler colonialism , racial slavery , and their aftermath, with enough collective and historical trauma to go round. I also
grew up in a Nigerian diasporic community, populated by many who had genocide in living memory . At
the national and community level, I have seen a lot of traits of norms, personality, quirks of habit and action that I’ve suspected were downstream
of these facts. At the level of individual experience, I’ve watched and felt myself change in reaction to fearing for my
dignity or life, to crushing pain and humiliation. I reflect on these traumatic moments often , and very
seldom think : “ That was educational ”.
These experiences can be, if we are very fortunate, building blocks. What comes of them depends on how the blocks are put together: what
standpoint epistemologists call the “achievement thesis”. Briana Toole clarifies that, by itself, one’s social location only puts a person in a
position to know. “Epistemic privilege” or advantage is achieved only through deliberate, concerted struggle from that position.
I concede outright that this is certainly one possible result of the experience of oppression: have no doubt that humiliation, deprivation, and
suffering can build (especially in the context of the deliberate, structured effort of “consciousness raising”, as Toole specifically highlights). But
these same experiences can also destroy, and if I had to bet on which effect would win most often, it would be the latter. As Agnes Callard rightly
notes, trauma (and even the righteous, well-deserved anger that often accompanies it) can corrupt as readily as it can ennoble. Perhaps more so.
Contra the old expression, pain – whether borne of oppression or not – is a poor teacher. Suffering is partial, short-sighted, and self-absorbed. We
shouldn’t have a politics that expects different: oppression is not a prep school.
When it comes down to it, the thing I believe most deeply about deference epistemology is that it asks something of trauma
that it cannot give . Demanding as the constructive approach may be, the deferential approach is far more demanding and in a far
more unfair way: it asks the traumatized to shoulder burdens alone that we ought to share collectively . When I
think about my trauma, I don’t think about grand lessons. I think about the quiet nobility of survival. The very fact that those chapters weren’t the
final ones of my story is powerful enough writing all on its own. It is enough to ask of those experiences that I am still here to remember them.
Deference epistemology asks us to be less than we are – and not even for our own benefit. As Nick Estes
explains in the context of Indigenous politics: “The cunning of trauma politics is that it turns actual people and
struggles , whether racial or Indigenous citizenship and belonging, into matters of injury . It defines an entire people
mostly on their trauma and not by their aspirations or sheer humanity ”. This performance is not for the benefit
of Indigenous people, but “for white audiences or institutions of power ”.
I also think about James Baldwin’s realization that the things that tormented him the most were “the very things that connected me with all the
people who were alive, who had ever been alive”. That
I have survived abuse of various kinds, have faced near-death
from both accidental circumstance and violence (different as the particulars of these may be from those around me) is not a
card to play in gamified social interaction or a weapon to wield in battles over prestige . It is not what
gives me a special right to speak , to evaluate , or to decide for a group . It is a concrete, experiential
manifestation of the vulnerability that connects me to most of the people on this Earth. It comes between
me and other people not as a wall , but as a bridge .
