Montgomery Bell-Barnard-Asbury-Aff-University of Kentucky Season Opener-Round5

1. Interoperability between cybersecurity tools is currently weak due to a lack of common standards, protocols, and data models. Organizations use many different cybersecurity products from various vendors, making it difficult to integrate these tools and gain a comprehensive view of threats.
2. The Open Cybersecurity Alliance is working to develop interoperable messaging formats and standardized data models to help cybersecurity tools better communicate and analyze threat information. Adopting open standards would improve integration and allow security teams to more efficiently leverage new tools.
3. For cyber defenses to be optimized, a national standard is needed to ensure interoperability between jurisdictions. Without standardization at all levels, inconsistencies undermine the reliability and trustworthiness of digital evidence.

1AC

Cyberattacks ADV---1AC
Advantage one is CYBERATTACKS
Law enforcement responses to attacks are failing BUT the plan solves:
1. Interoperability---it’s weak now because of federal disengagement AND it’s key to
cyber response
Crumpler and Lewis 20 [William D. Crumpler, Research Assistant, Technology Policy Program, James
A. Lewis, Senior Vice President and Director, Technology Policy Program, “Cybersecurity and the
Problem of Interoperability,” Center for Strategic and International Studies, 1/27/20,
https://www.csis.org/analysis/cybersecurity-and-problem-interoperability]

Organizations face a growing threat from malicious cyber activity. Nation-states are becoming more aggressive, and
criminals are growing in sophistication. The spread of poorly secured “Internet of Things” devices increases the attack surface,
and AI-enabled hacking tools and cybercrime-as-a-service intensify the competition between defenders
and attackers. These threats have driven companies to build layers of defenses, resorting to a variety of products and services developed
by different cybersecurity vendors. According to AttackIQ and the Ponemon Institute, large organizations use an average of 47 different
cybersecurity tools across their networks, and research firm ESG estimates that firms source their tools from an average of 10 different
vendors.1 Coordinating the implementation of all these products is a challenge of its own. Moreover, this
complex mixture of cybersecurity products and services creates interoperability problems that work
against the efficient use of these tools. Integrating different products is a major challenge for security teams.2 When new
tools are introduced but are unable to communicate with other platforms, it is hard to get a useful
picture of the threat landscape. Some may be ineffective because they are not being fed data from complementary systems. The
pace of cyberattacks is accelerating too quickly for organizations to rely on manual threat analysis and
response, and a multiplicity of tools can provide contradictory information. In the face of these inconveniences,
purchased tools may be left languishing. Even when cybersecurity teams manage to integrate their cyber defense toolkits, the time and effort
required to do so can create a significant resource drain. Instead
of spending their time responding to threats, cyber
professionals are occupied with managing a complex web of products and services that was supposed
to make their jobs easier. A common set of standards, protocols, taxonomies, and open-source software that can tie
cybersecurity tools together could help ease this burden. If tools used shared methods for identifying and classifying
threat intelligence, communicating anomalies, and automating response actions, it would be significantly easier to take advantage of new
cybersecurity solutions. The Organization for the Advancement of Structured Information Standards (OASIS) has launched a new Open Project
called the Open Cybersecurity Alliance (OCA). The OCA brings together interested stakeholders intended to provide a solution to the described
problem. It is attempting to do so through two ongoing programs. One will develop an interoperable messaging format for cybersecurity tools,
while the other will develop standardized data models and libraries to classify threats in a way that can be analyzed by any cybersecurity tool.3
Adopting open standards for cybersecurity tools will take time, and cybersecurity customers can encourage interoperability. The federal
government, for instance, can play an important role. Projects like the Continuous Diagnostics and Mitigation (CDM) “Dynamic
Evolving Federal Enterprise Network Defense” from the Department of Homeland Security can promote interoperability and the
widespread adoption of common standards. It is also in companies’ best interest to push for the adoption of common
standards. Even with an expert staff and all the latest tools, security teams will continue to face challenges as long as security architectures
work against integration. By prioritizing the construction of a more open, interoperable cyber ecosystem, companies can be leaders in building
a more effective, more sustainable cyber defense. One thing we have learned in cybersecurity is that speed is crucial, to identify, to block, and
to respond. Things that make a defender slower give advantage to the attacker. Thus, it is particularly undesirable that
the very investment intended to strengthen defense can sometimes weaken it. Improving interoperability returns some of
the advantage to defenders.
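The shared, machine-readable threat messaging Crumpler and Lewis describe can be sketched concretely. The snippet below builds a minimal indicator object loosely following the field conventions of OASIS's STIX 2.1 standard (OASIS being the same body behind the OCA); the IP pattern and the small field subset are illustrative assumptions, not a complete or normative implementation:

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(pattern: str) -> dict:
    """Build a minimal STIX 2.1-style indicator object.

    Only a handful of common required fields are shown; a real
    STIX producer would emit many more.
    """
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "pattern": pattern,
        "pattern_type": "stix",
        "valid_from": now,
    }

# Any tool that speaks the same format can parse the message,
# regardless of which vendor produced it.
indicator = make_indicator("[ipv4-addr:value = '198.51.100.7']")
message = json.dumps(indicator)
received = json.loads(message)
print(received["type"], received["pattern_type"])
```

Because producer and consumer agree in advance on field names and pattern grammar, a tool from one vendor can act on an indicator emitted by another vendor's product without custom integration glue, which is the interoperability payoff the card describes.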
A national standard solves---it’s the only way to optimize integration
Brian Cusack 19, Director @ Cyber Forensic Research Center, International Standards negotiator for
the last 18 years and has edited for publication International Standards in Governance, Management,
Digital Forensics and Security, professor at the Graduate Research Institute, “Extracting Benefits from
Standardization of Digital Forensic Practices,” Policing: A Journal of Policy and Practice, 10/10/2019, p. 1-9

Standardization offers consistencies for interoperability between jurisdictions and organizational entities.
In some instances, accreditation and certification services are available to assure the reputational transfer of
conformance and the compliance for best practice requirements. The impact of these benefits is a more
predictable environment in which trust elements are accessible to all participants. To improve the
consistency of digital evidence, standardization must occur at all levels. This includes governance, management (for
example, digital forensic laboratory accreditation) and operations (for example, error rates are published for tools and tool testing is
enforced). International standards are available for all levels of organization and a co-ordinated and sequenced adoption is required for optimal
effects. This means the adoption of more than one standard in order to mitigate risk and to get advantage on matters in specific relation to
digital evidence, while avoiding the distraction of generic and over generalized processes. The
objective must be to reduce
failures and to increase the transfer of trustworthy evidence. Inter-party trust is critical for professional
reputations and the usefulness of digital evidence to courts. Adopting best practices for the implementation of
standards can often be overshadowed by conflicting motivations. Political and economic expediency has always reduced the
implementation of standardization to its immediate functional attributes rather than for the longer term benefits. The full life cycle of owning
standards requires investment into processes that do not have distinct ends. The retirement of a specific standard usually signals an upgrade to
a current or more relevant one. The short-term returns a standard provides are symbolic, and lie in the publicity,
brand association, and process consistency. These benefits may be reaped as political capital, market share,
revenue, or reputation. However, a long-term investment delivers trusted practices that have predictable
and robust procedures for optimal, repeatable, and valuable performance. In this condition, the
management of digital evidence may be handled with minimal introspection and maximum trustworthiness.
When expediency overshadows best practices for ownership the full benefits cannot be realized and the quality of
lifecycle processes compromised. The owner of a standard must be motivated to commit the required resources and
knowledge to an adoption and avoid the temptation to use standards as a quick fix solution. The greatest challenge
for a standard’s owner is to extract the maximum value from its tenancy. This requires the skilful selection of the most effective single standard or networked standards to treat the problem at hand. The margins between losses and gains are often close and the
management has to be done in such a way that the benefits outweigh the costs of the ownership. The new international standards for digital
forensics present a range of opportunities to increase security risk mitigation and to provide post-event capability. The absence of a
specific Laboratory Standardization is a weakness that still requires redress. The provision of digital forensic and related service
standardization is an example of structured planning for robust processes and practices for gaining user trust. The adoption of standards that
are not fit for the selected purpose will
create drag on the system, incur costs, and pull the system away from its
intended purposes. In this way, the best value cannot be realized. A compromise solution is to adopt two
or more standards that overlap the area of concern, but again this affects optimal outcomes. Conclusion Extracting
the full benefits from the Standardization of Digital Forensic Practices requires a long-term investment into
standards ownership, renewal processes, and the adoption of lifecycles of best practices. To address current points of failure
in the management of digital evidence, standardization is an attractive option. It offers consistencies for
interoperability between jurisdictions and organizational entities, and, in some instances accreditation and certification.
The impact of these benefits is a more predictable environment in which to deliver sensitive and high-
risk services, leading to the reduction in failures and the enhancement of inter-party trust. In this article, it
has been argued that how standards are adopted, and how they are managed through a full life cycle, is critical for the delivery of the most
beneficial effects. The key attributes of motivation and context are often overlooked in the rush to get accreditation and certification, but
without careful consideration, the costs of adoption will outweigh the benefits. The advocacy is for the long-term investment in standardization
and the creation of trusted practices for predictable and robust procedures.

2. Courts---they’ll exclude digital forensic evidence without a uniform federal standard to verify methods AND investigator expertise
C Henderson & KW Lenz 15, Henderson is from the Stetson University College of Law; Lenz is from
Saint Petersburg, FL, “Expert Witness Qualifications and Testimony,” in Professional Issues in Forensic
Science, edited by Max M. Houck, 2015, Elsevier/AP
Qualifications

The court must determine whether a proffered witness is qualified to testify as an expert, and that
determination will not be overturned except for an abuse of discretion (Kumho Tire Co. Ltd. v. Carmichael); but see, for example, Radlein v.
Holiday Inns, Inc. (holding that the trial court’s decision will not be reversed unless there is a clear showing of error). Federal Rule of Evidence
702 states that a witness may qualify as an expert on the basis of knowledge, skill, training, experience, or education.
An expert witness must possess only one of these traits for the judge to find the expert qualified to give an opinion. In making this evaluation,
the judge may consider the expert’s educational background, work experience, publications, awards, teaching, speaking, or other
professional engagements, prior expert–witness testimony, and membership in professional associations. Often, the expert may have to
educate the attorney proffering the expert regarding the significance of particular experience, achievements, and certifications to ensure that
they are appropriately presented to the judge. An expert must be prepared to explain board certification and licensure requirements to the
judge in detail. Experience as an Expert Witness Experience and training are often more significant than academic background and are accorded
more weight by jurors, according to at least one study evaluating juror perceptions of fingerprint experts. However, experience as an expert
witness, standing alone, does not qualify someone as an expert in later cases. One court rejected the opinion of a witness who had testified as
an expert 126 times (Bogosian v. Mercedes-Benz of North America Inc.). Another court noted, “it would be absurd to conclude that one can
become an expert by accumulating experience in testifying” (Thomas J. Kline, Inc. v. Lonillard, Inc.). Conversely, a lack of previous experience as
an expert witness does not disqualify one from testifying as an expert, because “even the most qualified expert must have his first day in court”
(US v. Locascio). Education and Training An expert may be qualified on the basis of academic credentials, including the
expert’s undergraduate, graduate, and postgraduate work. An expert’s academic credentials should only be issued by accredited
educational institutions and programs, because the proliferation of the Internet, while laudable for so many reasons, has also
rekindled the old-fashioned diploma mill. One such business, Diplomas 4U, once provided bachelor’s, master’s, MBA, or PhD degrees in its
customers’ field of choice; advertisements assured that no one would be turned down and that there would be no bothersome tests, classes,
books, or interviews. After
studying this issue, the National Academy of Sciences has concluded that it is
crucially important to improve undergraduate and graduate forensic science programs with, among other things,
attractive scholarship and fellowship offerings, and funding for research programs to attract research universities and students in fields relevant
to forensic science. An expert should continuously perform research and publish in the expert’s field, preferably in peer-reviewed publications.
Teaching experience is another of the qualifications that judges will evaluate: all forms of teaching – regular, specialty, guest lecturing, visiting professorships, continuing education, and short courses – weigh in as credentials. An expert should also be up-to-date with developments in his
or her field of expertise by reading the current literature, enrolling in continuing education seminars, joining professional societies, and
attending professional meetings. Membership in Professional Associations A study published by the U.S. Department of Justice in 1987 found
that jurors perceived those fingerprint experts who belonged to professional associations to be more credible than other experts, and
presumed experts would belong to such groups (Illsley, supra). It is therefore important for an expert to remain active and participate in
professional societies; the expert’s credibility is diminished if the expert has not recently attended a professional meeting. Professional
associations that only require annual dues payment to become a member are not as prestigious as associations that are joined by special
invitation only, by approval of special referees, or by passing an examination. Thus, an expert should be selective about which professional
associations to join. The National Academy of Sciences (NAS) Report calls for standardized accreditation and/or certification, as
well as a uniform code of ethics: Although some areas of the forensic science disciplines have made notable efforts to achieve
standardization and best practices, most disciplines still lack any consistent structure for the enforcement of
‘better practices,’ operating standards, and certification and accreditation programs . . . Accreditation is required in only three states . . . [and] [i]n other states, accreditation is voluntary, as is individual certification . . . NAS Report at 213
Thus, the NAS Report calls for the creation of a federal agency to develop tools to advance reliability in
forensic science, to ensure standards that reflect best practices, and serve as accreditation tools for laboratories and as guides for the
education, training, and certification of professionals (NAS Report at 214). Increased Scrutiny of Experts Experts have come
under increased scrutiny for either fabricating or inflating their qualifications. In Florida, in 1998, a person who had
been testifying as an expert in toxicology for 3 years for both the prosecution and defense in criminal cases was prosecuted for perjury for
testifying with fraudulent credentials. The expert claimed to possess master’s and doctorate degrees from Florida Atlantic University, but when
a prosecutor sought to confirm the claims, he discovered that the registrar’s office had no record of the expert attending or receiving a degree
from the university. In another case, a Harvard medical professor was sued for trademark infringement for falsely claiming to be board-certified
by the American Board of Psychiatry and Neurology (ABPN) in five trials (ABPN v. Johnson-Powell). The board sought to seize the expert’s
witness fees and treble damages, but the court denied that relief because it believed the expert was unlikely to infringe in the future. In 2007, a
court granted the plaintiff a new trial in her product liability action when it was discovered that the pharmaceutical company’s cardiology
expert had misrepresented his credentials by testifying that he was board-certified in internal medicine and cardiovascular disease when in fact
those certifications had expired (In re Vioxx Products). In
addition to perjury prosecutions for false qualifications, some
jurisdictions also prosecute for academic fraud. For example, in Florida, a person who misrepresents association with, or
academic standing at, a postsecondary educational institution is guilty of a first-degree misdemeanor (Fla. Stat. x 817.566). Courts have
also overturned convictions where the experts testified outside their field of expertise. Instances include a
medical examiner testifying to shoe-pattern analysis and an evidence technician with no ballistics expertise giving testimony about bullet
trajectory (see Gilliam v. State; Kelvin v. State). There
is evidence to suggest that, since the Supreme Court’s decisions in
Daubert v. Merrell Dow Pharmaceuticals, Inc., and Kuhmo Tire Co., courts have been more willing to exclude
expert testimony. The Federal Judicial Center compared a 1998 survey of 303 federal judges with a 1991 survey. In 1998, 41% of
the judges claimed to have excluded expert testimony, whereas only 25% of the judges did so in 1991. A 2001
RAND study similarly concluded that judges were becoming more vigilant gatekeepers; for example, in the
U.S. Third Circuit Court of Appeals, the exclusion rate in products liability cases rose from 53% to 70%.
This contradicts most of the reported case law following Daubert, which seems to indicate that the exclusion of expert testimony remains the
exception, not the rule (see Fed. R. Evid. 702).

3. Legal attribution---clear standards for sufficient digital evidence enable trustworthy attribution AND treaty verification
Alessandro Guarino 13, StudioAG – ICT Consulting & Engineering, “Digital Forensics in a Cyber Warfare
Context,” StudioAG, March 2013, http://www.studioag.pro/wp-content/uploads/2013/03/DigitalForensicsAndCyberwarfare.pdf
1. Introduction

Digital forensics has come a long way since its inception and has by now taken its place among the forensic sciences. Digital forensic
techniques have now reached the state where they are less of an art and more like repeatable and well-
documented scientific procedures. Work-flow phases have been codified in several guidelines developed by
practitioners and also in developing international standards like ISO/IEC 27037, published in 2012. This paper proposes a
model under which digital forensics concepts developed along the years in a civilian context can be usefully applied to
military operations, and particularly to cyber warfare. Scenarios in which digital forensics can be useful, and even
necessary, include the attribution problem (of cyber attacks), treaty assurance, intelligence and counter-
intelligence, both at tactical and strategic levels in the organization. At first sight the object of digital forensics in a
warfare situation is very different from what is needed in the civilian world: here the "final product" is sound
evidence, to be used in a court of law -civil or criminal- or at least that can be used, in principle. From there the
need to assure the integrity of evidence from the very beginning and along all the chain of custody. In
military contexts this necessity may seem excessive, compared to the need for actionable intelligence, often within strict time constraints, but contemporary warfare presents us with more legal concerns than in the past. Contemporary international relations are increasingly made up of supranational bodies
with varying degrees of legal clout and status, from the United Nations to the International Court of Justice, various regional
bodies and alliances, and a long history of international conventions currently in force. More and more we are faced with the
phenomenon of nation-states presenting evidence of enemy conduct in order to justify a military action.
If cyber warfare is to be a proper part of warfare, this evidence inevitably is and will be in digital form.
Another example of digital forensics relevance is the problem of verification of possible future treaties
regarding cyber warfare: this is a thorny problem, and a very open one, currently debated by legal
experts, military bodies and forensic analysts worldwide. It is quite easy to detect ICBM silos, not so
easy to detect cyber weapons ready for action. International relations experts and country leaders are
faced by the challenge of adapting concepts formulated long ago like aggression of a nation-state onto
another, or combatant status, to the realities of cyber warfare. Also the relevance of non-national actors
is very high in cyber warfare, by its very nature, complicating again the picture; we have only to think of the
proxy problem, where non-national organizations or even individuals can conduct cyber activities on behalf of governments (knowingly or
not). Legal concerns are mentioned because forensics is where the technical side meets the legal one:
digital evidence is a tool in a legal process so it is to be conducted keeping in mind legal concerns like
jurisdiction, lawful acquisition, and so on. The rest of this paper will present a brief overview of the field of digital forensics, as
we see it now, followed by a short modelization of cyber warfare activities and context. After that a model matching digital forensics to the
cyber warfare context will be proposed. 2. Digital Forensics What is digital forensics? We report here one of the most useful definitions of
digital forensics formulated. It was developed during the first Digital Forensics Research Workshop (DFRWS) in 2001 and it is still very much
relevant today: Digital Forensics is the use of scientifically derived and proven methods toward the preservation, collection, validation,
identification, analysis, interpretation, documentation and presentation of digital evidence derived from digital sources for the purpose of
facilitating or furthering the reconstruction of events found to be criminal, or helping to anticipate unauthorized actions shown to be disruptive
to planned operations.(Pearson 2001) This formulation stresses first and foremost the scientific nature of digital forensics methods, in a
point in time when it was transitioning from being a "craft" to an established field and rightful part of the forensic sciences. At that point
digital forensics was also transitioning from being mainly practised in separated environments such as law
enforcement bodies and enterprise system administrators to a unified field. Today this process is very advanced
and it can be said that digital forensics principles, procedures and methods are shared by a large part of its practitioners, coming from different
backgrounds (criminal prosecution, defence consultants, corporate investigators and compliance officers). Applying scientifically derived
methods implies important concepts and principles to be respected when dealing with digital evidence. Among others we can cite: 1. Previous
validation of tools and procedures. Tools and procedures should be validated by experiment prior to their application on actual evidence. 2.
Reliability. Processes should yield consistent results and tools should present consistent behaviour over time.
3. Repeatability. Processes should generate the same results when applied to the same test environment. 4. Documentation. Forensic activities should be well-documented, from the inception to the end of the evidence life-cycle. On one hand strict chain-of-custody procedures should be enforced to assure evidence integrity and on the other hand complete
documentation of every activity is necessary to ensure repeatability by other analysts. 5. Preservation of evidence – Digital evidence is
easily altered and its integrity must be preserved at all times, from the very first stages of operations, to avoid spoliation and
degradation. Both technical (e.g. hashing) and organizational (e.g. clear accountability for operators) measures are to be
taken.
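Principles 4 and 5 above (documentation and preservation) can be illustrated with a short sketch: hash an evidence image at acquisition, re-hash it before analysis, and record both actions in a chain-of-custody log. The operator names and the simulated image bytes are hypothetical, and a real forensic workflow would capture far more metadata:

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_evidence(data: bytes) -> str:
    """Compute a SHA-256 digest of an evidence image.

    Matching digests at acquisition and at analysis demonstrate
    the evidence was not altered in between.
    """
    return hashlib.sha256(data).hexdigest()

def custody_entry(operator: str, action: str, digest: str) -> dict:
    """One chain-of-custody record: who did what, when, to which artifact."""
    return {
        "operator": operator,
        "action": action,
        "sha256": digest,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Acquisition: hash the (simulated) disk image and log the action.
image = b"\x00simulated disk image contents\xff"
acquired = hash_evidence(image)
log = [custody_entry("analyst_a", "acquisition", acquired)]

# Verification before analysis: re-hash and compare to the recorded digest.
verified = hash_evidence(image)
log.append(custody_entry("analyst_b", "verification", verified))
assert verified == acquired  # integrity preserved
print(json.dumps(log, indent=2))
```

Any alteration of the image, however small, changes the digest and is immediately visible when the recorded and recomputed hashes are compared, which is why hashing is the standard technical measure Guarino cites for preserving evidence integrity.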

Verification’s the one barrier to a cyber AI treaty---extinction
Shaun Tan 17, researcher at China-US focus who has written for Quartz, the Diplomat, and the Malay
Mail Online, citing Bill Gates, Elon Musk, and Stephen Hawking, “We Need an AI Limitation Treaty. Now.,”
China-US Focus, 9/26/17, https://www.chinausfocus.com/society-culture/we-need-an-ai-limitation-treaty-now

But the threat it poses is real. Prominent computer scientists have warned of it for years, and recently some of the smartest people on the planet have taken up the call. Bill Gates considers AI more dangerous than a nuclear catastrophe, Elon Musk said it was probably humanity’s “biggest existential threat,” Stephen Hawking said it
could “spell the end of the human race.” We should start by defining what’s meant by the term “AI.” AI, in a sense, is already
here. It’s in online search engines, the computer opponents in video games, the spam filter in our emails, and the Siri assistant in our iPhones.
All of these are examples of artificial narrow intelligence (ANI) – AI that’s only capable of a few specific tasks. Well-designed ANIs can match or
surpass humans at particular tasks, but, unlike humans, they can’t be applied to much else. Google’s AlphaGo may be able to beat any human
at Go, but that’s all it can do. Such AIs are useful, and don’t seem to pose an existential threat. It’s at the level of artificial general intelligence
(AGI) when things get dangerous. An AGI would be as smart as a human across the board. Unlike an ANI, an AGI could be applied to anything.
No one’s been able to develop one yet, but in theory, an AGI would be able to match a human at any task, and, naturally, would also be able to
do things like perform complicated calculations effortlessly, make countless copies of itself in seconds, and transmit itself across the world
instantaneously. An artificial superintelligence (ASI) would be something else entirely. It would be smarter than humans across the board, and
the extent to which it’s smarter may be beyond our reckoning. Our final invention In his great article “The AI Revolution: The Road to
Superintelligence” in Wait But Why, Tim Urban explained why growth in AI cognitive power is likely to take us by surprise. Humans tend to think
that the difference in intelligence between the smartest human and the dumbest human is large, that is, to use Oxford philosopher Nick
Bostrom’s example, that someone like Albert Einstein is much smarter than the village idiot. On the grand scale of intelligence including non-
human animals, however, this difference is miniscule. The difference between the intelligence of a human and that of a chimpanzee is many,
many times larger than the difference between the intelligence of Einstein and that of the village idiot. The difference between the intelligence
of a chimpanzee and that of a mouse is larger still. This means that whilst it may take years or decades to get an AI to chimpanzee-level
intelligence, for example, once that level is reached the transition to general human-level intelligence (AGI) will be much faster, resulting in
what some have termed an “intelligence explosion.” Furthermore, we should factor-in recursive self-improvement, a popular idea amongst AI
researchers for boosting intelligence. An AI capable of recursive self-improvement would be able to find ways to make itself smarter; once it’s
done that, it’ll be able to find even more ways to make itself smarter still, thereby bootstrapping its own intelligence. Such an AI would
independently and exponentially increase in cognitive power. An AI approaching general human-level intelligence, therefore, would pick up
speed, and, far from stopping at Humanville Station, as Bostrom puts it, would whoosh past it. An AI capable of recursive self-improvement that
had attained village idiot intelligence level in the morning might hit Einstein-level by the afternoon. By evening, it could have reached a level of
intelligence far beyond any human. AI researchers, celebrating their success at creating an AGI, might find themselves faced with a
superintelligence before they’d even finished the champagne. A superintelligence could be smarter than humans in the
same way that humans are smarter than chimpanzees. We wouldn’t even be able to comprehend an
entity like that. We think of an IQ of 70 as dumb and an IQ of 130 as smart, but we have no idea what an IQ of 10,000
would be like, or what a being with that cognitive capacity would be capable of. Its power, for us anyway, would be incalculable: many
things we deem impossible or fantastical would be child’s play for it. Curing all disease would be as easy for it as popping a
pill, interstellar travel as easy as stepping from room to room, and extinguishing all life on earth as easy as snuffing out a
candle. The only term we have that comes close to describing something like that is God, and, as Urban ominously puts it, the question we should ask then is: Will it be a nice God?

Taming God

Some computer scientists seem confident that we can make an AGI or a superintelligence
be “nice,” that taming the god we created is a matter of programming. Programming an AI of human intelligence or above will likely be a
daunting task. Who knows what it might do without being given specific goals or values, and, even if it is, its actions might still be unpredictable.
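The recursive self-improvement dynamic described earlier can be made concrete with a toy numerical model. Everything here is invented for illustration (the capability scale, the growth rule, the constants); the only point is the qualitative shape of the curve, assuming a system's rate of improvement grows with its current capability:

```python
# Toy model of recursive self-improvement. All numbers and the growth
# rule are invented for illustration; "capability" is an arbitrary scale.

HUMAN_LEVEL = 100.0  # arbitrary reference point on the capability scale

def cycles_to_reach(start, target, base_rate=0.01):
    """Count improvement cycles needed to grow from `start` to `target`,
    assuming each cycle's growth is proportional to current capability."""
    capability, cycles = float(start), 0
    while capability < target:
        # a more capable system finds improvements faster
        capability *= 1 + base_rate * (capability / HUMAN_LEVEL)
        cycles += 1
    return cycles

early = cycles_to_reach(70, 140)    # a doubling near human level: slow
late = cycles_to_reach(1000, 2000)  # a doubling far above it: fast
```

Under these (made-up) assumptions, each successive doubling takes fewer cycles than the last, which is the quantitative intuition behind the claim that such a system would pick up speed rather than stop at human level.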
Nick Bostrom, who is also the founding director of the Future of Humanity Institute at the University of Oxford, gives the example of an AI being
tasked with the seemingly boring and innocuous goal of making as many paperclips as possible. At some point, it may decide that in order to
maximize the number of paperclips it should prevent humans from reprogramming it or switching it off, upon which it kills all the humans so it
can continue making endless amounts of paperclips unimpeded. Note, of course, that in that scenario the AI wouldn’t exterminate humans
because of any malice it had towards them (no more than we hate bacteria when we take antibiotics), but because they don’t matter to it.
Likewise, when Google’s DeepMind AI program grew increasingly aggressive as it got smarter, and was more likely to attack opponents with
lasers in simulated games, it wasn’t because of any malice towards those opponents; it was just because that strategy maximized its chances of
winning. In order to prevent something like that from happening, some have suggested programming AIs with goals specifically beneficial to
humans. Such attempts, however, can also lead to unexpected results. For example, an AI programmed to “make people happy” might realize
that the most efficient way to do this is to capture humans, implant electrodes into their brains and stimulate their pleasure centers. Likewise,
an AI programmed with Isaac Asimov’s Three Laws of Robotics—

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

—might decide that, since humans are constantly harming each other, the best way to obey these laws would be to gently imprison all of them. Another suggestion is to
upload a pre-existing set of values into an AI – utilitarianism, say, or liberal democracy. But even assuming people could agree on which
philosophy to go with, it’s hard enough to imbue humans with human values as it is. There’s no telling how a superintelligence might interpret
it, or the contradictions within it. There’s no reliable way to ensure a superintelligence’s goals or values accord with our own. A single careless
assumption or oversight or ambiguity could lead to results no one expected or intended.

Caging God

Others have suggested building safeguards
around the AGI or superintelligence. They’ve mooted measures of varying degrees of complexity, from denying it access to the internet, to
restricting its contact with the outside world, to trapping it in a series of concentric virtual worlds. None of these safeguards inspire confidence.
First, as Roman V. Yampolskiy, Associate Professor of Computer Engineering and Computer Science at the University of Louisville, noted, every
security measure ever invented has eventually been circumvented. “Signatures have been faked, locks have been picked, supermax prisons had
escapes, guarded leaders have been assassinated, bank vaults have been cleaned out, laws have been bypassed…passwords have been brute-
forced, networks have been penetrated, computers have been hacked, biometric systems have been spoofed, credit cards have been cloned,
cryptocurrencies have been double spent…CAPTCHAs have been cracked, cryptographic protocols have been broken,” he wrote. “Millennia
long history of humanity contains millions of examples of attempts to develop technological and logistical solutions to increase safety and
security, yet not a single example exists which has not eventually failed.” Any safeguards would eventually be circumvented either by human
hackers, or acts of nature (for example, the tsunami that caused the radiation leak at the Fukushima nuclear reactor). Whilst a certain failure
rate may be acceptable in an enterprise where the stakes are lower, it’s unacceptable where a single leak might be all the AI needs to end
humanity’s dominance. Then, there’s the likelihood that any safeguards would be circumvented by the AI itself. Indeed, any security measures
our best computer scientists could devise would be laughable to a superintelligence, which by definition would be many times smarter than any
human. Imagine a human being held captive by chimpanzees. Suppose that these are unusually intelligent chimpanzees that use state-of-the-
art monkey technology to keep the human prisoner – perhaps they manage to construct a rudimentary cage out of sticks. Is there any doubt
that the human would eventually escape in ways the chimpanzees couldn’t possibly think of? Perhaps he’d dig a hole under the cage, or
fashion tools out of nearby objects to help him, or remove the bars of the cage and use them as weapons, or make a fire that burns down a
portion of the cage. One way or another, it would only be a matter of time before he found a way free. A superintelligence would be smarter
than humans in a similar fashion. In his article “Leakproofing the Singularity: Artificial Intelligence Confinement Problem,” Yampolskiy suggested
that a superintelligence could easily manipulate a human guard into letting it escape. It could target a guard’s weaknesses, offering him power
or immortality, or promising a cure for a loved one with a terminal disease. It could also find a bug in the system and exploit it (something even
human hackers do all the time). Or pretend to malfunction, and then escape when its jailors lower safeguards to investigate. Or it could escape
in ways humans aren’t even aware are possible. Insulated from the outside world, Bostrom suggested, it might find a way to generate radio
waves by shuffling the electrons in its circuitry in particular patterns. Of course, these are just the methods our puny human brains can imagine
– an entity thousands of times smarter would be able to come up with a lot more. Effective safeguards are built around power – they’re not
possible against a being that’s smarter, and therefore more powerful, than us. Thinking we could contain something like that would be hubris.
At a talk at MIT, Elon Musk compared developing AI to summoning a demon. “You know all the stories where there’s a guy with the pentagram
and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.” How do you cage a god? The short answer to that question is “You can’t.”

The Need for a Treaty

The development of AGI and superintelligence may be approaching. The median year by which leading computer scientists predict it will happen is 2040. While this might seem far off, we need to
start preparing for it now. “If a superior alien civilization sent us a text message saying, ‘We’ll arrive in a few decades,’ would we just
reply, ‘Ok, call us when you get here – we’ll leave the lights on?’” asked Stephen Hawking in an article co-written with Stuart Russell of the
University of California, Berkeley, and Max Tegmark and Frank Wilczek of MIT. “Probably not – but this is more or less what is happening with AI.” AI is a
technology no major power can afford to ignore if it wants to advance in the 21st century. The U.S. and China in particular are
pouring vast resources into AI research in both the public and private sectors in hopes of achieving the next breakthrough. At
the same time however, AI presents a real existential threat to humanity. All other existential threats,
from global warming to weapons of mass destruction, have some sort of treaty in place to manage the
associated risks. It’s time we had one for AI too. It’s vital we work on establishing an international
framework now, in what are relatively early days, before the AI industry develops too far, before we become too used to its
benefits, before associated vested interests and lobby groups gain too much power. The difficulties in addressing the global warming crisis show
the tendency of humans to inertia, even when faced with a proven existential threat. “[T]he human race might easily permit itself to drift into a
position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions,” runs a passage quoted by Bill Joy, co-founder of Sun Microsystems, in his essay “Why the Future Doesn’t Need Us.” At that point, the passage continues, “People won’t be able to just turn
the machines off, because they will be so dependent on them that turning them off would amount to suicide.” When I put the idea of an AI
limitation treaty to top computer scientists, many were skeptical, some even fatalistic. “A machine that is ‘smarter than humans across the
board’ would be worth something comparable to world GDP, approximately $100 trillion,” said Russell. “It’s not going to be easy to stop people
building that.” “[U]nlike [with] nuclear weapons,” said Steve Omohundro, formerly professor of computer science at the University
of Illinois at Urbana-Champaign, and now President of Self-Aware Systems, a think tank promoting the safe uses of AI, “it is not easy to
verify compliance with any [AI] agreement given today’s technologies.” Yet an effort must be made. The growing field
of AI offers vast potential, both for human flourishing and for human extinction. We have no excuse for not trying to stave off
the latter. There seem to be a few conclusions that can be drawn:

1) A superintelligence cannot be tamed or caged.
2) An AGI capable of recursive self-improvement would soon become a superintelligence.
3) Even without recursive self-improvement, an AGI might pose an existential threat simply because in addition to being able to perform any task at a human level, it would also be able to do things only computers can do.

The line, if one is to be drawn in an AI limitation treaty, then, should be at the AGI level: no one should be allowed to
develop an AI that’s as smart as or smarter than a human across the board, nor one that could independently become so. Research into ANI –
better versions of the AI we use today – can continue unimpeded. The important difference is domain specificity; an ANI cannot be used for
problems beyond a narrow scope, whilst an AGI can be used for anything. “A system is domain specific if it cannot be switched to a different
domain without significant redesigning effort,” explained Yampolskiy. “Deep Blue [IBM’s chess AI] cannot be used to sort mail. Watson [IBM’s
Jeopardy! AI] cannot drive cars. An AGI (by definition) would be capable of switching domains.” What might such a treaty based on these principles look like?

Possible Provisions

An international AI control framework could contain some of the same elements as control frameworks for weapons of mass destruction:

1) Commitments not to pursue that kind of technology, or to abet anyone in pursuing such technology, or to allow anyone to do so
2) An information and technology-sharing channel between signatories who abide by the provisions
3) An international organization to monitor developments
4) An inspections regime to catch cheaters
5) Recourse to the UN Security Council for punishment of anyone who breaches these rules
6) A mechanism to remove and dispose of any forbidden material

The
commitments and information and technology sharing are self-explanatory enough. Suffice it to say that states would have to commit not just to eschewing research that may result in AGI themselves; they would also have to commit to ensuring that private entities within their borders do so. This
will obviously be difficult. The fruits of AGI research are likely lucrative, and corporations, in particular, have great incentives to pursue it, even
illegally. James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, points to many instances of
irresponsible corporate behavior driven by greed. “Corporations behave like psychopaths turned loose on society,” he told me. “I’m thinking of
Union Carbide (Bhopal), Ford (the exploding Pinto), Enron (causing rolling blackouts in California). Facebook, Google, IBM, [and] Baidu are no
more upright than these corporations. I don’t expect them…to temper innovation with stewardship.” States will have to commit to
strict monitoring of AI research domestically, and to imposing penalties for any research that could lead to AGI that are harsh enough to
outweigh any potential benefits. When it comes to the monitoring of AI developments, this can be successfully done to an extent. “Although
several authors make the point that AGI is much easier to develop unnoticed than something like nuclear weapons,” wrote Yampolskiy and Kaj
Sotala of the Machine Intelligence Research Institute, “cutting-edge high-tech research does tend to require major investments which might plausibly be detected even by less elaborate surveillance efforts.” “[I]t would not be
too difficult to identify capable individuals with a serious long-standing interest in artificial general intelligence research,” wrote Bostrom in
Superintelligence: Paths, Dangers, Strategies. “Such individuals usually leave visible trails. They may have published academic papers, posted on
internet forums, or earned degrees from leading computer science departments. They may also have had communications with other AI
researchers, allowing them to be identified by mapping the social graph.” Thus, researchers working on projects that may
result in an AGI can be monitored. Perhaps an international agency can be established to promote safe AI practices and to carry
out inspections, similar to what the International Atomic Energy Agency does for nuclear material. The specifics would of course have to
be decided by experts. As G. S. Wilson, Deputy Director of the Global Catastrophic Risk Institute, proposed, a body of experts could
determine what constitutes a “reasonable level of concern” involving AGI or other possibly dangerous research. Such a treaty would of course
raise concerns that it’s stifling innovation. These concerns are justified. AI innovations would be significantly constrained by these measures,
innovations that could improve knowledge, save lives, raise our standard of living to an unprecedented degree. Yet the very real risk of human
extinction makes it wiser to forfeit some of these benefits.

Shortcomings

The shortcomings of such a treaty are obvious. Will
some clandestine AGI-related research elude even the most vigilant watchdogs? Yes, in the same way that a terrorist
somewhere could probably build a dirty nuclear bomb without the authorities’ knowledge. But that doesn’t mean nuclear control treaties
aren’t worthwhile. Will some countries cheat? Certainly, and any treaty is only as good as its enforcement.

Continuous indictments and attribution grounded in hard evidence are key to establish
cyber deterrence
Lewis 16. James Andrew Lewis is the Senior Vice President and Director of the CSIS Technology Policy
Program, 3-25-2016, "Indictments, Countermeasures, and Deterrence," Center for Strategic and
International Studies, https://www.csis.org/analysis/indictments-countermeasures-and-deterrence -
MBA AM
“What counts are the political and military consequences of a violation…since these alone will determine whether or not the violator stands to gain in the end.”

Fred Iklé, “After Detection, What?” 1961

The announcement of the indictment of seven Iranian hackers is part of a larger effort to reshape adversary thinking about the costs of attacking the United States. There has been a sequence of linked events - the PLA indictments, the response to Sony, the threat of sanctions before the Xi-Obama summit, and now these indictments. The effect is to push back against foreign-state hackers. These actions create consequences for malicious actions in cyberspace (recommended in a 2013 CSIS Report). When there is no consequence or penalty, there is no incentive to stop. Cyberspace has been largely penalty-free until recently. The indictments signal to our opponents that attacking the United States can have consequences.
Iran rapidly developed its cyber-attacks capabilities after its “Green Revolution,” where political opponents of the regime used the internet to organize protests and
dissent. These attack capabilities let them manage and then eliminate the political threat created by the internet. In developing these capabilities, Iran was helped
by Russia, or at least by Russian hackers with the approval of the Russian state. There are no freelance hackers in Iran - the attackers are funded and controlled by
the Iranian Revolutionary Guard. Iran routinely probes American critical infrastructure networks to locate
vulnerabilities, has launched massive denial of service attacks against leading U.S. banks (apparently to protest sanctions) and disrupted networks and data
at the Sands Casino in an effort to intimidate and punish its owner. What Iran did to Aramco in 2012 shows the kind of damage they could do in the United States, if they thought they could get away with it.
The United States has been aware of Iranian activities for several years, but was hesitant to say so. When the Department of Homeland Security (DHS) briefed
critical infrastructure companies on how their networks were being probed, they were denied permission by the Intelligence Community, for reasons best known to
itself, to even say the word “Iran” in the briefing even though it was an open secret. Actions like this (the DHS briefing was reported in the press) inspire more
ridicule than caution in opponents. The cause of the self-imposed restraint was probably a desire not to put the nuclear talks at risk. This may explain why
Sony received presidential attention while Sands did not.
This is not name-and-shame.
Iran’s leaders do not burst into tears when they are named. Naming must be
accompanied by consequences or the threat of consequences if it is to have effect. The Department of Justice’s approach
has been to develop the case as if it was really going to court. They do a lot of work on evidence. We will
probably never see these people stand trial (as with the PLA indictments) but Justice prepares as if this was a possibility. This slows any response - as does the need
to find companies willing to go public about being the victim of hacking - but the wealth of evidence in an indictment is compelling,
perhaps even frightening to opponents.

The latest indictments send a powerful message. The responses to China, North Korea and now Iran say two things: attackers are no longer invisible and there will be consequences for their actions. This message reshapes opponent thinking about the risk and potential costs of cyber actions against the United States. The effect of this is not always clear to American commentators - diplomacy and espionage are colored in half tones, not the black-and-white, clear distinction of popular culture. The most obvious evidence of change is that the PLA indictments, widely questioned when they were announced, contributed significantly to the Chinese decision to agree to refrain from commercial cyber-spying.

The United States faces four major opponents in cyberspace and the name missing from this list is Russia. Russian hackers,
presumably acting with the permission of the Russian state, are the most energetic and skilled in committing financial crimes against Wall Street and other financial
centers. There have been indictments against Russian hackers and some, foolish enough to travel outside of Russia, have been arrested, but these have not stopped
cybercrime, suggesting that Russia has a higher tolerance for risk and a greater willingness (or desire) to flout the United States. This has implications for future actions - the United States cannot rest on its laurels and stop with these actions against China, North Korea and Iran, but must continue to create consequences for hacking incidents.

We can draw lessons from these incidents about what should be the nature of any future action. Signaling to opponents that U.S.
attribution capabilities have improved markedly is a first step - the President’s 2015 State of the Union Address briefly revealed
how and why they have improved when he said “we're making sure our government integrates intelligence to combat cyber threats, just as we have done to
combat terrorism.” This means the ability to blend cyber espionage, signals intelligence and human sources gives the United States an unparalleled advantage in
identifying hackers.

The second step involves countermeasures. The United States has held a long, painful discussion of cyber-deterrence that has been
handicapped by approaching deterrence as a military problem, as if we were still in the Cold War. Complications arise from trying to identify how military force can
be used in a “proportional response” to hacking and cybercrime, where no incident would justify a military response. The most effective actions to
date in causing state attackers to recalculate risk have not depended on the Department of Defense or
Cyber Command, but on attribution, indictments and the threat of sanctions. Looking for a military solution to cyber deterrence has tied us in knots. The better response lies in countermeasures that fall below the level of the use of force. Indictments and sanctions can make opponents wary. This will wear off if not refreshed, but the latest indictments are something we should have done a long time ago to construct the rule of law in cyberspace. Indictments make clear that the Wild West days of cyberspace are (slowly) coming to an end.

Prevents nuclear cyberwar---NC3 entanglement guarantees escalation


Klare 19 [Michael T. Klare, professor emeritus of peace and world security studies at Hampshire
College, “Cyber Battles, Nuclear Outcomes? Dangerous New Pathways to Escalation,” Arms Control
Association, November 2019, armscontrol.org/act/2019-11/features/cyber-battles-nuclear-outcomes-
dangerous-new-pathways-escalation]

In January 2018, details of the Trump administration’s Nuclear Posture Review (NPR) were posted online by the Huffington
Post, provoking widespread alarm over what were viewed as dangerous shifts in U.S. nuclear policy. Arousing most concern was a call for the
acquisition of several types of low-yield nuclear weapons, a proposal viewed by many analysts as increasing the risk of nuclear weapons use.
Another initiative incorporated in the strategy document also aroused concern: the claim that an enemy
cyberattack on U.S. nuclear command, control, and communications (NC3) facilities would constitute a
“non-nuclear strategic attack” of sufficient magnitude to justify the use of nuclear weapons in
response. Under the Obama administration’s NPR report, released in April 2010, the circumstances under which the United States would
consider responding to non-nuclear attacks with nuclear weapons were said to be few. “The United States will continue to…reduce the role of
nuclear weapons in deterring non-nuclear attacks,” the report stated. Although little was said about what sort of non-nuclear attacks might be
deemed severe enough to justify a nuclear response, cyberstrikes were not identified as one of these. The
2018 NPR report, however,
portrayed a very different environment, one in which nuclear combat is seen as increasingly possible and
in which non-nuclear strategic threats, especially in cyberspace, were viewed as sufficiently menacing to
justify a nuclear response. Speaking of Russian technological progress, for example, the draft version of the Trump administration’s
NPR report stated, “To…correct any Russian misperceptions of advantage, the president will have an expanding range of
limited and graduated [nuclear] options to credibly deter Russian nuclear or non-nuclear strategic
attacks, which could now include attacks against U.S. NC3, in space and cyberspace.”1 The notion that a
cyberattack on U.S. digital systems, even those used for nuclear weapons, would constitute sufficient grounds to
launch a nuclear attack was seen by many observers as a dangerous shift in policy, greatly increasing the risk of accidental
or inadvertent nuclear escalation in a crisis. “The entire broadening of the landscape for nuclear deterrence is a very
fundamental step in the wrong direction,” said former Secretary of Energy Ernest Moniz. “I think the idea of nuclear deterrence of cyberattacks,
broadly, certainly does not make any sense.”2 Despite such admonitions, the Pentagon reaffirmed its views on the links between cyberattacks
and nuclear weapons use when it released the final version of the NPR report in February 2018. The official text now states that the president
must possess a spectrum of nuclear weapons with which to respond to “attacks against U.S. NC3,” and it identifies cyberattacks as one form of
non-nuclear strategic warfare that could trigger a nuclear response. That cyberwarfare had risen to this level of threat, the 2018 NPR report
indicated, was a product of the enhanced cybercapabilities of potential adversaries and of the creeping obsolescence of many existing U.S. NC3
systems. To overcome these vulnerabilities, it called for substantial investment in an upgraded NC3 infrastructure. Not mentioned, however,
were extensive U.S. efforts to employ cybertools to infiltrate and potentially incapacitate the NC3 systems of likely adversaries, including Russia,
China, and North Korea. For the past several years, the U.S. Department of Defense has been exploring how it could employ its own very robust
cyberattack capabilities to compromise or destroy enemy missiles from such states as North Korea before they can be fired, a strategy
sometimes called “left of launch.”3 Russia and China can assume, on this basis, that their own launch facilities are being probed for such
vulnerabilities, presumably leading them to adopt escalatory policies such as those espoused in the 2018 NPR report. Wherever one looks,
therefore, the links between cyberwar and nuclear war are growing.

The Nuclear-Cyber Connection

These links exist
because the NC3 systems of the United States and other nuclear-armed states are heavily dependent on
computers and other digital processors for virtually every aspect of their operation and because those systems
are highly vulnerable to cyberattack. Every nuclear force is composed, most basically, of weapons, early-warning radars, launch facilities, and
the top officials, usually presidents or prime ministers, empowered to initiate a nuclear exchange. Connecting them all, however, is an extended
network of communications and data-processing systems, all reliant on cyberspace. Warning systems, ground- and space-based, must
constantly watch for and analyze possible enemy missile launches. Data
on actual threats must rapidly be communicated
to decision-makers, who must then weigh possible responses and communicate chosen outcomes to launch facilities,
which in turn must provide attack vectors to delivery systems. All of this involves operations in cyberspace, and it is in this domain that great
power rivals seek vulnerabilities to exploit in a constant struggle for advantage. The use of cyberspace to gain an advantage over adversaries
takes many forms and is not always aimed at nuclear systems. China has been accused of engaging in widespread cyberespionage to steal
technical secrets from U.S. firms for economic and military advantages. Russia has been accused, most extensively in the Robert Mueller report,
of exploiting cyberspace to interfere in the 2016 U.S. presidential election. Nonstate actors, including terrorist groups such as al Qaeda and the
Islamic State group, have used the internet for recruiting combatants and spreading fear. Criminal groups, including some thought to be allied
with state actors, such as North Korea, have used cyberspace to extort money from banks, municipalities, and individuals.4 Attacks such as
these occupy most of the time and attention of civilian and military cybersecurity organizations that attempt to thwart such attacks. Yet for
those who worry about strategic stability and the risks of nuclear escalation, it is the threat of cyberattacks on NC3 systems
that provokes the greatest concern. This concern stems from the fact that, despite the immense effort devoted to protecting NC3
systems from cyberattack, no enterprise that relies so extensively on computers and cyberspace can be made 100 percent invulnerable to
attack. This is so because such systems employ many devices and operating systems of various origins and vintages, most incorporating
numerous software updates and “patches” over time, offering multiple vectors for attack. Electronic components can also be modified by
hostile actors during production, transit, or insertion; and the
whole system itself is dependent to a considerable degree
on the electrical grid, which itself is vulnerable to cyberattack and is far less protected. Experienced
“cyberwarriors” of every major power have been working for years to probe for weaknesses in these systems and in many cases have devised
cyberweapons, typically, malicious software (malware) and computer viruses, to exploit those weaknesses for military advantage.5 Although
activity in cyberspace is much more difficult to detect and track than conventional military operations, enough information has become public
to indicate that the major
nuclear powers, notably China, Russia, and the United States, along with such secondary
powers as Iran and North Korea, have
established extensive cyberwarfare capabilities and engage in offensive
cyberoperations on a regular basis, often aimed at critical military infrastructure. “Cyberspace is a contested
environment where we are in constant contact with adversaries,” General Paul M. Nakasone, commander of the U.S. Cyber Command
(Cybercom), told the Senate Armed Services Committee in February 2019. “We
see near-peer competitors [China and Russia]
conducting sustained campaigns below the level of armed conflict to erode American strength and gain
strategic advantage.” Although eager to speak of adversary threats to U.S. interests, Nakasone was noticeably but not surprisingly
reluctant to say much about U.S. offensive operations in cyberspace. He acknowledged, however, that Cybercom took such action to disrupt
possible Russian interference in the 2018 midterm elections. “We created a persistent presence in cyberspace to monitor adversary actions and
crafted tools and tactics to frustrate their efforts,” he testified in February. According to press accounts, this included a cyberattack aimed at
paralyzing the Internet Research Agency, a “troll farm” in St. Petersburg said to have been deeply involved in generating disruptive propaganda
during the 2016 presidential elections.6 Other press investigations have disclosed two other offensive operations undertaken by the United
States. One called “Olympic Games” was intended to disrupt Iran’s drive to increase its uranium-enrichment capacity by sabotaging the
centrifuges used in the process by infecting them with the so-called Stuxnet virus. Another, a “left of launch” effort, was intended to cause malfunctions in North Korean missile tests.7 Although not aimed at either of the United States’ principal nuclear adversaries, those two attacks
demonstrated a willingness and capacity to conduct cyberattacks on the nuclear infrastructure of other states. Efforts by strategic rivals of the
United States to infiltrate and eventually degrade U.S. nuclear infrastructure are far less documented but thought to be no less prevalent.
Russia, for example, is believed to have planted malware in the U.S. electrical utility grid, possibly with
the intent of cutting off the flow of electricity to critical NC3 facilities in the event of a major crisis.8 Indeed,
every major power, including the United States, is believed to have crafted cyberweapons aimed at critical NC3 components and to have
implanted malware in enemy systems for potential use in some future confrontation. Pathways to Escalation Knowing that the NC3 systems of
the major powers are constantly being probed for weaknesses and probably infested with malware designed to be activated in a crisis, what
does this say about the risks of escalation from a nonkinetic battle, that is, one fought without traditional weaponry, to a kinetic one, at first
using conventional weapons and then, potentially, nuclear ones? None of this can be predicted in advance, but those analysts who have studied
the subject worry about the emergence of dangerous new pathways for escalation. Indeed, several such scenarios have been identified.9 The
first and possibly most dangerous path to escalation would arise from the early use of cyberweapons in a
great power crisis to paralyze the vital command, control, and communications capabilities of an
adversary, many of which serve nuclear and conventional forces. In the “fog of war” that would naturally ensue
from such an encounter, the recipient of such an attack might fear more punishing follow-up kinetic attacks,
possibly including the use of nuclear weapons, and, fearing the loss of its own arsenal, launch its weapons
immediately. This might occur, for example, in a confrontation between NATO and Russian forces in east and central Europe or between
U.S. and Chinese forces in the Asia-Pacific region. Speaking of a possible confrontation in Europe, for example, James N. Miller Jr. and Richard
Fontaine wrote that “both
sides would have overwhelming incentives to go early with offensive cyber and
counter-space capabilities to negate the other side’s military capabilities or advantages.” If these early attacks
succeeded, “it could result in huge military and coercive advantage for the attacker.” This might induce the recipient of such attacks to back
down, affording its rival a major victory at very low cost. Alternatively, however, the recipient might view the attacks on its critical command,
control, and communications infrastructure as the prelude to a full-scale attack aimed at neutralizing its nuclear capabilities and choose to
strike first. “It is worth considering,” Miller and Fontaine concluded, “how even
a very limited attack or incident could set
both sides on a slippery slope to rapid escalation.”10 What makes the insertion of latent malware in an
adversary’s NC3 systems so dangerous is that it may not even need to be activated to increase the risk of
nuclear escalation. If a nuclear-armed state comes to believe that its critical systems are infested with
enemy malware, its leaders might not trust the information provided by its early-warning systems in a crisis and might misconstrue
the nature of an enemy attack, leading them to overreact and possibly launch their nuclear weapons out of fear they
are at risk of a preemptive strike. “The uncertainty caused by the unique character of a cyber threat could jeopardize the credibility
of the nuclear deterrent and undermine strategic stability in ways that advances in nuclear and conventional weapons do not,” Page O.
Stoutland and Samantha Pitts-Kiefer wrote in a 2018 paper for the Nuclear Threat Initiative. “[T]he introduction of a flaw or malicious code into
nuclear weapons through the supply chain that compromises the effectiveness of those weapons could lead to a lack of confidence in the
nuclear deterrent,” undermining strategic stability.11
Without confidence in the reliability of its nuclear weapons
infrastructure, a nuclear-armed state may misinterpret confusing signals from its early-warning systems
and, fearing the worst, launch its own nuclear weapons rather than lose them to an enemy’s first strike.
This makes the scenario proffered in the 2018 NPR report, of a nuclear response to an enemy cyberattack, that much more alarming.
Externally---empowering cyber-defense stops an impending surge of attacks targeting
critical infrastructure---including the grid, nuclear power, and dams
Eoyang et al. 18, Mieke Eoyang, Vice President for the National Security Program and Chairperson of
the Cyber Enforcement Initiative at Third Way; Allison Peters, Deputy Director of the National Security
Program at Third Way; Ishan Mehta, Former Policy Advisor, National Security Program; Brandon
Gaskew, National Security Fellow, 2018-2019, “To Catch a Hacker: Toward a comprehensive strategy to
identify, pursue, and punish malicious cyber actors,” Third Way, 10/29/18,
https://www.thirdway.org/report/to-catch-a-hacker-toward-a-comprehensive-strategy-to-identify-
pursue-and-punish-malicious-cyber-actors

In order to close the cyber enforcement gap, we argue for a comprehensive enforcement strategy that
makes a fundamental rebalance in US cybersecurity policies: from a heavy focus on building better
cyber defenses against intrusion to also waging a more robust effort at going after human attackers. We call for
ten US policy actions that could form the contours of a comprehensive enforcement strategy to better identify, pursue and bring to
justice malicious cyber actors that include building up law enforcement, enhancing diplomatic efforts, and developing a
measurable strategic plan to do so. This rebalance can only be achieved if we increase the emphasis on, and resources
in, US cybersecurity efforts to include a greater focus on identifying, stopping, and punishing the human attacker. This means:
Shedding a blame-the-victim mentality that drives the defensive approach in favor of one of shared responsibility that invigorates a catch-the-
hacker approach, and Creating
a more balanced approach that places more emphasis on law enforcement and
diplomacy to prevent an overreliance on the military. SamSam is only one of thousands of attacks affecting Americans, and it
is just a matter of time before another malicious actor aims at bigger targets for reasons far more
nefarious than SamSam. While system and network owners and operators have obligations to provide the
best security they can, we have seen time after time that a determined attacker will eventually get
through. By putting the human attacker in the crosshairs of America’s cybersecurity efforts we can
instead raise the costs of their actions to not only bring attackers to justice, but also to deter future
attacks—whether they come from criminals, organizations, or hostile governments. The Enforcement Gap and the
Burgeoning Cybercrime Wave Calculating the scope of the cyber enforcement gap is a challenging if not impossible task due to the lack of
comprehensive public data across agencies. Based on our analysis of the publicly available data that does exist from
federal, state, and local sources, we estimate the chance of arresting a cybercriminal is less than 1% of the total
number of malicious cyber incidents reported annually to the federal government. We define this enforcement rate as the
ratio of arrests to the number of incidents reported, as data on indictments and prosecutions is not consistently reported at all levels. In other
words, this enforcement rate may be optimistic as arrests do not mean conviction. [[IMAGE OMITTED]] By
comparison, the clearance rate for property crimes was approximately 18% and for violent crimes 46%,
according to the Federal Bureau of Investigation’s (FBI) Uniform Crime Report (UCR) for 2016. The clearance rate is the number of
cases where at least one person has been arrested, charged with the commission of an offense, and
referred for prosecution.12 Those numbers, compared to a less than 1% rate for arrests alone for computer crimes, represent a drastic difference in the rate of enforcement. What happens when there is a
criminal, terrorist, or other malicious actor engaging in destabilizing activity in which the likelihood of
getting caught and punished is close to zero? In this section, we lay out some of the dimensions of the cybercrime wave in the
United States and globally. The burgeoning cybercrime wave is the result of both the ubiquity of technology and the one-sided nature of our
defenses: a reliance on building systems that are harder and harder to breach and training lay users to be harder and harder to fool, while facing hackers who are harder and harder to catch. The ubiquity of technology means every
critical infrastructure sector in the
United States—from nuclear power plants to water facilities— utilizes some form of computer-enabled
system for their operations that, if attacked successfully, could have devastating impacts on Americans. That
is why the US Department of Treasury has designated cybersecurity incidents as one of the biggest threats to the
stability of the entire US financial system.13 Nearly every US citizen’s personal, financial, and sensitive
information is stored on a connected device in some form. There are now more active mobile phones, which store
sensitive information on them, than there are people on the planet, and Cisco predicts 27.1 billion connected devices by 2021, up from 17.1 billion in 2016, or roughly 3.5 per person.14 Each device is potentially an attack vector that a malicious actor could
exploit. Each device has applications, operating systems, and network connections, which all have potential vulnerabilities for an attacker to
exploit. And as we discovered and noted above, the effort to catch malicious cyber actors is
uncoordinated, under-resourced, and under-prioritized— just a handful of reasons why those actors are rarely caught.
The Cybercrime Wave There’s a rising and often unseen crime wave happening in America. The FBI received 298,728 self-reported cybercrime
complaints in the United States in the year 2016 alone through its Internet Crime Complaint Center (IC3).15 Of those, as many as 193,700
cybercrimes could credibly be described as serious attempts at individual or systemic cyber breaches, including such activities as identity theft
(16,878 reported incidents), personal data breach (27,573), ransomware (2,673), and malware (2,783), according to the IC3 database.16 This is
only part of the picture, as the FBI estimates that fraud victims report only 15 percent of crime nationwide to law enforcement.17 That may
mean there are 2 million cybercrimes per year, a figure exceeding the roughly 1.4 million burglaries in a given year, if underreporting estimates are
accurate.18 The IC3 is an FBI center established in May 2000 to serve as a central hub for Internet crime victims to alert federal, state, and local
authorities to suspected criminal Internet activity.19 From 2013 to 2017, the IC3 has received over 1.4 million complaints.20 While IC3’s
methodology tabulates each individual’s complaint as a separate entry, the Verizon Data Breach Investigations Report states that there have
been over 53,000 incidents targeted at organizations.21 And America
isn’t alone. The International Police Organization (INTERPOL),
the multilateral organization that facilitates global law enforcement cooperation to fight international crime, states
that cybercrime is
one of the fastest growing areas of crime.22 [[BOX BEGINS]] What do we mean by “cybercrime?” While this paper refers
to the more general term “malicious cyber activity” in certain places, or “cyberattack” for high-impact incidents, we’re primarily focused on
cybercrime or crimes that use or target computer networks. This includes data theft, fraud, distributed denial-of-service (DDoS) attacks, worms,
ransomware, and viruses.23 We recognize the concerns raised with the term “cyberattack,”24 but considering its widespread adoption and lack
of global consensus on overall terminology, we continue its use in certain places to describe significant cyber incidents. Cybercriminals come in
all shapes and sizes. The FBI assesses that these threats can come from attackers with a host of different motivations and affiliations.25 High-
level intrusions usually stem from attackers affiliated with global organized crime syndicates or state-sponsored
attackers.26 Hacker-rings or lone actors typically run mid-level identity fraud or carding schemes for financial gain.27 Finally, privacy crimes,
such as doxing, are targeted crimes usually committed by lone actors with malicious personal or political motivation.28 However, that
landscape is changing fast. Nation-states like North Korea have attacked systems for a variety of reasons. Sony was
hacked to prevent reputational harm, the Bank of Bangladesh heist was for financial gain, and the WannaCry attack was motivated by a desire
to cause economic chaos.29 Terrorists have
also continued to use the Internet as a key operational tool,
including launching malicious cyberattacks against targets in the United States.30 Many of these crimes threaten the
stability of systems, either intentionally or through the way they spread. There are also a few categories of
malicious cyber activity that, while extremely serious, do not threaten to disrupt the stability of systems. Though such crimes are critically important, our recommendations will not focus on what the Department of Justice refers to as “cyber-enabled crimes threatening personal privacy,” such as
cyber-enabled stalking, non-consensual pornography, and cyber-enabled harassment.31 The recommendations also do not cover issues related
to child pornography. These devastating crimes involve potentially very different motivations than other forms of cybercrime and deserve
dedicated research related to government responses to these crimes. [[BOX
ENDS]] The rewards from a successful
cyberattack are high, and the costs (in terms of risk) low, which has incentivized malicious actors to develop
more effective hacking techniques. Some examples of those techniques and their costs are as follows: Ransomware attacks, where an
attacker encrypts the victim’s data and typically only frees it when a ransom is paid, doubled in frequency between 2016 and 2017 with
incidents affecting a diversity of targets and disrupting the operations of public services and large corporations around the country and globe.32
Malware attacks on mobile devices have now surged with an increase in 54% globally from 2016 to 2017.33 Software update supply chain
attacks in which malware is implanted into software packages to infect computer systems has increased by 200 percent globally in 2017 from
the year prior.34 The Ponemon Institute estimates the average total cost of a data breach at $3.62million.35 IC3 calculated that reported
crimes, such as identity theft and online fraud, cost victims more than $1.42 billion.36 In 2016, the White House Council of Economic Advisors estimated that malicious cyber activity costs the United States economy between $57 billion and $109 billion per year.37 Other
estimates put the number as high as $3 trillion for the global economy annually.38 The targets that malicious cyber actors are hitting with
their attacks span a wide spectrum of sectors, with healthcare, the public sector, accommodation, and manufacturing bearing the brunt of security incidents and data breaches.39 For example, the Mirai Botnet attack in October 2016 led to some of the world’s most
popular websites going offline for up to twelve hours—including Netflix, Twitter, Reddit, PayPal, The New York Times, and The Wall Street
Journal—costing these companies millions of dollars in lost revenue.40 Criminal use of technology is creating entirely new categories of crime
that never existed before the digital age.41 It is ending the notion of “good neighborhoods” and “bad neighborhoods” when it comes to crime
because cyberspace is both ubiquitous and borderless. New types of crime from carding schemes, to ransomware, to crypto mining have made
investigations even more complex where the victim and perpetrator may be unknown to each other and may be in different countries.
Technologies like Virtual Private Networks (VPNs), the Tor browser,42 and cryptocurrencies like Bitcoin lend anonymity, or at
least perceived anonymity, to the malicious cyber actor. These technologies also help make attacks more effective and
easier to execute. Tools created using machine learning allow malicious cyber actors to perform reconnaissance, or information-gathering
efforts, more efficiently and to a much higher degree of accuracy. For attackers, the more information they have about the systems and the
operators of the system, the more effective the attack. Attackers can assess information regarding potential vulnerabilities, unpatched systems, and exploits much more quickly through the advanced technology available to them. Marketplaces and discussion forums on the dark web have
made buying and using cyber-exploits as easy as shopping for shoes online.43 Cybercrime has hit victims across the United States in
every single state and territory. California, Florida, Texas, New York, and Pennsylvania—states with very different demographics,
corporate representation, and cybersecurity laws—make up the highest number of victims.44 These states have been hit by
devastating economic losses as a result of the cybercrime wave.45 Cybercrime’s impact is so broad that it has security
implications for the entire nation and globe. A single incident like the WannaCry cyberattack in 2017 affected more than 200,000
computer systems in 150 countries and potentially cost the world economy $4 billion.46 Malicious cyber actors have attacked health care
systems and critical infrastructure in the United States, such as Industrial Control Systems ( ICS), the electric grid, and dams.
A successful attack executed on these systems can threaten life and property and cause large-scale destruction. In
March of this year, the Department of Homeland Security (DHS) and the FBI issued an alert that the Russian government was
targeting the electric grid and other critical energy systems.47 In 2015, malicious actors managed to
access the ICS software at a water treatment plant and tampered with the controls related to water flow and
the amount of chemicals used to treat the water.48 Beyond financial harm, some cyberattackers, at the
behest of nation states, are doing real damage to US national security. US defense contractors have become
targets for adversaries seeking to steal national security secrets. Recently, Chinese government hackers infiltrated the
network of a US Navy contractor, stealing data on undersea warfare and secret plans for US submarine anti-ship
missiles.49 China and others are hacking US companies to steal intellectual property, at an estimated cost of $225 billion
to $600 billion annually.50 Hostile nations are also using cyber operations to affect US national security
personnel directly. In 2014 and 2015, the Office of Personnel Management51 suffered a massive data breach exposing the sensitive
information of up to 22 million people, including personal information in their security clearance forms. And, of greatest concern, Russia’s
malicious cyber activities aimed at trying to affect the outcome of the 2016 US presidential election have been well-documented in indictment
after indictment.52 The cybercrime wave is so big it should be setting off alarm bells at every level of law
enforcement. And yet, the response from the enforcement community is a drop in the bucket compared to the sheer volume of crimes
occurring. The Enforcement Gap We know how big the problem is, but assessing the adequacy of the response to the problem is tougher. Not
only are we in a cybercrime wave, but we also have a hidden enforcement crisis. Third Way’s analysis
estimates that the enforcement rate for reported incidents of the IC3 database is 0.3%. Taking into
account that cybercrime victims often do not report cases, the effective enforcement rate estimate may
be closer to 0.05%. [[BOX BEGINS]] How did we calculate the cyber enforcement rate? There were significant challenges to
estimating an aggregate cyber enforcement rate for the purpose of this research. Most significantly, there are currently no public databases
which specifically report enforcement metrics on computer crime across all localities in the same way that exists for other categories of crime.
We analyzed close to two dozen public and private databases to calculate the cyber enforcement rate. There were numerous discrepancies and
inconsistencies across the different datasets that estimated the number of cyber incidents. Additionally, none of the datasets had
comprehensive attribution information. To calculate the enforcement rate, we therefore decided to use Department of Justice (DOJ), FBI, and
Secret Service self-reported numbers on incidents and arrests. This data is not perfect and includes categories of crimes that we do not directly
address in our recommendations, such as privacy crimes, and the number of incidents relies on reports by victims to the federal government.
Yet, these are the best datasets publicly available that give a picture of the enforcement gap rate for the United States. This is precisely why we
call for better comprehensive reporting in our recommendations later in this report. The FBI IC3 received 298,728 complaints in 2016.53 By
analyzing a variety of official US government reporting databases, we determined that there were fewer than 1,000 arrests that year between
federal, state, and local law enforcement agencies for reported cybercrimes. Specifically, to determine the number of enforcement actions we
looked at various reports of the number of cybercrime arrests. In 2014 and 2015, through the FBI’s Uniform Crime Reporting (UCR) Program,
the Bureau reported the number of individuals arrested for “criminal computer intrusion” by each FBI field office. The total number was 105 for
2014 and 49 for 2015.54 In 2016, UCR transferred to the National Incident-Based Reporting System (NIBRS) and no longer separates out the
arrest numbers for computer crime in their reporting. However, in 2016, the arrest numbers for computer crime by state and local law
enforcement were included through NIBRS for the first time. The number reported under “hacking/computer invasion” crime was 581 for 2016,
the most recent year reported, which includes reporting from 6,849 state and local agencies.55 The Secret Service reported 251 cybercrime
arrests in 2016.56 If we assume the FBI field offices made a similar number of arrests in 2016 as in each of the previous two years, we arrive at a total of federal, state, and local computer crime arrests between 871 and 927 for 2016, barring a significant increase in federal
arrests. To determine a denominator, we looked at various reports that tabulate the number of cyber incidents and cybercrime. The FBI IC3
report for 2016 notes 298,728 complaints received that year.57 Based on these numbers, we estimate the enforcement rate at 0.31%.
Considering only one in six victims of cybercrime report to law enforcement,58 the effective enforcement rate estimate may be closer to 0.05%.
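For concreteness, the box’s arithmetic can be sketched in a few lines. The figures are taken from the report itself; the variable names and the choice of the upper-bound arrest estimate are assumptions of this sketch:

```python
# Sketch of the enforcement-rate arithmetic described above.
# Figures come from the report; the variable names and the use of
# the upper-bound arrest estimate (927) are assumptions of this sketch.

ic3_complaints_2016 = 298_728   # FBI IC3 complaints received in 2016
arrests_2016 = 927              # upper end of the 871-927 arrest estimate
underreport_factor = 6          # roughly one in six victims report to law enforcement

enforcement_rate = arrests_2016 / ic3_complaints_2016
print(f"{enforcement_rate:.2%}")   # -> 0.31%

# Scaling the denominator for underreporting yields the "effective" rate:
effective_rate = arrests_2016 / (ic3_complaints_2016 * underreport_factor)
print(f"{effective_rate:.2%}")     # -> 0.05%
```

Using the lower-bound estimate of 871 arrests changes the headline rate only marginally, to about 0.29%.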
The number of convictions reported by the FBI alone is even lower than the number of arrests used to calculate the cyber enforcement rate.
The only DOJ document that Third Way has found that reports prosecution numbers is the FBI Congressional Budget Justification document,
which lists them as “Internet Fraud.” The FBI reports that using the IC3 data to develop law enforcement referrals, it only secured nine
convictions in 2016, down from nineteen cases the previous year.59 [[IMAGE
OMITTED]] While these cases are important and
meaningful in punishing cyber attackers, they represent a very small drop in a very large bucket. And the
low enforcement rate for
cybercrime has consequences. Cybercriminals are operating with near impunity compared to their real-
world counterparts. Given the increasing ease of committing these crimes and the unlikely chance of
being caught, it is no wonder that this category of crime is on the rise.60 In the face of such a small response from law
enforcement, some believe the private sector should take matters into their own hands and go on the offense. A widely-perceived enforcement
failure will lead victims eventually to say “enough is enough” and act on their own. This offensive approach aims not to deter attackers but to disrupt their capabilities, including rendering their devices useless, locking accounts, and blocking server access.61 Proponents of so-called
“hacking back” will acknowledge that the impulse comes from a recognition of an enforcement failure and a frustration about the inability to do
anything to stop the attacker.62 But hacking back exposes the counter-hacker to liability for unauthorized access to someone else’s system and for malicious action. Additionally, malicious cyber actors use proxy systems that are tough to identify, and retaliations may target
systems of innocent individuals. In a well-functioning system, victims have confidence that law enforcement is doing their best to catch the
attacker and have a reasonable chance of doing so. Furthermore, America’s enforcement gap has been largely hidden because there are no
good metrics to assess law enforcement response. The number of reported crimes is proportionally miniscule in comparison to the number of
actual crimes. Anecdotal data on high profile incidents and prosecutions do not provide a full picture of what’s at stake. The traditional crime
statistics also do not reflect the kinds of new computer enabled crimes that are happening today.63 And, the lack of clarity in how to report
crimes means that state, local, and federal agencies do not report cybercrime in a clear or consistent manner. There is a clear enforcement gap
in cybercrime that must be urgently addressed. The problem is right in front of us, but policymakers are largely not paying any attention to it.
The recent indictments against Russian and North Korean state cyber actors may be perceived as progress, but they do not address the large
number of crimes that go unnoticed.64 The lack of action, the rising costs as a result, and the apparent impunity of these malicious actors
would not be tolerated in any other domain. But it often seems like an afterthought in the realm of cyber. Closing the enforcement gap will also
require understanding the motivation of the human attacker and their relationship to foreign states or other non-state actors that might harbor
or support them. This is an essential factor in crafting an effective policy solution to compel behavioral change. There are four reactions the
state can take toward the attacker: prohibit, passive, ignore or abet, or order or conduct.65 Depending on the nature of the nation state and the cyber
attacker(s) and their motivations, the tools used to target a change in behavior of both the state and attackers will vary. It’s important to deeply
understand the nature of this relationship to employ the most effective solution once the human attacker has been identified. [[FIGURE
OMITTED]] Working with states that prohibit attacks may require increased cooperation or capacity building to be able to coordinate
efforts to bring enforcement actions against the attackers. If the state is ignoring or abetting the attacker, diplomatic pressure will need to be
brought to bear to change the state’s attitude about its complicity in the attacks. This may in some cases bring a different set of more coercive
tools to bear. Finally, if the state has direct responsibility for the attacks and is encouraging or conducting them as part of the attacking state’s
foreign policy, then the victimized nation may have to consider the full spectrum of actions available, beyond law enforcement and diplomacy,
against the attacking nations. Ultimately, if malicious cyber actors are working at the behest of nation-states to advance their objectives
through cyberattacks, it is likely to be much more difficult to punish them or to change their behavior at all. Even if you are able to do so either by
sanctioning or arresting them in another country, it is likely that the foreign government sponsor would just recruit others to take up the
banner and continue the attacks. Importantly, although there are a number of nation-states that are using cyberattacks as a tool to advance
their objectives, this does not in any way mean the United States can ignore the massive cybercrime wave that is occurring, granting impunity
to the large number of malicious cyber actors that could otherwise be identified, stopped, and punished. Regardless of whether the behavior is
the decision of an individual or the state, whether it’s the fingers on the keyboard or the ones signing the order, it is still a human whose
decision-making process can be impacted, and who can (and should) feel real consequences. Rebalancing the US Cyber Approach Given the
magnitude both of the cybercrime wave and the enforcement gap the nation faces, it’s clear that the current approach is insufficient. As the
number and intensity of cyberattacks has increased, robust efforts at cyber defense are necessary, but not nearly sufficient. A determined
attacker will get through even the most heavily defended system. Focusing on making the most secure target possible to the exclusion of a
substantial focus on also getting the attacker allows malicious actors to continue to multiply and operate with a sense of impunity. And while
there are an infinite number of vulnerabilities and a growing number of attacks, there are a finite number of attackers. To stop those attackers,
we must transform both the way we think about cybersecurity and rebalance our efforts to include a greater focus on going after the human
attacker. To be sure, there has been a growing emphasis under the Obama and Trump Administrations in going after malicious cyber actors
through law enforcement actions and imposing other types of costs to change their behavior. This includes the number of actions that have
been taken against malicious cyber actors working on behalf of adversarial nation-states. However, as the enforcement rate makes clear, these
efforts are not nearly enough. Nor have they been sufficiently resourced and given the political leadership necessary to make progress. Most of
the cybersecurity efforts are currently defensive in orientation, focused on protecting systems and networks. Building better firewalls against
attacks, creating better passwords, and educating users are all critical. But a strategy primarily developed around building impregnable cyber
walls and mistake-proof human users cannot succeed. We need to create a more robust parallel effort around how we identify, stop, and
punish the human attacker. We need to change the calculation of malicious cyber actors by balancing defense of systems with an offense
designed to stop and deter the human. It’s no surprise that thus far the American government has had a heavy focus on defending systems and
networks. This approach has been, in part, driven by a blame-the-victim mentality in cybersecurity. When a company suffers a major cyber breach, it is often hauled up before Congress and made to apologize for its lapses, its holes in security, and its failure to have the
most up-to-date defenses. To be sure, some of these companies deserve criticism for not taking proper precautions. For example, Equifax, a
consumer reporting agency that holds millions of Americans' personally identifiable information, was hacked in 2017 because it failed to update its software after knowing about the risk for months. This led to hackers exploiting the vulnerability, exposing the information of 143
million Americans.66 This was preventable and companies that similarly fail to address known vulnerabilities should be held accountable.
Corporations in America fear the losses and reputational harm that come from a major breach, and thus focus their efforts on defending their
networks and data. Beyond the private sector, the government’s own approach to cybersecurity has also been primarily defensive in nature. In
2008, the Bush Administration adopted a new approach to securing the internet, the Comprehensive National Cybersecurity Initiative (CNCI),67
which established a broad series of policies aimed at trying to secure the United States in cyberspace.68 Later declassified by President Obama,
it was a call to arms establishing and modernizing the government’s role in defending networks, sharing information, and increasing cyber-
education. The CNCI established the basic parameters of the debate which focused on: network security, securing critical infrastructure, and
global supply chain risk mitigation.69 This overarching focus on defense is one that has continued in the cybersecurity debate to this day,
including the Trump Administration’s recently released National Cyber Strategy.70 While the Strategy is an important conceptual
framework for strengthening law enforcement efforts at home and abroad and imposing consequences on cyberattackers and nation-state
sponsors, the Strategy still heavily centers on cyber defense with only a few short sections committed to pursuing hackers. It proposes no
advances to how the government will assess its progress on enforcement and has few innovative, new solutions to address the number of
tremendous challenges that exist in closing the enforcement gap. Yet, the government is the only institution with the authority to do anything
about the human attacker and the capability to bring them to justice. The government’s abilities in this area are quite broad, but in our
assessment, priorities and resourcing have been improperly aligned to go after the attacker. When there is a conversation about how the
government is going after hackers it is often framed in military terms, which is inapplicable against most of the attackers we see today. Military
leaders have been debating how large a cyberattack must be in order to make it an act of war since the massive Russian denial of service
attacks that crippled Estonia in 2007.71 Those attacks were largely the inspiration for the North Atlantic Treaty Organization’s (NATO) efforts to
develop the Tallinn Manual, an attempt to set the rules for cyber war.72 Multiple efforts have been made to define the rules of cyberwar and to
develop Digital Geneva Conventions.73 Given the vast amount of funding the military has to invest in cybersecurity, and the over-militarization
of US foreign policy generally, it is no surprise that the debate around when a cyberattack will trigger a kinetic response is robust. For example,
the elevation of US Cyber Command to a unified combatant command shows the political consensus in the Executive Branch and in Congress to
embrace a military approach. On August 18, 2017, the administration elevated Cyber Command to a unified command, and according to a
White House statement, this “… demonstrates our increased resolve against cyberspace threats and will help reassure our allies and partners
and deter our adversaries.”74 This military priority is reflected in Cyber Command’s request of approximately $647 million for fiscal year 2018,
a 16% increase over the previous year.75 Additionally, in August 2018, the Trump administration relaxed the rules in Presidential Policy
Directive 20, which governs the use of US offensive and defensive cyber operations, especially those to “deter foreign election influence and
thwart intellectual property theft by meeting such threats with more forceful responses.”76 Yet, until that threshold is crossed, all of those
military cyber-weapons are limited to cyberspace and cannot physically touch the human cyber attacker. While the Pentagon is developing
weapons that may deny the attacker access to their tools, these responses may have collateral consequences and are limited in their ability to
impose consequences on the individual human attacker. Given the range of types of attacks and attackers, cyber weapons may not be the best
response in a particular situation. There are other tools besides military action that can be used to stop the attacker. Rather than responding
with military force, the government can use its Title 18 authorities to bring law enforcement to bear against the attacker at any time.
Unfortunately, the current prioritization undervalues and underinvests in that response. We can only stop this cybercrime wave and close the
cyber enforcement gap by transforming law enforcement, enabled by diplomacy, to go after the attacker. America needs a comprehensive
cyber enforcement strategy aimed at identifying, stopping, and punishing cyberattackers, which it currently lacks. This strategy would need
both domestic and international components to it as well as the structure and process in place to achieve its objectives. We lay out elements of
that strategy below.

Toward a Comprehensive Cyber Enforcement Strategy

In this section we lay out the contours of what a comprehensive
cyber enforcement strategy could look like. These broad recommendations are aimed at achieving the fundamental rebalance we aim to see in
America’s cybersecurity approach to dramatically improve the country’s security. Over a multi-year initiative, Third Way will develop more
detailed policy proposals to advance these efforts. Below we detail our recommendations for areas of priority that require urgent attention by
policymakers. Some of these recommendations are aimed at building upon existing streams of efforts while others propose new reforms. These
recommendations fall into three general categories: 1) domestic enforcement reform, 2) international coordination and cooperation reform, and 3) internal US government reform efforts to put in place the structure and process to lead all of these efforts.
Domestic enforcement reform

Recommendation #1: A Larger Role for Law Enforcement

Absent a state of war, the primary US
government agencies with the authority and ability to identify, stop, and punish the humans
responsible for these attacks are law enforcement— enabled by our diplomats and allies. Law enforcement is how we
deal with people who have broken our laws in peacetime. Recent high-profile enforcement actions
demonstrate what is possible when law enforcement and diplomats target individual attackers and point
to a new way forward. For example, in 2015, after a series of cyber espionage attacks on intellectual
property in the US private sector, the Obama administration exerted diplomatic pressure on China. Under the threat of sanctions, the Chinese government
arrested individuals accused of commercial cyberespionage.77 Experts believe the individuals arrested to have ties to the cyber
offense unit of the People's Liberation Army (PLA).78 The US Government was able to investigate and indict twelve Russian GRU (Main Intelligence Directorate) agents for hacking the Democratic National Committee and the Clinton Campaign during the 2016 election. The indictment detailed the methods and technologies used by the GRU to execute the hack. The investigators were also able to obtain the names of individuals responsible for executing, coordinating, and ordering the hack.79 Even against the most sophisticated nation-state actors it is possible to identify and bring indictments against the individuals who launch the attacks. In
cyber policy circles, there are many who have argued that enforcement actions cannot have an impact
when it comes to America’s adversaries who use cyberattacks to target our country. But enforcement
actions taken against malicious cyber actors even in the most difficult cases can still have a substantial
impact. Deputy Attorney General Rod Rosenstein recently laid out the Department of Justice’s view on this very issue, arguing that indictments and
prosecutions are an important tool in these cases for a number of reasons, including: 1) the defendants may one day face a
trial if there is a change in their government’s calculus or they travel to another nation that cooperates with the United
States in these efforts; 2) public indictments can provide some level of deterrence by raising the risk that these individuals will be held accountable, making them less attractive for future attacks; 3) these actions
demonstrate the ability of US law enforcement to attribute attacks and charge hackers, which may deter
others; 4) federal indictments in the US criminal justice system, given its evidentiary standards, are often
taken seriously by other countries, which could impact their relationship with the offending countries; and 5) victims
deserve justice for the attacks that were perpetrated against them.80 But it’s not enough just to bring indictments, leaving the hackers on the loose in
foreign lands. The ultimate goal is to take them off the field completely, and law enforcement, enabled by diplomats, does that too. Unfortunately, American
law enforcement and diplomatic efforts are severely under-resourced to address the growing cyber-crime wave. In fiscal year 2017, the
Department of Defense spent $7.2 billion on cybersecurity broadly, nearly ten times the cybersecurity resources of the Department of
Justice.81 As the recent CSIS report highlighted, America needs better cyber forensic capabilities and training.82 These
resources must be committed to:

Blackouts from grid hacking cause extinction


Martin Rees 18, Astronomer Royal, founded the Centre for the Study of Existential Risk, Fellow of Trinity
College and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge, “On the
Future: Prospects for Humanity,” 10/16/2018, Princeton University Press

2.5. TRULY EXISTENTIAL RISKS? Our world increasingly depends on elaborate networks: electricity power grids,
air traffic control, international finance, globally dispersed manufacturing, and so forth. Unless these
networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns—
real-world analogues of what happened in the 2008 global financial crisis. Cities would be paralysed [gridlocked] without
electricity— the lights would go out, but that would be far from the most serious consequence. Within a
few days our cities would be uninhabitable and anarchic. Air travel can spread a pandemic worldwide within days,
wreaking havoc on the disorganised megacities of the developing world. And social media can spread panic and rumour, and economic
contagion, literally at the speed of light. When
we realise the power of biotech, robotics, cybertechnology, and AI— and, still
more, their potential in the coming decades— we can’t avoid anxieties about how this empowerment could be misused. The
historical record reveals episodes when ‘civilisations’ have crumbled and even been extinguished. Our
world is so interconnected it’s unlikely a catastrophe could hit any region without its consequences
cascading globally. For the first time, we need to contemplate a collapse — societal or ecological— that would
be a truly global setback to civilisation. The setback could be temporary. On the other hand, it could be so
devastating (and could have entailed so much environmental or genetic degradation) that the survivors could
never regenerate a civilisation at the present level.
AND triggers nuclear retaliation
Klare 19 [Michael T. Klare, professor emeritus of peace and world security studies at Hampshire
College, “Cyber Battles, Nuclear Outcomes? Dangerous New Pathways to Escalation,” Arms Control
Association, November 2019, armscontrol.org/act/2019-11/features/cyber-battles-nuclear-outcomes-
dangerous-new-pathways-escalation]

Yet another pathway to escalation could arise from a cascading series of cyberstrikes and counterstrikes
against vital national infrastructure rather than on military targets. All major powers, along with Iran and
North Korea, have developed and deployed cyberweapons designed to disrupt and destroy major elements of
an adversary’s key economic systems, such as power grids, financial systems, and transportation networks. As noted, Russia has
infiltrated the U.S. electrical grid, and it is widely believed that the United States has done the same in Russia.12 The Pentagon has also devised a plan known as
“Nitro Zeus,” intended to immobilize the entire Iranian economy and so force it to capitulate to U.S. demands or, if that approach failed, to pave the way for a
crippling air and missile attack.13 The danger here is that economic
attacks of this sort, if undertaken during a period of
tension and crisis, could lead to an escalating series of tit-for-tat attacks against ever more vital
elements of an adversary’s critical infrastructure, producing widespread chaos and harm and eventually leading one side
to initiate kinetic attacks on critical military targets, risking the slippery slope to nuclear conflict. For example,
a Russian cyberattack on the U.S. power grid could trigger U.S. attacks on Russian energy and financial
systems, causing widespread disorder in both countries and generating an impulse for even more devastating attacks. At some
point, such attacks “could lead to major conflict and possibly nuclear war.”14 These are by no means the only pathways to escalation
resulting from the offensive use of cyberweapons. Others include efforts by third parties, such as proxy states or terrorist organizations, to provoke a global nuclear
crisis by causing early-warning systems to generate false readings (“spoofing”) of missile launches. Yet, they do provide a clear indication of the severity of the
threat. As
states’ reliance on cyberspace grows and cyberweapons become more powerful, the dangers of
unintended or accidental escalation can only grow more severe.

Nuclear meltdowns cause extinction


Christopher Allen Slocum 15, VP @ AO&G, “A Theory for Human Extinction: Mass Coronal Ejection and
Hemispherical Nuclear Meltdown,” 07/21/15, The Hidden Costs of Alternative Energy Series,
http://azoilgas.com/wp-content/uploads/2018/03/Theory-for-Human-Extinction-Slocum-20151003.pdf

With our intelligence we have littered the planet with massive spent nuclear fuel pools, emitting lethal radiation in over-crowded
conditions, with
circulation requirements of electricity, water-supply, and neutron absorbent chemicals. The
failure of any of these conditions for any calculable or incalculable reason, will release all of a pool’s cesium into the
atmosphere, causing 188 square miles to be contaminated, 28,000 cancer deaths and $59 billion in damage. As of 2003, 49,000 tons of
SNF was stored at 131 sites with an additional 2,000-2,400 metric tons produced annually. The NRC has issued permits, and the nuclear industry
has amassed unfathomable waste on the premise that a deep geological storage facility would be available to remediate the waste. The current
chances for a deep geological storage facility look grim. The NAS has required geologic stability for 1,000,000 years. It is impossible to calculate
any certainty 1,000,000 years into the future. Humanity could not even predict the mechanical failures at Three Mile Island or Chernobyl, nor
could it predict the size of the tsunami that triggered three criticality events at Fukushima Daiichi. These irremediable crises span just
over 70 years of human history. How can the continued production and maintenance of SNF in pools be anything but a precedent to an
unprecedented human cataclysm? The Department of Energy’s outreach website explains nuclear fission for power production,
providing a timeline of the industry. The timeline ends, as does most of the world’s reactor construction projects in the 1990s, with the removal
of the FCMs from Three Mile Island. One would think the timeline would press into the current decade, however the timeline terminates with
the question, “How can we minimize the risk? What do we do with the waste?” (The History of Nuclear Energy 12). Nearly fifteen years into the
future, these questions are no closer to an answer. The reactors at Fukushima Daiichi are still emitting radioisotopes into the atmosphere, and
their condition is unstable. TEPCO has estimated it could take forty years to recover all of the fuel material, and there are doubts as to whether
the decontamination effort can withstand that much time (Schneider 72). A detailed analysis of Chernobyl has demonstrated that nuclear
fall-out, whether from thermonuclear explosions, spent fuel pool fires, or reactor core criticality events are deleterious to the
food-chain. Cesium and strontium are taken into the roots of plants and food crops, causing direct human and
animal contamination from ingestion, causing cancer, teratogenicity, mutagenesis and death.
Vegetation suffers mutagenesis, reproductive loss, and death. Radioactive fields and forest floors
decimate invertebrate and rodent variability and number necessary to supply nature’s food-chain and
life cycles. The flesh and bones of freshwater and oceanic biota contribute significantly to the total
radiation dose in the food-chain. Fresh water lakes, rivers and streams become radioactive. Potable
aquifers directly underlying SNFs and FCMs are penetrated by downward migration of radioisotopes. Humans
must eat to live. Humans must have water. No human can survive 5 Sv of exposure to ionizing radiation; many
cannot survive exposure to 1 Sv. Realizing the irremediable devastation caused by one thermonuclear warhead, by one Chernobyl, by one
Fukushima Daiichi, it remains to be said that the earth can handle as many simultaneous loss-of-coolant failures as
nature can create. Humanity cannot. It is not good enough to lead by relegating probable human wide extinction phenomena to
an appeal to lack of evidence. Policy cannot indefinitely ignore responsibility by requiring further study. Nor can leadership idle into
cataclysm by relying on the largest known natural phenomena of the last 200 years. Permitting construction and continued operation of
malefic machinery, based on 200 years of cataclysmic experience is a protocol for calamity. Of coronal mass ejections, Hapgood warns, that we
need to prepare for a once-in-1000-year event, not just simulate infrastructure safeties by the measure of what we have seen in the past. The
same is true for all natural phenomena. The future of humanity is too precious to operate with such insouciance. The engineering is not good
enough. It never will be. Nature is too unpredictable, and nuclear power is too dangerous.
Hack Backs ADV---1AC
Advantage two is HACK BACKS
Weak federal policy on cyber law enforcement is creating a gap that will be filled by
private cyber vigilantism. Articulating a clear policy that removes the need for vigilantism is key.
Eoyang et al. 18, Mieke Eoyang, Vice President for the National Security Program and Chairperson of
the Cyber Enforcement Initiative at Third Way; Allison Peters, Deputy Director of the National Security
Program at Third Way; Ishan Mehta, Former Policy Advisor, National Security Program; Brandon
Gaskew, National Security Fellow, 2018-2019, “To Catch a Hacker: Toward a comprehensive strategy to
identify, pursue, and punish malicious cyber actors,” Third Way, 10/29/18,
https://www.thirdway.org/report/to-catch-a-hacker-toward-a-comprehensive-strategy-to-identify-
pursue-and-punish-malicious-cyber-actors
While these cases are important and meaningful in punishing cyber attackers, they represent a very small drop in a very large bucket. And the low enforcement rate
for cybercrime has consequences. Cybercriminals are operating with near impunity compared to their real-world counterparts. Given the increasing ease of
committing these crimes and the unlikely chance of being caught, it is no wonder that this category of crime is on the rise.60 In
the face of such a
small response from law enforcement, some believe the private sector should take matters into their own
hands and go on the offense. A widely-perceived enforcement failure will lead victims to eventually say
“enough is enough” and act on their own. This offensive approach is not to deter attackers but to
disrupt their capabilities, including rendering useless their devices, locking accounts, and blocking server
access.61 Proponents of so-called “hacking back” will acknowledge that the impulse comes from a
recognition of an enforcement failure and a frustration about the inability to do anything to stop the
attacker.62 But hack back exposes the counter-hacker to their own liability for unauthorized access to
someone else’s system and malicious action. Additionally, malicious cyber actors use proxy systems that are tough to identify, and
retaliations may target systems of innocent individuals. In a well-functioning system, victims have
confidence that law enforcement is doing their best to catch the attacker and have a reasonable chance
of doing so. Furthermore, America’s enforcement gap has been largely hidden because there are no good metrics to assess law enforcement response. The
number of reported crimes is proportionally miniscule in comparison to the number of actual crimes. Anecdotal data on high profile incidents and prosecutions do
not provide a full picture of what’s at stake. The traditional crime statistics also do not reflect the kinds of new computer enabled crimes that are happening
today.63 And, the lack of clarity in how to report crimes means that state, local, and federal agencies do not report cybercrime in a clear or consistent manner.
There is a clear enforcement gap in cybercrime that must be urgently addressed. The problem is right in front of us, but policymakers are largely not paying any
attention to it. The recent indictments against Russian and North Korean state cyber actors may be perceived as progress, but they do not address the large number
of crimes that go unnoticed.64 The lack of action, the rising costs as a result, and the apparent impunity of these malicious actors would not be tolerated in any
other domain. But it often seems like an afterthought in the realm of cyber. Closing the enforcement gap will also require understanding the motivation of the
human attacker and their relationship to foreign states or other non-state actors that might harbor or support them. This is an essential factor in crafting an
effective policy solution to compel behavioral change. There are four reactions the state can take toward the attacker: prohibit, passive, ignore or abet, or order or
conduct.65 Depending on the nature of the nation state and the cyber attacker(s) and their motivations, the tools used to target a change in behavior of both the
state and attackers will vary. It’s important to deeply understand the nature of this relationship to employ the most effective solution once the human attacker has
been identified. [[FIGURE OMITTED]] Working with states that prohibit attacks may require increased cooperation or
capacity building to be able to coordinate efforts to bring enforcement actions against the attackers. If the state is ignoring or
abetting the attacker, diplomatic pressure will need to be brought to bear to change the state’s attitude about its complicity in the attacks. This may in some cases
bring a different set of more coercive tools to bear. Finally, if the state has direct responsibility for the attacks and is encouraging or conducting them as part of the
attacking state’s foreign policy, then the victimized nation may have to consider the full spectrum of actions available, beyond law enforcement and diplomacy,
against the attacking nations. Ultimately, if malicious cyber actors are working at the behest of nation-states to advance their objectives through cyberattacks, they
are likely to be much more difficult to punish or change their behavior at all. Even if you are able to do so either by sanctioning or arresting them in another country,
it is likely that the foreign government sponsor would just recruit others to take up the banner and continue the attacks. Importantly, although there are a number of nation-states that are using cyberattacks as a tool to advance their objectives, this does not in any way mean the United States can ignore the massive cybercrime wave that is occurring, granting impunity to the large number of malicious cyber actors that may be able to be identified, stopped, and punished. Regardless of whether the behavior is the decision of an individual or the state, whether it’s the fingers on the keyboard or the ones signing the order, it is still a human whose decision-making process can be impacted, and who can (and should) feel real consequences.

Rebalancing the US Cyber Approach

Given the magnitude
both of the cybercrime wave and the enforcement gap the nation faces, it’s
clear that the current approach is insufficient. As the
number and intensity of cyberattacks has increased, robust efforts at cyber defense are necessary, but not nearly sufficient. A determined attacker will get through even the most heavily defended system. Focusing on making
the most secure target possible to the exclusion of a substantial focus on also getting the attacker allows malicious actors to continue to multiply and operate with a
sense of impunity. And while there are an infinite number of vulnerabilities and a growing number of attacks, there are a finite number of attackers. To stop those
attackers, we must transform both the way we think about cybersecurity and rebalance our efforts to include a greater focus on going after the human attacker. To
be sure, there has been a growing emphasis under the Obama and Trump Administrations on going after malicious cyber actors through law enforcement actions
and imposing other types of costs to change their behavior. This includes the number of actions that have been taken against malicious cyber actors working on
behalf of adversarial nation-states. However, as the enforcement rate makes clear, these efforts are not nearly enough. Nor have they been sufficiently resourced
and given the political leadership necessary to make progress. Most of the cybersecurity efforts are currently defensive in orientation, focused on protecting
systems and networks. Building better firewalls against attacks, creating better passwords, and educating users are all critical. But a strategy primarily developed
around building impregnable cyber walls and mistake-proof human users cannot succeed. We need to create a more robust parallel effort around how we identify,
stop, and punish the human attacker. We need to change the calculation of malicious cyber actors by balancing defense of systems with an offense designed to stop
and deter the human. It’s no surprise that thus far the American government has had a heavy focus on defending systems and networks. This approach has been, in
part, driven by a blame-the-victim mentality in cybersecurity. When there is a major cyber breach of a company they are often hauled up before Congress and made
to apologize for their lapses, their holes in security, and their failure to have the most up-to-date defenses. To be sure, some of these companies deserve criticism
for not taking proper precautions. For example, Equifax, a consumer reporting agency that holds millions of Americans' personally identifiable information, was hacked in 2017 because it failed to update its software after knowing about the risk for months. This led to hackers exploiting the vulnerability, exposing the
information of 143 million Americans.66 This was preventable and companies that similarly fail to address known vulnerabilities should be held accountable.
Corporations in America fear the losses and reputational harm that come from a major breach, and thus focus their efforts on defending their networks and data.
Beyond the private sector, the government’s own approach to cybersecurity has also been primarily defensive in
nature. In 2008, the Bush Administration adopted a new approach to securing the internet, the Comprehensive National Cybersecurity
Initiative (CNCI),67 which established a broad series of policies aimed at trying to secure the United States in cyberspace.68 Later declassified
by President Obama, it was a call to arms establishing and modernizing the government’s role in defending networks, sharing information, and
increasing cyber-education. The CNCI established the basic parameters of the debate which focused on: network security, securing critical
infrastructure, and global supply chain risk mitigation.69 This overarching focus on defense is one that has continued in the cybersecurity
debate to this day, including the Trump Administration’s recently released National Cyber Strategy.70 While the
Strategy is an important conceptual framework for strengthening law enforcement efforts at home and abroad and imposing consequences on
cyberattackers and nation-state sponsors, the Strategy still heavily centers on cyber defense with only a few short sections committed to
pursuing hackers. It proposes no advances to how the government will assess its progress on enforcement
and has few innovative, new solutions to address the number of tremendous challenges that exist in
closing the enforcement gap. Yet, the government is the only institution with the authority to do
anything about the human attacker and the capability to bring them to justice. The government’s
abilities in this area are quite broad, but in our assessment, priorities and resourcing have been improperly aligned to
go after the attacker. When there is a conversation about how the government is going after hackers it is often framed in military terms, which is inapplicable against most of the attackers we see today. Military
leaders have been debating how large a cyberattack must be in order to make it an act of war since the
massive Russian denial of service attacks that crippled Estonia in 2007.71 Those attacks were largely the inspiration for the North Atlantic Treaty Organization’s
(NATO) efforts to develop the Tallinn Manual, an attempt to set the rules for cyber war.72 Multiple efforts have been made to define the rules of cyberwar and to
develop Digital Geneva Conventions.73 Given
the vast amount of funding the military has to invest in cybersecurity, and the over-militarization of
US foreign policy generally, it is no surprise that the debate around when a cyberattack will trigger a kinetic
response is robust. For example, the elevation of US Cyber Command to a unified combatant command shows the
political consensus in the Executive Branch and in Congress to embrace a military approach. On August 18,
2017, the administration elevated Cyber Command to a unified command, and according to a White House statement, this “… demonstrates our increased resolve
against cyberspace threats and will help reassure our allies and partners and deter our adversaries.”74 This military priority is reflected in Cyber Command’s request
of approximately $647 million for fiscal year 2018, a 16% increase over the previous year.75 Additionally, in August 2018, the Trump administration relaxed the rules
in Presidential Policy Directive 20, which governs the use of US offensive and defensive cyber operations, especially those to “deter foreign election influence and
thwart intellectual property theft by meeting such threats with more forceful responses.”76 Yet, until that threshold is crossed, all of those military cyber-weapons are limited to cyberspace and cannot physically touch the human cyber attacker. While the Pentagon is developing weapons that may deny the attacker
access to their tools, these responses may have collateral consequences and are limited in their ability to
impose consequences on the individual human attacker. Given the range of types of attacks and attackers, cyber weapons may not be
the best response in a particular situation. There are other tools besides military action that can be used to stop the attacker.

Rather than responding with military force, the government can use its Title 18 authorities to bring law
enforcement to bear against the attacker at any time. Unfortunately, the current prioritization undervalues
and underinvests in that response. We can only stop this cybercrime wave and close the cyber enforcement gap
by transforming law enforcement, enabled by diplomacy, to go after the attacker. America needs a comprehensive cyber
enforcement strategy aimed at identifying, stopping, and punishing cyberattackers, which it currently lacks. This strategy would need both domestic
and international components as well as the structure and process in place to achieve its objectives. We lay out elements of that strategy below.

Toward a Comprehensive Cyber Enforcement Strategy

In this section we lay out the contours of what a comprehensive cyber enforcement strategy could look like. These
broad recommendations are aimed at achieving the fundamental rebalance we aim to see in America’s cybersecurity approach to dramatically improve the
country’s security. Over a multi-year initiative, Third Way will develop more detailed policy proposals to advance these efforts. Below we detail our
recommendations for areas of priority that require urgent attention by policymakers. Some of these recommendations are aimed at building upon existing streams
of efforts while others propose new reforms. These recommendations fall into three general categories: 1) domestic enforcement reform, 2) international coordination and cooperation reform, and 3) internal US government reform efforts to put in place the structure and process to lead all of these efforts.

Domestic enforcement reform

Recommendation #1: A Larger Role for Law Enforcement

Absent a state of war, the primary US
government agencies with the authority and ability to identify, stop, and punish the humans
responsible for these attacks are law enforcement— enabled by our diplomats and allies. Law enforcement is how we
deal with people who have broken our laws in peacetime. Recent high-profile enforcement actions
demonstrate what is possible when law enforcement and diplomats target individual attackers and point
to a new way forward. For example, in 2015, after a series of cyber espionage attacks on intellectual
property in the US private sector, the Obama administration exerted diplomatic pressure on China. Under the threat of sanctions, the Chinese government
arrested individuals accused of commercial cyberespionage.77 Experts believe the individuals arrested to have ties to the cyber
offense unit of the People's Liberation Army (PLA).78 The US Government was able to investigate and indict twelve Russian GRU (Main Intelligence Directorate) agents for hacking the Democratic National Committee and the Clinton Campaign during the 2016 election. The indictment detailed the methods and technologies used by the GRU to execute the hack. The investigators were also able to obtain the names of individuals responsible for executing, coordinating, and ordering the hack.79 Even against the most sophisticated nation-state actors it is possible to identify and bring indictments against the individuals who launch the attacks. In
cyber policy circles, there are many who have argued that enforcement actions cannot have an impact
when it comes to America’s adversaries who use cyberattacks to target our country. But enforcement
actions taken against malicious cyber actors even in the most difficult cases can still have a substantial
impact. Deputy Attorney General Rod Rosenstein recently laid out the Department of Justice’s view on this very issue, arguing that indictments and
prosecutions are an important tool in these cases for a number of reasons, including: 1) the defendants may one day face a
trial if there is a change in their government’s calculus or they travel to another nation that cooperates with the United
States in these efforts; 2) public indictments can provide some level of deterrence by raising the risk that these individuals will be held accountable, making them less attractive for future attacks; 3) these actions
demonstrate the ability of US law enforcement to attribute attacks and charge hackers, which may deter
others; 4) federal indictments in the US criminal justice system given its evidentiary standards are often
taken seriously by other countries, which could impact their relationship with the offending countries; and 5) victims
deserve justice for the attacks that were perpetrated against them.80 But it’s not enough to just bring indictments leaving the hackers on the loose in
foreign lands. The ultimate goal is to take them off the field completely, and law enforcement, enabled by diplomats, does that too. Unfortunately, American law enforcement and diplomatic efforts are severely under-resourced to address the growing cybercrime wave. In fiscal year 2017, the Department of Defense spent $7.2 billion on cybersecurity broadly, nearly ten times the cybersecurity resources of the
Department of Justice.81 As the recent CSIS report highlighted, America needs better cyber forensic capabilities and training.82
These resources must be committed to:

This vigilantism---also known as “hacking back”---causes inter-state war


Patrick Lin 16, PhD, California Polytechnic State University, Ethics + Emerging Sciences Group, “Ethics of
Hacking Back: Six arguments from armed conflict to zombies,” 09/26/16, US National Science
Foundation, http://ethics.calpoly.edu/hackingback.pdf

2.4 Argument from escalation

This is one of the most serious practical criticisms of hacking back, so I will dedicate
more time to developing and examining this particular argument. As with attribution, the
counterattacking individual or
company is usually in a worse position than state authorities to judge the escalatory ladder—how the
adversary might respond—and contain any escalation. This is a particular concern when cyberattacks
come from abroad. We may imagine them to be the opening volleys of a cyberwar, which could
escalate into a physical or kinetic war.60 Knowing that a cyberattack originated from a foreign territory is not
by itself a smoking gun that a foreign government was behind it.61 It might be state-sponsored hackers,
but it also could be patriotic hackers, hacktivists, or ordinary criminals operating in that territory. Or the
cyberattack might not have started from that territory at all; again, its source could be spoofed to frame
an innocent country, precisely to create a conflict for it. Regardless of attribution, hacking back against a
foreign target may be misinterpreted by the receiving nation as a military response from our state, to serious
political and economic backlash. Even if not perceived as a military action from the state, a poorly timed
hackback could derail delicate relationships and negotiations with a competitor state. Again, these are matters
that seem better left to the state, not to private cowboys. However, even if hacking back were conducted by the state to defend
private victims, this could be problematic. Assuming the most challenging case that a cyberattack counts as a “use of force” or “armed attack”,
if hacking back were a salvo in cyberwar, does it violate international laws of armed conflict (LOAC)? As declared by the
United Nations Charter, article 2(4): “All Members shall refrain in their international relations from the threat or use of force against the
territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United
Nations.”62 Nonetheless, it is also within the natural rights of the attacked nation to defend itself, possibly allowing for hackbacks. As the U.N.
Charter, article 51 declares: “Nothing in the present Charter shall impair the inherent right of individual or collective self-defence if an armed
attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace
and security.”63 So, inside a
cyberwar, hacking back by the state could be permitted. But what about before a
cyberwar has started: would hacking back exacerbate the conflict and trigger that war? If so, this is a
worst-case scenario that we would be right to guard against.

AND, hack backs will destroy IoT networks by targeting devices hijacked for botnet
DDoS attacks---that turns every impact AND collapses the state monopoly on cyber
law enforcement
Sara Sun Beale & Peter Berris 18, Beale is Charles L.B. Lowndes Professor, Duke Law School; Berris is
J.D., Duke Law School, 2017, “Hacking the Internet of Things: Vulnerabilities, Dangers, and Legal
Responses,” Digitization and the Law, edited by Jochen Feldle and Eric Hilgendorf, Nomos
Verlagsgesellschaft mbH & Co. KG, 2018, pp. 21–40 DOI.org (Crossref), doi:10.5771/9783845289304-21
A. The Danger of Botnets and the Allure of Hacking Back

Botnets have a different relationship to the IoT than many of the other dangers discussed in this article. Much of this
article focuses on how the internet may be used to corrupt devices connected to it.191 In contrast, botnets present the reverse issue:
devices connected to the internet may be used to disrupt the internet itself.192 Compounding the problem, botnets
are not only an existential threat to the internet but a persistent one as well. Without curative solutions, botnets can be used in
multiple crimes.193 Once a device is recruited into a botnet, it becomes part of a “commodity” that can be rented out “by the hour” or
purchased.194

Thus, to eliminate the threat of botnets, a solution with retroactive and curative force is needed. Enter hacking back, part of a larger concept of
internet self help or remediation encompassing terms such as counterstrikes, “‘active defense,’ ‘back hacking,’ ‘retaliatory hacking,’ or
‘offensive countermeasures’”195 As the assorted terms suggest, remedial
action encompasses a range of different self-
help measures to prevent and counter botnets and hacking. Remedial actions might “enable attacked parties to detect, trace, and
then actively respond to a threat by, for example, interrupting an attack in progress to mitigate damage to the system.”196 Specific strategies
could include implementing a “DoS attack at the botnet controller or hacking the botnet controller and thereby taking control of the
botnet.”197 However, not all remedial efforts are so forceful: “Hacking back against a botnet can be as simple and nonaggressive as pushing
security patches onto infected computers, just as patients with a deadly virus could be forcibly treated or quarantined to prevent a contagion’s
spread.”198 Unlike enforcement and litigation which do little to prevent future attacks, and are “inherently ex post facto,”199 hacking back has
the crucial ability to prevent future attacks by combatting existing botnets.
Despite these potential benefits, there is also a potential problem. At least if undertaken by private parties,200 such behaviors may be
illegal.201 Ironically, “[t]he same laws that make it illegal to hack in the first place— for instance, to access someone else’s system without
authorization— presumably make it illegal to hack back.”202 The CFAA both criminalizes botnets and limits recourse against them.203 The
Department of Justice, the FBI, and “White House officials” have all suggested that such remedial efforts may be illegal.204 Scholarship echoes
this conclusion.205 As a result, the legal regime that is intended to protect the public from hacking also limits the manner in which such dangers
may be fought. A logical question then, is how hacking back might be legalized.206
B. Possible Theories for the Legalization of Hacking Back

There are a variety of ways in which hacking back might be legalized. This subsection focuses primarily on one possibility: creating
exceptions for strikebacks through a legal framework modeled on the laws governing recapture of property. It then briefly summarizes other
possibilities.

Recapture laws provide a promising framework for remedial action. They balance two conflicting considerations implicated by hacking back: the
right to protect personal property, and the understanding that that right cannot be absolute. On the one hand, “[t]he law has always
recognized that a person is justified in using some degree of force to protect his property from wrongful invasion or appropriation by
another.”207 On the other, the law has been wary of the dangers surrounding self-help measures to regain property.208

The Model Penal Code (MPC) provides an important compromise of these conflicting interests in the context of recaption of property. Under
MPC 3.06(1)(b), “use of force upon or toward the person of another” when protecting property is justifiable if:

[T]he actor believes that such force is immediately necessary . . . to effect an entry or re-entry upon land or to retake tangible
movable property provided that the actor believes that he or the person by whose authority he acts or a person from whom he or
such other person derives title was unlawfully dispossessed of such land or movable property and is entitled to possession, and
provided, further, that:

(i) the force is used immediately or on fresh pursuit after such dispossession; or

(ii) the actor believes that the person against whom he uses force has no claim of right to the possession of the property
and, in the case of land, the circumstances, as the actor believes them to be, are of such urgency that it would be an
exceptional hardship to postpone the entry or re-entry until a court order is obtained.209

Although closely related to the use of force to protect property, recaption is separate in the Model Penal Code.210

This separate right of recaption provides a useful template for laws governing hacking back, although further analogy is necessary. Returning to
the example of Bill and Jeremy, imagine that Jeremy steals some of Bill’s personal possessions. Applying the test of MPC 3.06, it could be
justifiable for Bill to take back his personal property if he believed it “immediately necessary.” Jeremy’s initial interference with Bill’s property
rights justifies some resulting intrusion by Bill into Jeremy’s rights.

To illustrate how the framework of MPC 3.06 could shape laws governing hacking back, imagine the digital equivalent. Assume that Bill
operates a thriving retail and manufacturing business out of his home comprised of a computer, a website, and an internet enabled 3D printer.
Jeremy hacks into Bill’s computer and steals consumer credit card information stored on it, saving it to his hard drive. Jeremy also controls a
sizeable botnet through his personal computer and directs it to launch a DDoS attack on Bill’s website, bringing it down. Finally, Jeremy exploits
the botnet to gain control of Bill’s 3D printer and causes it to malfunction. The basic scenario is the same as in the hypothetical above: Jeremy
has interfered with Bill’s property. Only the nature of the intrusion is different. Bill still has physical possession of his computer and printer, but
Jeremy has wrongfully copied some files, and taken control of the printer. If MPC 3.06 were the framework for hacking back laws, Bill might be
able to hack back to erase the stolen files, end the DDoS attack, and regain control of his printer. It is analogous to Bill taking back his physical
property above. The basic premise is the same: Jeremy’s meddling with Bill’s property merits some form of response to restore Bill’s property
interests.

Of course, there is a fundamental threshold difference between recaption as envisioned by MPC 3.06, and hacking back of the sort
contemplated in the Bill and Jeremy example. The MPC right of recaption is not directly relevant to hacking back. It provides a justification for
the use of non-deadly force against the person of another, rather than for interference with property, such as a computer within the meaning
of the CFAA. Except for the general defense of “choice of evils,” the MPC does not address the justification for interference with property.211
However, the law generally regards any use of force against a person as a more serious wrong than interference with personal property.
Therefore, the framework for recaption in MPC 3.06 should be sufficient, as a policy matter, to justify the lesser wrong of interference with
personal property.

Such interference already has a close analogy in the context of torts. Although tort law does not permit the use of force for recapture of
chattels “once possession is clearly lost,” it “permits a defendant who is entitled to immediate possession to recover the goods from another’s
land (a) if the defendant did not cause the intrusion of the goods in the first place and (b) if entry is reasonable as to both time and
manner.”212 For example, “[i]t is not disputed that if . . . [chattels belonging to another] have come upon the land through the wrongful
conduct of the landowner, a privilege to enter and recover them exists.”213 In exercising that privilege, “[r]easonable amounts of damage may
be done, even to the extent of breaking down a fence or a door . . . The privilege is complete, and, so long as only reasonable force is used, the
defendant is not liable for any damage he may do.”214 In some circumstances a person may use force against the physical property of
someone who has taken his own property, in the attempt to recapture it. This is particularly instructive in the context of hacking back, because
breaking down a thief’s door to regain stolen property is similar to hacking back against a digital aggressor to restore a compromised computer.

Allowing for some leeway regarding where force may be directed in recapturing property, the conceptual underpinning of MPC 3.06 fits well
with the basic nature of remedial action in the IoT. Reworking is necessary to accommodate the differences between the physical and digital
arenas, because they result in somewhat distinct property interests and methods of recaption. A rudimentary sketch of a law governing
counterstrikes may be imagined by modifying MPC 3.06(1)(b) to rectify these disparities and to clarify that force may be used against the
property of another:

Damage to, intrusion into, or interference with, the computer of another . . . is justifiable when protecting property . . . if the actor
believes that such action is immediately necessary . . . to regain control of a computer, website, digital information, or computer
enabled device, provided that the actor [reasonably]215 believes that he or the person by whose authority he acts . . . was
unlawfully deprived of control of such computer, website, digital information, or computer enabled device . . . and is entitled to
regain control, and provided, further, that:

(i) the action is used immediately after such interference with control; or

(ii) the actor believes that the person against whom he takes this action has no claim of right to the interference with
control of the computer, website, digital information, or computer enabled device . . . .”

This formulation is intended as merely a rough illustration of how the template of recaption law might apply to hacking back, and to further
paint the analogy between recapture of physical property and remedial action in the IoT. A comprehensive statute is well beyond the scope of
this paper. Nevertheless, an additional consideration demands attention.

MPC 3.06 contains temporal limitations that could greatly hinder an analogous right to hack back. MPC 3.06(1)(b) demands immediacy,
requiring a belief of “immediate” necessity, and actions that are “used immediately or on fresh pursuit after such dispossession.”216 These
requirements may be impractical in the context of an attack in the IoT because it may be impossible to quickly assess the harm and identify the
perpetrator.217 State laws modifying MPC 3.06 provide models for a more flexible timing requirement. For example, Connecticut allows force
for the recapture of personal property “when and to the extent that [the recapturer] reasonably believes such to be necessary . . . to regain
property which he reasonably believes to have been acquired by larceny within a reasonable time prior to the use of such force.”218 Extending
the window in which the victim of a botnet attack may respond from immediacy to reasonableness, as Connecticut does for recaption, could
better accommodate a range of remedial actions.

With these modifications to recapture law framework, more aggressive forms of hacking back might be legally permissible. Of course, creating a
right of reentry or recapture based on the MPC is just one way that hacking back might be legalized. Other routes have been suggested. For
example, one proposal would amend the CFAA to allow a limited self-help privilege narrowly cabined by four requirements:

(1) the counterattack must be necessary and proportional to the threat being mitigated or prevented; (2) the counterattack must be
in response to an ongoing or repeated attack; (3) the counterattacker must submit a good-faith justification and notification to the
government; and (4) the counterattacker must assume strict liability for all damage to third parties, and liability for all negligently
caused unnecessary damage to the original attacker. 219

Amending the CFAA has some proponents in Congress. Indeed,

Georgia representative Tom Graves proposed the Active Cyber Defense Certainty Act (ACDC), which would change the CFAA so that
it would not apply to victims of cyberattacks who accessed attackers’ networks to “gather information in order to establish
attribution of criminal activity to share with law enforcement” or to “disrupt continued unauthorized activity against the victim’s
own network.”220

Others propose a path for legalizing remedial action through analogy to retail security guards,221 bounty hunters, or private investigators.222
Under these theories, remedial actions like planting malware in botnets or searching the networks of invaders could be “considered seizure of
an offensive weapon” or security patrols, respectively. Other theories have looked to tort law exceptions such as private nuisance, trespass to
chattels,223 “the recapture of chattels privilege, entry upon land to remove chattels, private necessity, or even the castle doctrine.”224 But
even if legalizing hacking back under any of these theories would be possible, it is not necessarily a good idea. The next subsection explores the
pitfalls.

C. The Ethical and Logistical Problems with Hacking Back

Hacking back has garnered considerable attention in the wake of prominent hacks,225 but the attention has
not all been positive.226 Critics have highlighted a range of logistical and ethical issues. Logistically, it is unclear
that hacking back would be an effective solution even if legalized. One major logistical concern is the danger of
escalation. Hacking back may create new attacks rather than end ongoing ones.227 Two considerations magnify
this danger. First, not all hackers will be deterred by remedial action.228 Some, such as hacktivists, may welcome the
challenge and ramp up their attacks.229 Alternatively, where the initial aggressor is a foreign government or
criminal organization, escalated retaliation is likely.230 American companies engaged in hack backs against such actors will
not be able to out-violate the law.231 Second, companies are not as well-equipped as the government to assess the likelihood of foreign
escalation.232 Disturbingly, a company’s remedial action could be perceived by a foreign country as “a military response from our state.”233
Remedial action from an American company could become “the opening volleys of a cyberwar, which could
escalate into a physical or kinetic war.”234

Another major logistical concern focuses on the danger that remedial actions could create chaos in the
wake of hacks. Some in law enforcement warn that remedial action could “lead to confusion in investigating
cyberattacks.”235 Remedial action looks similar to the tools used by the initial aggressors, and makes it “much harder to
distinguish between the good guys and the bad guys online.”236 And remedial action could also muddle the judicial recourse for cyberattacks
because evidence gained through hacking back may be inadmissible for those bringing suit under the CFAA.237

One last logistical criticism of remedial action is rooted in the relationship between companies and the cybersecurity firms they may contract
with to provide remedial action. Cybersecurity firms are given access to corporate networks and are in the ideal position to steal information
from the companies that hired them.238 Even if outright theft by cybersecurity firms is unlikely, there is a perverse incentive. As one article
phrased the relationship: “Would there not be a conflict of interest . . . between treating a problem (ongoing revenue for your security firm)
and curing it (which ends their engagement)?”239

The ethical critiques of remedial action are similarly varied. One focuses on the relationship between private and public that hacking back might
fuel. For example, remedial action intrudes on the domain of force against foreign actors that generally belongs to the state.240 Alternatively,
remedial action by private companies presents a danger of government ratification of illegal behavior as in Russia, which is said to rely on
“intelligence gathered by criminals, allowing it to benefit from crimes without accepting responsibility for them.”241

Other ethical concerns abound. For example, information security professionals that engage in remedial actions may actually violate the
professional code of their licensing agency.242 Additionally, even if hacking back were to be legalized under U.S. law, it might still “violate
foreign laws.”243 Finally, some distinguish hacking back from self-defense because, unlike self-defense, the justifying threat is not existential.244

One last major criticism involves both logistical and ethical dilemmas. For hacking back to work, the entity doing it must be
able to identify the perpetrator of the hack. As discussed more fully in Section IV.A., identifying hackers is difficult
because they “‘like to cover their tracks by routing attacks through other people’s computers, without the owners’
knowledge.”245 As a result, remedial action is hampered by time and certainty.246 Quick remedial actions are likely to be
uncertain and could be against the wrong party, while accurate attribution is likely to be too slow to
allow for effective remediation.247 Ethically, this presents two major problems. First, remedial actions risk collateral damage to
innocent parties. Second, the limitations on attribution temper the justification of remedial action as self-defense. Using force against a
cyber aggressor is one thing; using it against a victim is another.

When applied to a hypothetical, many of these logistical and ethical critiques are damning. Return one last time to the
example of Jeremy’s hack. In using Bill and Jeremy to illustrate how recapture of property law might provide a framework for the legalization of
hacking back, it was necessary to analogize between the physical world and the digital world as so many accounts of hacking do.248 But many
of the ethical and logistical critiques of remedial action illustrate that such analogies are imperfect, even if plausible. For example, in the Jeremy
and Bill example, Bill was able to attribute the attack to Jeremy. That degree of certainty is unlikely in reality, and especially within a short
period of time. Second, the hypothetical presented Jeremy and Bill as sharing physical proximity. In the digital age a hacker may be far away,
often in another country. The hacker may even be the agent of a foreign government. By hacking
back against Jeremy, Bill may have
waded into the waters of international aggression and escalation. Alternatively, Jeremy could be an
innocent party whose network has been compromised by someone else. He might then mistake Bill’s
defensive hack back for an initial aggression, and respond with a new attack. Of course, it is unlikely that
both parties would be individuals. They could be corporations, governments, criminal organizations, or teams. Perhaps
that is most indicative of the core problem: the uncertainty inherent in cyberattacks and the IoT makes
solutions simultaneously essential but difficult.
VI. OTHER OPTIONS FOR IMPROVING THE SECURITY OF THE IOT
If remedial actions like hacking back cannot remedy the numerous and grave threats that permeate the era of the IoT, and the CFAA is
insufficient, then it is essential to find another way to reduce vulnerabilities and prevent attacks. Although there are many possibilities,249 this
section briefly explores two possible prospective solutions: (1) a standards approach; and (2) agency regulation.

Both solutions differ from remedial actions such as hacking back by focusing more on securing new IoT devices rather than combatting existing
ones that have already been corrupted. Both solutions are grounded in the same understanding of the problems with the IoT. Proponents of a
standards approach and agency regulation often view the IoT as a victim of a market failure, as Section II illustrates.250 Consumers want IoT
devices to be as cheap as possible.251 Manufacturers and retailers oblige, prioritizing cost over security because they have no incentive not
to.252 International supply chains and the limited security expertise of many IoT design teams further complicate matters.253 The widespread
weaknesses in IoT devices offer an enticing tool and opportunity for nefarious activity. This section evaluates the potential of a standards
approach or agency regulation to break this cycle.

A. The Standards Approach

Vulnerabilities like default passwords and static firmware threaten IoT security. Although they are suboptimal, because there is no uniform set of standards that IoT manufacturers or retailers must meet, they are not technically substandard.254 The
standards approach would attempt to remedy this by imposing such a system on key players.

A standards system would combat the market failure by incentivizing better security practices in the proliferation of IoT devices.255 According
to one expert, adopting “defined standards” will “change buying and investment patterns” that are responsible for the current state of
vulnerability in the IoT.256 Imposing stronger security measures through standards for IoT developers is important because “[s]ecurity needs to
be built into IoT devices, not bolted on. If cybersecurity is not part of the early design of an IoT device, it’s too late for effective risk control.”257
Establishing standards that require better security measures from the start implicates “domestic and international” standards setting entities
like the International Standards Organization (ISO) or the National Institute of Standards and Technology (NIST),258 and may require
government intervention.259

Generally, organizations advocating for the use of a standards-based approach emphasize the importance of a consistent and uniform
standard,260 but the priorities of an IoT security standard might vary. For example, Dale Drew—a proponent of a standards approach—is
preoccupied with remedying vulnerabilities like default passwords, “hard-coded credentials,” and the “lack of capability of updating [IoT device]
firmware.”261

One bipartisan legislative attempt at employing a standards approach, titled “The Internet of Things Cybersecurity Act of 2017,” is currently
pending before Congress.262 The Bill would apply to IoT devices sold to the federal government, and “requires that manufacturers that sell
smart devices to government agencies regularly patch their products for vulnerabilities and steer clear from using hard-coded passwords to
access the devices via a backdoor.”263

Assuming arguendo that agreement could be reached on the correct standards, this approach would still have a serious limitation: it would not
affect the millions of existing devices.

B. Agency Regulation

Some experts have concluded that the pervasive threats to the IoT, and the related market failure, require increased
government involvement.264 They argue that “[c]ybersecurity ought to be a public good much like automobile safety.”265

One possibility is to expand the capabilities of existing government agencies to test IoT security. To promote automobile safety, there are
federally funded research and development centers, testing facilities run by the National Transportation Safety Board (post-market),
automotive crash safety testing (pre-market), and the Nevada National Security Site (destruction and survivability testing).266 But no analogous
regulatory entities or research facilities currently exist to provide a proving ground for embedded cybersecurity defenses needed by IoT.267
Such facilities would remedy the government’s lack of a means to “conduct thorough security testing and assessment on IoT devices” and
would reduce the inefficiencies of having diffuse entities conducting independent research.268 This expansion could potentially fall under the
control of the National Science Foundation or NIST.269

Another possibility is the creation of a new regulatory agency. Bruce Schneier advocates for this position and analogizes the IoT to the
once-new technologies of the past that gave rise to new agencies: “trains, cars, airplanes, radio, and nuclear power.”270 He argues that “[i]n the
world of dangerous things, we constrain innovation,”271 and that the IoT presents new dangers just as those earlier technologies did during
their development. As a result, even if regulation would stifle some creativity, Schneier suggests that this is a necessary sacrifice for security.272
Furthermore, the IoT presents problems that the market cannot or will not solve on its own. The most prominent is the market failure and the
lack of consumer and manufacturer incentives to resolve technological vulnerabilities in the IoT.273 Schneier argues that—as with
environmental pollution—regulation is essential because the dangers and ill effects are felt only downstream.274

In the current political environment, which favors smaller government and reducing regulation, it seems doubtful that this approach could get
traction in Congress. And even if it did, recruiting the necessary expertise and resources could be a daunting task.

CONCLUSION
The dangers in the IoT are complex, multifaceted, and numerous; and none of the possible solutions discussed in this article is wholly satisfying.
For example, the
current legal regime under the CFAA governs many of the threats in the IoT, and there have
been some successful prosecutions under it. However, the CFAA’s utility is severely limited by practical and jurisdictional
concerns, and it also prohibits some remedial actions against hacking. Similar contradictions are apparent with the alternative solutions
evaluated in this article. Remedial actions like hacking back could ameliorate the perils of botnets, but they suffer from legal, ethical, and
practical drawbacks. A standards approach might help secure the IoT prospectively, but it does nothing to eliminate the threat posed by
preexisting botnets and compromised IoT devices. Agency regulation might provide similar relief, but seems unlikely in the current political
climate.

Given these obstacles, it is tempting to do nothing, despite the overwhelming and quickly accelerating dangers posed by the IoT. That would be
the worst option of all. First, an
absence of official action should not be mistaken for an absence of action. If
the government does not act to secure the IoT, others will, and the results could be chaotic and
perilous. This inevitability may already be occurring: self-appointed vigilante “white hat” hackers are
suspected in the proliferation of three botnets. One, known as Hajime, “has infected at least 10,000 home routers,
network-connected cameras, and other so-called Internet of Things devices” with the apparent goal of “disrupt[ing] Mirai and
similar IoT botnets.”275 Even assuming that the vigilante hackers have good intentions, their solution is
fleeting, the methodology is illegal, and it interferes with “tens of thousands of devices” without the
permission of their owners.276 The other botnets, known as “BrickerBot.1” and “BrickerBot.2,” may have a similar goal, but are particularly
destructive: they are “designed to damage routers and other Internet-connected appliances so badly that they become effectively
inoperable.”277 If these developments are any indication, without official intervention, the fight to secure the IoT could
become a war of attrition with many innocent victims.

Second, the extraordinary growth of the IoT and its extreme vulnerability threaten individuals, businesses, and the broader society.
Insecure IoT devices may be corrupted and exploited to attack the internet itself, threatening our reliance on the internet for
things such as finance, news, healthcare, education, communication, information storage, and more.278
Alternatively, IoT devices present new and unique opportunities for malicious actors to turn digital hacking into physical
consequences.279 Hackers can already jeopardize a frightening array of internet-enabled objects including cars, trains, voting machines,
power plants, dams, home thermostats, implanted medical devices, and possibly airplanes.280 With ever-increasing internet connectivity, the
perils could implicate any device that is connected to the internet. In the face of these potentially crippling threats, action
is essential. If we wait passively for the full array of dangers of the IoT to become a reality, the wait will not be long, and the
crisis could be severe.

That fundamentally challenges sovereignty, which is an existential risk


Dr. Randolph Kent 15, Director of the Humanitarian Futures Programme at King’s College, London, and
is presently a Senior Research Fellow at the Royal United Services Institute, “The future of warfare: Are
we ready?” International Review of the Red Cross (2015), 97 (900), 1341–1378.

Faced with these sorts of contending and draining pressures, the State may have to accept that private-sector institutions
and social networks – intentionally or inadvertently – will assume many services, such as security and welfare, that are normally
deemed to be within the purview of the State. The consequence could well be that those with the means will
be prone to opting out of engagement with State structures, be they authoritarian or democratic, leaving those who
cannot afford to do so trapped in an ever more financially insecure space. And with such a prospect, many argue that State structures and
activities like taxation will be negatively affected, as will the State’s monopoly over the legitimate means of
violence.38 This in turn may result in a deepening divide across societies – and with that divide, new sources of insecurity.39
On the other hand, so-called “sovereign States” might embark on collective ways to deal with such complexities and insecurities. With or
without intergovernmental organizations, they may attempt to find ways to gain agreement on what might be seen as common issues of
survival. “Minilateralism” and the prospect of growing multipolarity are potential alternatives to some.40 Common interests and common
insecurities may at least be responsible for arrangements on functional issues such as “runaway climate change” and
pandemics; international crime and seemingly uncontrollable capitalism may prove to be other issues that could
stimulate some form of cohesiveness. The requisite State authority for achieving such aims is, however, not regarded as a
given.
One possible challenge to the conventional State structure is what has been called “the atomized society”. The atomization process combines
“the Internet of things” with universal digital access – the transformative consequences of which may well be far more fluid approaches
to authority and governance, based upon networks that are far more creative and responsive than contemporary organizational systems.
Atomization, in other words, is a process by which social, political and economic dynamics are determined principally by fluid,
self-organizing entities that exist in parallel and normally independently of conventional structures.

Atomization may well, in various ways, challenge States’ assumptions about their capacities to control and regulate
economic, social and resource-related activities. In that context, it could also put into question States’ supposed
“monopoly of power”. To what extent might the consequences of cyber-cash, private interests in outer space, social
networks’ influence over cyberspace and myriad other reflections of power remain within the purview of the State?

A related uncertainty has to do with the extent to which such social divides are controllable, and what measures States might require to deal
with these divisions. From the perspective of the World Economic Forum, the
options for States as well as for governments and civil
society will depend upon new forms of leadership and more collective and holistic solutions to ever mounting problems. If
such solutions are not found, the question seems to be answered in terms of what might be regarded as
the “new normal” … a combination of volatile markets, a lack of political will to deal conclusively with long-term issues, the recurrent
mobilization of the general public in social protest and a remarkable ability by leaders to nevertheless continue to push “the next
big crisis” to future generations.41

This is a prospect that some in the United States – still the wealthiest nation in the world – regard as all too possible.42
Plan---1AC
The United States federal government should enact a flexible standard regarding the
handling of digital evidence and create an accredited national certification entity for
digital forensic scientists in the United States.
Solvency---1AC
The final contention is SOLVENCY
The plan creates a national standard for cyber forensics practices AND workers---this
makes the field credible, staves off judicial restrictions on cyber evidence, and creates
a reliable pipeline for capabilities.
Nima Zahadat 19, Professor of Data Science and Information Systems Security at George Washington
University, has also held positions as Chief Security Officer, Chief Information Officer, Director of
security, Director of Training Solutions, Dean of Computer Science, Program Chair of Information
Systems, and Director of Operations, “Digital Forensics: A Need for Credentials and Standards,”
03/31/19, Journal of Digital Forensics, Security and Law, Vol. 14, No. 1,
https://commons.erau.edu/cgi/viewcontent.cgi?article=1560&context=jdfsl
1. INTRODUCTION

Despite the wide variety of specialties in the medical and legal fields, both of which require credentialing and
accreditation at the state and at times the national level, there are no such requirements for digital forensic
investigators. It is fair to state that a person caught practicing medicine without a state license or a degree from an
accredited institution would be sued and even prosecuted. It is also fair to state that most people would not trust a
doctor or a lawyer who was not a graduate of a properly accredited university with proper credentials
from a state or federal government. Even becoming a private investigator (PI) usually requires licensing in most states.
Digital forensic investigation is one of the prominent fields emerging from the broad discipline of
forensic science. Though the academic theory and practice of digital forensics has existed since the 1970s,
increased interest in the field has been witnessed recently owing to escalated risks of cyber-attacks and
computer-related crimes (Altheide & Carvey, 2011). The field of digital forensics is particularly concerned with the
evidence found in computers, mobile devices, storage devices, social media and cloud services among
other IT related elements that can be used in trials and other forms of inquiries (Mohay, 2005). Data
extraction, collation, carving, and the release of forensic expert reports are what encompass the core
of practice in the field. While there are no national standards for digital forensic credentialing, and for that
matter, no state-level ones, some states have attempted to bring about such standards. As will be seen, these efforts
have been halfhearted and somewhat disorganized, many times causing more problems in the legal realm
than offering solutions. Many of these states lump Private Investigator (PI) licensing and forensic credentialing into
one in an attempt to add legitimacy to forensic investigators, which is quite a peculiar approach. Below are
some of the states and localities that have attempted to bring some consistency to forensic investigations, with a brief overview of their
attempts and methodologies:

Alabama: Alabama offers no forensic licensing credentials, but the city of Mobile requires a city-issued private investigator (PI) license to do
forensic work (Leonardo, White, & Rea, 2012).

Colorado: Colorado is somewhat intriguing, as the state does not have any digital forensic requirement, and PI licensing is voluntary. Because
Colorado’s PI licensing is voluntary, anyone can come to the state and be licensed as a PI, even if they have broken the law elsewhere.
According to the Colorado Legislature itself, there have been numerous instances of wrongdoing by licensed PIs from Colorado.

District of Columbia: Washington, DC requires a PI license for digital forensic examiners (Leonardo, White, & Rea, 2012).

Georgia: Georgia has required that digital forensic examiners obtain PI licensing (Leonardo, White, & Rea, 2012).

Indiana: Indiana, as of 2010, has elected not to require any credentialing or licensing for digital forensic examiners (SANS, 2010).

Maine: Maine, like Georgia, has mandated that digital forensic examiners obtain PI licensing (Leonardo, White, & Rea, 2012).

Maryland: Maryland requires a PI license for private investigations, but neither digital forensic licensing nor credentialing is addressed.

North Carolina: Like Indiana, North Carolina has elected not to require licensing of any kind for forensic investigators (SANS, 2010).

Oklahoma: Oklahoma is really odd, as it permits a PI license from another state to be used to obtain a temporary license in Oklahoma. This
means if an investigator needs a temporary license in Oklahoma, they can get one from Colorado first (InfoSec & Forensic Law, 2013).

Texas: Texas has implemented the notion that digital forensic examiners/investigators license themselves as PIs in the state. Texas has gone
so far as to interpret digital investigation to include computer technicians and repair personnel (Leonardo, White, & Rea, 2012).

Virginia: In 2011, Virginia codified an exemption, explicitly stating that PI licensing requirements did not apply to any certified forensic
individual employed as an expert witness. Virginia has reciprocity agreements with several states, including Georgia (Leonardo, White, & Rea,
2012).

It is worth pointing out that several states including New York, Nevada, North and South Carolina, Washington, and Virginia are pushing to
have PIs handle digital forensic investigations. No states were found to be offering any path towards independent digital forensic licensing
and credentialing. Despite being well established in
recent times, the discipline of digital forensics continues to face several core problems. A needs analysis survey by Rogers & Seigfried
(2004) indicated training and certification as the main challenges, a claim corroborated by several stakeholders in the field including the National
Institute of Justice. There are concerns that the field is largely fragmented, lacking a national framework for curricula
training and development. Pollitt (2010) in his paper “A History of Digital Forensics” starts his work by apologizing to his audience,
admitting there is little reliable data and rigorous logic that he can bring them regarding digital forensics. He gives a history of digital forensics
based on his 20+ years as a criminal investigator, then proceeds to make some bold predictions, acknowledging he will probably be wrong in
many of them. In addition, the field as currently constituted has no gold standard for certification, a central
challenge in instilling consistency and professionalism in the field. The National Institute of Standards and Technology
(NIST) published Special Publication 800-181, the National Initiative for Cybersecurity Education (NICE), as a reference structure
describing the interdisciplinary nature of cybersecurity work. NICE attempts to provide a common lexicon, foundational
frameworks, workforce categories, specialty areas, roles, knowledge descriptions, skills descriptions,
abilities descriptions and a host of other well-thought-out guidelines, complete with example systems.
This special publication would serve as part of an excellent starting point for digital forensics framework
development and digital forensics academic development though, by itself, it would not be sufficient as it is too broadly focused on
cybersecurity. It is designed as a starting point to be applied in the public, private, and academic sectors but does not focus entirely on forensic
training, credentialing, or accreditation. The NICE framework is comprised of the following components (NIST 800-181):

1. Categories – a high-level grouping of common cybersecurity functions
2. Specialty Areas – distinct areas of cybersecurity work (including digital forensics)
3. Work Roles – detailed groupings of cybersecurity work comprised of specific knowledge, skills, and abilities required to perform tasks in a
work role

While NICE can be one of the solid starting points, there is still the egregious issue of credentialing and
certification in digital forensics, which this paper explores, drawing from relevant academic literature. It must be pointed out that
various agencies such as NSA and DHS have developed programs that institutions can apply for and be
designated as meeting the bar set by these agencies. For example, NSA and DHS have jointly developed the Centers of
Academic Excellence in Cyber Defense (CAE-CD) program. Regionally accredited colleges and universities can apply to this program and if
approved, have their curricula be designated as such, receiving formal recognition from the US government. This
is certainly an
appealing program for many universities, including the author’s university which has applied for this exact program, but it is still
a fragmented solution and a voluntary one, and one that does not address digital forensics
credentialing and accreditation at a high level; it focuses primarily on what the NSA and DHS consider
necessary security processes and controls. 2. RESEARCH METHODOLOGY The research was qualitative and descriptive in nature,
utilizing published research in the field of digital forensic investigation. A search was conducted in major academic databases including Google
Scholar and ProQuest, isolating articles from reputed journals on the subject of the federal, state, private, profit and non-profit credentialing of
digital forensic investigators in the United States. Additionally, private recommendations and practices of private organizations such as ISC2,
Guidance Software, and AccessData were studied. Each study was evaluated for the relevance of content and timeliness, with the inclusion
criteria only featuring articles within roughly 15 years of publication. A review of literature focused on the general fundamental theories in the
domain, the problematic issue of credentialing and possible solutions. Thematic reflections on the findings on various issues were noted and
forwarded as recommendations and conclusions on the present state of the identified problem. 3. LITERATURE REVIEW Though many studies in
digital forensic investigations have identified the bias in available research towards applied aspects of the domain as opposed to the
development of fundamental theories, this prejudice is justified. This is because of the largely practical nature of forensic science at large and the
pressure mounting from external events such as cyber-terrorism and cyber-crimes, necessitating more applied research (Nelson, Phillips &
Steuart, 2014). As it emerges, the issue of credentialing of digital forensic investigators at various levels falls under applied research and
continues the implied bias. However, there is credence in the fact that several studies identify the lack of a proper credentialing standard as one
of the main challenges facing the profession today. For instance, a study by Flory (2015) indicated that though the state of Indiana’s law
enforcement agencies was deliberate about digital forensic training with half of their staff trained, their
ability could only be rated from low to mid-range. As such, there was still an overwhelming need to create
a standard and comprehensive framework for locating experts, obtain a forensic insight with the help of
standard operating procedures, and finance career advancement in the domain. The above study shows the
longstanding nature of the challenge of credentialing and locating competent experts in digital forensics and thus justifies the focus of research
towards that direction (as opposed to fundamental theories). The issue of credentialing, though vast, seems to be overshadowed by the
looming challenge of lack of a proper, consistent curriculum in the first place. As such, a good deal of research is currently dedicated to
advancing training and ensuring that there is a teaching framework that can be followed successfully by most universities and colleges. As
noted by Lang et al. (2014), the development of a digital forensics curriculum should provide a self-contained and comprehensive tool for
teaching the discipline in universities given the failure of many institutions to offer such courses for missing certain aspects of the entry barrier.
In their proposed curricula, Lang et al. (2014) offered an introductory and an advanced course and hands-on laboratory programs. They,
however, failed to focus or mention at any point, the essence of credentialing and its role in developing the digital forensics investigator. This
seems to be consistent with most curricula and reports on the status of digital forensics investigation and related disciplines throughout. For
instance, a report by West Virginia University Forensic Science Initiative (2007) submitted to the Department of Justice (DoJ) on training and
education of digital forensics investigators highlights the antecedent qualifications and a detailed career path but omits otherwise essential
information on credentialing. The report is comprehensive on other aspects of training and career path, highlighting the qualifications, skills,
and knowledge needed, the Associate, Baccalaureate, and advanced levels of learning in the discipline, but makes a major omission on
certifications and credentials needed in the profession. This sums up the whole credentialing challenge in available studies: most of it looms in
the shadow of a clear training and education framework for digital forensic investigators. The literature on building accreditation and
credentialing in digital forensics is quite unappealing. This is primarily due to the confusion surrounding digital forensics in the first place.
Losavio et al. (2016) make the bold allegation that digital forensics is not yet a profession and attempts justification of the claim on several
grounds. According to the paper, a profession entails specialized knowledge, specialized training, highly valuable work, self-regulation, a code of
ethics, high levels of autonomy, and many other significant elements. Certification
and credentialing are what offer a code of
ethics, autonomy of practice, and evidence of specialized training, but these are lacking in the discipline, per the
arguments of Losavio et al. (2016). This has hindered the development of digital forensics as a profession. A large
number of studies indeed recommend that proper standardized frameworks be brought into the
frame for the credentialing of digital forensic investigators. Butler (2015) highlights some of these recommendations offered by
the National Academy of Sciences (NAS). They include creating a standardized accreditation model for digital forensic
investigators to achieve recognition, consistency, and the “expert” label. From the reading, it appears that there is a
robust framework for providing oversight to various accreditation bodies in digital forensics. These include the National Institute of Standards
and Technology (NIST), the Department of Justice (DoJ) and the Organization of Scientific Area Committees (OSAC) which came together to
carry out research and chart a framework that can operationalize accreditation bodies. The national commission on forensic science on its part
acts as an advisory body to the DoJ and carries out various roles that form the framework for accreditation. These include advice on training on
science and law, testimony and reporting, provision of interim solutions, and above all, accreditation and proficiency testing (Garfinkel et al.,
2009). Therefore, though there are no consistent accreditation frameworks, the framework to regulate bodies that offer credentialing exists
and operates with a clear mandate. The development of accreditation oversight in digital forensics has since been reported at the national
level. Coordinated by the DoJ and with the advice of NIST, such frameworks have emerged as a product of OSAC’s efforts. According to Butler
(2017), OSAC has been involved in the development and promulgation of technically-appropriate and universally accepted documentary
standards that are used by accrediting bodies to audit forensic laboratories and carry out credentialing of forensic investigators. OSAC has since
developed to include a Forensic Science Standards Board and various committees and subcommittees that are responsible for offering oversight
in the approval process for forensic sciences standards as provided by various scientific area committees. There are several credentialing
bodies, many of which are international that are apparent in the field of digital forensics. Gladyshev, Marrington, & Baggili (2014) note that the
bulk of these organizations are either for profit or privately owned, with the government only providing the business operational framework
that such bodies can use in carrying out certification and accreditation. They include companies like Mile2 and ISC2. Other entities include the
EC-Council, the American Board of Information Security and Computer Forensics (ABISCF), International Association of Computer Investigative
Specialists (IACS) and International Society of Forensic Computer Examiners (ISFCE) (Freiling & Schwittay, 2007). Some of these bodies, in
particular, ISC2 , use the standards and frameworks issued by bodies like NIST to offer certifications such as Certified Information System
Security Professional (CISSP), Certified Authorization Professional (CAP), and Certified Cyber Forensics Professional (CCFP). For instance, the CAP
certification, which includes Digital Forensics Incident Handling, Risk Management, Continuous Monitoring, Auditing, and Assessment, is based
almost entirely on the NIST guidelines, in particular the 800 series and more specifically, 800-86 (Guide to Integrating Forensic Techniques into
Incident Response), 800-37 (Risk Management Framework), 800-30 (Risk Management Guide), 800-39 (Managing In- formation Security Risks),
800-53 (Security Controls), 800-53A (Security Control Assessments), and 800-137 (Continuous Monitoring) among others. Other organizations
such as EC-Council have had certifications for years in the field and continue to add more and revise already existing ones to make them more
attractive to government agencies and private organizations. These certifications are updated every 3-5 years with more material added, some
outdated material removed, and most are touted as skills that government and industry look for in today’s forensic and security professionals.
The fact that there are so many private organizations offering so many certifications, many in digital forensics, is testament to the need for
having a credentialing and accreditation process as well as a testament to how private organizations are utilizing this opportunity to advance
their own goals, primarily financial, even if they are labeled as non-profit. 4. CASE STUDIES The National Academy of Sciences stresses the
importance of quality assurance procedures in the practice of forensic science to “identify mistakes, scientific fraud, examiner bias, and to
confirm the continued validity and reliability of forensic processes and to improve on processes that need to be improved” (Jordaan, 2012). In
digital forensics specifically, a comprehensive quality assurance/quality management plan is required to ensure the credibility of digital forensic
laboratories. Quality assurance in the digital forensics process is also seen as a critical issue in the practice of forensic science by both the
National Research Council in Washington, DC and the Association of Chief Police Officers in London. As the public have seen in recent years,
failure to implement quality assurance procedures in digital forensics can lead to innocent persons being convicted of crimes (Jordaan, 2012).
One particular case which resulted in a wrongful conviction was that of Connecticut school teacher Julie Amero (Jordaan, 2012). According to
Alva & Endicott-Popovsky (2012), the case of State of Connecticut v. Julie Amero provides an understanding of how a general lack of knowledge
of digital forensic evidence can lead to the wrongful conviction of an innocent person. In 2004, Connecticut substitute teacher Julie Amero was
monitoring a seventh-grade classroom. Having had to step out into the hallway for a moment, upon her return, Amero found two students
browsing a website about hair styling (Alva & Endicott-Popovsky, 2012). Soon after that, the web browser began opening pop-up
advertisements depicting pornographic images. Amero did not turn off the computer, as she was instructed not to and was unaware that the
monitor itself could be turned off. Several of the students in the classroom were exposed to the pornographic content. During Amero’s trial, the
primary evidence presented by the state was the forensic copy of the hard drive of the computer in question. Though the digital forensic
investigator, in this case, did not utilize industry standards to make a copy of the hard drive, the evidence was still admitted into court by the
judge. The prosecution claimed that digital evidence would show an Internet history of pornographic links, indicating that Amero deliberately
visited pornographic websites (Alva & Endicott-Popovsky, 2012). Later during the ordeal, a computer forensics expert for the defense
discovered that the school’s antivirus software was not regularly updated nor maintained; also, no antispyware, firewall, or current content
filtering tool was found on the school’s computer (Alva & Endicott-Popovsky, 2012). The defense computer forensics expert was Herb Horner, a
self-employed computer consultant. In his examination of the hard drive, imaged from the school’s computer, Horner found evidence that
spyware had been installed on the computer, thus causing pornographic pop-up images to continuously appear on the monitor (Alva &
Endicott-Popovsky, 2012). Despite the evidence found by Horner, the judge in this case refused to allow the full testimony of defense expert witness Herb Horner into evidence, claiming that the information to be presented by Horner was not made available during discovery prior to the trial proceedings (Alva & Endicott-Popovsky, 2012). Ultimately, Amero was found guilty of “Risk of Injury to a Child,” and at one point faced
the possible fate of a 50-year prison sentence. Fortunately, the State Court of Appeals reversed the decision made by the lower court, and a
motion for a new trial was accepted. In an effort to put the events behind her, Amero eventually pled guilty to a misdemeanor and agreed to
have her teaching license terminated (Alva & Endicott-Popovsky, 2012). The events leading up to and during Amero’s trial caused great
emotional, social, and financial stress on her and her family. Amero and her family have also experienced several health problems due to the
stress caused by the events leading up to and during her trial (Alva & Endicott-Popovsky, 2012). While the case detailed above shows that
digital forensics is not foolproof and can lead to the conviction of innocent persons, digital forensics handled poorly has also led to guilty
persons being acquitted in court. One example of this is the case of Aaron Caffrey. On September 20, 2001, less than two weeks after the September 11, 2001 (9/11) terrorist attacks, Aaron Caffrey was charged with carrying out “a denial of service attack on the computers of the port of Houston, Texas” (Brenner, Carrier, & Henninger, 2004). During trial proceedings, Caffrey claimed that the evidence brought against him had
been installed on his computer without his knowledge by malicious actors, installing a Trojan horse program to gain control of his computer and
launch the DDoS attack. A forensic examination of his computer by the prosecution’s expert witness, Professor Neil Barrett, found tools that could
be used to launch an attack, but no trace that a Trojan horse had been planted, despite Caffrey’s claim (George, 2003). Nevertheless, Aaron
Caffrey was acquitted of launching a distributed denial-of-service (DDoS) attack in the United States, even though both prosecutorial and
defense attorneys confirmed that Caffrey’s computer was responsible for the DDoS attack (Brenner et al., 2004). It is assumed that Caffrey’s defense was able to convince the jury that a Trojan horse armed with a “wiping tool” was responsible for the attack, editing the system’s log files and deleting all trace of the Trojan; the prosecution argued, without success, that no technology existed that could perform such sophisticated tasks. Caffrey’s case is part of the phenomenon commonly known as the “Trojan horse defense,” which became popular in the UK during the early 2000s (Brenner et al., 2004).

5. KEY FINDINGS

There were a number of findings from the research
conducted on digital forensics investigation. First, it became apparent that credentialing was a major issue in digital forensics and featured
some of the main issues that were on the radar of major stakeholders such as the National Academy of Sciences and NIST (Casey, 2009; 2011).
Credentialing therefore exemplified the broader bias toward applied research over fundamental theorizing in the general domain of forensic science. In addition, the field as a whole was fragmented and lacking in proper curricula; this, rather than the formation of credentialing frameworks, was the preoccupation of various stakeholders and educators (Nance, Hay, & Bishop, 2009). As such, the issue of credentialing, while important, had been overshadowed by the lack of proper, standardized curricula in the domain. It was also apparent that the state and federal
levels of governments were largely nonactors in the credentialing of digital forensic investigators. According to Garfinkel (2010), the majority of
the bodies involved in accreditation and certification were private companies, including non-profit and for-profit organizations. They included
Mile2, EC-Council, and ISC2 among others, offering a number of accreditations such as the Certified Computer Examiner (CCE) to digital forensic
experts. The scarcity of literature on accreditation and credentialing makes it difficult to determine the repute and ratings of these
organizations (Lillard, 2010). However, they appeared to be the main players in credentialing in the absence of state and federal government actors. Instead, at least in part, the federal government offered guidelines which these bodies used for their curricula and certification development, providing frameworks and standards to be applied in the operation of the credentialing bodies. These
guidelines were carried out by the DoJ, National Academy of Sciences and other affiliates working closely with the DoJ such as OSAC and NIST.
According to Lundquist (2016), there are several instances where private digital forensics practitioners have failed in assisting DoJ investigations, leading to
the incarceration of the innocent and mistrials in some cases. These include the case of State of North Carolina vs. Bradley Cooper and the
previously mentioned case of State of Connecticut vs. Julie Amero among others. In each of the highlighted cases, there were anomalies in the
process of collection, collation, submission, and reporting of evidence. Oversight bodies can improve this by coming up with a standardized
framework for digital forensics that can be applied in all cases. This entails credentialing of experts that the court can rely upon as experts in
cases requiring digital forensic evidence (Kessler, 2007). At the moment, oversight appears fragmented due to the lack of a singular, unifying,
and standardized curriculum to build on at the national or even at the state level.

6. RECOMMENDATIONS

Based on the research presented, more attention clearly needs to be paid to credentialing, which entails research, funding, and advocacy at the national and state levels. A
national framework for developing and teaching digital forensics in order to bring standardization to the
field is a necessity. This needs to be followed by a complementary credentialing system which would
set the base for professionalism in digital forensics investigation methodology, processes, and
techniques. Finally, state and federal governments must assume active roles in the oversight and
accreditation of credentialing bodies with measurable results. Meyers and Rogers (2004) identify the following three areas where
the computer forensics field needs improvement: the creation of a flexible standard, the qualification of expert witnesses, and standards regarding the analysis, preservation, and presentation of digital evidence. Any standard(s) developed for use in the computer forensics discipline must allow for flexibility, so that the standard may adapt to the continuous changes in technology and the forensic process. It is also important that computer forensic standards cover all aspects of the forensic process, from the search and seizure of digital evidence to the analysis and
examination of the evidence. The second area identified by the authors as needing improvement is the qualification of expert
witnesses. Because computer forensics is still considered to be in its infancy, it does not have any formal
credentialing bodies, nor a formal educational process. Therefore, in adjudication processes, the courts
accept persons as expert witnesses based on their skills and previous professional work experience.
While this process has not been challenged thus far, Meyers and Rogers (2004) anticipate that in the future, expert
witnesses’ qualifications will be more commonly challenged. The final area identified by the authors as needing
improvement is standards regarding the analysis, preservation, and presentation of digital evidence. Meyers and Rogers (2004) state that there
should be “rigorous” standards and requirements along with continuous updates to the forensic process. Currently, the common method used
to analyze digital evidence relies mostly on the software and/or hardware an expert uses in the analysis of the evidence; the authors challenge
that relying solely on the software/hardware does not allow experts to fully understand the digital forensics process so that they may articulate
the process to a judge in court proceedings. Finally, Meyers and Rogers (2004) stress the importance of the implementation of a universal
system for certifying those who claim to be computer forensic professionals, as a
continuous lack of professional certification,
investigative standards, and peer review process may eventually result in computer forensics being labeled as “junk
science” instead of an accepted scientific discipline (Meyers & Rogers, 2004).

7. POSSIBLE OUTLINES FOR A FRAMEWORK

The
topic of presenting a potential full solution and/or framework for digital forensics could arguably be a doctoral dissertation in its
own right. It is a large undertaking and requires a great deal of research. One can argue that even then it truly
requires the efforts of governments, law enforcement, and academics to put forth a viable solution. Nevertheless,
the following possible outlines are intended to present the reader with some possibilities that are currently lacking in the field and could serve
as starting points. Abdalla, Hazem, and Hashem (2007) offer a guideline model for digital forensic investigation in their paper presented at the annual ADFSL Conference on Digital Forensics, Security and Law. In it they first present several existing models, including:
1. The US Department of Justice’s Electronic Crime Scene Investigation: A Guide for First Responders
2. An Abstract Digital Forensic Model (Reith & Gunsch, 2002)
3. The Integrated Digital Investigation Model, consisting of 5 groups of 17 phases total (Carrier & Spafford, 2003)
4. A Hierarchical, Objectives-Based Framework for the Digital Investigation Process (Beebe & Clarke, 2004)
The authors then proceed to offer their
own model, which includes the following:
1. Preparation phase, which includes pre-preparation, case evaluation, preparation of a detailed design for the case, and determination of required resources.
2. Physical forensic and investigation phase, which has the goal of collecting, preserving, and analyzing the physical evidence in an attempt to reconstruct the crime scene.
3. Digital forensic phase, which needs to identify and collect electronic events that may have occurred and proceed with analyses.
4. Reporting and presentation phase, which needs to be based entirely on the policy and laws of each jurisdiction (e.g., state, county, country) and presents the conclusions and corresponding evidence from the investigation.
5. Closure phase, which requires reviewing the whole investigation process and determining whether the evidence found and collected solves the case in a forensically sound manner.
The model presented by Abdalla, Hazem, and Hashem (2007) can be considered
universal, meaning that the authors try to have a model that is applicable in every possible locality. The model does not address issues when
dealing with national security and intelligence systems that require higher sensitivity. Nevertheless, it, together with NICE from NIST mentioned previously as well as the other models discussed, can form a solid starting point for the development of a digital forensic investigation
framework that once formulated, should be sophisticated and flexible enough to apply to a wide range of localities and entities. Part of the
framework would need to discuss how to properly educate and credential would-be investigators. At its heart, a digital forensic framework must address the following areas:
1. Preparation phase
2. Acquisition phase
3. Analysis phase
4. Reporting phase
5. Legal phase
6. Education phase
7. Credentialing phase
8. Accreditation phase
This means that digital forensic investigators must be trained in these 8 main phases. At the state and/or federal level, interested investigators must be required to register and take rigorous exams. These exams must address the phases of digital investigation and evaluate would-be investigators’ understanding of the ideals and processes involved in conducting digital investigations. These exams must focus on assessing a test taker’s ability to understand the digital forensic process with the realization of its legal and ethical importance. Passing these exams must be made necessary to receive a state or federal license to practice digital
forensic investigation. This would form the backbone of the credentialing process of investigators. Given that such frameworks would have to
be turned into curricula at the academic level in order to prepare interested applicants in digital forensics, that, in turn, would bring about the
accreditation phase required for digital forensics as all reputable universities teaching the field must be appropriately accredited. Existing
private sector certifications must be made moot and removed as they generally serve the financial interest of the organization and not that of
the general public.

8. CONCLUSION

The present research brings to light obstinate issues in the credentialing of digital forensic investigators.
The status quo reveals a troubling scenario: a lack of full government participation, of proper certification bodies, and of oversight. This has, however, been overshadowed by the apparent lack of a
consistent curriculum at the national and state levels to guide the teaching of digital forensics at the
university level and other institutions of higher learning. The findings at a glance show that there is a lot
to do to instill professionalism and inspire further development of digital forensics not only as a branch
of forensic science but as an independent domain emerging in contemporary scholarship. If the
recommendations issued are to be followed, there shall not only be a solution at the academic level of
digital forensics but also at the professional level, which remains a cause for concern. The governments should spearhead
curricular reinvention and development and take their active roles in the promotion of a unified credentialing framework to guide other bodies
in the same direction. To be sure, federal agencies such as the FBI, the Secret Service, the IRS, and the DoD have their own
certification and accreditation processes. NIST also offers excellent certification and accreditation guidelines in
its 800 series Special Publications. External certification and accreditation processes supported and approved by governments are desirable as
they bring consistency and professionalism to the profession of digital forensics. Programs developed by DoD, NIST, DHS, etc.
are certainly useful and at times quite necessary, but these efforts are not coordinated and often target
the specific needs of the agency developing it. Many times, they are too broad, attempting to address too
much. What is needed is a collective and coordinated effort by the governments, and this cannot come soon
enough. The recent breach of the federal Office of Personnel Management (OPM), which leaked over 22 million sensitive personnel records, and the Equifax breach, in which over 146 million private records of Americans were stolen, show the tremendous need for
proper education, credentialing, and accreditation of professionals in digital forensics investigations.

Federal, congressionally-enacted certification rules are unique---they restore the credibility of the field.
Amy Lynnette Popejoy 15, University of Colorado Denver, “Digital and Multimedia Forensics Justified:
An Appraisal on Professional Policy and Legislation,” 2015, Thesis
CHAPTER I INTRODUCTION- NATIONAL ACADEMY OF SCIENCES REPORT 2009

The Science, State, Justice, Commerce, and Related Agencies Appropriations Act of 2006 became law in November 2005. As a result of that Act, the National Institute of Justice (NIJ), authorized by Congress, sponsored
the National Academy of Sciences (NAS) Committee Project – “Identifying the Needs of the
Forensic Science Community,” to conduct a study within the field of forensic science. (1) The appointed
Forensic Science Committee met on eight occasions and later delivered the February 18, 2009, NAS Executive Summary- “Strengthening
Forensic Science in the United States: A Path Forward,” i.e. the NAS Report 2009. (2) The executive summary identified findings of the study and
outlined 13 Recommendations for the forensic science community to consider. This thesis will explore Recommendation 1- “Promote the
Development of Forensic Science,” Recommendation 2- “Standardized Terminology in Reporting and Testimony,” and Recommendation 10-
“Insufficient Education and Training.” Recommendation 1- “Promote the Development of Forensic Science,” suggests establishing an independent federal entity, funded by Congress, with expertise in, but not limited to, research, education, multiple forensic science disciplines,
and law. The oversight of this entity should develop programs to improve best practices, standards, and all related strategies to advance the
credibility and reliability of forensic science at the federal, state, and local levels. Chapter II of this thesis expands on Recommendation 1 and
focuses chronologically on professional policy and legislative advances since the release of NAS Report 2009 through 2014, specifically how
these developments relate to digital and multimedia science (DMS). As for Recommendation 2- “Standardized Terminology in Reporting and Testimony,” there are currently no federally accepted standards or guidelines for terminology used in testifying and reporting results of
forensic science investigations or any laboratory format with defined minimums specifying information needed to convey conclusions to the
court. Chapter III addresses Recommendation 2 and the issues of legal language and terminology, model laboratory reports, and expert
testimony concerning DMS case law. Recommendation 10- “Insufficient Education and Training,” observes that forensic evidence lies at the juncture between science, technology, and the legal community. In the age of information, everyone who plays a role in the criminal justice system must be accountable for increased learning and knowledge in and around their area of expertise. Chapter IV analyzes Recommendation 10, identifying legal awareness for the digital and
multimedia examiner to understand the role of the expert witness, the attorney, the judge and the admission of forensic science evidence in
litigation in our criminal justice system. Challenges Facing the Forensic Science Community David Shawn Pope, Edgar Steele, Brandon Mayfield,
and George Zimmerman, are just a few cases, discussed with detail in Chapter III- Expert Testimony, indicating troubling legal issues based on
interpretations of forensic evidence. The Innocence Project website, http://www.innocenceproject.org, highlights the multitude of cases and
consequences of invalidated and improper forensic science used in the criminal justice system. In fact, many forensic science disciplines,
outside the Deoxyribonucleic Acid (DNA) gold standard, have never been subjected to rigorous peer-reviewed scientific evaluation. The
Innocence Project defines ‘invalidated and improper forensic science’ as:
1- the use of forensic disciplines or techniques that have not been tested to establish their validity and reliability;
2- testimony about forensic evidence that presents inaccurate statistics, gives statements of probability or frequency (whether numerical or non-numerical) in the absence of valid empirical data, interprets non-probative evidence as inculpatory, or concludes/suggests that evidence is uniquely connected to the defendant without empirical data to support such testimony; or
3- misconduct, either by fabricating inculpatory data or failing to disclose exculpatory data.
Invalidated and improper forensic science is the
second greatest contributing factor in wrongful convictions, the first being eyewitness misidentification, and is implicated in 51% of the 300 exonerations to date (Fig 1.1), for which 17 exonerees could have been executed. This factor has also led to claims not supported by science, errors due to unreliable
methods, scientific negligence, misconduct, concealed evidence of innocence, and vague or confusing terms that jurors could not be expected
to understand. An even colder fact is that in 90-95% of all criminal cases, DNA testing is not an option and the justice system must rely on non-DNA forensic disciplines for the presentation of evidence.

Disparities in the Forensic Science Community

The word ‘forensic’ by definition
implies a relationship to scientific knowledge and the court of law, and forensic science is a key factor in the fundamental functioning of our criminal justice system. DNA [[FIGURE 1.1 OMITTED]] became a highly accepted scientific discipline, mainly because of
federal funding, research, provision, and necessity. In 1994, as a result of the DNA Identification Act, an advisory board was established to
address research relevant to DNA. Professionals from the public and private sector came together and developed quality assurance standards
for testing in laboratories. These working groups created a pathway for the DNA community to follow and federal funding supported the
implementation of new practices, database index systems, and eventually led to the Innocence Protection Act of 2004, which allows imprisoned
people access to DNA testing to prove innocence. DNA is relied upon to provide a high level of certainty in the criminal justice system because it
was science-based and tested before it was presented in the courtroom. Likewise, the pharmaceutical industry tests and approves medication
long before it is released to the public, but there are differences among the disciplines of science. (3) (4) In August of 2013, President Barack
Obama stated in an interview, "I think there are legitimate concerns that people have that technology is moving so quick that, you
know, at some point, does the technology outpace the laws that are in place and the protections that are in place?” (5) This
idea, combined with the lack of federal standards referenced across state and local law enforcement
investigation units, raises a very valid point. Technology only continues to develop, forcing the courts to
reconcile related forensic arguments. Digital and multimedia evidence (DME), referred to as a non-DNA discipline, relies
to some extent on observation, experience, and reasoning based analysis. DNA evidence relies more on biological and
chemical based analysis. Although all forensic analysis is subject to the human factor, [[FIGURE 1.2 OMITTED]] non-DNA
evidence analyzed using the more subjective methods can lead to higher error rates and less accuracy and
reliability in drawing expert conclusions. However, when non-DNA forensic evidence is adequate, it can
still be accurate and reliable and should not be dismissed altogether. Understanding and evaluating these limitations of evidence will help reform efforts aimed at attaining forensic truth, reducing wrongful conviction rates among the innocent, and increasing public safety by keeping criminals from going free. In response to long-awaited and
disturbing questions about the accuracy and reliability of non-DNA forensic science (Fig 1.2), the Consortium of Forensic Science Organizations (CFSO) urged Congress to pass legislation directing NAS to conduct an independent needs assessment study within these forensic disciplines. The
vehicle used to pass this legislation was the Science, State, Justice, Commerce, and Related Agencies Appropriations Act of 2006 and the
findings then became the NAS Report 2009. Before the report, it was simply assumed that non-DNA forensic science was well grounded in scientific methodology; unlike DNA, non-DNA forensic disciplines did not have a cheerleading commission to support or represent them at
the federal level. Creating the National Commission on Forensic Science (NCFS), independent of the jurisdiction of the legal or law enforcement community, allowed a governing board to [[FIGURE 1.3 OMITTED]] mandate and manage setting new standards for validation methods
and practices to correct inconsistent science. The goal being that through verified and validated methodology, human error and bias can be
decreased, terminology can be unified, and report findings can be consolidated with scrutinized evidence before it ever reaches the court of
law. There is a response to forensic science reform in all three branches of government (Fig 1.3). The Executive Branch is presently building the framework of reform with SoFS, NCFS, and OSAC. The Legislative Branch is continuing to draft and re-introduce legislation in support of that framework, and the Judicial Branch continues to decide and argue case law, causing reform. It is the goal of OSAC to create the [[FIGURE 1.4
OMITTED]] Forensic Science Code of Practice- Registry of Approved Standards and Registry of Approved Guidelines. This registry will
catalog a database of documents from all of the forensic science disciplines (Fig 1.4). OSAC will not write the documents but will require a
vetting process, promoting documents for the standards development process, in order for the approved standards and guidelines to be added
to the registry. (6) How does DME fit into forensic science reform? How do we validate, for admissibility to
the court, every single tool used for analysis of digital and multimedia evidence? How do you factor in, measure, explain, or attempt to mitigate the human factor, i.e., cognitive bias, as an element of forensic science analysis? It is thought
provoking to decide how to write best practices and standard operating procedures, or mapping details of likelihood ratio statistics, regarding
the limitless conditions and variables related to DME. As soon as technology changes, which happens at an alarmingly rapid rate, the validation
process must begin all over again. Just last year in the case of Michael Brown, Ferguson, Missouri, the point was raised again that one
technological solution for law enforcement encounters is that all police officers should be required to wear body cameras. Unfortunately,
several state and local law enforcement agencies that decide to use this technology might purchase new
equipment first and think about long-term implementation, data storage management, retrieval, and privacy
issues after the fact. The progress made since the release of the NAS Report 2009, outlined in the next chapter, ensures that as
professionals, we are focusing on the challenges. As a forensic community, we are identifying next steps and the groundwork is being laid to
address our challenges. (8) (9) (10)

CHAPTER II PROMOTION AND DEVELOPMENT OF FORENSIC SCIENCE

Recommendation 1 of the National
Academy of Sciences (NAS) Executive Summary Report 2009- ‘Strengthening Forensic Science in the United States: A Path Forward,’ (Fig 2.1) is
the promotion and development of forensic science. This chapter expands on Recommendation 1 and focuses chronologically on professional
policy and legislative advances since the release of said report through 2014, specifically to how these developments relate to digital and
multimedia evidence (DME). A brief summary from 1998 -2006, will provide background information relating to the NAS Report 2009. (2)
[[FIGURE 2.1 OMITTED]] In 1998, the Scientific Working Group on Digital Evidence (SWGDE) was formed by the Federal Crime
Laboratory Directors group. This group was one of the earliest organizations to explore and combine digital audio, video, and photography with
computer forensics as a forensic discipline. Agencies represented by founding SWGDE members were the ATF, DEA, FBI, IRS-CID, U.S. Customs,
U.S. Postal Inspection Service, and the U.S. Secret Service. SWGDE worked in cooperation with other organizations including IOCE and ASCLD
adopting and publishing principles and definitions concerning acknowledgement and recognition of ‘digital evidence’ as an accredited
discipline. (11) In 2008, the American Academy of Forensic Sciences (AAFS) created the Digital and Multimedia Sciences (DMS) Section
recognizing the importance of the growing new field. This section to date has 111 members. (12) In 2006, NIJ sponsored the NAS Project-
Identifying the Needs of the Forensic Science Community. (1) The appointed Forensic Science Committee met on eight occasions and later
delivered the February 18, 2009 National Academy of Sciences Executive Summary- Strengthening Forensic Science in the United States: A Path
Forward, ISBN: 978-0-309-13130-8, a total of 352 pages, i.e. the NAS Report 2009. (2) On March 10, 2009, a hearing before the Subcommittee
on Technology & Innovation Committee on Science and Technology, House of Representatives: ‘Strengthening Forensic Science in the United
States: The Role of the National Institute of Standards and Technology,’ convened. The hearing focused on reviewing scientific and technical
issues raised by the NAS Report 2009, along with the role of National Institute of Standards and Technology (NIST). Chairman David Wu, U.S.
Democratic Representative from the State of Oregon, opened with three considerations: the possibility of building on federal resources and
capabilities versus creating a whole new government structure, full support and agreement to the goal of improving forensic science in the U.S.,
and taking the first step in moving from ‘entertainment’ to ‘reality’ with the expectations of forensic science. Representatives present were
Adrian Smith (NE), Paul Broun (GA), and Donna Edwards (MD). The witness panel included Mr. Peter M. Marone, Ms. Carol E. Henderson, Mr.
John W. Hicks, Mr. Peter Neufeld, and Dr. J.C. Upshaw Downs. (10) On March 18, 2009, a hearing before the Committee on the Judiciary United
States Senate: The Need to Strengthen Forensic Science in the United States: ‘The National Academy of Sciences’ Report on A Path Forward,’
convened. Chairman Patrick J. Leahy, U.S. Democratic Senator from the State of Vermont, in his opening statement addressed the NAS Report
2009 confirming problems demonstrated at the heart of our whole criminal justice system and that it showed the ‘CSI Effect’ is not ‘reality’ in
the field of forensic science. Leahy gave two examples, Detroit and Houston- Case Study #3 of this thesis, of laboratories shut down when audits
found less than adequate case results. In 2005, the DOJ reported a backlog of 350,000 forensic exams nationwide and alleged 1 in 5 labs did not
meet American Society of Crime Laboratory Directors (ASCLD) accreditation standards. Leahy stated forensic science is critical to our criminal
justice system in order to punish the guilty and exonerate the innocent. He referenced the Brandon Mayfield case, where an FBI examiner
affidavit was recanted, and the Kirk Bloodsworth case as two examples of faulty forensic science in the courts. The Honorable Harry T. Edwards,
(Senior Circuit Judge and Chief Judge Emeritus, United States Court of Appeals for the District of Columbia Circuit, and Co-Chair, Committee on
the Identifying the Needs of the Forensic Science Community, National Research Council of the National Academies, Washington, D.C.) gave his
statement. Edwards stated his Committee concluded congressional action was needed to cure serious problems facing the forensic science
community, admitted his preconceived views about the practice of scientific disciplines were incorrect assumptions, and made the simple
principal point that forensic science in the United States requires an overhaul. Hearing Submissions for the Record were as follows: -ASCLD, Laboratory
Accreditation Board, Garner, North Carolina: Jami St Clair, Chair, Lab Board, March 16, 2009, letter, Dean Gialamas, President and Beth Greene,
President-Elect, March 17, 2009, letter, Dean Gialamas, President and Beth Greene, President-Elect, December 2008, statement, -Edwards,
Harry T., Senior Circuit Judge and Chief Judge Emeritus, U.S. Court of Appeals for the District of Columbia Circuit, and Co-Chair, Committee on
Identifying the Needs of the Forensic Science Community, National Research Council of the National Academies, Washington, D.C., statement,
-IAI, Robert J. Garret, Metuchen, New Jersey, March 18, 2009, letter, -NDAA, Joseph I. Cassilly, President, Alexandria, Virginia, letter, -Neufeld,
Peter, Co-Director, Innocence Project, New York, New York, statement. (8) On May 13, 2009, a hearing before the Subcommittee on Crime,
Terrorism, and Homeland Security of the Committee on the Judiciary House of Representatives: National Research Council’s Publication
‘Strengthening Forensic Science in the United States: A Path Forward’ convened. Chairman Robert C. Scott, U.S. Democratic Representative
from the State of Virginia, in his opening statement acknowledged the unreliable role forensic science plays in criminal investigations, the fact
that the ‘CSI effect’ reaches most jury pools across the country, and confirmed fears about the national forensic science system. Scott felt the
most disturbing findings regarded judges and trial attorneys. He stated the NAS Report 2009 found that trial judges rarely exclude forensic
evidence and trial attorneys lack scientific training to adequately assess and question the forensic expert witnesses’ conclusions. Ranking
Member Louie Gohmert, U.S. Republican Representative from the State of Texas, opened with statements from the perspective of prosecutor,
district judge, and chief justice. He stated DNA is the forensic gold standard, but his most important point was that although particular
forensic disciplines have not been scientifically validated, this does not mean they are invalid and unreliable. Representatives present
were Mr. Robert C. Scott (VA), Mr. Anthony D. Weiner (NY), Mr. Louie Gohmert (TX), and Mr. Ted Poe (TX). The witness panel included Mr.
Kenneth E. Melson, Mr. Peter M. Marone, Mr. John W. Hicks, and Mr. Peter Neufeld. (9) On September 9, 2009, a hearing before the
Committee on the Judiciary United States Senate: ‘Strengthening Forensic Science in the United States,’ convened. Chairman Patrick J. Leahy,
U.S. Democratic Senator from the State of Vermont, in his opening statement suggested the need to ensure the highest scientific standards and
maximum reliability of forensic science. Leahy referenced the Cameron Todd Willingham case where an innocent man may have been executed
for a crime he did not commit, based in large part on forensic expert witness testimony and forensic evidence without scientific basis. The NAS
Report 2009 was summarized as a foundation to move forward with mandating national standards, enforcing best practices, certification of
examiners, and accreditation of laboratories. Senators present were Mr. Richard J. Durbin (IL), Mr. Sheldon Whitehouse (RI), Ms. Amy
Klobuchar (MN), Mr. Al Franken (MN), and Mr. Jeff Sessions (AL). The witness panel included Mr. Eric Buel, Mr. Paul Giannelli, Mr. Harold Hurtt,
Mr. Barry Matson, Mr. Peter Neufeld, and Mr. Matthew F. Redle. (4) On September 17, 2009, SWGDE released their Position on the National
Research Council (NRC) Report to Congress- NAS Report 2009. The position encompassed all 13 recommendations; however, this chapter
addresses Recommendation 1- The creation of the National Institute of Forensic Science (NIFS). SWGDE recognized the time needed to create a
new federal bureaucracy, NIFS, and proposed an immediate national strategy built on existing forensic organizations. SWGDE stated that minimum
efforts should include standards for recognizing new forensic disciplines, newly established analytical methods, and a community-wide code of
ethics. The position also suggested that funding allocation could follow the competitive Technical Support Working Group (TSWG) example.
(13) On January 31, 2011, the White House Office of Science and Technology Policy (OSTP), Under Executive Order 12881, issued the National
Science and Technology Council (NSTC) Charter of the Committee on Science. The purpose was “to increase overall effectiveness and
productivity of federally supported efforts that develop new knowledge in the sciences…” Functions of the Charter included science policy-
making processes, science policy decisions and programs, integration of science policy agenda, development and implementation federally,
NSTC clearance of documents, and international cooperation in science. On March 29, 2012, OSTP issued the Charter of the Subcommittee on
Forensic Science Committee on Science (SoFS CoS) NSTC to authorize and develop a White Paper summarizing the SoFS’s recommendation to
achieve: the Goals of the NAS Report 2009, a prioritized national forensic science research agenda, and a draft detailing strategy for developing
and implementing common interoperability standards to facilitate the appropriate sharing of fingerprint data across technologies. (14) (15)
[[FIGURE 2.2 OMITTED]] On February 15, 2013, through a Memorandum of Understanding, the Department of Justice (DOJ) and NIST
announced the intent to establish a National Commission on Forensic Science (NCFS-Fig 2.2). This 30-member group would develop federal
guidance at the intersections between forensic science and the courtroom, working together to create national standards for practitioners in
the areas of professional policy, training, and certification. Deputy Attorney General James M. Cole stated forensic science is an essential tool in
the administration of justice and scientifically valid and accurate forensic analysis strengthens all aspects of our criminal justice system. (16) On
June 18, 2013, chairs for 18 of 21 SWGs gathered and discussed the NIST responsibility to create guidance groups intended to replace SWGs
with a new infrastructure. (17) On September 27, 2013, under Docket No. 130508459-3459-01, NIST, under the Department of Commerce, released a
Notice of Inquiry for proposed reorganization of scientific working groups and considered open input toward the ‘Possible Model for the
Administration and Support of Discipline- Specific Guidance Groups for Forensic Science’. The goal was to explore the establishment and
structure of governance models. Comments were requested across questions concerning the structure, the impact, the representation, and the
scope of the guidance groups. (56) SWGDE and the combined SWGs released their DME Response to NIST, Federal Register Notice- September 27,
2013. Two sections spoke to ‘Possible Models for the Administration and Support of Discipline-Specific Guidance Groups for Forensic Science’.
Section I overviewed the request for model perspectives and Section II provided the SWGs’ opinions, with a collective 35 years of direct industry
experience. In reference to Recommendation 1 of the NAS Report 2009, Section I indicated the DME discipline has already been accepted and
successfully tested as a science by the courts at all levels of the judicial process, providing information and results to juries as expert
testimony supported by technical assistance and quality assurance guidance. SWGDE’s established productive history, dedicated strong leadership,
and positions responsive to federal progress display a relentless and continued commitment to DME as a forensic discipline. (57) By
November 26, 2013, the Notice of Inquiry generated 82 public comments consisting of 337 pages across numerous forensic disciplines (20) and
the overall infrastructure was defined in the NIST January Summary, renaming the guidance groups the Organization of Scientific Area
Committees (OSAC). OSAC is practice-focused, reporting only to the Forensic Science Standards Board (FSSB) and will not provide advice to the
Attorney General, NIST Director, or the NCFS (Fig 2.3). This summary also detailed the FSSB, LRC, QIC, SACs, and other infrastructure specifics.
(17) [[FIGURE 2.3 OMITTED]] On January 10, 2014, the DOJ and NIST announced the first-ever appointed National Commission of
Forensic Science, and AAFS released a statement applauding the broad representation and listing the named NCFS members. NCFS members will work
to develop guidance and recommended policy to the U.S. Attorney General on improving forensic science (Fig 2.3). (18) (19) On February 3-4,
2014, at the first NCFS meeting, NIST presented the infrastructure summary plan and slide presentation for the new OSAC (previously called
guidance groups), unifying and incorporating the independent scientific working groups with more than 600 practitioners. The objective of the
infrastructure is to produce standards and guidelines for improving the quality and consistency of forensic science. OSAC was then launched at
the AAFS meeting on February 18th, in Seattle, Washington. It is important to note that digital evidence was not included in the IT/Multimedia
SAC, as shown in the figure below (Fig 2.4). (58) [[FIGURE 2.4 OMITTED]] On February 12, 2014, John D. Rockefeller IV, U.S. Democratic
Senator from the State of West Virginia, introduced the Forensic Science and Standards Act of 2014 to establish a national forensic science
research program. The purpose of this act is to strengthen forensic science by promoting scientific research, establishing science-based
voluntary consensus standards and protocols across forensic science disciplines, and encouraging the adoption of these standards. (22) On
March 27, 2014, Patrick Leahy and John Cornyn introduced the Criminal Justice and Forensic Science Reform Act. Leahy stated our
confidence in the criminal justice system should be strengthened by evidence and testimony, which is
accurate, credible, and scientifically grounded. Since 1989, because of faulty forensic evidence, 314 DNA
exonerees spent a total of 4,202 unnecessary years in prison and guilty men went free, possibly
continuing to commit other crimes. Law Enforcement, Defense Attorneys, Prosecutors, Judges, Scientists, and Practitioners all
want forensic evidence presented to the court that is accurate and reliable, and executive action is not enough. In the
interest of justice, legislation must address comprehensive forensic science reform. (23) (24) Leahy originally introduced this
landmark forensic reform bill in 2011. It was read twice in Congress and referred to the Committee on the Judiciary, but is still not
law. The bill is scheduled to be re-introduced in March 2015. On April 11, 2014, after the DOJ turned the guidance groups over to NIST and
OSAC was established, NIST defined the OSAC roles and responsibilities. The Organizational Authorities and Duties outlined the FSSB, HFC, LRC,
QIC, SAC, SACsubs, and the process for application. (25) On May 2, 2014, the NSTC Report ‘Strengthening the Forensic Sciences’ was released to
summarize three years’ work of the OSTP SoFS in response to the NAS Report 2009 Executive Summary. SoFS
comprised 200 experts across 23 federal agencies and delivered the first set of research findings covering issues related to laboratory
accreditation, certification of forensic science and medicolegal personnel, proficiency testing, and ethics. (26) On
May 7, 2014, NSF and NIJ partnered as co-sponsors to solicit proposals for Industry/University Cooperative Research Centers to develop the
relationship between industry, academia, and government in the relevant areas of forensic science. Federal agencies represented are DOD,
DFSC, DFBA, DHS, DOJ, ATF, DEA, FBI, and NIST. (27) On June 26, 2014, NIST and DOJ appointed 17 members of the first Forensic Science
Standards Board (FSSB- Fig 2.5). This marked the transition from planning to doing in the effort to improve the scientific basis of forensic
evidence used in courts of law. The board consists of 5 research community members, 5 OSAC-SAC Chairs, 6 national forensic science
professional organization members, and 1 ex officio. Richard Vorder Bruegge, Ph.D., FBI, Senior Photographic Technologist, will Chair the
OSAC-SAC IT/Multimedia. (28) On August 19, 2014, NIST announced a competition to create a Forensic Science Center of Excellence, anticipating
$4 million in funding annually over 5 years. The mission of this center will focus on two branches of forensic science, pattern evidence and
digital evidence. This is just one of several centers that NIST proposes. (29)
2AC
Federal, congressionally-enacted certification rules are unique---they restore the
credibility of the field.
Amy Lynnette Popejoy 15, University of Colorado Denver, “Digital and Multimedia Forensics Justified:
An Appraisal on Professional Policy and Legislation,” 2015, Thesis
CHAPTER I INTRODUCTION- NATIONAL ACADEMY OF SCIENCES REPORT 2009 The Science, State, Justice, Commerce, and Related Agencies
Appropriations Act of 2006 became law in November 2005. As a result of that Act, the National Institute of Justice (NIJ), authorized by
Congress, sponsored
the National Academy of Sciences (NAS) Committee Project – “Identifying the Needs of the
Forensic Science Community,” to conduct a study within the field of forensic science. (1) The appointed
Forensic Science Committee met on eight occasions and later delivered the February 18, 2009, NAS Executive Summary- “Strengthening
Forensic Science in the United States: A Path Forward,” i.e. the NAS Report 2009. (2) The executive summary identified findings of the study and
outlined 13 Recommendations for the forensic science community to consider. This thesis will explore Recommendation 1- “Promote the
Development of Forensic Science,” Recommendation 2- “Standardized Terminology in Reporting and Testimony,” and Recommendation 10-
“Insufficient Education and Training.” Recommendation 1- “Promote the Development of Forensic Science,” suggests the creation of an
independent federal entity, funded by Congress, with expertise in but not limited to research, education, multiple forensic science disciplines,
and law. The oversight of this entity should develop programs to improve best practices, standards, and all related strategies to advance the
credibility and reliability of forensic science at the federal, state, and local levels. Chapter II of this thesis expands on Recommendation 1 and
focuses chronologically on professional policy and legislative advances since the release of NAS Report 2009 through 2014, specifically how
these developments relate to digital and multimedia science (DMS). Recommendation 2- “Standardized Terminology in Reporting and
Testimony,” notes that currently there are no federally accepted standards or guidelines for terminology used in testifying and reporting results of
forensic science investigations or any laboratory format with defined minimums specifying information needed to convey conclusions to the
court. Chapter III addresses Recommendation 2 and the issues of legal language and terminology, model laboratory reports, and expert
testimony concerning DMS case law. Recommendation 10- “Insufficient Education and Training,” observes that forensic
evidence lies at the juncture between science, technology, and the legal community. In the age of information, everyone
who plays a role in the criminal justice system must be accountable for increased learning and knowledge in
and around their area of expertise. Chapter IV analyzes Recommendation 10, identifying legal awareness for the digital and
multimedia examiner to understand the role of the expert witness, the attorney, the judge and the admission of forensic science evidence in
litigation in our criminal justice system. Challenges Facing the Forensic Science Community David Shawn Pope, Edgar Steele, Brandon Mayfield,
and George Zimmerman, are just a few cases, discussed with detail in Chapter III- Expert Testimony, indicating troubling legal issues based on
interpretations of forensic evidence. The Innocence Project website, http://www.innocenceproject.org, highlights the multitude of cases and
consequences of invalidated and improper forensic science used in the criminal justice system. In fact, many forensic science disciplines,
outside the Deoxyribonucleic Acid (DNA) gold standard, have never been subjected to rigorous peer-reviewed scientific evaluation. The
Innocence Project defines ‘invalidated and improper forensic science’ as 1- the use of forensic disciplines or techniques that have not been
tested to establish their validity and reliability, 2- testimony about forensic evidence that presents inaccurate statistics, gives statements of
probability or frequency (whether numerical or non-numerical) in the absence of valid empirical data, interprets non-probative evidence as
inculpatory, or concludes/suggests that evidence is uniquely connected to the defendant without empirical data to support such testimony, or
3- misconduct, either by fabricating inculpatory data or failing to disclose exculpatory data. Invalidated and improper forensic science is the
second greatest contributing factor in wrongful convictions, the first being eyewitness misidentification, and is a factor in 51% of the 300
exonerees’ cases to date (Fig 1.1), 17 of whom could have been executed. This factor has also led to claims not supported by science, errors due to unreliable
methods, scientific negligence, misconduct, concealed evidence of innocence, and vague or confusing terms that jurors could not be expected
to understand. An even colder fact is that in 90-95% of all criminal cases, DNA testing is not an option and the justice system must rely on non-
DNA forensic disciplines for the presentation of evidence. Disparities in the Forensic Science Community The word ‘forensic’ by definition
implies a relationship to scientific knowledge and the court of law, and forensic science is a key factor in the fundamental functioning of our
criminal justice system. DNA [[FIGURE 1.1 OMITTED]] became a highly accepted discipline standard of science, mainly because of
federal funding, research, provision, and necessity. In 1994, as a result of the DNA Identification Act, an advisory board was established to
address research relevant to DNA. Professionals from the public and private sector came together and developed quality assurance standards
for testing in laboratories. These working groups created a pathway for the DNA community to follow and federal funding supported the
implementation of new practices, database index systems, and eventually led to the Innocence Protection Act of 2004, which allows imprisoned
people access to DNA testing to prove innocence. DNA is relied upon to provide a high level of certainty in the criminal justice system because it
was science-based and tested before it was presented in the courtroom. Likewise, the pharmaceutical industry tests and approves medication
long before it is released to the public, but there are differences among the disciplines of science. (3) (4) In August of 2013, President Barack
Obama stated in an interview, "I think there are legitimate concerns that people have that technology is moving so quick that, you
know, at some point, does the technology outpace the laws that are in place and the protections that are in place?” (5) This
idea, combined with the lack of federal standards referenced across state and local law enforcement
investigation units, raises a very valid point. Technology only continues to develop, forcing the courts to
reconcile related forensic arguments. Digital and multimedia evidence (DME), referred to as a non-DNA discipline, relies
to some extent on observation, experience, and reasoning-based analysis. DNA evidence relies more on biological and
chemical-based analysis. Although all forensic analysis is subject to the human factor, [[FIGURE 1.2 OMITTED]] non-DNA
evidence analyzed using the more subjective methods can lead to higher error rates and less accuracy and
reliability in drawing expert conclusions. However, when non-DNA forensic evidence is adequate, it can
still be accurate and reliable and should not be dismissed altogether. Understanding and evaluating these
limitations of evidence will help reform move toward attaining forensic truth, reducing wrongful
conviction rates of the innocent, and increasing public safety from criminals who would otherwise go free. In response to long-standing and
disturbing questions about the accuracy and reliability of non-DNA forensic science (Fig 1.2), the Consortium of Forensic Science Organizations
(CFSO) urged Congress to pass legislation directing NAS to create an independent needs assessment study within these forensic disciplines. The
vehicle used to pass this legislation was the Science, State, Justice, Commerce, and Related Agencies Appropriations Act of 2006 and the
findings then became the NAS Report 2009. Before the report, it was just assumed that non-DNA forensic science was well grounded in
scientific methodology, and unlike DNA, non-DNA forensic disciplines did not have a cheerleading commission to support or represent them at
the federal level. Creating the National Commission on Forensic Science (NCFS), independent of the jurisdiction of the legal or law enforcement
community, allowed a governing board to [[FIGURE 1.3 OMITTED]] mandate and manage setting new standards for validation methods
and practices to correct inconsistent science. The goal being that through verified and validated methodology, human error and bias can be
decreased, terminology can be unified, and report findings can be consolidated with scrutinized evidence before it ever reaches the court of
law. There is a response to forensic science reform in all three branches of government (Fig 1.3). The Executive Branch is presently building the
framework of reform with SoFS, NCFS, and OSAC. The Legislative Branch is continuing to draft and re-introduce legislation in support of that
framework, and the Judicial Branch continues to decide and argue case law, driving reform. It is the goal of OSAC to create the [[FIGURE 1.4
OMITTED]] Forensic Science Code of Practice- Registry of Approved Standards and Registry of Approved Guidelines. This registry will
catalog a database of documents from all of the forensic science disciplines (Fig 1.4). OSAC will not write the documents but will require a
vetting process, promoting documents for the standards development process, in order for the approved standards and guidelines to be added
to the registry. (6) How does DME fit into forensic science reform? How do we validate, for admissibility to
the court, every single tool used for analysis of digital and multimedia evidence? How do you factor in,
measure, explain, or attempt to mitigate the human factor, i.e., cognitive bias, as an element of forensic science analysis? It is
thought-provoking to decide how to write best practices and standard operating procedures, or map details of likelihood ratio statistics, regarding
the limitless conditions and variables related to DME. As soon as technology changes, which happens at an alarmingly rapid rate, the validation
process must begin all over again. Just last year in the case of Michael Brown, Ferguson, Missouri, the point was raised again that one
technological solution for law enforcement encounters is that all police officers should be required to wear body cameras. Unfortunately,
several state and local law enforcement agencies that decide to use this technology might purchase new
equipment first and think about long-term implementation, data storage management, retrieval, and privacy
issues after the fact. The progress made since the release of the NAS Report 2009, outlined in the next chapter, ensures that as
professionals, we are focusing on the challenges. As a forensic community, we are identifying next steps and the groundwork is being laid to
address our challenges. (8) (9) (10) CHAPTER II PROMOTION AND DEVELOPMENT OF FORENSIC SCIENCE Recommendation 1 of the National
Academy of Sciences (NAS) Executive Summary Report 2009- ‘Strengthening Forensic Science in the United States: A Path Forward,’ (Fig 2.1) is
the promotion and development of forensic science. This chapter expands on Recommendation 1 and focuses chronologically on professional
policy and legislative advances since the release of said report through 2014, specifically to how these developments relate to digital and
multimedia evidence (DME). A brief summary from 1998 -2006, will provide background information relating to the NAS Report 2009. (2)
[[FIGURE 2.1 OMITTED]] In 1998, the Scientific Working Group on Digital Evidence (SWGDE) was formed by the Federal Crime
Laboratory Directors group. This group was one of the earliest organizations to explore and combine digital audio, video, and photography with
computer forensics as a forensic discipline. Agencies represented by founding SWGDE members were the ATF, DEA, FBI, IRS-CID, U.S. Customs,
U.S. Postal Inspection Service, and the U.S. Secret Service. SWGDE worked in cooperation with other organizations including IOCE and ASCLD
adopting and publishing principles and definitions concerning acknowledgement and recognition of ‘digital evidence’ as an accredited
discipline. (11) In 2008, the American Academy of Forensic Sciences (AAFS) created the Digital and Multimedia Sciences (DMS) Section
recognizing the importance of the growing new field. This section to date has 111 members. (12) In 2006, NIJ sponsored the NAS Project-
Identifying the Needs of the Forensic Science Community. (1) The appointed Forensic Science Committee met on eight occasions and later
delivered the February 18, 2009 National Academy of Sciences Executive Summary- Strengthening Forensic Science in the United States: A Path
Forward, ISBN: 978-0-309-13130-8, a total of 352 pages, i.e. the NAS Report 2009. (2) On March 10, 2009, a hearing before the Subcommittee
on Technology & Innovation Committee on Science and Technology, House of Representatives: ‘Strengthening Forensic Science in the United
States: The Role of the National Institute of Standards and Technology,’ convened. The hearing focused on reviewing scientific and technical
issues raised by the NAS Report 2009, along with the role of National Institute of Standards and Technology (NIST). Chairman David Wu, U.S.
Democratic Representative from the State of Oregon, opened with three considerations: the possibility of building on federal resources and
capabilities versus creating a whole new government structure, full support and agreement to the goal of improving forensic science in the U.S.,
and taking the first step in moving from ‘entertainment’ to ‘reality’ with the expectations of forensic science. Representatives present were
Adrian Smith (NE), Paul Broun (GA), and Donna Edwards (MD). The witness panel included Mr. Peter M. Marone, Ms. Carol E. Henderson, Mr.
John W. Hicks, Mr. Peter Neufeld, and Dr. J.C. Upshaw Downs. (10) On March 18, 2009, a hearing before the Committee on the Judiciary United
States Senate: The Need to Strengthen Forensic Science in the United States: ‘The National Academy of Sciences’ Report on A Path Forward,’
convened. Chairman Patrick J. Leahy, U.S. Democratic Senator from the State of Vermont, in his opening statement addressed the NAS Report
2009 confirming problems demonstrated at the heart of our whole criminal justice system and that it showed the ‘CSI Effect’ is not ‘reality’ in
the field of forensic science. Leahy gave two examples, Detroit and Houston- Case Study #3 of this thesis, of laboratories shut down when audits
found less than adequate case results. In 2005, the DOJ reported a backlog of 350,000 forensic exams nationwide and alleged 1 in 5 labs did not
meet American Society of Crime Laboratory Directors (ASCLD) accreditation standards. Leahy stated forensic science is critical to our criminal
justice system in order to punish the guilty and exonerate the innocent. He referenced the Brandon Mayfield case, where an FBI examiner
affidavit was recanted, and the Kirk Bloodsworth case as two examples of faulty forensic science in the courts. The Honorable Harry T. Edwards,
(Senior Circuit Judge and Chief Judge Emeritus, United States Court of Appeals for the District of Columbia Circuit, and Co-Chair, Committee on
the Identifying the Needs of the Forensic Science Community, National Research Council of the National Academies, Washington, D.C.) gave his
statement. Edwards stated his Committee concluded congressional action was needed to cure serious problems facing the forensic science
community and admitted his preconceived views about the practice of scientific disciplines were incorrect assumptions and the simple principal
point called for an overhaul of forensic science in the United States. Hearing Submissions for the Record were as follows: -ASCLD, Laboratory
Accreditation Board, Garner, North Carolina: Jami St Clair, Chair, Lab Board, March 16, 2009, letter, Dean Gialamas, President and Beth Greene,
President-Elect, March 17, 2009, letter, Dean Gialamas, President and Beth Greene, President-Elect, December 2008, statement, -Edwards,
Harry T., Senior Circuit Judge and Chief Judge Emeritus, U.S. Court of Appeals for the District of Columbia Circuit, and Co-Chair, Committee on
Identifying the Needs of the Forensic Science Community, National Research Council of the National Academies, Washington, D.C., statement,
-IAI, Robert J. Garret, Metuchen, New Jersey, March 18, 2009, letter, -NDAA, Joseph I. Cassilly, President, Alexandria, Virginia, letter, -Neufeld,
Peter, Co-Director, Innocence Project, New York, New York, statement. (8) On May 13, 2009, a hearing before the Subcommittee on Crime,
Terrorism, and Homeland Security of the Committee on the Judiciary House of Representatives: National Research Council’s Publication
‘Strengthening Forensic Science in the United States: A Path Forward’ convened. Chairman Robert C. Scott, U.S. Democratic Representative
from the State of Virginia, in his opening statement acknowledged the unreliable role forensic science plays in criminal investigations, the fact
that the ‘CSI effect’ reaches most jury pools across the country, and confirmed fears about the national forensic science system. Scott felt the
most disturbing findings regarded judges and trial attorneys. He stated the NAS Report 2009 found that trial judges rarely exclude forensic
evidence and trial attorneys lack scientific training to adequately assess and question the forensic expert witnesses’ conclusions. Ranking
Member Louie Gohmert, U.S. Republican Representative from the State of Texas, opened with statements from the perspective of prosecutor,
district judge, and chief justice. He stated DNA is the forensic gold standard but his most important point challenged the belief that although
particular forensic disciplines have not been scientifically validated, it does not mean they are invalid and unreliable. Representatives present
were Mr. Robert C. Scott (VA), Mr. Anthony D. Weiner (NY), Mr. Louie Gohmert (TX), and Mr. Ted Poe (TX). The witness panel included Mr.
Kenneth E. Melson, Mr. Peter M. Marone, Mr. John W. Hicks, and Mr. Peter Neufeld. (9) On September 9, 2009, a hearing before the
Committee on the Judiciary United States Senate: ‘Strengthening Forensic Science in the United States,’ convened. Chairman Patrick J. Leahy,
U.S. Democratic Senator from the State of Vermont, in his opening statement suggested the need to ensure the highest scientific standards and
maximum reliability of forensic science. Leahy referenced the Cameron Todd Willingham case where an innocent man may have been executed
for a crime he did not commit, based in large part on forensic expert witness testimony and forensic evidence without scientific basis. The NAS
Report 2009 was summarized as a foundation to move forward with mandating national standards, enforcing best practices, certification of
examiners, and accreditation of laboratories. Senators present were Mr. Richard J. Durbin (IL), Mr. Sheldon Whitehouse (RI), Ms. Amy
Klobuchar (MN), Mr. Al Franken (MN), and Mr. Jeff Sessions (AL). The witness panel included Mr. Eric Buel, Mr. Paul Giannelli, Mr. Harold Hurtt,
Mr. Barry Matson, Mr. Peter Neufeld, and Mr. Matthew F. Redle. (4) On September 17, 2009, SWGDE released their Position on the National
Research Council (NRC) Report to Congress- NAS Report 2009. The position encompassed all 13 recommendations; however, this chapter
addresses Recommendation 1: the creation of the National Institute of Forensic Science (NIFS). SWGDE recognized the time needed to create a
new federal bureaucracy, NIFS, and proposed an immediate national strategy using existing forensic organizations. SWGDE stated that minimum
efforts should include standards for recognizing new forensic disciplines, newly established analytical methods, and a community-wide code of
ethics. The position also suggested that funding allocation could follow the competitive Technical Support Working Group (TSWG) example.
(13) On January 31, 2011, the White House Office of Science and Technology Policy (OSTP), Under Executive Order 12881, issued the National
Science and Technology Council (NSTC) Charter of the Committee on Science. The purpose was “to increase overall effectiveness and
productivity of federally supported efforts that develop new knowledge in the sciences…” Functions of the Charter included science policy-
making processes, science policy decisions and programs, integration of science policy agenda, development and implementation federally,
NSTC clearance of documents, and international cooperation in science. On March 29, 2012, OSTP issued the Charter of the Subcommittee on
Forensic Science Committee on Science (SoFS CoS) NSTC to authorize and develop a White Paper summarizing the SoFS’s recommendation to
achieve: the Goals of the NAS Report 2009, a prioritized national forensic science research agenda, and a draft detailing strategy for developing
and implementing common interoperability standards to facilitate the appropriate sharing of fingerprint data across technologies. (14) (15)
[[FIGURE 2.2 OMITTED]] On February 15, 2013, through a Memorandum of Understanding, the Department of Justice (DOJ) and NIST
announced the intent to establish a National Commission on Forensic Science (NCFS-Fig 2.2). This 30-member group would develop federal
guidance at the intersections between forensic science and the courtroom, working together to create national standards for practitioners in
the areas of professional policy, training, and certification. Deputy Attorney General James M. Cole stated forensic science is an essential tool in
the administration of justice and scientifically valid and accurate forensic analysis strengthens all aspects of our criminal justice system. (16) On
June 18, 2013, chairs for 18 of 21 SWGs gathered and discussed the NIST responsibility to create guidance groups intended to replace SWGs
with a new infrastructure. (17) On September 27, 2013, under Docket No. 130508459-3459-01, NIST, the Department of Commerce released a
Notice of Inquiry for proposed reorganization of scientific working groups and considered open input toward the ‘Possible Model for the
Administration and Support of Discipline- Specific Guidance Groups for Forensic Science’. The goal was to explore the establishment and
structure of governance models. Comments were requested across questions concerning the structure, the impact, the representation, and the
scope of the guidance groups. (56) SWGDE and combined SWGs released their DME Response to NIST, Federal Register Notice- September 27,
2013. Two sections spoke to ‘Possible Models for the Administration and Support of Discipline-Specific Guidance Groups for Forensic Science’.
Section I overviewed the request for model perspectives and Section II provided SWGs' opinions with a collected 35 years of direct industry
experience. In reference to Recommendation 1 of the NAS Report 2009, Section I indicated the DME discipline has already proven accepted and
successfully tested as a science by the courts at all levels of the judicial process by providing information and results to juries as expert
testimony through technical assistance and quality assurance guidance. SWGDE’s established productive history, dedicated strong leadership,
and positions with response to federal progress, display a relentless and continued commitment to DME as a forensic discipline. (57) By
November 26, 2013, the Notice of Inquiry generated 82 public comments consisting of 337 pages across numerous forensic disciplines (20) and
the overall infrastructure was defined in the NIST January Summary, renaming the guidance groups the Organization of Scientific Area
Committees (OSAC). OSAC is practice-focused, reporting only to the Forensic Science Standards Board (FSSB) and will not provide advice to the
Attorney General, NIST Director, or the NCFS (Fig 2.3). This summary also detailed the FSSB, LRC, QIC, SACs, and other infrastructure specifics.
(17) [[FIGURE 2.3 OMITTED]] On January 10, 2014, the DOJ and NIST announced the first-ever appointed National Commission of
Forensic Science and AAFS released a statement applauding the broad representation and listing the named NCFS members. NCFS members will work
to develop guidance and recommended policy to the U.S. Attorney General on improving forensic science (Fig 2.3). (18) (19) On February 3-4,
2014, at the first NCFS meeting, NIST presented the infrastructure summary plan and slide presentation for the new OSAC (previously called
guidance groups), unifying and incorporating the independent scientific working groups with more than 600 practitioners. The objective of the
infrastructure is to produce standards and guidelines for improving the quality and consistency of forensic science. OSAC was then launched at
the AAFS meeting on February 18th, in Seattle, Washington. It is important to note that digital evidence was not included in the IT/Multimedia
SAC, as shown in the figure below (Fig 2.4). (58) [[FIGURE 2.4 OMITTED]] On February 12, 2014, John D. Rockefeller IV, U.S. Democratic
Senator from the State of West Virginia, introduced the Forensic Science and Standards Act 2014 to establish a national forensic science
research program. The purpose of this act is to strengthen forensic science by promoting scientific research, establishing science-based
voluntary consensus standards and protocols across forensic science disciplines, and encouraging the adoption of these standards. (22) On
March 27, 2014, Patrick Leahy and John Cornyn introduced the Criminal Justice and Forensic Science Reform Act. Leahy stated our
confidence in the criminal justice system should be strengthened by evidence and testimony, which is
accurate, credible, and scientifically grounded. Since 1989, because of faulty forensic evidence, 314 DNA
exonerees spent a total of 4,202 unnecessary years in prison and guilty men went free, possibly
continuing to commit other crimes. Law Enforcement, Defense Attorneys, Prosecutors, Judges, Scientists, and Practitioners, all
want forensic evidence that is accurate and reliable to the court and executive action is not enough. In the
interest of justice, legislation must address comprehensive forensic science reform. (23) (24) Leahy originally introduced this
landmark forensics reform legislation bill in 2011. It was read by Congress twice and referred to the Committee on the Judiciary and is still not
law. The bill is scheduled to be re-introduced in March, 2015. On April 11, 2014, after the DOJ turned the guidance groups over to NIST and
OSAC was established, NIST defined the OSAC roles and responsibilities. The Organizational Authorities and Duties outlined the FSSB, HFC, LRC,
QIC, SAC, SACsubs, and the process for application. (25) On May 2, 2014, the NSTC Report ‘Strengthening the Forensic Sciences’ was released to
summarize three years' work of the OSTP SoFS in response to the NAS Report 2009 National Academy of Sciences Executive Summary. SoFS
comprised 200 experts across 23 federal agencies and delivered the first set of research findings covering issues related to laboratory
accreditation, certification of forensic science and medicolegal personnel, proficiency testing, and ethics. (26) On
May 7, 2014, NSF and NIJ partnered as co-sponsors to solicit proposals for Industry/University Cooperative Research Centers to develop the
relationship between industry, academia, and government in the relevant areas of forensic science. Federal agencies represented are DOD,
DFSC, DFBA, DHS, DOJ, ATF, DEA, FBI, and NIST. (27) On June 26, 2014, NIST and DOJ appointed 17 members of the first Forensic Science
Standards Board (FSSB- Fig 2.5). This marked the transition from planning to doing in the effort to improve the scientific basis of forensic
evidence used in courts of law. The board consists of 5 research community members, 5 OSAC-SAC Chairs, 6 national forensic science
professional organization members, and 1 ex officio. Richard Vorder Bruegge, Ph.D., FBI, Senior Photographic Technologist, will Chair the
OSAC-SAC IT/Multimedia. (28) On August 19, 2014, NIST announced a competition to create a Forensic Science Center of Excellence, anticipating
$4 million in funding annually over 5 years. The mission of this center will focus on two branches of forensic science, pattern evidence and
digital evidence. This is just one of several centers that NIST proposes. (29)
K

Securitizing cyberspace is the ONLY way to prevent large-scale cyber war – the alt
can’t solve fast enough or change US doctrine
Pickin 12 (Matthew, MA War Studies – King's College, “What is the securitization of cyberspace? Is it a problem?”,
http://www.academia.edu/3100313/What_is_the_securitization_of_cyberspace_Is_it_a_problem)

In analysing the problems of securitization, major issues have been raised. Threat inflation, surveillance, militarisation and the military-industrial complex are only some of the most prominent issues. There are benefits of securitization, however, and at the very end of this analysis it will be explained why securitization is necessary for now. The main supporting arguments for securitization include the future of cyber-attacks in conflicts, protecting critical infrastructure and cyber-crime. The 2010 National Intelligence Annual Threat Assessment stated that the United States was under a severe threat of cyber-attacks (Blair, 2010). Due to the amount of infrastructure connected to the internet in the United States, targets for cyber-attacks are nearly unlimited; as a superpower, the United States presents a valuable target. “As the world’s hegemonic power, the United States is also the main target state that dissident groups, terrorists, and rogue states wish to damage (Valeriano & Maness, 2011, p. 145).” Therefore, the United States must have some defence, or offensive capability, in order to protect itself from
future conflicts and attacks on critical infrastructure. In Foreign Affairs William J Lynn the former deputy secretary of defence wrote that
the centrality of information technology in the United States makes it a prime target. He argued that extending advanced cyber-defences was
crucial for the American economy, and also stated that failure of critical infrastructure would compromise national defence, “Our assessment is
that cyber-attacks will be a significant component of future conflicts (Lynn, 2011).” Therefore, in order to protect the United
States, the government has been forced to securitize the issue. According to William J Lynn an attack could
compromise national defence; therefore the issue is very high in the national security agenda. In the article,
he also addresses the critics who argue that cyberspace is at risk of being militarized and states that US
cyber strategy has been designed to prevent this from happening: “Far from militarizing cyberspace, U.S. cyber-
strategy will make it more difficult for military actors to use cyberspace for hostile purposes (Lynn, 2011).”
In securitizing cyberspace and creating advanced cyber-defences and cyber-weapons the United States is
preparing for any future conflict or attack. If such an attack or conflict is a real existing threat then it is beneficial to prepare
through securitization, otherwise the disadvantages clearly outweigh any advantage. The other main benefit of securitizing cyberspace would
be tackling cyber-crime. According to the security company Sophos, in the first six months of 2010 it received 60,000 new malware samples
every day. Apart from malware, cyber-crime covers many different areas such as financial, piracy, hacking and cyber-terrorism. These crimes
are growing due to the constantly evolving communications system of social sharing of data, online data storage and social networking,
“Although cybercrime has formed a hidden shadow and a kind of evil doppelganger to every step of the Internet’s long history from its very
origins, its growth has suddenly become explosive in recent years by virtually any estimate (Deibert & Rohozinski, Contesting Cyberspace and
the Coming Crisis of Authority, 2012, p. 28).” Both Deibert and Rohozinski argue that the rise in cyber-crime has become a big problem for
states, in 2011 counterfeiting and copying cost the Asia-Pacific region almost $21 billion. Certainly cyber-space has become a rewarding way to
commit crimes with little risk of prosecution, “Cybercrime has elicited so little prosecution from the world’s law enforcement agencies it makes
one wonder a de facto decriminalization has occurred (Deibert & Rohozinski, Contesting Cyberspace and the Coming Crisis of Authority, 2012,
p. 29).” Due to the trouble of cyber-crime, the only way of combating it effectively would be greater state regulation and intervention. With the
whole of cyber-space effectively securitized by the United States due to the threat to national security by technological and social shifts, the
government is asserting itself increasingly to counter these threats. Conclusion In analysing what was the securitization of cyberspace, the
beginnings of the cyber-debate in the United States have been examined; this country was used due to its reliance on information technology and its status as a superpower. The securitization model from the Copenhagen school of
thought was used to understand how issues are non-politicized, politicized and eventually securitized. A
different range of security bills have been examined with this model to understand what was needed for
cyberspace to become a securitized issue. With the definition of securitization dependent on the terms of national security, the
changing definition of this concept was also examined. Securitization has occurred due to an evolving history whereby
the military have understood the potential of information technologies in warfare and where
vulnerabilities have been recognised that could damage national security. In evaluating whether securitization of
cyberspace is a problem, it is very clear that securitization is a growing concern with many complications. There are many issues including
privacy, regulation, surveillance, internet regulation and the growing tension in the international system. However, because
the United
States is a superpower contesting with other cyber-heavyweights such as Iran, Russia and China the
issue will not be de-securitized in the short term. With the discovery and use of cyber-weapons, many states are in the process of making their own for defensive and offensive purposes. The government of
the United States will not de-securitize the issue of cyberspace while there are rival states and groups
which prove a threat to the national security agenda. These problems will continue to exist until there is no defensive agenda and the issue is de-securitized; for now, securitization is a necessary evil.

Reps don’t shape reality


Thierry Balzacq 5, Professor of Political Science and IR @ University of Namur, “The Three Faces of
Securitization: Political Agency, Audience and Context” European Journal of International Relations,
London: Jun 2005, Volume 11, Issue 2
However, despite important insights, this position remains highly disputable. The reason behind this qualification is not hard to understand.
With great trepidation my contention is that one of the main distinctions we need to take into account while examining securitization is that
between 'institutional' and 'brute' threats. In its attempts to follow a more radical approach to security problems wherein threats are
institutional, that is, mere products of communicative relations between agents, the CS has neglected the importance of 'external or brute
threats', that is, threats that do not depend on language mediation to be what they are - hazards for human
life. In methodological terms, however, any framework over-emphasizing either institutional or brute threat risks losing sight of important
aspects of a multifaceted phenomenon. Indeed, securitization, as suggested earlier, is successful when the securitizing agent and the audience
reach a common structured perception of an ominous development. In this scheme, there is no security problem except through the language
game. Therefore, how problems are 'out there' is exclusively contingent upon how we linguistically depict them.
This is not always true. For one, language does not construct reality; at best, it shapes our perception of it.
Moreover, it is not theoretically useful nor is it empirically credible to hold that what we say about a
problem would determine its essence. For instance, what I say about a typhoon would not change its
essence. The consequence of this position, which would require a deeper articulation, is that some security problems are the attribute of the
development itself. In short, threats are not only institutional; some of them can actually wreck entire political
communities regardless of the use of language. Analyzing security problems then becomes a matter of understanding how
external contexts, including external objective developments, affect securitization. Thus, far from being a departure from
constructivist approaches to security, external developments are central to it.

Focus on representations assumes deliberative rational subjects rather than time-pressured agents of routine, which policymakers are, meaning the alt goes nowhere.
Pouliot, Centre for International Peace and Security Studies director, 2008
(Vincent, “The Logic of Practicality: A Theory of Practice of Security Communities”, International
Organization, Spring, ebsco)

The representational bias in modern thinking is reinforced by the logic of scientific practice and its institutional environment. In trying to see the world from a detached perspective, social scientists put themselves "in a state of social weightlessness."18 Looking at the world from above and usually backwards in time implies that one is not directly involved in social action and does not feel the same proximity and urgency as agents do. Contrary to practitioners, who act in and on the world, social scientists spend careers and lives thinking about ideas, deliberating about theories, and representing knowledge. As a result, they are enticed "to construe the world as a spectacle, as a set of significations to be interpreted rather than as concrete problems to be solved practically."19 The epistemological consequences of such a contemplative eye are tremendous: what scientists see from their ivory tower is often miles away from the practical logics enacted on the ground. For instance, what may appear to be the result of rational calculus may in (academic) hindsight just as well have derived from practical hunches under time pressure. This "ethnocentrism of the scientist"20 leads to substituting the practical relation to the world
for the observer's (theoretical) relation to practice - or, to use Bourdieu's formula, "to take the model of reality for the reality of the model."21 To return to diplomacy, Kissinger, whose career spanned the divide between the academic and the policy worlds, concurs that "there is a vast difference between the perspective of an analyst and that of a statesman. The analyst can choose which problem he wishes to study, whereas the statesman's problems are imposed on him. The analyst can allot whatever time is necessary to come to a clear conclusion; the overwhelming challenge to the statesman is the pressure of time. The analyst has available to him all the facts. The statesman must act on assessments that cannot be proved at the time that he is making them."22 As a result, diplomacy is an art not a science.23 It is a practice enacted in and on the world, in real time, and with actual consequences for the practitioner. As such, the practicality of diplomacy cannot be fully captured by detached, representational observation. From this perspective, the epitome of the representational bias is rational choice theory and its tendency to deduce from the enacted practice (opus operatum) its mode of operating (modus operandi). The

problem is deeper than the well-known tautology of revealed preferences. By mistaking the outcome of practice for its process, rational choice "project[s] into the minds of agents a (scholastic) vision of their practice that, paradoxically, it could only uncover because it methodically set aside the experience agents have of it."24 While social scientists have all the necessary time to rationalize action post hoc, agents are confronted with practical problems that they must urgently solve. One cannot reduce practice to the execution of a theoretical model. For one thing, social action is not necessarily preceded by a premeditated design. A practice can be oriented toward a goal without being consciously informed by it. For another, in the heat of practice, hunches take precedence over rational calculations. In picturing practitioners in the image of the theorist, rational choice theory
produces "a sort of monster with the head of the thinker thinking his practice in reflexive and logical fashion mounted on the body of a man of action engaged in action."25 In IR, the literature on the rational design of international institutions best exemplifies this representational bias.26
It is correct that states seek to mold international institutions to further their goals; but it does not follow that this design is instrumentally rational. The outcome of political struggles over institutions and the process of struggling over institutions follow two different logics - observational

versus practical. One cannot impute to practitioners a theoretical perspective that is made possible by looking at social action backward and from above. In IR, the representational bias is not the preserve of rational choice theory, however; most constructivist interpretations of rule-based behavior also fall victim to it. In March and Olsen's seminal formulation, the logic of appropriateness deals with norm- and rule-based action conceived "as a matching of a situation to the demands of a position."27 This definition, however, encompasses two
distinct modes of social action.28 On the one hand, the logic of appropriateness deals with rules that are so profoundly internalized that they become taken for granted. On the other hand, the logic of appropriateness is a reflexive process whereby agents need to figure out what behavior

is appropriate to a situation.29 Sending calls these two possible interpretations "motivationally externalist" versus "motivationally internalist,"30 a distinction that hinges on whether agents reflect before putting a norm into practice. Problematically, from a practice theory perspective, a vast majority of constructivist works fall in the former camp, according to which norm-based actions stem from a process of reflexive cognition based either on instrumental calculations, reasoned persuasion, or the psychology of compliance. Here the representational bias shows very clearly. But even those few constructivists who theorize appropriate action as nonreflexive assimilate it to the output of a structural logic of social action or a habit resulting from a process of reflexive internalization. Nowhere in these interpretations is there room for properly theorizing practical knowledge. Three main strands of constructivist research construe appropriateness as a motivationally externalist logic of social action. A first possibility is to introduce "thin" instrumental rationality in the
context of a community, that is, a norm-rich environment. Keck and Sikkink's "boomerang model" is one of the best-known frameworks of this genre: state elites' compliance with transnational norms first comes through strategic calculations under normative pressure; only at a later
stage do preferences change.31 Schimmelfennig's notion of rhetorical action - "the strategic use of norm-based arguments" - follows a similar logic of limited strategic action constrained by constitutive communitarian norms and rules. A second possibility is to conceive of
appropriateness as a logic that relies on reasoned persuasion. Building on Habermas's theory of communicative action, several constructivists theorize that the "logic of arguing" leads actors to collectively deliberate "whether norms of appropriate behavior can be justified, and that norms
apply under given circumstances."33 Other constructivists build on the notion of "social learning" to explain the workings of argumentative persuasion in social context.34 Finally, a third externalist interpretation of appropriateness emphasizes cognitive processes that take place at the
level of the human mind. Relying on psychological notions such as the acceptability heuristic, omission bias, and images, Shannon argues that "[a]ctors must feel justified to violate a norm to satisfy themselves and the need for a positive self-image, by interpreting the norm and the
situation in a way that makes them feel exempt."35 Overall, most constructivists construe appropriateness as a reflexive logic of action based on thin rationality, reasoned persuasion, or the psychology of compliance. Meanwhile, a few constructivists take the internalist route and prefer
to emphasize the nondeliberative nature of the logic of appropriateness. Yet, even though this understanding seems better in tune with the practice turn advocated in this article, it fails to capture the practicality of social life because internalist constructivists construe appropriateness
either as a structural logic devoid of agency or as a form of habituation that is reflexive in its earlier stages. To begin with the former, some constructivists claim that the internalist logic of appropriateness is plagued with a "structuralist bias" that renders it "untenable as a theory of
individual action." In this account, the essence of agency rests with choice and the capacity to deliberate among options before acting: "If the [logic of appropriateness] is to be individualistic in structure, the individual actor must be left with a reasonable degree of choice (or agency)."
But this restrictive notion of agency seems unwarranted within the structurationist ontology that characterizes constructivism. Agency is not simply about "defying" structures by making choices independently of them. It is a matter of instantiating structures, old or new, in and through
practice. Without practice, intersubjective realities would falter; thus agency (or the enactment of practice) is what makes social reality possible in the first place. In introducing contingency, agency need not be reflexive; and thoughtlessness does not logically imply structural
determination. Taking a different tack, a number of constructivists equate the logic of appropriateness with the internalization of taken-for-granted norms. For instance, Checkel seeks to understand how norm compliance moves from "conscious instrumental calculation" to "taken-for-grantedness." In what he calls "type II socialization," agents switch "from following a logic of consequences to a logic of appropriateness." A similar view can be found in Wendt's discussion of internalization, from "First Degree" to "Third." This process essentially consists of certain
practices getting "pushed into the shared cognitive background, becoming taken for granted rather than objects of calculation." Norms begin as explicit "ought to" prescriptions but progressively fade from consciousness and become taken-for-granted. Significantly, thus, this internalist

interpretation remains embroiled in the representational bias that plagues externalism: the taken-for-granted knowledge that informs appropriateness necessarily begins as representational and conscious. In distinguishing the "logic of habit" from that of appropriateness, Hopf comes closest to accounting for practical knowledge in IR. As he perceptively argues: "Significant features distinguish habitual action from normative compliance. Generally,
norms have the form 'in circumstance X, you should do Y,' whereas habits have a general form more like 'in circumstance X, action Y follows'." This all-important distinction, upon which this article builds, represents a significant step toward a practice turn in IR theory. That said, this article seeks to fix three main limitations in Hopf's framework. First, it remains partly embroiled in an internalization scheme not so distant from Checkel's or Wendt's. In using the language of norm selection versus norm compliance, Hopf implies that the internalist logic of habit follows from the externalist logic of appropriateness. By contrast, this article theorizes practical knowledge as unreflexive and inarticulate through and through. Second, while both logics of habit and practicality build on past experiences, the latter does so contingently while the former is strictly iterative. Third, Hopf insists his is only a methodological distinction between the logic of habit and the logic of appropriateness, which entices researchers to look for evidence of norm compliance in the unsaid instead of explicit invocations. Though an important piece of

methodological advice, this point falls short of granting practicality the full ontological status it deserves in social theory. Before concluding this critique of IR literature, it is necessary to address the "stronger program" in IR constructivism located closer to postmodernism. By its very epistemological standpoint, postmodernism epitomizes the representational bias: detached from, and in fact indifferent to, the social urgency of practices, many postmodernists intellectualize discourse to the point of distorting its practical logic and meaning. In addition, postmodernist works often embody the "armchair analysis" that Neumann urges to overcome in taking a practice turn. Against this tendency, a number of IR constructivists move closer to Foucault's conceptualization of discourse as practice. But several analyses still fall short of accounting for the practicality of discourse, that is, discourse as a practice enacted in and on the world. Fierke's works on "language games," for instance, usefully emphasize background knowledge but do not take the materiality of practices seriously. In a similar fashion, the Copenhagen school asserts that security is practice; but in restricting its focus to traditional discourse analysis, it evacuates the practical logics that make the securitizing discourse possible. Taking a practice turn promises to help overcome the representational bias in IR theory, whether rationalist, constructivist, or postmodernist.

IR forecasting is key to relevant theory and effective policy making


Han, Nebraska political science PhD candidate, 2010
(Doug-ho, “Scenario Construction and Implications for IR Research: Connecting Theory to a Real World
of Policy Making”, 1-26, http://www.allacademic.com/one/isa/isa10/index.php?
cmd=Download+Document&key=unpublished_manuscript&file_index=1&pop_up=true&no_click_key=t
rue&attachment_style=attachment&PHPSESSID=3e890fb59257a0ca9bad2e2327d8a24f

Another example of the use of scenario analysis by defense planners can be found in a series of papers by the Rand Corporation that deal with ongoing
national security issues and develop national security policies for the United States government. A recent article by Brian Jackson and David Frelinger entitled
“Emerging Threats and Security Planning,” one of a series, deals
with issues such as the security threats the U.S. government
faces now and suggests how to discern "true" threats from "false" threats.57 Coping with a variety of emerging threats
means not just focusing on traditional and conventional ways of thinking but also concentrating on unconventional and unusual modes of reasoning, often based on
fanciful thinking that scenario planning most seeks to inspire. Again, a series of papers at the Rand Corporation have dealt with diverse national
security issues and tried to devise various national security policies for the U.S. government on the basis of scenario thinking and analysis. One of the
early efforts in this domain could be found in a work on how nuclear war might start from the perspective of the
early twenty-first century.58 In these papers various scenarios have been unfolded ranging from the possibility of

nuclear warfare to emerging threats and new technological innovations in the military and industrial
domains. The diverse usages of scenarios in government think tanks like Rand suggest that scenarios could
have potential to be used for not only articulating alternative possibilities in a certain issue area but also
applying various thoughts of different outcomes into a real world of policy making. In a word, scenario-
based planning could make a difference in such diverse areas as business, military, economics, and
politics. Common and effective usage of scenario planning in other fields such as business, military, and
even education strategic planning, strengthened by scenario-oriented methodological approaches, has
considerable implications for the development of the field of IR in terms of the possible connection of
theory and policy. If IR scholars could derive more practical insights from these fields of studies, their
research could be more fruitful in the arena of real world policy making. This is why we need a discussion
of the necessity of introducing scenario analysis in our field, the topic of the following section. 4. Why the Study of International Politics
Needs Scenario Analysis Is the rationale for using scenarios in other disciplines still relevant for the study of

international politics? Or do we have to find some other reasons for using the scenario methodology in our field?59 The potential
relevance of the scenario method to the field of IR can be found in various efforts of IR scholars to use a
variety of theoretical insights in order to think about an unknown future. As the previous section suggested, the
scenario methodology has been primarily developed in the areas of military planning and strategic
management. In the field of IR a few scholars have reevaluated the importance of scenario analysis as a social
science methodology.60 These scholars contend that the scenario-building method could make a unique
contribution to IR research because of the alternatives to a “scientific” approach it offers to mainstream
IR theorizing.
Linearity might not be true but it’s provisionally useful
Dr. Sebastian L. V. Gorka et al 12, Director of the Homeland Defense Fellows Program at the College
of International Security Affairs, National Defense University, teaches Irregular Warfare and US National
Security at NDU and Georgetown, et al., Spring 2012, “The Complexity Trap,” Parameters,
http://www.carlisle.army.mil/USAWC/parameters/Articles/2012spring/Gallagher_Geltzer_Gorka.pdf

These competing views of America's national security concerns indicate an important and distinctive characteristic of today's global landscape: prioritization is simultaneously very difficult and very important for the United States. Each of these threats and potential threats—al Qaeda, China, nuclear

proliferation, climate change, global disease, and so on—can conjure up a worst-case scenario that is immensely
intimidating. Given the difficulty of combining estimates of probabilities with the levels of risk associated with these threats, it is challenging to establish
priorities. Such choices and trade-offs are difficult, but not impossible. 30 In fact, they are the stock-in-trade of the
strategist and planner. If the United States is going to respond proactively and effectively to today’s international
environment, prioritization is the key first step—and precisely the opposite reaction to the complacency and
undifferentiated fear that the notion of unprecedented complexity encourages . Complexity suggests a
maximization of flexibility and minimization of commitment ; but prioritization demands wise allotment of
resources and attention in a way that commits American power and effort most effectively and efficiently.
Phrased differently, complexity induces deciding not to decide ; prioritization encourages deciding which decisions matter

most. Today’s world of diverse threats characterized by uncertain probabilities and unclear risks will overwhelm us if the specter of
complexity seduces us into either paralysis or paranoia. Some priorities need to be set if the United States is to find the resources to
confront what threatens it most.31 As Michael Doran recently argued in reference to the Arab Spring, "the United States must train itself to see a large dune as something more formidable than just endless grains of sand."32 This is not to deny the possibility of nonlinear phenomena, butterfly effects, self-organizing systems that exhibit patterns in the absence of centralized authority, or emergent properties.33 If anything, these hallmarks of complexity theory
remind strategists of the importance of revisiting key assumptions in light of new data and allowing for tactical flexibility
in case of unintended consequences. Sound strategy requires hard choices and commitments, but it need not be inflexible . We

can prioritize without being procrustean. But a model in which everything is potentially relevant is a model in
which nothing is.
2AC – AT: Topicality – Subsets – Forensic Science
We meet - we’re a wholesale reform by changing who can testify in every courtroom
across the nation and PTIV
Forensic science is the application of science to the criminal justice system
Michelle Wood, et al, 6 - Waters Corporation, MS Technologies Centre, Micromass UK Ltd (“Recent
applications of liquid chromatography–mass spectrometry in forensic science” Journal of
Chromatography A, 1130 (2006) 3–15, Science Direct,
https://doi.org/10.1016/j.chroma.2006.04.084 //DH

The term "forensic science" covers those professions which are involved in the application of the social and
physical sciences to the criminal justice system. Forensic experts are required to explain the smallest details of the methods used, to
substantiate the choice of the applied technique and to give their unbiased conclusions—all under the critical and often mistrustful gaze of the servants of the
justice, as well as the general public and the media. The final result of the work of the forensic scientist exerts a direct influence on the fate of a given individual.
This burden is a most important stimulus, and one which determines the way of thinking and acting in forensic sciences. Consequently, the methods applied in
forensic laboratories should assure a very high level of reliability and must be subjected to extensive quality assurance and rigid quality control programs. The legal
system is based on the belief that the legal process results in justice. This has come under some question in recent years. Of course, the forensic scientist cannot
change scepticism and mistrust singlehandedly. He or she can, however, contribute to restoring faith in the judicial processes by using science and technology in the
search for facts in civil, criminal and regulatory matters.
2AC – AT: Topicality – Substantial
Substantial means important
Collins English Dictionary, 20 (“substantial”
https://www.collinsdictionary.com/us/dictionary/english/substantial //DH

substantial in British English ADJECTIVE
1. of a considerable size or value: substantial funds
2. worthwhile; important: a substantial reform
2AC – AT: Disadvantage – 2020 Election – Warming

Most accurate 2018 model forecasts a Biden victory set in stone


Bitecofer 19. Rachel Bitecofer is a Senior Fellow at the Niskanen Center. A former associate professor
of political science at Christopher Newport University. 7-1-2019, "With 16 Months to go, Negative
Partisanship Predicts the 2020 Presidential Election," Judy Ford Wason Center for Public Policy,
https://cnu.edu/wasoncenter/2019/07/01-2020-election-forecast/ - AM

In July of 2018, my innovative forecasting model raised eyebrows by predicting some four months before the midterm
election that Democrats would pick up 42 seats in the House of Representatives. In hindsight, that may not seem such a bold prediction,
but when my forecast was released, election Twitter was still having a robust debate as to whether the Blue

Wave would be large enough for Democrats to pick up the 23 seats they needed to take control of the House of Representatives
and return the Speaker’s gavel to Nancy Pelosi. Based on its 2018 performance, my model, and the theory that structures it, seem well poised to

tackle the 2020 presidential election – 16 months out. I’ll serve up that result below, but first let’s set the table by reviewing my
model’s 2018 forecasting success. Not only did I predict that they would gain nearly double the seats they needed,

but I also identified a specific list of Republican seats Democrats would flip, including some, such as Virginia CD 7, that
were listed as “Lean Republican” by the majority of race raters at the time. At a time when other analysts coded even the most

competitive House races for Democrats as Lean or Tilt Democrat , I identified 13 Republican-held districts
as “Will Flips,” 12 as “Likely to Flip,” and 6 as “Lean Democrat.” I also identified a large list of “Toss Ups,” from which I would
later identify the remaining “flippers.” In addition, I identified some “long-shot toss-up” districts that could be viable flips under some turnout scenarios. Of the

original 25 districts I identified as definitely or highly likely to flip, all but one, Colorado CD3, did so, possibly
because the party failed to invest in their nominee there. The post-election diagnostics of my forecasting model, which

departs significantly from the approaches used in conventional election forecasting models, such as those used by
FiveThirtyEight, reveal just how powerful my model was at identifying the House districts and Senate races

capable of producing Blue Wave effects powered by Trump backlash in the electorate. Indeed, the places I
went astray in my final, “handicapped” predictions are races where I ignored the clear signals of my model , such
as Georgia’s 6th congressional district, which my model was quite clear about flipping, and Kentucky’s 6th, which my model was quite clear couldn’t. Still, in other
races, my manual handicapping was necessary, and correct, because despite its overall accuracy, my model underpredicts the Democrats’ two-party vote share in
Utah’s 4th district. Looking ahead to the 2020 Electoral College map, my model delivers on two of the most critical elements of election forecasting:
lead time and parsimony, that is, simplicity. It’s probably not lost on you, dear reader, that I
am offering a forecast not for the presidential primary
election, itself still in its infancy, but for the November 2020 general election that is some 16 months away. And I am offering a
forecast free from all the trappings you are used to. There are no poll aggregators, no daily or weekly updates, no simple versus deluxe versions. Right now, there is
not even a nominee! By and large, I don’t expect that the specific nominee the Democratic electorate chooses will matter all that much unless it ends up being a
disruptor like Bernie Sanders. Indeed, the only massive restructuring I might have to make to this forecast involves a significant upheaval like the entrance of a well-
funded Independent candidate such as Howard Schultz into the general election, which our national survey in March shows would likely pull 5 votes from the
Democrats’ nominee for each one vote it would pull away from Trump. Other potential significant disruptions might be a ground war with Iran, an economic
recession, or a terrorist attack on the scale of 9/11. Otherwise, the
country’s hyperpartisan and polarized environment has
largely set the conditions of the 2020 election in stone. As unpopular as Donald Trump is today, and no matter how badly he trails,
on Election Day Donald Trump will earn the vote of somewhere around 90% of self-identified Republicans. And as 2018

demonstrated, Republicans will increase their turnout rate over 2016. This, combined with a floor for Trump among

Independents of around 38% (because of right-leaning Independents) and an infusion of cash that will dwarf his 2016 efforts, Trump has a floor that
is at least theoretically competitive for reelection and will force Democrats to compete hard to win the presidency. The polarized era doesn’t produce Reagan Era
electoral college landslide maps. Before revealing what my model has to say about 2020, I note one very important point of methodology. To construct predicted
two-party vote shares for the Democratic Party’s nominee in each state, I
use the best turnout estimate available for each state in
2018 for the Democrats. This is important because it allows me to capture the turnout surge we also saw among
Republicans in 2018. Although I predicted an enormous surge in turnout among Democrats and Democrat-leaning Independents, the size of the
corresponding surge among Republicans surprised me somewhat. I predicted the surge of Democratic turnout via negative
partisanship, activated by the tangible threat of living under a unified government controlled entirely by
Donald Trump. What I did not anticipate was that, at least among Republicans, a threat response can be artificially generated at a mass scale and at a time
when a party’s voters should be placated. Despite controlling the White House and both chambers of Congress in 2018, turnout surged nearly as much among
Republicans, leading to the highest overall midterm turnout rates we have seen since 1914. Overall turnout ended up at a whopping 50.4%, tempting many analysts
afterward to conduct comparisons between 2016 and 2018, a presidential-to-midterm comparison that is usually apparently absurd. Trump
and the RNC
accomplished this by running a base-centric mobilization campaign focused largely on stoking fear of
immigration; a strategy they will replicate for 2020 while adding socialism into the mix. Because my 2020
model relies on the 2018 vote to estimate the 2020 vote, it is naturally designed to account for this unexpected bipartisan
turnout surge. As such, my expectation is the 2020 model will be better than the 2018 model , which was built with
Virginia’s one-sided Democratic turnout surge as a turnout guide. So, with no further ado:

2020 Bitecofer Model Electoral College Predictions

Safe D (36.62%)

Likely D (4.65%)

Lean D (10.41%)

Toss Up (11.71%)

Lean R (6.32%)

Likely R (7.06%)

Safe R (23.23%)

Democrat: 278 votes Republican: 197 votes

Barring a shock to the system, Democrats recapture the presidency. The leaking of the Trump campaign’s internal
polling has somewhat softened the blow of this forecast, as that polling reaffirms what my model already knew: Trump’s 2016
path to the White House, which was the political equivalent of getting dealt a Royal Flush in poker, is probably not replicable in
2020 with an agitated Democratic electorate. And that is really bad news for Donald Trump because the Blue
Wall of the Midwest was then, and is now, the ONLY viable path for Trump to win the White House. Why is Trump in so much trouble in the

Midwest? First, and probably most important, is the profound misunderstanding by , well, almost everyone, as to
how he won Michigan, Wisconsin, and Pennsylvania in the first place. Ask anyone, and they will describe Trump’s
2016 Midwestern triumph as a product of white, working class voters swinging away from the Democrats based on the appeal of Trump’s
economic populist messaging. Some will point to survey data of disaffected Obama-to-Trump voters and even Sanders-to-Trump voters as evidence

that this populist appeal was the decisive factor. And this is sort of true. In Ohio, Trump managed the rare feat of cracking 50%. Elsewhere, that

explanation runs into empirical problems when one digs into the data. Start with the numerical fact that Trump
“won” Pennsylvania, Wisconsin, and Michigan with 47.22%, 48.18%, and 47.5% of the vote, respectively ,
after five times the normal number in those states cast their ballots for an option other than Trump or Clinton. This, combined with the

depressed turnout of African Americans (targeted with suppression materials by the Russians) and left-leaning
Independents turned off by Clinton (targeted with defection materials by the Russians) allowed Trump to pull off an
improbable victory, one that will be hard to replicate in today’s less nitpicky atmosphere. Yet, the media
(and the voting public) has turned Trump’s 2016 win into a mythic legend of invincibility. The complacent

electorate of 2016, who were convinced Trump would never be president, has been replaced with the
terrified electorate of 2020, who are convinced he’s the Terminator and can’t be stopped. Under my model,
that distinction is not only important, it is everything. Trump’s second problem is that along with a turnout surge
of Democrats that in many states like Virginia is simply larger than the turnout surges of Republicans because of
demographics, he is deeply unpopular among Independents because of all the abnormal, norm-breaking and

according to the Mueller Report, even illegal things, he does as president. This has left him with an approval rating

averaging just 34.8% in 2019 among Independents, who largely broke against Republicans in the 2018 midterms as my
theory predicted. In a follow-up piece to this forecast, I will show that much
of this swing among Independents is actually the
product of their own turnout surge, which brought more left-leaning Independents out to the polls by
the same negative partisanship mechanisms that moved their partisan counterparts. This is why even the
Democrat’s sharp drift to the left as they chase their party’s nomination, following the Republicans down the path of
ideological polarization won’t have the impact on the vote choice of Independents Republicans are hoping for in
2018. At the end of the day, Independents will be asked to weigh what Democrats might do against what

Republicans, particularly Trump, are doing; the reverse situation from 2016 when Democrats suffered from the referendum effect among
Independents. Even if the Democrat’s nominee is unabashedly liberal, it is not likely Trump can win a

referendum among college-educated Independent voters without a dramatic transformation in both tone and
style. Republicans can survive an under-maximized Democratic turnout surge, like the one we saw in 2018 (I'll be showing this in forthcoming work), but not one
that it is combined with the loss of Independent voters and not one without a corresponding Republican turnout surge which can only be accomplished via things
likely to further isolate Independent voters and agitate Democrats. Does the Democrat’s nominee matter? Sure, to an extent. If the ticket has a woman, a person of
color or a Latino, or a female who is also a person of color, Democratic Party turnout will surge more in really important places. If the nominee is Biden he’d be well-
advised to consider Democratic voter turnout his number one consideration when drawing his running mate to avoid the mistake made by Hillary Clinton in 2016. This is
true for any of the white male candidates. If the nominee hails from the progressive wing of the party, it will provoke massive handwringing both within the party
and the media that if not controlled could become self-reinforcing.
But the Democrats are not complacent like they were in
2016 and I doubt there is any amount of polling or favorable forecasts that will make them so. That fear
will play a crucial role in their 2020 victory. We will not see a divided Democratic Party in 2020.

Non-unique---Trump is winning on CJR now – 1994 crime bill

Predictions are contingent on COVID, which matters the most to voters – even if not,
partisanship means no swing voters
Gutierrez 20 – Associate Director of Communications and Public Relations at the University of Miami
(Barbara; “Experts address the effects of COVID-19 on the presidential campaign”; 5/18/20;
https://news.miami.edu/stories/2020/05/experts-address-the-effects-of-covid-19-on-the-presidential-
campaign.html; accessed 6/25/20) dmc

The biggest issues on everyone’s mind are the pandemic and the economy. How these two phenomena
pan out will greatly affect the vote in November. If the economy can improve and voters give Trump
credit for it, he might be able to hang on. But, if record high unemployment remains shortly before the
election, Trump will likely lose. Also, how Trump handles the pandemic, in relation to how voters think
Joe Biden would handle it, will affect the outcome as well . This is all keeping in mind that most people’s votes are
already fixed in that partisanship determines how people vote long before the candidates are even
announced. —Joseph Uscinski, associate professor, political science It has significantly increased the odds
that Donald Trump will lose the 2020 election. He was already starting from behind—he lost the 2016
popular vote and his job approval ratings have hovered around 41 percent throughout his term. His
reelection strategy was centered on taking credit for the economic prosperity that began during the
Obama presidency and continued through early 2020 while minimizing attention to his policy choices
and personal behavior. The pandemic has shattered this strategy. Historical models of presidential
elections show that economic prosperity—or the lack thereof—is one of the best predictors of the outcome . The
economy has taken a sharp downturn with unemployment rates not seen since the Great Depression,
so Trump’s weak position has gotten worse. More subtly, the pandemic has focused attention on the way the
Trump administration actually works: general disregard for expertise, a failure to fill leadership
positions, and a priority on short-term political gain above all other considerations. As a result, early polling
show Biden pulling ahead of Trump nationally and in battleground states, including Florida. President
Trump’s actions and words over the past two months seem to be a response to his diminished electoral
prospects. He has sought to shift blame for the pandemic and the poor U.S. response to China, to immigrants, and to state
governors. He monopolized daily briefings with administration experts until he suggested people drink

bleach as a COVID-19 cure, then shifted to trying to convince states to relax restrictions as quickly as possible while suppressing expert guidelines on
state restrictions developed by the Center for Disease Control, in the hopes an economic recovery by November will boost his prospects. — Gregory Koger,

professor, political science

Voters are idiots AND don’t perceive policy.


Somin 16 – Ilya Somin, Law Professor at George Mason University. [Democracy and Political Ignorance:
Why Smaller Government Is Smarter, Second Edition, Stanford Law Books]//BPS

THE REALITY THAT MOST VOTERS are often ignorant of even very basic political information is one of the
better-established findings of social science. Decades of accumulated evidence reinforce this conclusion.2
Unfortunately, the situation has not improved much over time.

THE PERVASIVENESS OF IGNORANCE

The sheer depth of most individual voters’ ignorance may be shocking to readers not familiar with the research.
Rarely if ever is any one piece of knowledge absolutely essential to voters. It may not matter much if most Americans are
ignorant of one or another particular fact about politics. But the pervasiveness of ignorance about a wide range of political issues and leaders is
far more troubling.

Many examples help illustrate the point. A survey conducted not long before the 2014 election, which determined control of Congress, found
that only 38 percent of Americans knew that the Republicans controlled the House of Representatives at the time, and the same number knew
that the Democrats had a majority in the Senate.3 Not knowing which party controls these institutions makes it difficult for voters to assign
credit or blame for their performance.

One of the most contentious issues in recent American politics has been the Affordable Care Act of 2010—President Barack Obama’s health
care reform law, often known as Obamacare. Yet much of the public remains ignorant about many aspects of this program. As late as August
2013 a survey found that 44 percent did not even realize that the ACA was still the law.4

For years, there has been an ongoing debate over the future of federal spending in the United States, with sharp partisan divisions over how to
deal with increasingly serious budget deficits that are likely to get worse in the long run. Yet a September 2014 Pew Research Center survey
found that only 20 percent of Americans realize that the federal government spends more money on Social Security than on foreign aid,
transportation, and interest on the government debt.5 Some 33 percent believe that foreign aid is the biggest item on this list, even though it is
actually the smallest, amounting to about one percent of the federal budget, compared with 17 percent for Social Security.6

This result is consistent with numerous previous surveys showing that most Americans greatly underestimate the percentage of federal
spending devoted to Social Security and other entitlement programs, while vastly overestimating the amount devoted to foreign aid and other
minor programs such as the Corporation for Public Broadcasting.7 It is difficult for voters to evaluate competing approaches to reforming tax
and spending policy if they don’t have even a basic understanding of how federal funds are currently spent.

A series of polls conducted just before the Republican Party chose Representative Paul Ryan to be their vice presidential nominee in August
2012 found that 43 percent of Americans had never heard of Ryan and only 32 percent knew that he was a member of the House of
Representatives.8 Unlike Governor Sarah Palin in 2008, Ryan was not a relative unknown catapulted onto the national stage by a vice
presidential nomination. As his party’s leading spokesman on budgetary and fiscal issues, he had been a prominent figure in American politics
for several years.

One of the key policy positions staked out by President Obama in his successful 2012 reelection campaign was his plan to raise income taxes for
persons earning more than $250,000 per year, an idea much discussed during the campaign and supported by a large majority of the public—
69 percent in a December 2012 poll.9 A February 2012 survey conducted for the political newspaper The Hill actually asked respondents what
tax rates people with different income levels should pay. It found that 75 percent of likely voters wanted the highest-income earners to pay
taxes lower than 30 percent of income, the top rate at the time of the 2012 election.10 This inconsistency suggests that many people supported
increasing the tax rates of high earners because they did not realize how high taxes were already.

Even before the 2012 election, economic inequality had been a major political issue for years, in both the United States and many European
nations. Yet surveys consistently show that most Americans and citizens of other democracies have little or no understanding of either the
extent of inequality or whether it has been increasing or decreasing.11 A 2009 survey found that only somewhere between 12 and 29 percent
of Americans can roughly place the shape of the income distribution in the United States when given a choice of five different simple diagrams
with accompanying explanations.12 Even the higher figure is only slightly better than what we would expect from random guessing.13

Equally striking is the fact that in late 2003, more than 60 percent of Americans did not realize that a massive increase in domestic spending had
made a substantial contribution to the then-recent explosion in the federal deficit.14 Most of the public is unaware of a wide range of
important government programs structured as tax deductions and payments for services.15 As a result, they are also unaware of the massive
extent to which many of these programs transfer benefits primarily to the relatively affluent.16

Despite years of controversy over the War on Terror, the Iraq War, and American relations with the Muslim world, only 32 percent of
Americans in a 2007 survey could name “Sunni” or “Sunnis” as one of “the two major branches of Islam” whose adherents “are seeking political
control in Iraq,” even though the question prompted them with the name of the other major branch (the Shiites).17 Such basic knowledge may
not be absolutely essential to evaluation of U.S. policy toward the Middle East. But it is certainly useful for understanding a region that has long
been a central focus of American foreign policy, especially since the 9/11 terrorist attacks in 2001.

Such widespread ignorance is not of recent origin. As of December 1994, a month after the takeover of Congress by the Republican Party, then
led by soon-to-be Speaker of the House Newt Gingrich, 57 percent of Americans had never even heard of Gingrich, whose campaign strategy
and policy stances had received massive publicity in the immediately preceding weeks.18 In 1964, in the midst of the Cold War, only 38 percent
were aware that the Soviet Union was not a member of the U.S.-led NATO alliance.19 Later, in 1986, the majority could not identify Mikhail
Gorbachev, the controversial new leader of the Soviet Union, by name.20 Much of the time, only a bare majority know which party has control
of the Senate, some 70 percent cannot name both of their state’s senators, and the majority cannot name any congressional candidate in their
district at the height of a campaign.21

Three aspects of voter ignorance deserve particular attention. First, many voters are ignorant not just about specific policy
issues but about the basic structure of government and how it operates.22 Majorities are ignorant of such basic aspects of
the U.S. political system as who has the power to declare war, the respective functions of the three branches of government, and who controls
monetary policy.23 Admittedly, presidents sometimes manage to initiate war without congressional approval, as in the case of recent military
interventions in Libya and against the ISIS terrorist organization in Iraq and Syria. But even under modern conditions, presidents usually seek
congressional authorization for major conflicts, and generally keep interventions that lack such authorization carefully limited, usually to air
strikes alone.24 A 2014 Annenberg Public Policy Center study found that only 36 percent of Americans could even name the three branches of the federal government: executive, legislative, and judicial. Some 35 percent could not name even one.25 The 36 percent result was an even lower figure than the 42 percent who could name all three branches in a similar 2006 poll.26

Another 2006 survey found that only 28 percent could name two or more of the five rights guaranteed by the First Amendment to the Constitution.27 A 2002 Columbia University study indicated that 35 percent believed that Karl
Marx’s dictum “From each according to his ability to each according to his need” is in the Constitution
(34 percent said they did not know if it was or not), and only one-third understood that a Supreme Court decision overruling Roe
v. Wade would not make abortion illegal throughout the country.28

Ignorance of the structure of government suggests that voters often not only cannot choose between specific competing policy programs but also cannot easily assign credit and blame for policy outcomes to the right officeholders.
Ignorance of the constraints imposed on government by the Constitution may also give voters an inaccurate picture of the scope of elected
officials’ powers.

The second salient aspect of ignorance is that voters often lack an “ideological” view of politics capable of integrating multiple issues into a single analytical framework derived from a few basic principles; ordinary voters rarely exhibit the kind of ideological consistency in issue stances that is evident in surveys of political elites.29 Some scholars emphasize the
usefulness of ideology as a “shortcut” to predicting the likely policies of opposing parties competing for office.30 At least equally important is
the comparatively weaker ability of nonideological voters to spot interconnections among issues. The small minority of well-informed voters
are much better able to process new political information and more resistant to manipulation than is the less-informed mass public.31

Finally, it is notable that the level of political knowledge in the American electorate has increased only modestly, if at all, since the beginning of
mass survey research in the late 1930s.32 A relatively stable level of ignorance has persisted even in the face of
massive increases in educational attainment and an unprecedented expansion in the quantity and quality of information
available to the general public at little cost.33

For the most part, the spread of new information technology, such as television and the Internet, seems not to have increased
political knowledge.34 The rise of broadcast television in the 1950s and 1960s somewhat increased political knowledge among the
poorest and least-informed segments of the population.35 But more recent advances, such as cable television and the Internet, have actually
diverted the attention of these groups away from political information by providing attractive alternative sources of entertainment.36 For the
most part, new information technologies seem to have been utilized to acquire political knowledge primarily by those who were already well
informed.37 This record casts doubt on the expectations of political theorists from John Stuart Mill onward that an increased availability of
information and formal education can create the informed electorate that democracy requires.38

RECENT EVIDENCE OF POLITICAL IGNORANCE

Data from the time of the recent 2004, 2008, 2010, 2012, and 2014 elections reaffirm the existence of widespread
political ignorance, as does more extensive data from the time of the 2000 election derived from the 2000 American
National Election Study (ANES).39

Sustainable business models and cheap renewables solve warming


Goodstein 19 – Director of the Center for Environmental Policy and the MBA in Sustainability at Bard
(Eban; “2 Degrees of Warming: Bad, Not the End of Civilization”; 9/10/19;
https://leadthechange.bard.edu/blog/2-degrees-of-warming-bad-not-the-end-of-civilization; accessed
6/23/20) dmc

At the same time across the coming decades, technological and social progress in many places on earth opens
up new opportunities in education, environmental education, health, and nutrition. The application of
sustainable, circular-economy business models begins to reduce most forms of pollution. Global population
peaks. And perhaps most importantly, by 2080, very cheap solar energy and storage is ubiquitous across
the planet. This solar dominance will eliminate the energy poverty that today holds back so many
people, and end the corrosive politics of global monopoly control over energy as well. Solar everywhere will also
be the key to the resilience of global systems of trade, industry and agriculture. Cheap energy is the lifeblood of modern civilization. The ecological economist Herman Daly once wrote that following a new dark age, industrial civilization could never again arise. The reason? We have burned up all the easily accessible, high BTU fuels—oil, coal and natural gas—so the industrial revolution can never be replicated. Future post-apocalypse humans would be limited to animal power and wood-burning technologies. And indeed, until very recently, the fossil fuel system was the one fragile, highly centralized link on which the whole chain of global industrial civilization depended. In the last few years, however, we have superseded that link. Soon, with access to cheap, decentralized energy everywhere, regional collapse or war will no longer threaten the social and economic systems of the rest of the world.

And there are a ton of alt causes, like other developed and developing countries
1AR
Deterrence logic reinforces ontological security – produces intersubjective norms of
cooperation that solve escalation
Lupovici 08 – Assistant Professor of Political Science at Tel Aviv University
(Amir, “Why the Cold War Practices of Deterrence are Still Prevalent: Physical Security, Ontological Security and Strategic Discourse,” presented
at the Canadian Political Science Association annual conference, Vancouver June 4-6)

Since deterrence can become part of the actors’ identity, it is also involved in the actors’ will to achieve ontological
security, securing the actors’ identity and routines. As McSweeney explains, ontological security is “the acquisition of confidence in the
routines of daily life—the essential predictability of interaction through which we feel confident in knowing what is going on and that we have
the practical skill to go on in this context.” These routines become part of the social structure that enables and constrains the actors’
possibilities (McSweeney, 1999: 50-1, 154-5; Wendt, 1999: 131, 229-30). Thus, through the emergence of the deterrence
norm and the construction of deterrence identities, the actors create an intersubjective context and intersubjective
understandings that in turn affect their interests and routines. In this context, deterrence strategy and deterrence
practices are better understood by the actors, and therefore the continuous avoidance of violence is more easily
achieved. Furthermore, within such a context of deterrence relations, rationality is (re)defined, clarifying the appropriate practices
for a rational actor, and this, in turn, reproduces this context and the actors’ identities. Therefore, the internalization of deterrence
ideas helps to explain how actors may create more cooperative practices and break away from the spiral
of hostility that is forced and maintained by the identities that are attached to the security dilemma, and which lead to
mutual perception of the other as an aggressive enemy. As Wendt for example suggests, in situations where states are restrained from using
violence—such as MAD (mutual assured destruction)—states not only avoid violence, but “ironically, may be willing to trust each other enough
to take on collective identity”. In such cases if actors believe that others have no desire to engulf them, then it will be easier to trust them and
to identify with their own needs (Wendt, 1999: 358-9). In this respect, the norm of deterrence, the trust that is being built between the opponents, and the (mutual) constitution of their role identities may all lead to the creation of long term influences that preserve the practices of deterrence as well as the avoidance of violence. Since a basic level of trust
is needed to attain ontological security,21 the existence of it may further strengthen the practices of deterrence and the actors’ identities of
deterrer and deterred actors. In this respect, I argue that for the reasons mentioned earlier, the practices of deterrence should be understood
as providing both physical and ontological security, thus refuting that there is necessarily tension between them. Exactly for this reason I argue
that Rasmussen’s (2002: 331-2) assertion—according to which MAD was about enhancing ontological over physical security—is only partly
correct. Certainly, MAD should be understood as providing ontological security; but it also allowed for physical security, since, compared to
previous strategies and doctrines, it was all about decreasing the physical threat of nuclear weapons. Furthermore, the ability to increase one dimension of security helped to enhance the other, since it strengthened the actors’ identities and created more stable expectations of avoiding violence. I suggest that the emergence of the deterrence norm during the Cold War can be
described in the terms of Finnemore and Sikkink’s norms life cycle model.22 According to this model, in the first stage—the “norm
emergence”—entrepreneurs attempt to convince policy makers of their ideas. The second stage—the “norm cascade” stage—is characterized
by attempts to socialize other state/s to become norm followers. In the last stage—the “norm internalization” stage—the norm becomes
institutionalized (Finnemore and Sikkink, 1998: 887-909).23 The study of the emergence and institutionalization of deterrence norm and
identity in the next section demonstrates how these concepts help to explain avoidance of violence, and this is followed by a section
demonstrating how the identity of deterrence may lead to war.
