SESSION ID: LAW-R01

A Constitutional Quagmire: Ethical Minefields of AI, Cyber, and Privacy

Daniel Garrie
CEO, Law and Forensics; Neutral, JAMS ADR; Adjunct Professor, Harvard
Legal Concerns with AI
The development and use of AI technologies raise a number of
legal concerns:
– Privacy and security
– Transparency, explainability, and accountability
– Intellectual property protections
– Fairness and acknowledgment of bias
– Inclusiveness
– Reliability and safety
Privacy Scenario
A healthcare AI application designed to provide personalized
treatment recommendations is found to be:
– Collecting,
– Storing, and
– Processing patients' data without explicit consent.
Actionable Insights to Address Privacy Challenges
Implement "privacy by design" principles, ensuring data protection
is a core element of AI development.
Obtain explicit consent from users for data collection and
processing, clearly explaining the purpose and use.
Audit data handling practices to ensure compliance with data
protection laws.
Develop a robust data breach response plan, including timely
notification to affected individuals.
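To make the consent point above concrete, here is a minimal Python sketch of consent-gated processing. The PatientRecord and ConsentRegistry types and the purpose string are hypothetical names for illustration only; this shows the idea of refusing to process data without recorded, explicit consent, not a compliant implementation.

```python
# Minimal sketch of consent-gated processing. PatientRecord, ConsentRegistry,
# and the purpose string are hypothetical names used only for illustration.
from dataclasses import dataclass, field


@dataclass
class PatientRecord:
    patient_id: str
    clinical_data: dict


@dataclass
class ConsentRegistry:
    # patient_id -> set of purposes the patient has explicitly consented to
    consents: dict = field(default_factory=dict)

    def grant(self, patient_id: str, purpose: str) -> None:
        self.consents.setdefault(patient_id, set()).add(purpose)

    def has_consent(self, patient_id: str, purpose: str) -> bool:
        return purpose in self.consents.get(patient_id, set())


def recommend_treatment(record: PatientRecord, registry: ConsentRegistry):
    purpose = "personalized_treatment_recommendations"
    if not registry.has_consent(record.patient_id, purpose):
        # No explicit consent for this purpose: do not collect, store, or process.
        return {"patient_id": record.patient_id, "status": "refused_no_consent"}
    # ... the AI recommendation pipeline would run here ...
    return {"patient_id": record.patient_id, "status": "processed"}


registry = ConsentRegistry()
registry.grant("p-001", "personalized_treatment_recommendations")
print(recommend_treatment(PatientRecord("p-001", {"age": 54}), registry))
print(recommend_treatment(PatientRecord("p-002", {"age": 61}), registry))
```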
Cybersecurity Scenario
An AI-based system is compromised due to a previously unknown
vulnerability.
Hackers bypass the AI's detection mechanisms and access sensitive
customer data.
Actionable Insights to Address Cybersecurity
Challenges
Establish a comprehensive cybersecurity framework for the AI
systems, including regular security assessments.
Develop and implement a rapid response plan for AI-related
cybersecurity incidents.
Update and patch AI systems against new threats.
Ensure legal and regulatory compliance in data breach
notifications and remediation efforts.
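As one small, concrete piece of "update and patch AI systems," the sketch below checks installed Python package versions against an internally maintained list of minimum patched versions. The package names and version floors are assumptions for illustration, not real advisories.

```python
# Hypothetical sketch: flag installed dependencies that fall below an
# internally maintained "minimum patched version" list. The package names
# and version floors are placeholders, not real security advisories.
from importlib.metadata import version, PackageNotFoundError

MIN_PATCHED = {
    "numpy": (1, 26, 0),        # assumed internal floor
    "scikit-learn": (1, 4, 0),  # assumed internal floor
}


def parse(v: str) -> tuple:
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def audit() -> list:
    findings = []
    for pkg, floor in MIN_PATCHED.items():
        try:
            installed = parse(version(pkg))
        except PackageNotFoundError:
            continue  # not installed, nothing to patch
        if installed < floor:
            findings.append((pkg, installed, floor))
    return findings


for pkg, installed, floor in audit():
    print(f"PATCH NEEDED: {pkg} {installed} < required {floor}")
```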
Trade Secret Protection Scenario
A leading tech company claims a competitor is using its proprietary
algorithms to improve the competitor's own AI systems.
Actionable Insights to Address Trade Secret
Protection Challenges
Implement stringent access controls and encryption for sensitive
AI algorithms and data.
Regularly review and update intellectual property protection
strategies for AI technologies.
Pursue legal remedies swiftly to deter unauthorized use of
proprietary AI technologies.
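A minimal sketch of one access-control-and-encryption measure: encrypting a serialized model artifact at rest with the cryptography library's Fernet recipe. The file name and model bytes are placeholders, and a real deployment would also need key management and access logging.

```python
# Hypothetical sketch: encrypt a serialized model artifact at rest.
# Requires the 'cryptography' package; file name and bytes are placeholders.
from cryptography.fernet import Fernet

# In practice the key would come from a key-management service, not be
# generated inline next to the artifact it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

model_bytes = b"...serialized proprietary model weights..."  # placeholder
encrypted = fernet.encrypt(model_bytes)

with open("model.bin.enc", "wb") as f:
    f.write(encrypted)

# Only holders of the key can recover the original artifact.
with open("model.bin.enc", "rb") as f:
    restored = fernet.decrypt(f.read())

assert restored == model_bytes
```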
Bias Scenario
The algorithm of an AI hiring system employed by a tech company
is discovered to disproportionately favor applicants from a specific
demographic background.
The bias stems from historical hiring data used to train the AI,
which contains implicit biases against certain groups.
Actionable Insights to Address Potential Bias
Challenges
Implement routine audits of AI algorithms to identify and correct
biases.
Develop a diverse training dataset that includes a wide range of
demographics to reduce implicit biases.
Establish clear guidelines and criteria for AI decision-making to
ensure fairness.
Create a feedback mechanism for applicants to challenge and
review AI-driven decisions.
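One way a routine audit can surface the kind of skew described in the scenario is a selection-rate comparison across groups (the "four-fifths" rule of thumb). The sketch below computes per-group selection rates from hypothetical screening outcomes; the records, group labels, and 0.8 threshold are illustrative assumptions.

```python
# Hypothetical sketch: compare AI screening selection rates across groups.
# The records and the 0.8 ("four-fifths") threshold are illustrative only.
from collections import defaultdict

decisions = [
    # (demographic_group, advanced_by_ai_screen)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, advanced in decisions:
    totals[group] += 1
    selected[group] += int(advanced)

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best if best else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio vs. highest {ratio:.2f} -> {flag}")
```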
Transparency Scenario
A financial institution deploys an AI system for credit scoring.
When applicants are denied credit, the institution cannot provide
the reasons or mechanism for the decision due to the program’s
complex decision-making process.
Actionable Insights to Address Transparency
Challenges
Build in or enhance the AI system’s explainability, enabling it to
provide clear reasons for credit decisions.
Ensure compliance with consumer protection laws by
documenting practices and disclosing criteria used for AI decision-
making.
Regularly review AI systems’ decisions for fairness and accuracy.
Train customer-facing teams to explain decisions effectively.
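As an illustration of built-in explainability, the sketch below assumes a simple linear credit-scoring model and turns per-feature contributions into stated reasons for a denial. The feature names, weights, and approval threshold are invented; real scoring models and adverse-action requirements are far more involved.

```python
# Hypothetical sketch: derive reason codes from a simple linear credit score.
# Feature names, weights, and the approval threshold are invented for illustration.
WEIGHTS = {
    "payment_history": 3.0,
    "credit_utilization": -2.5,
    "recent_inquiries": -1.2,
    "income_to_debt": 2.0,
}
THRESHOLD = 1.0


def score_with_reasons(applicant: dict):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    approved = total >= THRESHOLD
    # The most negative contributions become the stated reasons for denial.
    reasons = [] if approved else sorted(contributions, key=contributions.get)[:2]
    return total, approved, reasons


applicant = {
    "payment_history": 0.4,
    "credit_utilization": 0.9,
    "recent_inquiries": 0.6,
    "income_to_debt": 0.5,
}
total, approved, reasons = score_with_reasons(applicant)
print(f"score={total:.2f} approved={approved} principal_reasons={reasons}")
```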
Reliability Scenario
An AI system designed to predict machine failures in a
manufacturing plant provides numerous inaccurate predictions,
leading to unexpected downtimes and significant financial losses.
The plant operators sue the AI system's providers for negligence,
arguing that the providers failed to ensure the reliability of the AI
system, which they rely upon for critical operational decisions.
Actionable Insights to Address Reliability
Challenges
Implement rigorous testing and validation processes for AI systems
before launch.
Establish contingency plans for operational failures, including
manual overrides and regular maintenance checks.
Ensure system providers carry liability insurance for potential
failures.
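A minimal sketch of a pre-launch validation gate: deployment of the failure-prediction model is blocked unless it clears agreed precision and recall floors on a holdout set. The thresholds and the example labels and predictions are assumptions for illustration.

```python
# Hypothetical sketch: block deployment unless the failure-prediction model
# clears agreed precision/recall floors on a holdout set. Thresholds and the
# example labels/predictions below are illustrative.
MIN_PRECISION = 0.90
MIN_RECALL = 0.85


def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


def validation_gate(y_true, y_pred) -> bool:
    precision, recall = precision_recall(y_true, y_pred)
    ok = precision >= MIN_PRECISION and recall >= MIN_RECALL
    print(f"precision={precision:.2f} recall={recall:.2f} -> {'deploy' if ok else 'BLOCK'}")
    return ok


# Holdout labels: True = machine actually failed; predictions from the model.
y_true = [True, False, True, True, False, False, True, False]
y_pred = [True, False, False, True, False, True, True, False]
validation_gate(y_true, y_pred)
```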
Liability Scenario
An autonomous vehicle, while in full AI control mode,
misinterprets traffic signals due to a software glitch and causes an
accident.
Parties involved:
– The car manufacturer
– The AI software developer
– The vehicle owner
Actionable Insights to Address Liability Challenges
Clarify liability and insurance requirements in user agreements and
terms of service.
Develop standards for AI system performance, including safety and
error-handling protocols.
Implement a continuous monitoring and update mechanism for AI
systems to prevent software glitches.
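A minimal sketch of one continuous-monitoring check: recent model confidence on traffic-signal classifications is compared against a validation baseline, and a sustained drop triggers an alert for review and patching. The baseline, tolerance, and confidence values are illustrative assumptions.

```python
# Hypothetical sketch: alert when recent model confidence drifts well below
# the baseline observed during validation. Values and thresholds are illustrative.
from statistics import mean

BASELINE_CONFIDENCE = 0.97   # assumed confidence from pre-release validation
ALLOWED_DROP = 0.05          # assumed tolerance before an alert fires


def monitor(recent_confidences: list) -> bool:
    current = mean(recent_confidences)
    drifted = current < BASELINE_CONFIDENCE - ALLOWED_DROP
    status = "ALERT: schedule review/patch" if drifted else "within tolerance"
    print(f"baseline={BASELINE_CONFIDENCE:.2f} current={current:.2f} -> {status}")
    return drifted


# e.g. per-inference confidence scores from the signal-classification model
monitor([0.98, 0.97, 0.96, 0.99, 0.97])   # within tolerance
monitor([0.88, 0.91, 0.86, 0.90, 0.89])   # sustained drop -> alert
```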
The Legal Challenges of AI are Only in Their
Infancy…
Integrate ethical AI design principles.
Consult with legal and compliance teams from start to finish of
program development and implementation.
Implement rigorous testing and auditing.
Adopt privacy by design.
Assess and prepare for liability exposure.