Artificial Intelligence, Law and Justice
Session 8
Algorithmic and AI Decision Making
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Centre of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫In the last session we discussed how AI is being
applied in the legal sector in India and the role of
legal tech in furthering innovation
⚫We highlighted various unaddressed issues in its
use and how large-scale adoption can affect legal
services and the sector
⚫We closed with a discussion of AI agents in legal
services, the risks they pose, and the roles they can play.
Definitions
⚫Algorithm: An algorithm is a series of instructions
for performing calculations or tasks. In AI, it helps a
computer learn and perform tasks
⚫Algorithmic Decision Making (ADM): ADM refers
to using outputs produced by algorithms to make
decisions
⚫Historical Context: Algorithms have a long history
and have been applied differently over time.
Advances in information theory and computing
technology in the 20th century redefined the basic idea.
History
⚫“Algorithms have been around since the beginning
of time and existed well before a special word had
been coined to describe them. Algorithms are simply
a set of step by step instructions, to be carried out
quite mechanically, so as to achieve some desired
result […]. The Babylonians used them for deciding
points of law, Latin teachers used them to get the
grammar right, and they have been used in all
cultures for predicting the future, for deciding
medical treatment, or for preparing food. Everybody
today uses algorithms of one sort or another, often
unconsciously, when following a recipe, using a
knitting pattern or operating household gadgets”
⚫— Matteo Pasquinelli, From Algorism to Algorithm
Algorithms
⚫Algorithms take a set of inputs, such as age,
residence, marital status, or income, and process
them through a series of steps to produce outputs or
decisions.
⚫These algorithms are used in various sectors,
including healthcare, public benefits, infrastructure
planning, and budget allocation
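The slide above describes an algorithm as a set of inputs processed through a series of steps to produce an output or decision. A minimal sketch in Python; the scoring rule, threshold, and numbers are all hypothetical, chosen only for illustration:

```python
def compute_score(age: int, income: int) -> float:
    """Algorithm: a fixed series of steps turning inputs into an output."""
    score = 0.0
    if age >= 18:        # hypothetical eligibility step
        score += 0.5
    if income >= 30000:  # hypothetical income step
        score += 0.5
    return score

def decide(score: float) -> str:
    """Algorithmic decision making: a decision based on the output."""
    return "approve" if score >= 1.0 else "refer to human review"

print(decide(compute_score(age=34, income=45000)))  # approve
print(decide(compute_score(age=17, income=45000)))  # refer to human review
```

The separation of the two functions mirrors the definitions on the earlier slide: the first is the algorithm, the second is the decision taken on its output.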
AI Algorithms
1. Learning from Training Data: AI algorithms are designed to
learn from training data, which can be either labelled or
unlabelled. This information is used to enhance the
algorithm's capabilities and enable it to perform its tasks
2. Continuous Learning: Some AI algorithms can continuously
learn and refine their process by incorporating new data,
while others need a programmer's intervention to optimize
their performance.
3. Task Execution: AI algorithms use what they learn
from training data to carry out their tasks
effectively
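A toy sketch of points 1 and 2 above, assuming a hypothetical one-feature classifier: it learns a decision boundary from labelled examples and refines that boundary as each new example arrives (continuous learning). All data is invented:

```python
class ThresholdLearner:
    """Learns a boundary halfway between the means of two labelled classes."""

    def __init__(self):
        self.pos, self.neg = [], []

    def update(self, x: float, label: int) -> None:
        """Incorporate one new labelled example (continuous learning)."""
        (self.pos if label == 1 else self.neg).append(x)

    def threshold(self) -> float:
        mean_pos = sum(self.pos) / len(self.pos)
        mean_neg = sum(self.neg) / len(self.neg)
        return (mean_pos + mean_neg) / 2

    def predict(self, x: float) -> int:
        return 1 if x >= self.threshold() else 0

learner = ThresholdLearner()
for x, label in [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]:
    learner.update(x, label)  # the boundary shifts with each example
print(learner.predict(7.0))  # 1: above the learned boundary of 5.0
```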
Data and Algorithms
⚫Data Quality Issues: data used to train AI systems can
be inaccurate, incomplete, or biased, which can lead to
significant consequences if used for critical tasks like
analyzing skin images or prioritizing patient care
⚫Bias in Data: data may be infused with bias, often
stemming from inequitable and biased realities, such as
clinical trials that excluded women and people of color
⚫Consequences of Biased Data: using biased data can
result in flawed decisions by AI algorithms, which can
have severe consequences in critical applications
⚫Importance of Representative Data: It is important to
ensure that AI algorithms are trained using representative
data to avoid biases and ensure equitable outcomes for all
⚫Mutual Reinforcement of Issues: problems in AI
systems can arise from data, algorithms, or a combination
of both, mutually reinforcing each other
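The "representative data" point above can be shown with a toy example: a rule learned only from one group performs well for that group but fails on a group absent from the training data. The groups, features, and thresholds are all invented:

```python
# Training data contains only group "A"; group "B" is invisible.
training = [  # (feature, group, label)
    (2.0, "A", 0), (3.0, "A", 0), (7.0, "A", 1), (8.0, "A", 1),
]
# Rule learned from group A alone: positive if feature >= 5
predict = lambda x: 1 if x >= 5.0 else 0

# Group B's true relationship is shifted: positive if feature >= 3,
# so the learned rule misclassifies its borderline cases.
test_b = [(3.5, 1), (4.0, 1), (6.0, 1), (2.0, 0)]
correct = sum(predict(x) == y for x, y in test_b)
print(f"accuracy on unseen group B: {correct}/{len(test_b)}")  # 2/4
```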
Data and Representation
⚫Digital Divides: digital divides in many Global South
countries have led to "data invisibility," impacting historically
marginalized groups such as women, castes, tribal
communities, religious and linguistic minorities, and migrant
labor.
⚫Biases in AI Algorithms: this invisible data can introduce
biases into AI algorithms, underscoring the need for
algorithmic transparency and accountability.
⚫Algorithmic Transparency Audits: has the AI system
undergone a transparency audit, and how can such systems be
made less biased and more useful?
⚫Socio-Technical Issue: algorithmic transparency is not just a
technical issue but an examination of socio-technical systems
that can significantly impact society.
⚫Impact on Marginalized Groups: It is important to address
data invisibility to ensure that AI algorithms do not perpetuate
biases against marginalized groups
Direct and Proxy Data
⚫Proxy data is often used when direct data is unavailable
or insufficient. However, proxy data must be used with
caution, as it can introduce unintended biases.
⚫Examples include using location as a proxy for income
level or status, which can lead to biased decision-making
even when the bias is not directly evident.
⚫AI systems may make predictions based on proxy data
that effectively reconstruct restricted categories, such as
race, even when race is not explicitly included as a parameter.
⚫It is crucial to ensure that proxy data is used only
for legitimate purposes, to avoid unintended biases and
ensure fairness
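The location-as-proxy problem above can be sketched as follows: the decision rule never sees the restricted group attribute, yet because postcode correlates with group membership, approval rates differ sharply by group. All names and data are invented:

```python
applicants = [
    {"postcode": "N1", "group": "X"},
    {"postcode": "N1", "group": "X"},
    {"postcode": "S9", "group": "Y"},
    {"postcode": "S9", "group": "Y"},
]
# The rule uses only the postcode — a proxy for income/status;
# the "group" field is never consulted.
approve = lambda a: a["postcode"] == "N1"

rates = {}
for a in applicants:
    rates.setdefault(a["group"], []).append(approve(a))
for group, results in rates.items():
    print(group, sum(results) / len(results))  # X 1.0, Y 0.0
```

Even though the restricted attribute is excluded, the outcome is fully stratified by group — the disparity the slide warns about.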
EVAAS – Contesting an Algorithm
⚫Between 2011 and 2015, Houston teachers' work performance
was evaluated using a data-driven algorithm called EVAAS. The
program enabled the board of education to automate decisions
regarding bonuses, penalties, and terminations based on the
algorithm's evaluations.
⚫The source code for EVAAS is a trade secret owned by SAS, a
third-party vendor, preventing teachers from contesting the
decisions or understanding how the algorithm reached its
conclusions. In 2017, a US federal judge ruled that the use of the
secret algorithm violated teachers' constitutional rights, requiring
transparency in the evaluation process.
⚫SAS declined to reveal the internal workings of the EVAAS
algorithm, leading the Houston school system to stop using it.
⚫The court decision emphasized the need for teachers and the
Houston Federation of Teachers to be able to independently
check and contest the evaluation results produced by the algorithm
[Link] The Education Value-
Added Assessment System (EVAAS) on Trial: A Precedent-Setting
Lawsuit with Implications for Policy and Practice
Explainable AI and Algorithms
⚫The US NIST has issued guidance on AI explainability,
which might form part of impact-assessment systems.
⚫The NIST draft guidelines propose four principles of
explainability for audience-sensitive, purpose-driven
automated decision-making systems (ADSs) and for tools that
assess them.
⚫These principles include providing accompanying evidence or
reasons for all outputs, offering understandable explanations,
explanations that reflect the system's actual process for generating
the output, and the system operating only under the conditions for
which it was designed.
⚫These principles shape the types of explanations needed to
ensure confidence in algorithmic decision-making systems,
such as explanations for user benefit, social acceptance,
regulatory and compliance purposes, system development, and
owner benefit.
⚫The source of this information is the NIST document "Four
Principles of Explainable Artificial Intelligence".
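The first and third ideas above — evidence accompanying every output, and explanations that reflect the actual process that produced it — can be sketched as a decision function that returns its reasons alongside its result. The feature weights and threshold are hypothetical, not from the NIST document:

```python
def decide_with_explanation(features: dict) -> dict:
    """Return a decision together with the evidence behind it."""
    weights = {"income": 0.6, "tenure": 0.4}  # assumed, for illustration
    # Each contribution is part of the real computation, so the
    # explanation reflects the system's actual process.
    contributions = {k: weights[k] * features[k] for k in weights}
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= 0.5 else "deny",
        "score": round(score, 2),
        "reasons": contributions,  # evidence accompanying the output
    }

result = decide_with_explanation({"income": 0.8, "tenure": 0.3})
print(result["decision"], result["reasons"])
```

A black-box system, by contrast, would return only the decision, leaving the affected person nothing to contest.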
Rights
⚫Contesting Algorithm Logic: a defendant faces challenges in
contesting the logic of an algorithm when they do not have
access to the source code, training data, or required datasets.
⚫Information for Defendants: what information should be
provided to the defendant to contest the logic of an algorithm?
⚫Inputs and Outputs: is it sufficient for defendants to have
access simply to the inputs and outputs generated by the
algorithm?
⚫Margin of Error: should the defendant receive
information on the margin of error of the algorithm(s) used?
Rights
⚫How can courts enforce due process of law if the
algorithm deploys machine learning and no one, not
even the developer, completely understands the ML
"analysis"?
⚫How will courts assess the accuracy of
algorithms, particularly when they forecast future
human behavior?
⚫What legal and social responsibilities should we
give to algorithms shielded behind statistically data-
derived "impartiality"?
⚫Who is liable when AI gets it wrong?
Civil Justice and Algorithms
⚫AI has been deployed in various areas of the civil justice
system, including family, housing, debt, employment,
and consumer litigation.
⚫Civil courts are increasingly collecting data about
administration, pleadings, litigant behavior, and
decisions, offering opportunities for automating certain
judicial functions.
⚫AI is used to pre-draft judgment templates for judges,
make predictions or sentencing recommendations for
bail, sentencing, and financial calculations.
⚫AI can assess the outcome of cases based on the past
activities of prosecutors and judges, providing
information to judges that factors in a wide amount of
case law.
⚫AI tools can significantly reduce research time in the
preparation of decisions.
Predictions
1. An AI algorithm developed by researchers from
University College London, the University of
Sheffield, and the University of Pennsylvania is used to
predict judicial rulings of the European Court of Human
Rights.
2. The study, led by Dr. Nikolaos Aletras, achieved an
accuracy rate of 79.70% in predicting these rulings.
3. AI is not intended to replace judges or lawyers but to
assist in identifying patterns in case outcomes.
4. It helps highlight cases that are most likely to be
violations of the European Convention on Human Rights
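An accuracy rate like the one reported above is simply the share of cases the model predicted correctly. A minimal sketch with invented predictions and outcomes (1 = violation found, 0 = no violation):

```python
# Hypothetical model predictions and actual court outcomes
predicted = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
actual    = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]

# Accuracy: correct predictions divided by total cases
accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)
print(f"accuracy: {accuracy:.2%}")  # 80.00%
```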
Algorithm as Authority
⚫When algorithms are embedded in decision
making they become de facto 'authorities', often
operating inside AI black boxes
⚫But as we will discuss in subsequent sessions, the use
of algorithms can impact human rights and the rule of
law, and can adversely affect access to services
⚫Integrating them into AI systems raises questions
about ethics, accountability and responsibility.
⚫AI tools using algorithms are of concern when they
are used in law and justice
⚫Some of these will be elaborated in the subsequent
sessions.
THANK YOU!