
ARTIFICIAL INTELLIGENCE POLICY

AND IMPLEMENTATION PLAN


(SECOND DRAFT)
TABLE OF CONTENTS
1. Glossary and Abbreviations .................................................................................... 3
1.1 Glossary ......................................................................................................... 3
1.2 List of Abbreviations ........................................................................................... 5
2. Executive summary .............................................................................................. 6
3. Introduction ....................................................................................................... 7
3.1 Background...................................................................................................... 7
3.2 Policy Rationale and Purpose ................................................................................. 7
4. Situational Analysis ............................................................................................ 10
5. Policy framework .............................................................................................. 13
5.1 Introduction ................................................................................................... 13
5.2 Goal of Policy Reform ....................................................................................... 13
5.3 Vision Statement ............................................................................................. 13
5.4 Guiding Principles ............................................................................................ 13
5.5 Objectives .................................................................................................... 15
5.6 Policy Statements/prescriptions ........................................................................... 17
5.6.1. Sustainable Governance Structure ....................................................................... 17
5.6.2. Enabling Legal and Regulatory Environment ........................................................... 22
5.6.3. AI integration in key sectors .............................................................................. 23
5.6.4. Financial mechanisms ..................................................................................... 23
5.6.5. Feedback Mechanisms ..................................................................................... 23
6. Implementation Plan and Monitoring and Evaluation Framework .................................... 24
6.1 Logic of the Plan ............................................................................................. 24
6.2 Objectives, Tasks, Measures, and Indicators ............................................................. 25
6.3 Implementation Risks and Measures for Mitigation ...................................................... 66
7. Annexes .......................................................................................................... 67
7.1. Annex No. 1 International Context ............................................................................. 67
7.2. Annex No. 2 Institutional arrangements....................................................................... 70

1. GLOSSARY AND ABBREVIATIONS

1.1 GLOSSARY
Access Control: Role-based access to datasets to minimize the possibility of combining sensitive information.
Algorithmic Bias: Systematic errors in AI systems that result in unfair outcomes, often disadvantaging certain groups.
Artificial Intelligence: A machine-based system that can, for a given set of human-defined objectives, observe its environment and make predictions, recommendations, or decisions influencing real or virtual environments.
Data Governance: The policies, processes, roles, and standards that ensure data is managed securely and effectively throughout its lifecycle, covering data quality, privacy, access, and appropriate use.
Data Management: Practices, procedures, and technologies used to collect, store, process, protect, and archive data throughout its lifecycle.
Data Minimization: Limited collection and sharing of data to only what is strictly necessary.
Differential Privacy: Privacy-preserving techniques that add noise to data to prevent re-identification.
Digital Infrastructure: Physical and virtual technologies, such as high-speed internet, cloud computing, and data centres, that support digital and AI innovations.
Digital Sovereignty: The ability of a nation to control its digital assets, data, and infrastructure without external dependence.
Digital Transformation: The integration of digital technologies across sectors to improve processes, services, and outcomes.
Ethics Committee: A body tasked with ensuring AI deployments adhere to principles of fairness, transparency, accountability, and human rights.
Explainability: The ability of AI systems to provide clear and understandable explanations of their decisions or outputs.
Federated Learning: Analysis of distributed datasets without aggregating them centrally, reducing exposure risks.
Frugal Innovation: An approach focused on creating simple, affordable, and effective solutions that meet essential needs with limited resources.
General-Purpose Artificial Intelligence: Broad AI systems, such as large language models, designed for a wide range of applications and tasks.
Green AI Initiatives: Efforts to develop and deploy energy-efficient AI technologies that minimize environmental impact.
Natural Language Processing: The application of computational techniques to the analysis and synthesis of natural language and speech.
Open Data Policies: Guidelines promoting the sharing and accessibility of anonymized datasets for AI research and development while maintaining privacy.
Parastatals: Companies or organisations owned by a country's government.
Regulatory Sandbox: A controlled environment where AI innovations can be tested under relaxed regulations to foster experimentation while managing risks.
Risk-based Framework: A regulatory approach categorizing AI systems by risk levels (e.g., minimal, limited, high, or unacceptable) to apply appropriate governance.
Smart Government: A government model that leverages AI to enhance service delivery, decision-making, and resource management.
Sustainable Development Goals (SDGs): A collection of global goals set by the United Nations, aiming to address issues like poverty, health, education, and climate action, where AI can play a significant role.
Synthetic Data: Artificial datasets that mimic the statistical properties of the original data but contain no real-world information.
Triple Helix Model: A collaborative framework involving academia, industry, and government to foster innovation and co-create policies.

1.2 LIST OF ABBREVIATIONS
Term Definition
AI Artificial Intelligence
AU African Union
CPA Africa’s Science and Technology Consolidated Plan of Action
DSA Digital Services Act
EU European Union
FDI Foreign Direct Investment
GDPR General Data Protection Regulation
GovTech Government Technology (as a dedicated laboratory or Initiative)
GPAI General-Purpose Artificial Intelligence
HPC High-Performance Computing
LAST Lesotho Academy of Science and Technology
MICSTI Ministry of Information, Communications, Science, Technology and Innovation
NLP Natural Language Processing
NRD Norway Registers Development AS
NUL National University of Lesotho
OACPS Organisation of African, Caribbean, and Pacific States
OECD Organization for Economic Co-operation and Development
SADC Southern African Development Community
SDG Sustainable Development Goals
STISA 2024 Science, Technology and Innovation Strategy for Africa 2024
UNDP United Nations Development Programme

2. EXECUTIVE SUMMARY

This Draft Artificial Intelligence Policy (hereinafter, the “Policy”) has been developed as an output
under the Project titled “The Development of Policy Frameworks (Broadband and Shared Infrastructure
Policy; Data Management Policy; Artificial Intelligence Policy) for Digital Governance and Digital
Enabling Environment in Lesotho” (hereinafter, the “Project”) implemented by Norway Registers
Development AS (NRD) for the United Nations Development Programme (UNDP) and the
Ministry of Information, Communications, Science, Technology and Innovation (MICSTI) of Lesotho.
Artificial Intelligence (AI) is becoming increasingly prevalent worldwide, offering significant
opportunities for economic and social development. However, it also presents certain risks due to its
non-deterministic and unpredictable nature. Therefore, timely regulations and support for AI
development are crucial for Lesotho to harness the benefits of AI while mitigating potential risks.
Lesotho is already addressing various AI-related issues through its National Digital Transformation
Strategy and National Digital Policy. However, formal AI legislation is not yet available, necessitating a
more systematic approach to AI governance.
The Government of Lesotho aims to ensure responsible development, deployment, and usage of AI across
the country to improve overall quality of life. Thus, the purpose of this policy is to establish a robust
governance structure to guide the ethical and responsible adoption of AI in Lesotho. Key focus areas of
the policy include:
1. Establishing leadership and governance: creation of a sustainable and efficient AI domain
governance setup, including an AI policy maker, regulator, and a Data and AI Committee
2. Developing legislation and ensuring compliance: establishment of a legal and regulatory
environment that enables AI development and compliance
3. Building AI capacity: supporting AI capacity building across the public sector and society through
training programs, education support, and public awareness campaigns
4. Building and providing AI infrastructure: improving infrastructure and technology access,
including high-speed internet, data centers, and computational resources
5. Leveraging AI to boost productivity, diversify the economy, and create high-value jobs:
supporting businesses and academia in implementing AI-driven solutions, while facilitating the
establishment of Research and Development hubs and innovation labs
6. Ensuring inclusiveness and ethical usage of AI: development and enforcement of ethics and
human rights safeguards, AI safety protocols, and bias mitigation programs
7. Promoting international alignment and sustainability: aligning Lesotho with global standards,
participation in international AI policy discussions, and collaboration with regional and global
AI initiatives
The policy outlines an implementation plan and monitoring and evaluation framework to ensure the
effective execution of AI initiatives and continuous improvement, with an expectation to position
Lesotho as a regional leader in AI development while aligning with global standards and best practices.

3. INTRODUCTION

3.1 BACKGROUND
Artificial Intelligence (AI) is becoming omnipresent worldwide and is a topic of intense discussion in virtually every context. By its nature, AI enables automation by mimicking human cognitive functions such as decision-making, learning, and problem-solving, yet it is also intrinsically non-deterministic and, to a degree, unpredictable. While automation promises a boost in economic and social development, in combination with this non-determinism and unpredictability it also carries certain risks.
Hence, timely regulation together with support for AI development can determine whether Lesotho is among the leaders, collecting the low-hanging fruits of prosperity, or is left behind in the AI race.
3.2 POLICY RATIONALE AND PURPOSE
Developing a National AI Policy is a strategic objective of the Kingdom of Lesotho. It is required to
ensure responsible development, deployment, and usage of AI across the country to improve overall
quality of life. Thus, the purpose of this policy is to establish a robust governance structure to guide
the ethical and responsible adoption of AI in Lesotho.
Lesotho would greatly benefit from the development of a comprehensive AI policy for the following
reasons:
1. Economic Development and Innovation
1.1. Diversifying the Economy: Lesotho today relies on sectors like agriculture (where AI can support, e.g., precision agriculture and optimal resource planning) and manufacturing, especially textiles (e.g., optimal resource planning and smart marketing). By embracing AI, Lesotho could diversify its economy and foster new sectors such as tech innovation, software development, and data services.
1.2. Job Creation: AI can create high-value jobs in data science, machine learning, and
automation. These could supplement traditional sectors and stimulate economic growth.
1.3. Improving Productivity: AI-driven technologies would improve productivity in agriculture, manufacturing, and services. For example, AI could help optimize crop yields or improve supply chain management for manufacturers. In the case of smart agriculture, AI can assist farmers with weather predictions, pest control, and soil quality analysis, which would be particularly beneficial in Lesotho's agricultural sector.
2. Research, Education and Skills Development
2.1. Building Local Expertise: A national AI policy guides the development of local talent in AI and its application across all sectors. By promoting AI education and training programs, Lesotho will ensure that its workforce is prepared for the technological changes ahead.
2.2. Research and Collaboration: AI policy could foster collaboration with universities, research
institutions, industry, government, and international partners to drive research and
development in AI.
3. Social Development and Public Services
3.1. Improved Healthcare: AI can improve healthcare services, from diagnostic tools to predictive analytics for disease outbreaks. The policy would promote the use of AI in Lesotho's healthcare system, addressing challenges like access to care in remote areas.

3.2. Public Sector Efficiency: AI could streamline government operations, improve accountability, and enhance public service delivery, strengthening governance and trust in state institutions.
3.3. Mitigating Potential Disruptions: While AI presents numerous opportunities, it also brings risks like job displacement due to automation. A well-designed AI policy can help address these challenges by supporting workforce transition programs, retraining, and other measures that mitigate the negative effects on workers and instead improve their productivity and competencies.
3.4. Inclusivity: Proper usage of AI could increase the inclusion of minorities and individuals with disabilities by providing tools such as speech-to-text, text-to-speech, sign language interpretation, and machine translation. Moreover, AI can provide personalized education for children with special needs, and it can help to identify and mitigate bias and discrimination.
4. Ethics and Data Privacy
4.1. Ensuring Ethical Usage of AI: A national AI policy establishes guidelines to ensure
responsible usage of AI systems, including transparency, fairness, and accountability in AI
algorithms, avoiding discrimination, and protecting citizens' privacy.
4.2. Development of Other Policies: A national AI policy requires the development of related digital policies, such as a Data Governance policy, an Internet Governance policy, a Digital Transformation policy, etc. Data Governance in particular is a must: because AI relies on access to data, a policy establishing clear regulations around data collection, sharing, and usage is critical to protect individuals' privacy and rights.
4.3. Restricting Research, Development, and Usage of AI for Malicious Purposes: A national policy should restrict the use of AI for illegal activities, terrorism, cybercrime, and other types of misuse, threat, or exploitation.
5. Global Competitiveness and Strategic Positioning
5.1. Aligning with Global Trends: As AI becomes a global driving force in innovation, Lesotho
could risk being left behind if it does not develop a strategic policy framework. A well-
articulated AI policy can position Lesotho as an emerging player in the AI landscape,
attracting foreign investment and fostering international partnerships.
5.2. Regulatory Alignment: Many countries are already creating AI regulations, and Lesotho
would benefit from aligning its policy with international best practices, ensuring that its AI
initiatives are compatible with global standards.
5.3. Supporting the Sustainable Development Goals (SDGs) with AI: Lesotho, as a member of the United Nations, is committed to the SDGs. AI could
contribute to achieving several of these goals, such as improving health (SDG 3), ensuring
quality education (SDG 4), advancing economic growth (SDG 8), and promoting climate
action (SDG 13). A national AI policy would help integrate AI technologies into Lesotho's
efforts to meet these goals.
6. Digital Infrastructure Development: Lesotho would need to invest in the necessary
infrastructure for AI, including internet connectivity, cloud computing, and data centres. An AI
policy would initiate the country's plans to build the digital infrastructure necessary to support
digital and AI innovation.

A national AI policy for Lesotho is essential to ensure that AI technologies are used in ways that promote
economic growth, improve public services, and address the unique challenges the country faces. By
creating a clear framework, Lesotho can harness the transformative power of AI while safeguarding its
citizens' interests and positioning itself as a competitive player in the global digital economy.
Thus, through this Policy, the Government of Lesotho seeks to:
1. Foster innovation, economic growth, and job creation.
2. Enhance public service delivery and governance efficiency.
3. Mitigate potential risks such as bias, data privacy violations, and job displacement due to
automation.
4. Position Lesotho as a regional leader in AI development while aligning with global standards and
best practices.

4. SITUATIONAL ANALYSIS

Different regional bodies and countries in the region are already active in defining AI-related policies, strategies, and legislation:
1. The Continental Artificial Intelligence Strategy [1], the regional document defining the AI strategy of the African Union;
2. The South Africa National AI Policy Framework [2], which provides the background for South Africa's AI policy;
3. The Digital Transformation Strategy for Africa (2020-2030) [3];
4. The Science, Technology and Innovation Strategy for Africa 2024 (STISA 2024) [4];
5. The African Science and Technology Consolidated Plan of Action (CPA) [5];
6. The Africa CDC Digital Transformation Strategy (2023 edition) [6];
7. The Computer Crime and Cybersecurity Bill [7];
8. The Data Protection Act [8].
Lesotho is already addressing different issues related to AI, namely:
1. Lesotho's National Digital Transformation Strategy Agenda 2030 [9] mentions AI several times;
2. Lesotho's National Digital Policy 2024 [10];
3. The Lesotho Academy of Science and Technology (LAST), together with Botho University and Indaba Deep Learning, is planning a science festival, "Embracing Applications of AI" (November 25, 2025), covering topics such as AI for sustainable development, AI startups, and AI policy and governance. The goal of the event is to inspire AI-driven entrepreneurship, educate people on responsible AI policies, and explore sustainable applications of AI in Lesotho;
4. Research related to the use of AI is being carried out at universities in Lesotho [11];
5. AI is discussed at various events, such as the 2024 Digital Innovators Summit [12];
6. Selected hospitals are experimenting with AI to improve medical processes [13];
7. Study programs:

[1] H. Abou-Zeid, K. Kazaura, W. Hamdi and S. Amazouz, “Continental Artificial Intelligence Strategy,” The African Union, 2024 // https://au.int/sites/default/files/documents/44004-doc-EN-_Continental_AI_Strategy_July_2024.pdf.
[2] Department Communications & Digital Technologies, Republic of South Africa, South Africa National Artificial Intelligence Policy Framework, 2024 // https://fwblaw.co.za/wp-content/uploads/2024/10/South-Africa-National-AI-Policy-Framework-1.pdf.
[3] African Union, “Digital Transformation Strategy for Africa (2020-2030),” Bureau of the Chairperson, 2024 // https://au.int/sites/default/files/documents/38507-doc-dts-english.pdf.
[4] African Union, “Science, Technology and Innovation Strategy for Africa 2024 (STISA 2024),” 2024 // https://au.int/sites/default/files/newsevents/workingdocuments/33178-wd-stisa-english_-_final.pdf.
[5] NEPAD Office of Science and Technology, “African Science and Technology Consolidated Plan of Action,” 2006.
[6] Africa Centre for Disease Control and Prevention, “Digital Transformation Strategy (edition 2023),” 2023 // https://dalagroup.africa/wp-content/uploads/2023/05/Africa-CDC-Digital-Transformation-Strategy-Edition-2023-VF.pdf.
[7] The Parliament of Lesotho, “Computer Crime and Cybersecurity Bill,” 2024 // https://nationalassembly.parliament.ls/wp-content/uploads/2024/05/COMPUTER-CRIME-AND-CYBER-SECURITY-BILL-2024.pdf.
[8] Data Protection Act, 2011 // https://www.centralbank.org.ls/images/Legislation/Principal/Data_Protection_Act_2011.pdf.
[9] “Lesotho’s National Digital Transformation Strategy: Agenda 2030,” 2024.
[10] Ministry of Information, Communications, Science, Technology and Innovation, “National Digital Policy 2024,” 2024.
[11] M. A. Ayanwale, “Evidence from Lesotho Secondary Schools on Students’ Intention to Engage in Artificial Intelligence Learning,” in IEEE AFRICON, Nairobi, Kenya, 2023 // https://www.researchgate.net/publication/375128417_Evidence_from_Lesotho_Secondary_Schools_on_Students'_Intention_to_Engage_in_Artificial_Intelligence_Learning.
[12] B. Bulane, “Promoting digital literacy in Lesotho,” The Reporter, p. 1, 16 September 2024 // https://www.thereporter.co.ls/2024/09/16/promoting-digital-literacy-in-lesotho//.
[13] N. Glaser, S. Bosman, T. Madonsela, A. van Heerden, K. Mashaete, B. Katende, I. Ayakaka, K. Murphy, A. Signorell, L. Lynen et al., “Incidental radiological findings during clinical tuberculosis screening in Lesotho and South Africa: a case series,” Journal of Medical Case Reports, vol. 17:365, 2023 // https://jmedicalcasereports.biomedcentral.com/articles/10.1186/s13256-023-04097-4/

7.1. A Bachelor of Science in Data Science is taught at Botho University [14] (https://botswana.bothouniversity.com/courses/bsc-in-data-science/);
7.2. The Bachelor of Science in Computer Science at the National University of Lesotho covers selected AI topics (https://www.nul.ls/technology/programmes);
7.3. AI topics are discussed in related study programs at Lesotho Polytechnic and Limkokwing University of Creative Technology.
All of this shows the deep interest of different parties; however, formal AI legislation is not yet available [15], hence a more systematic approach is necessary.
The main challenge for AI policy and governance is the dynamic development of the field itself. While some countries and unions (notably the European Union) are partially ahead, their AI-related policies, strategies, regulations, and action plans are also still under development or revision.
Lesotho is just starting different activities related to AI:
1. Currently, no direct regulation of AI is in place in Lesotho. The National Digital Policy 2024 [16], Lesotho's National Digital Transformation Strategy: Agenda 2030 [17], and the Research and Innovation Policy [18][19] mention AI as an important topic.
2. The Ministry of Information, Communications, Science, Technology and Innovation (MICSTI) is responsible for the topic. Two departments are accountable for implementing best practices in AI-related activities in Lesotho:
2.1. The Department of Information, Communication and Technology (ICT);
2.2. The Science and Technology Department.
3. Other important stakeholders are:
3.1. The Ministry of Home Affairs;
3.2. The national agency/institution responsible for data protection and management in Lesotho;
3.3. The national agency/institution responsible for digital innovation in Lesotho [20];
3.4. All the affected sectors:
3.4.1. The governmental sector,
3.4.2. Public businesses and industries,
3.4.3. Private businesses and industries,
3.4.4. Parastatals,
3.4.5. Utilities.

[14] Botho University, Bachelor's in Data Science // https://botswana.bothouniversity.com/courses/bsc-in-data-science/.
[15] P. Chanthalangsy, I. Khodeli and L. Xu, “Landscape Study of AI Policies and use in Southern Africa,” UNESCO Regional Office for Southern Africa, 2022 // https://unesdoc.unesco.org/ark:/48223/pf0000385563.
[16] The Ministry of Information, Communications, Science, Technology and Innovation, “National Digital Policy 2024,” 2024.
[17] “Lesotho’s National Digital Transformation Strategy: Agenda 2030,” 2024.
[18] Department of Science, Technology and Innovation, Ministry of Information, Communications, Science, Technology and Innovation, “Research and Innovation Policy,” 2023 // https://www.last.org.ls/wp-content/uploads/2022/07/Research-and-Innovation-Policy-2023-FINAL-DRAFT.pdf.
[19] OACPS Secretariat, “Research and Innovation Policy Recommendation Report for LESOTHO,” Organisation of African, Caribbean and Pacific States (OACPS) Secretariat, Bruxelles, 2022 // https://www.oacps.org/wp-content/uploads/2022/01/Lesotho_PRR_OACPS_FInal_2022.pdf.
[20] Such an institution is envisioned in the draft innovation policy of Lesotho but has not been established yet.

The main challenges to overcome are the following:
1. Education on Artificial Intelligence at different levels and for different needs, such as
development, possible application, governance, etc.
2. While Lesotho is actively improving its infrastructure, AI-related infrastructure is very costly; hence, decisions on how it should be developed, including a data governance policy, are necessary before proceeding.
3. The high workload of highly qualified state officials also slows down the process.

5. POLICY FRAMEWORK

5.1 INTRODUCTION
Artificial Intelligence is changing the world, including Lesotho. A national AI policy for Lesotho is
essential to ensure that AI technologies are used in ways that promote economic growth, improve public
services, and address the country's unique challenges. By creating a clear framework, Lesotho can
harness the transformative power of AI while safeguarding its citizens' interests and positioning itself as
a competitive player in the global digital economy.
5.2 GOAL OF POLICY REFORM
The AI policy aims to create frameworks and guidelines that ensure the safe, fair, and beneficial development and deployment of artificial intelligence technologies, enabling the management of AI's various impacts on society, the economy, and individuals. It covers the following areas:
1. Economic Development and Innovation
2. Research, Education and Skills Development
3. Social Development and Public Services
4. Ethics and Data Privacy
5. Global Competitiveness and Strategic Positioning
6. Digital Infrastructure Development
5.3 VISION STATEMENT
Lesotho envisions a future where AI empowers individuals, strengthens communities, and drives sustainable societal progress. Lesotho's AI policy framework aims to promote innovation, uphold human rights, and ensure that AI technologies are developed and deployed responsibly and in line with national values of inclusivity, safety, and ethical integrity, contributing to a sustainable and equitable world for present and future generations.
5.4 GUIDING PRINCIPLES
The main guiding principles of the AI policy are the following:
1. Safety and Reliability
1.1. Ensure that AI systems are safe, reliable, and operate as intended.
1.2. Prevent unintended consequences that could harm people or infrastructure.
1.3. Promote research and standards to reduce risks related to advanced AI, including
superintelligent systems.
2. Ethics, Accountability, Fairness, Human Rights, and Inclusion.
2.1. Address bias in AI algorithms to avoid discrimination based on race, gender, age, or other
protected characteristics.
2.2. Uphold principles of fairness, transparency, and accountability to promote trustworthiness
in AI applications.
2.3. Ensure that AI development aligns with human rights principles, supporting inclusion and
equality.

2.4. Focus on inclusivity to prevent the exclusion of underrepresented communities in shaping AI
policies.
3. Privacy Protection (mostly regulated by data protection policies).
3.1. Safeguard users' personal data and establish clear regulations on data collection, use, and
storage.
3.2. Support user rights over their data, including informed consent and data security.
4. Economic Stability and Job Impact
4.1. Manage the economic implications of AI, such as its impact on job markets and labour shifts.
4.2. Encourage retraining and upskilling programs to help workers adapt to AI-driven changes in
employment.
5. Promoting Innovation, Global Competitiveness and Cooperation
5.1. Create an environment where responsible AI innovation can thrive.
5.2. Balance regulation and flexibility to not stifle technological advances.
5.3. Position countries or regions as leaders in AI research and development while maintaining
responsible practices.
5.4. Foster international cooperation on AI standards and governance to address cross-border
challenges like cybersecurity and ethical concerns.
6. Security and Prevention of Misuse
6.1. Prevent the use of AI in harmful and illegal ways, such as terrorism, cyberattacks, phishing,
surveillance overreach, and autonomous weapon systems.
6.2. Establish measures for identifying and mitigating potential security threats associated with
AI technologies.
The AI policy should ensure that the rights of all of the following stakeholders are safeguarded:
1. General Public
2. Businesses and Industries
3. Governments and Public Sector
4. Academic and Research Communities
5. Technology Professionals
6. Marginalized and Vulnerable Communities
7. Global Community

5.5 OBJECTIVES
The objectives described below represent the necessary steps for achieving a thriving and safe AI ecosystem in Lesotho.

Objective 1: Establishing leadership and governance
Establish a sustainable and efficient AI domain governance setup, consisting of:
1. AI policy maker;
2. AI policy regulator;
3. Data and AI Committee, consisting of the representatives of the government and main stakeholders (MICSTI, regulator, academia, industry, and other stakeholders listed in section 5.6.1).

Objective 2: Developing legislation and ensuring compliance
Establish and support an enabling legal and regulatory environment (see more in section 5.6.2).

Objective 3: Building AI capacity
Support AI capacity building across the public sector and the broader society:
1. Establish Government Training Programs: Offer training for policymakers and public officials on AI technologies and their implications.
2. Provide Education Support: Support schools and universities in including AI in the teaching and study process and introduce AI-focused courses in secondary schools and universities to build foundational knowledge in data science, machine learning, and AI ethics.
3. Technical Training Programs: Offer vocational training and upskilling programs for workers to adapt to the changing job market influenced by AI.
4. Implement Public Awareness Campaigns: Run educational initiatives to inform citizens about their rights and the potential benefits and risks associated with AI.
5. Research Exchanges: Partner with international universities and AI research organizations for student and researcher exchange programs.

Objective 4: Building and providing AI infrastructure
Improve infrastructure and technology access:
1. High-Speed Internet and Connectivity: Expand broadband infrastructure to ensure widespread access to digital resources and facilitate cloud-based AI services.
2. Data Centres: Develop local data centres to support data storage and processing needs, ensuring data sovereignty and faster processing capabilities.
3. Computational Resources: Provide access to high-performance computing (HPC) for AI training and development purposes.

Objective 5: Leveraging AI to boost productivity, diversify the economy, and create high-value jobs
1. Provide Business Innovation Support: Support businesses in implementing AI-driven solutions, e.g. through an Innovation Fund.
2. Provide Research Support: Support academia to research and develop novel AI-based solutions through dedicated programs.
3. Establish a Regulatory Sandbox: a controlled and supportive environment for innovators to develop, test, and deploy AI while working closely with regulators. It would allow policymakers to evaluate AI applications and risks, ensuring alignment with ethical standards, national priorities, and societal values.
4. Establish dedicated Research and Development (R&D) Hubs:
4.1. AI Research Centres: specialized research groups in universities and other research institutions focusing on AI and related fields, possibly in collaboration with international partners.
4.2. Innovation Labs: Create spaces where startups and researchers can experiment with AI technologies, offering funding and resources for prototype development.
4.3. Collaborative Research Programs: Partner with global AI research organizations to share knowledge and build capacity.

Objective 6: Ensuring inclusiveness and ethical usage of AI
Develop and enforce Ethics and Human Rights Safeguards, and AI Safety and Risk Management Regulations:
1. AI Ethics Guidelines: Develop guidelines that align AI practices with the principles of human rights, fairness, non-discrimination, and inclusivity.
2. Ethical Oversight Committees: Establish independent ethics boards to monitor AI deployments and ensure they adhere to ethical standards.
3. AI Safety Protocols: Mandate the use of risk assessment and mitigation protocols for high-risk AI systems (e.g., those impacting healthcare, finance, or public safety).
4. Regular Audits and Reporting: Require organizations to conduct regular audits of their AI systems and report on performance, safety, and ethical compliance.
5. Bias Mitigation Programs: Develop initiatives to identify and reduce biases in AI models, particularly to prevent discrimination and promote equity (an illustrative fairness check is sketched after this list of objectives).
6. Public Input Mechanisms: Create channels for public participation in discussions about AI deployment and its societal impacts.

Objective 7: Promoting international alignment and sustainability
International collaboration and alignment:
1. Global Standards Alignment: Ensure that Lesotho's regulations are in line with international best practices to facilitate trade, collaboration, and data-sharing agreements.
2. Participation in Global Forums: Engage in international AI policy discussions to stay informed on emerging trends and adapt to global norms.
3. Regional AI Partnerships: Work with neighbouring countries and regional bodies to share knowledge, resources, and best practices.
4. Global AI Initiatives: Engage in international AI summits and collaborations to stay updated on global standards and advancements.
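
The bias mitigation measure above can be made concrete with a small, purely illustrative Python sketch (not part of the Policy): a hypothetical audit script that computes the demographic parity gap, i.e. the difference in positive-outcome rates between groups, for decisions produced by an AI system. The data, group labels, and any review threshold below are invented for illustration; real audits would rely on richer metrics and context-specific analysis.

    from collections import defaultdict

    def demographic_parity_gap(decisions):
        """Largest gap in positive-outcome rates between any two groups.

        `decisions` is a list of (group, positive_outcome) pairs, where
        positive_outcome is True when the AI system's decision favours the person.
        """
        totals = defaultdict(int)
        positives = defaultdict(int)
        for group, positive in decisions:
            totals[group] += 1
            if positive:
                positives[group] += 1
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical screening decisions: (group, positive outcome)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    gap, rates = demographic_parity_gap(sample)
    print(f"Positive-outcome rates by group: {rates}")
    print(f"Demographic parity gap: {gap:.2f}")  # flag for review if above an agreed threshold

A regular audit could publish such a metric alongside the qualitative reporting required under Objective 6, with thresholds and remedies defined by the ethics oversight bodies.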

5.6 POLICY STATEMENTS/PRESCRIPTIONS

5.6.1. Sustainable Governance Structure


A three-tier AI governance setup will be established, consisting of the Policymaker, the Regulator, and the Data and AI Committee, involving all stakeholders in AI governance.
1. Policymaker: The Ministry of Information, Communications, Science, Technology, and Innovation
(MICSTI) is responsible for formulating AI-related policies in Lesotho. A dedicated department within
MICSTI handles: Legislation preparation and approval; Strategy and action plan development;
General funding allocation; and Supervision of policy implementation.
2. Regulator: A national agency or institution focused on digital innovation in Lesotho will act as an
independent regulatory body overseeing AI development and implementation. This body is responsible for: implementing AI policies; overseeing ethical standards; and managing data governance.
3. Multistakeholder advisory platform: The Data and AI Committee operates as a triple-helix model,
fostering collaboration between the public sector, private sector, and academia to co-develop
standards and guidelines that reflect industry realities and technological trends. This committee,

which meets periodically (preliminarily twice a year), oversees AI governance in Lesotho and
includes distinguished representatives from various stakeholder groups listed in the "Roles and
Responsibilities of the Main Stakeholders in AI Governance" table.
Table 3. Roles and Responsibilities of the Main Stakeholders in AI Governance

1. Policymaker: MICSTI

The MICSTI is the primary driver of AI governance in Lesotho, responsible for policymaking, strategies, legislation, and funding. A dedicated department within MICSTI handles:
1. Policy Development:
1.1. Draft and implement AI policies and national strategies that align with Lesotho's development goals.
1.2. Ensure policies are adaptive to emerging AI technologies and trends.
2. Legislation:
2.1. Enact laws on AI ethics, data protection, privacy, and intellectual property.
2.2. Establish regulatory frameworks to ensure compliance and accountability in AI deployment.
3. Funding and Support:
3.1. Allocate resources for AI research, innovation, and public sector adoption.
3.2. Provide grants and incentives to encourage AI development and integration across various sectors.
4. Public Services:
4.1. Utilize AI to enhance healthcare, education, agriculture, and public administration.
4.2. Implement AI-driven solutions to improve service delivery and operational efficiency in public services.
5. Collaboration:
5.1. Partner with international organizations, regional bodies, and private companies to ensure best practices and resource-sharing.
5.2. Foster collaborations with academic institutions and research organizations to drive AI innovation and knowledge exchange.

2. Regulator: National agency/institution responsible for digital innovation in Lesotho

A national agency or institution focused on digital innovation in Lesotho will act as the Regulator. This body is responsible for:
1. Ethics Oversight:
1.1. Monitor the ethical use of AI, preventing discrimination, bias, and misuse.
1.2. Ensure AI systems adhere to established ethical guidelines.
2. Data Governance:
2.1. Oversee the secure and fair use of data, ensuring adherence to privacy regulations.
2.2. Implement data protection measures to safeguard personal information.
3. Standards Development:
3.1. Define technical and operational standards for AI systems.
3.2. Establish benchmarks for AI performance and safety.
4. Risk Management:
4.1. Identify and mitigate risks associated with AI deployment in critical sectors.
4.2. Develop contingency plans to address potential AI-related issues.
5. Dispute Resolution:
5.1. Address conflicts or complaints related to AI systems and their impact.
5.2. Provide a platform for stakeholders to raise concerns and seek resolutions.
6. Awareness and Education:
6.1. Organize public awareness campaigns to promote understanding of AI.
6.2. Educate the public and stakeholders about the benefits and challenges of AI.
7. Monitoring and Evaluation:
7.1. Monitor the effectiveness of AI policies and regulations.
7.2. Evaluate the impact of AI initiatives on society and make necessary adjustments.

3. Multistakeholder Advisory Platform: Data and AI Committee

The Committee supports Data and AI governance in Lesotho and includes distinguished representatives from the various stakeholder groups listed in the "Roles and Responsibilities of the Main Stakeholders in AI Governance" table below. The committee's main role is to ensure continuous engagement and dialogue among stakeholders. Its primary functions are:
1. Stakeholder Representation:
1.1. Include representatives from government, academia, industry, civil society, and other relevant sectors.
1.2. Ensure diverse perspectives and expertise are considered in AI governance.
2. Updates and Reporting:

2.1. Receive updates on activities, plans, and progress from the Policymaker (MICSTI) and the Regulator.
2.2. Review reports on AI policy implementation, ethical standards, and data governance.
3. Feedback and Recommendations:
3.1. Provide feedback on AI governance practices and policies.
3.2. Offer recommendations to improve AI strategies, legislation, and implementation plans.
4. Subcommittees Formation:
4.1. Establish permanent or temporary subcommittees for specific topics, such as ethics oversight, data privacy, and AI safety.
4.2. Ensure focused attention on critical areas of AI governance.
5. Public Engagement and Awareness:
5.1. Help to promote public awareness and understanding of AI technologies and their implications.
5.2. Facilitate community engagement and gather public input on AI-related issues.

3.1. Academia and Research Institutions

Universities and research organizations contribute to capacity building, innovation, and critical analysis of AI governance frameworks.
1. Research and Development: Conduct AI-related research in areas like natural language processing, predictive analytics, and ethical AI design.
2. Capacity Building: Develop curricula and training programs to upskill the workforce in AI technologies.
3. Policy Advisory: Provide evidence-based recommendations to inform policymaking and governance frameworks.
4. Collaboration: Partner with international institutions to enhance local research capabilities and knowledge exchange.
5. Ethical Studies: Explore the social, economic, and ethical implications of AI in Lesotho's context.

3.2. Private Sector

Companies, startups, and tech firms play a pivotal role in driving innovation, investing in AI technologies, and creating practical applications.
1. Innovation: Develop AI solutions tailored to local challenges (e.g., agriculture, healthcare, financial inclusion).
2. Adherence to Standards: Ensure compliance with national and international AI regulations and ethical standards.
3. Data Sharing: Collaborate with public and research sectors to provide datasets for training AI models, while respecting privacy laws.

4. Investment: Invest in AI research, infrastructure, and workforce development.
5. Public-Private Partnerships (PPPs): Partner with the government on AI-driven projects to improve public services.

3.3. Civil Society Organizations (CSOs) and Non-Governmental Organizations (NGOs)

CSOs and NGOs advocate for the ethical use of AI and ensure that vulnerable populations are not left behind in the AI revolution.
1. Advocacy: Promote the ethical and inclusive use of AI while highlighting risks such as bias, inequality, or exclusion.
2. Awareness Campaigns: Educate citizens about AI, its benefits, and potential risks.
3. Monitoring and Accountability: Act as watchdogs to hold governments and private entities accountable for the responsible use of AI.
4. Community Engagement: Ensure the voices of marginalized communities are included in AI policymaking.
5. Capacity Building: Provide training and resources for communities to understand and benefit from AI technologies.

4. International Organizations and Development Partners

Global organizations and partners provide technical, financial, and policy support to help Lesotho establish its AI governance framework.
1. Technical Assistance: Provide expertise and resources for drafting policies, setting standards, and building infrastructure.
2. Capacity Building: Support training programs and workshops to enhance local expertise in AI.
3. Funding: Offer financial aid or grants for AI research, innovation hubs, and governance projects.
4. Knowledge Sharing: Share international best practices and facilitate collaboration with other countries and regions.
5. Policy Alignment: Help Lesotho align its AI governance framework with international standards, such as the OECD AI Principles, UNESCO's AI Ethics Guidelines, African Union policies, etc.

5. Media and the Public

The media and citizens are crucial in shaping public discourse, promoting transparency, and holding stakeholders accountable.
1. Media:
1.1. Educate the public about AI and its implications.
1.2. Investigate and report on potential misuse or ethical breaches in AI systems.
1.3. Facilitate dialogue between stakeholders and the public.
2. Citizens:
2.1. Provide feedback on AI systems and governance policies.

2.2. Advocate for transparency, fairness, and inclusivity in AI
applications.
2.3. Participate in public consultations and discussions to shape AI
policies.

5.6.2. Enabling Legal and Regulatory Environment


Creating an enabling legal and regulatory environment for AI in Lesotho involves establishing laws,
regulations, and frameworks that foster the responsible use of AI while supporting innovation and
protecting public interests. Here’s how this can be structured:
1. Foundational AI Legislation
1.1. AI-Specific Law: Introduce a comprehensive AI Act that defines AI, sets the scope for
regulation, and outlines the responsibilities of developers, businesses, and government
bodies in the AI ecosystem.
1.2. Updates to Existing Legislation: Review and update current laws in areas like data
protection, consumer rights, and cybersecurity to include AI-related provisions.
2. Data Protection and Privacy Regulations
2.1. Comprehensive Data Protection Law:
2.1.1. Ensure that AI systems comply with robust data privacy laws that protect citizens’
personal data. This could align with models like the EU's GDPR to enhance data
security and user trust.
2.1.2. Implement strong data privacy laws to safeguard user information and build trust
in AI systems. Particular care should be taken to mitigate the risk of the mosaic effect, which occurs when disparate pieces of seemingly innocuous data are combined to reveal sensitive or confidential information, even when individual datasets are anonymized or secured. Strategies such as differential privacy, federated learning, access control, data minimization, and synthetic data generation can be used to minimize these risks (a minimal illustrative sketch of one such technique appears at the end of this subsection).
2.2. Consent and Transparency Requirements: Implement clear rules requiring AI systems to
obtain user consent for data usage and provide transparent information on how personal
data is processed.
2.3. Open Data Policies: Promote an open data environment where anonymized datasets can be
shared and accessed for training AI models while ensuring data privacy.
2.4. Data Sharing Partnerships: Collaborate with local and international organizations for data-
sharing agreements that can enhance research and development.
3. Intellectual Property (IP) Rights and Innovation Protection
3.1. Adapt IP Laws: Update intellectual property laws to cover AI-generated works and
innovations, clarifying ownership rights in cases involving AI contributions.
3.2. Incentives for Innovation: Provide legal protections and incentives to encourage AI research
and development within the country.

4. Workforce and Labor Law Adaptation
4.1. Protection for Workers: Amend labour laws to account for changes in the job market due to
AI automation, ensuring workers have support through retraining and upskilling programs.
4.2. Fair AI Use in Employment: Regulate the use of AI in hiring and workplace management to
prevent biases and unfair practices.
5. Consumer Protection and User Rights
5.1. User Rights Framework: Establish a framework that protects users’ rights, ensuring they can
access explanations of AI-driven decisions that impact them and dispute them if necessary.
5.2. Product Liability Standards: Implement standards that make clear who is liable when AI
products malfunction or cause harm.
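
To illustrate one of the privacy-preserving strategies named in item 2.1.2, the following minimal Python sketch (purely illustrative, not part of the Policy) shows how differential privacy can protect an aggregate statistic: a count query is published with calibrated Laplace noise so that the presence or absence of any single individual's record has only a small, quantified effect on the released number. The records, query, and epsilon value are hypothetical.

    import math
    import random

    def dp_count(records, predicate, epsilon):
        """Differentially private count of records satisfying `predicate`.

        A count query has sensitivity 1 (adding or removing one person changes
        the true count by at most 1), so Laplace noise with scale 1/epsilon
        yields epsilon-differential privacy for the released value.
        """
        true_count = sum(1 for r in records if predicate(r))
        scale = 1.0 / epsilon
        # Inverse-CDF sample from the Laplace(0, scale) distribution.
        u = random.random() - 0.5
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        return true_count + noise

    # Hypothetical anonymized health records (illustrative only).
    records = [{"district": "Maseru", "diagnosis": "TB"},
               {"district": "Leribe", "diagnosis": "flu"},
               {"district": "Maseru", "diagnosis": "TB"}]
    noisy = dp_count(records, lambda r: r["diagnosis"] == "TB", epsilon=0.5)
    print(f"Published (noisy) TB count: {noisy:.1f}")

Smaller epsilon values give stronger privacy (more noise); choosing the privacy budget and combining such techniques with access control, data minimization, federated learning, or synthetic data would be a matter for the data governance framework referenced in this Policy.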
5.6.3. AI integration in key sectors
The Government will promote and support the development of AI in Key Sectors:
1. Healthcare: Use AI to support healthcare services, such as diagnostics, personalized medicine,
and health data analysis to improve patient care.
2. Agriculture: Implement AI technologies for precision farming, crop monitoring, and pest control
to boost agricultural productivity.
3. Education: Develop AI tools that enhance personalized learning experiences and improve access
to educational content.
4. Public Services: Integrate AI in public administration to streamline processes and enhance service
delivery, such as automated document processing and smart resource management.
5. AI for Sustainability: Use AI to support environmental monitoring, conservation efforts, and
sustainable urban development.
5.6.4. Financial mechanisms
1. Establish government grants, incentives, and venture capital programs to encourage AI startups and innovation, including an Innovation Fund guided by frugal innovation principles, such as affordability, simplicity, and scalability, and oriented towards:
1.1. Developing cost-effective solutions by rethinking traditional approaches.
1.2. Leveraging local resources, ingenuity, and constraints as drivers of innovation.
1.3. Delivering high value with minimal expense.
2. Foreign Direct Investment (FDI): Attract foreign investors by creating a favourable regulatory
environment and promoting the country as an emerging tech hub.
5.6.5. Feedback Mechanisms
1. Implement a system for continuous feedback from users and stakeholders to adjust and improve
AI policies and applications.
2. Metrics for Success: Develop indicators to measure the impact and effectiveness of AI initiatives
across different sectors.
3. Regular Policy Reviews: Update policies periodically to reflect technological advancements and
societal changes.

6. IMPLEMENTATION PLAN AND MONITORING AND EVALUATION FRAMEWORK

6.1 LOGIC OF THE PLAN


To implement this policy, a comprehensive Implementation Plan (hereinafter referred to as the Plan)
has been developed. This Plan is based on a detailed analysis of the country's specific needs and
incorporates best practices and guidance from international organizations.
Effective execution of the Policy and its Plan requires strong commitment at both high-level and
implementation levels, sufficient resource allocation, and relevant staff skills. With accountability and
oversight mechanisms in place, a robust governance structure can sustain momentum, meet Key
Performance Indicators (KPIs), and drive the successful implementation of the AI initiatives set out in this Policy. It is crucial to periodically assess stakeholder needs and evolving AI-related requirements at every phase of the Plan to inform the next steps. The governance framework, aligned
with the existing structure, prioritizes and supports all activities outlined in the Plan, facilitates
necessary decision-making, and oversees performance to ensure the successful implementation of the
Policy.
Considering the principle of agility, it is essential to periodically revisit the Plan to address shifts in
priorities, resource constraints, implementation progress, and emerging risks or challenges.

Objective → Targets set / Actions to take → Steps / measures needed

The flow highlights the interdependence between these layers: the Objective sets the high-level goal,
which then dictates the target or action to be taken. To achieve the target, detailed measures must be
implemented. Each layer builds upon the previous one, ensuring a logical and actionable progression
within the Policy, supported by relevant deliberation. Additionally, Key Indicators are defined for each Objective to monitor progress.

6.2 OBJECTIVES, TASKS, MEASURES, AND INDICATORS
Objective 1: Establishing leadership and governance
Target: Establish a sustainable and efficient AI domain governance setup, consisting of:
1. AI policy maker;
2. AI policy regulator;
3. Data and AI Committee, consisting of the representatives of the government and main stakeholders (MICSTI, regulator, academia, industry, and other stakeholders listed in section 5.6.1).
Responsible institution: MICSTI
Timeline: Planning: 2025 Q2 – Q3; Implementing: 2025 Q3 – 2026 Q2
Measures:
1. Establish a workgroup (temporary committee) for AI governance implementation.
2. Prepare legislation for the AI policy maker, the AI policy regulator and the Data and AI Committee.
3. Discuss the legislation with stakeholders.
4. Get the legislation approved.
5. Establish the relevant bodies (1-3).
6. Assign:
6.1. Head of the AI policy maker;
6.2. Head of the AI policy regulator.
7. Elect relevant stakeholders to the Data and AI Committee.
8. Disband the AI governance implementation committee (see item 1).

Objective 2: Developing legislation and ensuring compliance
Target: Establish and support an enabling legal and regulatory environment.
Responsible institutions: MICSTI; AI policy maker
Timeline: Planning: 2025 Q2 – 2026 Q4; Implementation: measure 1: 2025 Q3 – 2026 Q2 (mostly the temporary AI committee); measure 2: 2026 Q2 – continuous (AI policy maker)
Measures:
1. Develop regulation for the establishment of the governing bodies (see Objective 1).
2. Develop regulation for the enabling legal and regulatory environment (see subsection 5.6.2).

Objective 3: Building AI Capacity
1. Establish Government Training Programs: 1. Define Objectives AI regulator Planning: 2026 Q2
Offer training for policymakers and public 1.1. Assess Needs: Identify the specific – 2026 Q3
MICSTI
officials on AI technologies and their areas where AI knowledge is required
Implementation:
implications. (e.g., policy-making, public service
2026 Q3 -
delivery, ethics, etc.).
continuous
1.2. Set Goals: Determine whether the
training aims to raise awareness,
build technical skills, or provide tools
for decision-making.
2. Identify the Target Audience
2.1. Roles: Focus on officials who will
directly interact with AI-related
policies, procurement, or
implementation.
2.2. Expertise Levels: Segment
participants into beginner,
intermediate, and advanced levels to
tailor the training content.
3. Design the Curriculum
3.1. General Topics: Cover AI basics,
applications in government, ethical
considerations, and data privacy.
3.2. Role-Specific Modules:
3.2.1. For policymakers: AI
governance, regulation, and
ethics.

26
3.2.2. For IT teams: Technical
implementation and AI tools.
3.2.3. For public-facing officials: AI in
citizen engagement and service
delivery.
3.3. Case Studies: Use real-world
examples relevant to the public
sector.
3.4. Interactive Components: Include
hands-on workshops, simulations, and
AI tool demonstrations.
4. Engage Expert Trainers
4.1. Collaborate with academic
institutions, industry experts, and AI
research organizations.
4.2. Include government officials from
countries or regions that have
successfully implemented AI
initiatives.
5. Choose the Format
5.1. In-Person Workshops: Ideal for
interactive sessions and group
activities.
5.2. Online Training: Offers flexibility and
scalability. Use platforms like
webinars, self-paced courses, or live
virtual classes.
5.3. Hybrid Model: Combines the
advantages of both in-person and
online methods.
6. Leverage Resources

6.1. Existing Frameworks: Utilize
materials from organizations like
OECD, UNESCO, or national AI
councils.
6.2. Custom Content: Develop localized
content aligned with regional
priorities and policies.
6.3. Open AI Platforms: Use free AI tools
for demonstrations.
7. Include Ethical and Legal Aspects
7.1. Address concerns about bias, fairness,
accountability, and transparency in AI
systems.
7.2. Train officials to evaluate AI tools for
compliance with ethical and legal
standards.
8. Evaluate and Certify
8.1. Conduct pre- and post-training
assessments to measure knowledge
gained.
8.2. Provide certifications to recognize
successful completion, boosting
credibility and motivation.
9. Build Continuous Learning Pathways
9.1. Establish ongoing AI forums or
communities of practice for
government officials.
9.2. Offer advanced training programs for
deeper specialization.
10. Monitor Impact

10.1. Track the implementation of
learned skills in real-world
government initiatives.
10.2. Gather feedback to improve
future training sessions.
Target 2. Provide Education Support: Support schools and universities in including AI in the teaching and studies process, and introduce AI-focused courses in secondary schools and universities to build foundational knowledge in data science, machine learning, and AI ethics.
Responsible institutions: Ministry of Education and Training (leading organization); AI regulator; MICSTI
Timeline: Curriculum development: 6 months; Teacher training: 3-6 months; Launch of AI courses: 1 year; Partnerships: ongoing; Evaluation and improvement: annually
Measures:
1. Develop an AI Curriculum Framework
   1.1. Define Learning Objectives:
        1.1.1. For secondary schools: Focus on basic AI concepts, computational thinking, and responsible AI use.
        1.1.2. For universities: Include foundational knowledge in data science, machine learning, neural networks, and AI ethics.
   1.2. Incorporate AI Across Disciplines: Integrate AI-related topics into STEM subjects (e.g., using AI in biology or physics) and non-STEM fields like humanities and social sciences (e.g., ethical implications of AI).
   1.3. Align with Standards: Ensure the curriculum aligns with national education standards and international frameworks (e.g., UNESCO guidelines on AI in education).

2. Support Teacher Training and Development
2.1. Workshops and Certification
Programs: Train teachers to
understand AI concepts and use AI
tools effectively in the classroom.
2.2. Collaborate with AI Experts: Partner
with universities, tech companies,
and research organizations to provide
up-to-date training.
2.3. Provide Resources: Offer teaching
materials, such as lesson plans,
activity guides, and AI software
access.

3. Build Educational Resources and Infrastructure
3.1. Learning Materials
3.1.1. Develop textbooks, online
courses, and interactive tools for
AI education.
3.1.2. Use platforms like AI4ALL,
Google AI Education, or AI4K12
for accessible resources.
3.2. Hands-on Tools: Provide schools with
AI kits, robotics platforms, and
simulation software to make learning
interactive.
3.3. Access to Technology: Ensure schools
have access to computational
resources like cloud-based AI
platforms, high-performance
computers, or learning hubs.

4. Establish AI-Focused Programs in Schools
and Universities
4.1. Secondary Schools:
4.1.1. Launch AI clubs and hackathons
to foster interest in AI.
4.1.2. Introduce electives in coding,
data analysis, and AI
applications.
4.1.3. Implement AI ethics discussions
to build awareness about the
social impact of technology.
4.2. Universities:
4.2.1. Develop specialized degree
programs in AI and data science.
4.2.2. Offer interdisciplinary courses,
such as AI in healthcare, law, and
education.
4.2.3. Promote research
opportunities for students to
work on AI projects with societal
impact.

5. Foster Industry and Academic Partnerships


5.1. Collaborate with Industry:
5.1.1. Work with AI companies to
create internship opportunities
and mentorship programs for
students.
5.1.2. Use industry-driven case
studies and tools in teaching.
5.2. Engage Universities: Encourage
partnerships with global universities
that excel in AI research to co-
develop programs.

6. Promote Equity and Inclusion


6.1. Accessible Programs: Ensure AI
education resources are accessible to
students in underserved and rural
areas.
6.2. Diverse Representation: Encourage
participation from underrepresented
groups, including girls, minorities,
and people with disabilities, through
scholarships and awareness
campaigns.

7. Introduce Competitions and Challenges


7.1. Host AI challenges, innovation fairs,
and hackathons to engage students in
problem-solving and real-world AI
applications.

8. Raise Awareness Among Stakeholders


8.1. Public Awareness Campaigns:
Promote the importance of AI
education to parents, students, and
policymakers.
8.2. Incentivize Participation: Offer grants
to schools and universities that
successfully implement AI courses.

9. Leverage Online Platforms

9.1. Collaborate with platforms like
Coursera, Khan Academy, or EdX to
provide high-quality AI courses.
9.2. Create a dedicated national or
regional platform for AI education.

10. Monitor and Evaluate Programs


10.1. Feedback Mechanisms: Collect
input from students, teachers, and
industry partners to refine the
programs.
10.2. Assessment Tools: Use
standardized tests and projects to
evaluate the impact of AI education.
Target 3. Technical Training Programs: Offer vocational training and upskilling programs for workers to adapt to the changing job market influenced by AI.
Responsible institutions: MICSTI; Ministry of Education and Training; AI regulator; AI policy maker
Timeline: Planning: 2026 Q2 – 2026 Q3; Implementation: 2026 Q3 – continuous
Measures:
1. Assess Workforce Needs
   1.1. Identify Impacted Industries: Pinpoint sectors significantly influenced by AI (e.g., manufacturing, logistics, healthcare, finance, and retail).
   1.2. Skill Gap Analysis: Work with industry leaders and workers to identify the skills most in demand, such as AI operation, data analysis, machine learning basics, and programming.
   1.3. Categorize Workers: Segment the workforce based on their current skill levels (entry-level, mid-career, advanced) to tailor training programs.

2. Define Program Objectives


2.1. Upskilling: Teach workers how to use
AI tools relevant to their industries
(e.g., predictive maintenance in
manufacturing).
2.2. Reskilling: Train workers for entirely
new roles created by AI (e.g., data
labeling, AI system monitoring, or
robotics coordination).
2.3. Basic AI Literacy: Equip workers with
foundational knowledge about AI
concepts to enhance adaptability.

3. Design Flexible Training Modules


3.1. Modular Structure: Divide training
into short, focused modules to
accommodate busy schedules.
3.2. Role-Specific Content: Tailor courses
to the specific roles workers are
transitioning to, e.g.:
3.2.1. For retail: AI-powered
customer insights and inventory
management.
3.2.2. For manufacturing: Working
with collaborative robots
(cobots) and automated systems.
3.3. Core Topics:
3.3.1. Basic AI and machine learning
concepts.
3.3.2. Data interpretation and
visualization.
3.3.3. Working alongside AI-powered
systems.
3.3.4. Ethics and implications of AI in
the workplace.

4. Choose Delivery Methods
4.1. In-Person Training: Suitable for
hands-on skill development, such as
operating robots or using industry-
specific AI tools.
4.2. Online Platforms: Use e-learning
platforms for self-paced courses,
webinars, and virtual simulations.
4.3. Hybrid Model: Combine in-person
workshops with online learning for
maximum flexibility and engagement.
4.4. Immersive Technologies: Leverage
AR/VR for realistic, hands-on training
in simulated environments.

5. Partner with Stakeholders


5.1. Industry Collaboration: Work with AI
companies and industry leaders to co-
develop relevant content and provide
real-world insights.
5.2. Academic Institutions: Partner with
universities and technical schools to
ensure the curriculum aligns with
future workforce needs.
5.3. Government Support: Seek funding
and policy support to subsidize
training for workers, particularly in
sectors undergoing disruption.

6. Provide Certification and Recognition

6.1. Skill Validation: Offer certifications
to validate workers’ skills and make
them more attractive to employers.
6.2. Micro-Credentials: Provide digital
badges for completing specific
modules or skills to allow workers to
build their qualifications
incrementally.

7. Foster Inclusivity and Accessibility


7.1. Affordability: Subsidize programs or
offer free courses to ensure
accessibility for low-income workers.
7.2. Localized Training: Develop content
in local languages and adapt
examples to regional industries.
7.3. Diverse Formats: Accommodate
varying education levels with user-
friendly, practical content.

8. Include Continuous Learning Opportunities


8.1. Establish pathways for workers to
continue learning through advanced
courses or transition to higher-level
roles.
8.2. Create AI learning communities or
forums for workers to share
knowledge and stay updated.

9. Promote Awareness and Engagement


9.1. Awareness Campaigns: Highlight the
benefits of upskilling through social
media, company newsletters, and
community events.
9.2. Employer Incentives: Encourage
companies to sponsor employees for
training through tax benefits or
grants.

10. Evaluate Program Effectiveness


10.1. Track Outcomes: Measure job
placements, promotions, or
productivity improvements among
participants.
10.2. Feedback Mechanisms: Use
participant surveys and employer
feedback to refine content and
delivery methods.
10.3. Adapt to Trends: Regularly
update training modules based on
emerging AI technologies and industry
needs.
Target 4. Implement Public Awareness Campaigns: Run educational initiatives to inform citizens about their rights and the potential benefits and risks associated with AI.
Responsible institutions: MICSTI; Ministry of Education and Training; AI regulator; AI policy maker
Timeline: Planning: 2026 Q2 – 2026 Q3; Implementation: 2026 Q3 – continuous
Measures:
1. Define Campaign Objectives
   1.1. Educate About Rights: Inform citizens about data privacy, AI ethics, and legal protections related to AI usage.
   1.2. Highlight Benefits: Explain how AI can enhance public services, healthcare, education, and daily life.
   1.3. Address Risks: Raise awareness about issues such as bias, job displacement, and potential misuse of AI.

2. Identify Target Audiences

2.1. General Public: Address broad
concerns about AI in daily life.
2.2. Specific Groups: Tailor messages for
students, professionals, parents, or
vulnerable populations who may
interact with AI differently.
2.3. Underrepresented Communities:
Ensure inclusivity by reaching groups
that may lack access to AI resources
or understanding.

3. Design Clear and Engaging Messages


3.1. Simplify Concepts: Use plain language
to explain AI concepts like machine
learning, automation, and data
ethics.
3.2. Focus on Relevance: Relate AI to
citizens’ daily lives (e.g., smart home
devices, online recommendations,
healthcare applications).
3.3. Balance Benefits and Risks: Avoid
overhyping AI while addressing valid
concerns in a transparent manner.

4. Use Diverse Communication Channels


4.1. Traditional Media:
4.1.1. TV and radio programs
explaining AI basics.
4.1.2. Print materials like brochures,
posters, and newspaper articles.
4.2. Digital Platforms:

4.2.1. Social media campaigns using
infographics, short videos, and
live Q&A sessions.
4.2.2. Dedicated websites or
microsites with resources, FAQs,
and real-life examples of AI
applications.
4.3. Community Events:
4.3.1. Workshops, town halls, or
seminars in schools, libraries,
and community centers.
4.3.2. AI demonstration booths at
public fairs or festivals.

5. Collaborate with Stakeholders


5.1. Government Agencies: Ensure
alignment with national AI policies
and regulations.
5.2. Educational Institutions: Partner with
schools and universities to provide
resources and organize awareness
programs.
5.3. Private Sector: Engage tech
companies to share insights and
sponsor campaigns.
5.4. Nonprofits: Collaborate with
organizations focused on digital
literacy and ethics.

6. Address Ethical and Privacy Concerns


6.1. Educate the public about their data
rights, such as:
6.1.1. Knowing how AI systems collect
and use their data.
6.1.2. Options for opting out or
protecting personal information.
6.2. Discuss ethical AI practices, including
fairness, transparency, and
accountability.

7. Create Interactive Learning Opportunities


7.1. Hands-On Workshops: Teach citizens
how to use AI tools responsibly (e.g.,
privacy settings, recognizing AI-
generated content).
7.2. Gamification: Use apps or online
quizzes to test and improve AI
literacy in an engaging way.
7.3. AI Simulations: Provide accessible
demonstrations of AI, such as
chatbots or image recognition, to
demystify the technology.

8. Highlight Real-World Examples


8.1. Share success stories of AI improving
healthcare, transportation, or
disaster management.
8.2. Provide case studies of challenges,
such as algorithmic bias, and how
they were addressed.

9. Monitor Campaign Reach and Effectiveness
9.1. Metrics: Track attendance at events,
website visits, and engagement on
social media platforms.
9.2. Feedback: Collect citizen opinions
through surveys, polls, and focus
groups to refine future initiatives.
9.3. Impact Assessment: Measure changes in public understanding and attitudes toward AI over time (an illustrative calculation is sketched after these measures).

10. Establish a Long-Term Plan


10.1. Regular Updates: Keep the
public informed about new AI
developments and regulations.
10.2. Continuous Learning: Provide
ongoing opportunities for citizens to
deepen their understanding of AI.
10.3. Community Forums: Create
platforms where people can voice
concerns, ask questions, and share
experiences with AI.
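As referenced in measure 9.3, the following is a minimal illustrative sketch (in Python) of how a change in public AI awareness could be estimated from a baseline survey and a follow-up survey; the responses shown are hypothetical placeholders, not survey data.

def awareness_rate(responses):
    """Share of respondents who reported being aware of AI (True)."""
    return sum(responses) / len(responses)

# Hypothetical survey responses (True = aware, False = not aware).
baseline_round = [True, False, False, True, False, False, True, False]
follow_up_round = [True, True, False, True, True, False, True, True]

change = awareness_rate(follow_up_round) - awareness_rate(baseline_round)
print(f"Estimated change in awareness: {change:+.0%}")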
Target 5. Research Exchanges: Partner with international universities and AI research organizations for student and researcher exchange programs.
Responsible institutions: MICSTI; Ministry of Education and Training; AI regulator; AI policy maker
Timeline: Planning: 2026 Q2 – 2026 Q3; Implementation: 2026 Q3 – continuous
Measures:
1. Define Program Objectives
   1.1. Enhance Expertise: Facilitate knowledge exchange in AI, machine learning, and data science.
   1.2. Foster Innovation: Encourage collaborative research on cutting-edge AI topics such as explainable AI, ethics, or applications in healthcare, education, and climate science.
   1.3. Build Networks: Strengthen relationships between institutions and create a global community of AI researchers and students.

2. Identify Partner Institutions


2.1. International Universities: Target
institutions renowned for their AI
research and innovation.
2.2. Research Organizations: Collaborate
with global AI labs, tech companies,
or organizations like OpenAI,
DeepMind, or national AI research
bodies.
2.3. Mutual Interest: Select partners with
aligned research goals or
complementary expertise.

3. Develop Program Framework


3.1. Duration: Define the length of
exchanges (e.g., 6 months, 1 year, or
short-term visits).
3.2. Eligibility Criteria:
3.2.1. For students: Graduate-level
AI, data science, or related field
enrollees.
3.2.2. For researchers: any level,
especialy junior researchers
working on AI projects.
3.3. Research Areas: Focus on specific AI
topics such as ethics, robotics,
natural language processing, or AI for
social good.

3.4. Deliverables: Require participants to
produce outcomes like joint
publications, patents, or prototypes.

4. Secure Funding and Resources


4.1. Funding Sources:
4.1.1. Government grants for
research and education.
4.1.2. Sponsorships from tech
companies or international
organizations.
4.2. Joint funding with partner
institutions.
4.3. In-Kind Support: Access to labs,
computational resources, or datasets
during exchanges.

5. Formalize Agreements
5.1. Memoranda of Understanding (MoUs):
Establish clear agreements covering
goals, roles, responsibilities, and
intellectual property rights.
5.2. Visa and Legal Assistance: Simplify
visa processes and ensure legal
compliance for participants.

6. Design Participant Support Systems


6.1. Orientation Programs: Provide
cultural and logistical support to help
participants adjust to new
environments.

6.2. Mentorship: Pair participants with
mentors from host institutions for
guidance on research and
collaboration.
6.3. Facilities: Ensure access to housing,
labs, libraries, and other resources.

7. Promote and Recruit Participants


7.1. Awareness Campaigns: Share program
details through university networks,
research conferences, and online
platforms.
7.2. Application Process:
7.2.1. Open applications for
interested students and
researchers.
7.2.2. Evaluate candidates based on
research proposals, experience,
and alignment with program
goals.

8. Foster Collaborative Projects


8.1. Joint Research: Encourage
collaborative papers, AI model
development, or data sharing
between host and home institutions.
8.2. Workshops and Seminars: Organize
events where participants present
findings and share ideas.
8.3. Cross-Cultural Learning: Promote
exchange of diverse perspectives to
tackle global challenges.

9. Measure Impact
9.1. KPIs: Track outcomes like the number
of joint publications, patents, or
prototypes developed.
9.2. Feedback Mechanisms: Collect
participant and partner feedback to
refine the program.
9.3. Alumni Networks: Create a network
of past participants to foster long-
term collaboration.

10. Scale the Program


10.1. Expand Partnerships: Add more
institutions and organizations to
broaden the program’s reach.
10.2. Diversify Research Topics:
Explore emerging AI fields like
quantum computing, edge AI, or AI in
sustainability.
10.3. Increase Accessibility: Provide
scholarships and grants to make
exchanges accessible to
underrepresented groups.
Objective 4. Building and providing AI infrastructure
Target: Improve Infrastructure and Technology Access:
1. High-Speed Internet and Connectivity: Expand broadband infrastructure to ensure widespread access to digital resources and facilitate cloud-based AI services.
2. Data Centres: Develop local data centres to support data storage and processing needs, ensuring data sovereignty and faster processing capabilities.
3. Computational Resources: Provide access to high-performance computing (HPC) for AI training and development purposes.
Responsible institutions: AI regulator; AI policy maker; MICSTI; Ministry of Education and Training
Timeline: Planning: 2026 Q2 – 2026 Q3; Implementation: 2026 Q3 – continuous
Measures:
1. High-Speed Internet and Connectivity
   Expand broadband infrastructure for widespread digital access and cloud-based AI services.
   1.1. Partner with telecom providers to extend fiber-optic networks to underserved areas.
   1.2. Implement government subsidies or public-private partnerships to fund connectivity in rural regions.
   1.3. Deploy 5G networks in urban and semi-urban areas for enhanced speed and reliability.
   1.4. Monitor and maintain connectivity to ensure consistent service quality.

2. Local Data Centres


Establish data centers to support storage,
processing, and data sovereignty.
2.1. Incentivize private investment in data
center construction via tax breaks or
grants.
2.2. Ensure centers meet international
energy efficiency standards to
minimize environmental impact.
2.3. Develop policies to secure sensitive
local data and ensure compliance
with data protection laws.
2.4. Use modular and scalable designs to
accommodate future growth in data
demands.

3. Computational Resources
Provide access to high-performance
computing (HPC) for AI training and
development.

3.1. Establish centralized HPC facilities
accessible to researchers, businesses,
and startups.
3.2. Create a cloud-based platform for
shared HPC resources to reduce
individual setup costs.
3.3. Partner with universities and tech
companies to provide cutting-edge
computational tools.
3.4. Offer training programs to build
expertise in utilizing HPC resources
for AI.
Objective 5: Leveraging AI to boost productivity, diversify the economy, and create high-value jobs
Target 1. Provide Business Innovation Support: Support businesses in implementing AI-driven solutions, e.g. through an Innovation Fund.
Responsible institutions: AI policy maker; AI regulator; MICSTI
Timeline: Planning: 2026 Q2 – 2026 Q3; Implementation: 2026 Q3 – continuous
Measures:
1. Establish an Innovation Fund
   1.1. Allocate a dedicated budget to support AI-driven business initiatives.
   1.2. Define funding tiers for startups, SMEs, and larger organizations based on project scope and potential impact.

2. Set Clear Eligibility and Evaluation Criteria
2.1. Focus on businesses with innovative
AI applications addressing industry
challenges or societal needs.
2.2. Use criteria like feasibility,
scalability, market potential, and
alignment with strategic goals.

3. Offer Expert Guidance and Resources

3.1. Provide mentorship and consulting
services from AI and business experts.
3.2. Create toolkits and templates for AI
project implementation.

4. Facilitate Training and Knowledge Sharing


4.1. Organize workshops, webinars, and
hackathons to build AI literacy.
4.2. Encourage collaboration through
networking events or innovation
hubs.

5. Foster Public-Private Partnerships


5.1. Collaborate with tech firms,
academic institutions, and
government bodies for co-funding and
support.
5.2. Leverage partnerships to access
cutting-edge AI research and
infrastructure.

6. Monitor and Evaluate Projects


6.1. Set milestones and KPIs to track the
success of funded initiatives.
6.2. Share success stories to inspire other
businesses and attract new
participants.

7. Ensure Long-Term Sustainability


7.1. Encourage recipients to develop self-
sustaining business models.

7.2. Provide follow-up funding or
resources for scaling successful
projects.
Target 2. Provide Research Support: Support academia in researching and developing novel AI-based solutions, through dedicated programs.
Responsible institutions: AI regulator; AI policy maker; MICSTI; Ministry of Education and Training
Timeline: Planning: 2026 Q2 – 2026 Q3; Implementation: 2026 Q3 – continuous
Measures:
1. Establish Dedicated Research Programs
   1.1. Create funding initiatives for AI-focused academic research.
   1.2. Prioritize areas with high societal or industrial impact, such as healthcare, climate, and automation.
2. Provide Grants and Scholarships
2.1. Offer research grants to institutions
and scholarships for AI-focused
students.
2.2. Fund interdisciplinary projects to
promote collaboration between
fields.

3. Foster Academic-Industry Collaboration


3.1. Partner with industry to align
research goals with real-world
applications.
3.2. Facilitate internships, joint research
initiatives, and knowledge-sharing
programs.

4. Develop Research Infrastructure


4.1. Invest in AI research labs,
computational resources, and access
to datasets.
4.2. Support open-source platforms and
tools for broader accessibility.

5. Organize Competitions and Conferences
5.1. Host AI innovation challenges to
encourage breakthrough ideas.
5.2. Fund and promote academic
participation in global AI
conferences.

6. Monitor and Evaluate Impact


6.1. Regularly review funded projects to
measure progress and outcomes.
6.2. Showcase successful research to
attract more talent and funding.
Target 3. Establish a Regulatory Sandbox: a controlled and supportive environment for innovators to develop, test, and deploy AI while working closely with regulators. It would allow policymakers to evaluate AI applications and risks, ensuring alignment with ethical standards, national priorities, and societal values.
Responsible institutions: AI regulator; AI policy maker; MICSTI
Timeline: Planning: 2026 Q2 – 2026 Q3; Implementation: 2026 Q3 – continuous
Measures:
1. Define Objectives and Scope
   1.1. Set clear goals for the sandbox, focusing on innovation, risk assessment, and regulatory alignment.
   1.2. Identify key areas such as healthcare, finance, and autonomous systems for pilot projects.

2. Create a Controlled Environment


2.1. Develop a secure and monitored
infrastructure to test AI solutions.
2.2. Provide access to datasets,
computational resources, and
technical support.

3. Engage Stakeholders

3.1. Involve regulators, innovators,
academics, and industry experts to
guide the sandbox.
3.2. Establish a dedicated advisory board
for policy alignment and oversight.

4. Establish Testing Protocols


4.1. Define evaluation criteria, including
ethical standards, safety,
transparency, and societal impact.
4.2. Set clear timelines and milestones for
testing and feedback.

5. Provide Legal and Technical Support


5.1. Offer guidance on regulatory
compliance and intellectual property
protection.
5.2. Ensure innovators can access
mentorship and domain-specific
expertise.

6. Monitor, Evaluate, and Iterate


6.1. Collect data on AI system
performance and potential risks
during testing.
6.2. Use findings to refine policies,
standards, and sandbox operations.

7. Promote Knowledge Sharing


7.1. Publish case studies and best
practices from sandbox initiatives.

7.2. Encourage collaboration to enhance
understanding of AI’s benefits and
challenges
Target 4. Establish dedicated Research and Development (R&D) Hubs:
   4.1. AI Research Centres: specialized research groups in universities and other research institutions focusing on AI and related fields, possibly in collaboration with international partners.
   4.2. Innovation Labs: Create spaces where startups and researchers can experiment with AI technologies, offering funding and resources for prototype development.
   4.3. Collaborative Research Programs: Partner with global AI research organizations to share knowledge and build capacity.
Responsible institutions: AI policy maker; MICSTI; Ministry of Education and Training
Timeline: Planning: 2026 Q4 – 2027 Q2; Implementation: 2027 Q3 – continuous
Measures:
1. AI Research Centres
   1.1. Identify universities and institutions with strong research potential in AI.
   1.2. Establish specialized AI research groups focused on priority areas like machine learning, NLP, and robotics.
   1.3. Foster international collaboration by partnering with global AI research leaders and organizations.
2. Innovation Labs
   2.1. Develop state-of-the-art labs equipped with computational resources, tools, and datasets for AI experimentation.
2.2. Provide grants and funding for
startups, researchers, and innovators
to develop and test AI prototypes.
2.3. Host regular hackathons, workshops,
and mentorship programs to drive
creativity and collaboration.

3. Collaborative Research Programs


3.1. Form partnerships with global AI
research bodies to share knowledge
and access cutting-edge
advancements.

3.2. Support joint research initiatives and
exchange programs to build local
expertise.
3.3. Leverage international networks to
co-develop solutions addressing
global challenges.

4. Operational and Governance Framework


4.1. Establish a central governing body to
oversee the hubs, allocate resources,
and ensure alignment with national
priorities.
4.2. Monitor and evaluate R&D outcomes
to measure impact and continuously
improve programs.
Objective 6. Ensuring inclusiveness and ethical usage of AI
Target 1. AI Ethics Guidelines: Develop guidelines that align AI practices with the principles of human rights, fairness, non-discrimination, and inclusivity.
Responsible institution: AI regulator
Timeline: Planning: 2026 Q4 – 2027 Q2; Implementation: 2027 Q3 – continuous
Measures:
1. Define Core Principles
   1.1. Establish foundational values such as human rights, fairness, transparency, non-discrimination, and inclusivity.
   1.2. Align with global frameworks like UNESCO’s AI Ethics Recommendation and the OECD AI Principles.

2. Engage Stakeholders
2.1. Consult diverse stakeholders,
including policymakers, academics,
industry leaders, and civil society, to
ensure broad representation.

2.2. Include marginalized groups to
address inclusivity and non-
discrimination effectively.

3. Draft Comprehensive Guidelines


3.1. Cover areas like data privacy,
accountability, bias mitigation, and
ethical AI design.
3.2. Provide specific directives for
industries and applications with high
societal impact.

4. Establish Implementation Frameworks


4.1. Develop tools, checklists, and metrics
for organizations to operationalize
the guidelines.
4.2. Offer training programs to educate
developers and decision-makers on
ethical AI practices.

5. Ensure Compliance and Monitoring


5.1. Create mechanisms to assess
adherence, such as audits or
certification programs.
5.2. Regularly update guidelines based on
technological advancements and
societal feedback.

6. Promote Public Awareness


6.1. Disseminate the guidelines through
campaigns, workshops, and
partnerships to foster public trust in
AI.
Target 2. Ethical Oversight Committees: Establish independent ethics boards to monitor AI deployments and ensure they adhere to ethical standards.
Responsible institution: AI regulator
Timeline: Planning: 2027 Q4 – 2028 Q2; Implementation: 2028 Q3 – continuous
Measures:
1. Define Mandate and Objectives
   1.1. Set the committee’s scope to include monitoring AI deployments, ensuring adherence to ethical standards, and addressing public concerns.

2. Assemble Diverse Expertise


2.1. Include members from academia,
industry, government, civil society,
and ethics specialists to ensure
balanced perspectives.
2.2. Ensure representation from
underrepresented groups to address
inclusivity.

3. Establish Operational Framework


3.1. Define processes for reviewing AI
projects, assessing risks, and
providing recommendations.
3.2. Develop protocols for regular audits,
reporting, and stakeholder
engagement.

4. Provide Resources and Authority


4.1. Equip committees with access to
necessary data, technical tools, and
legal authority to enforce
recommendations.

5. Monitor and Evaluate Deployments

5.1. Regularly assess AI systems for
compliance with ethical guidelines,
focusing on fairness, accountability,
and transparency.

6. Ensure Public Transparency


6.1. Publish reports and findings to
maintain public trust and
demonstrate accountability.

7. Adapt and Update Practices


7.1. Continuously refine oversight
mechanisms based on emerging
technologies and societal needs.
Target 3. AI Safety Protocols: Mandate the use of risk assessment and mitigation protocols for high-risk AI systems (e.g., those impacting healthcare, finance, or public safety).
Responsible institution: AI regulator
Timeline: Planning: 2026 Q4 – 2027 Q2; Implementation: 2027 Q3 – continuous
Measures:
1. Define High-Risk AI Systems
   1.1. Identify sectors and applications (e.g., healthcare, finance, public safety) where AI poses significant risks.
   1.2. Establish criteria for categorizing systems as high-risk based on potential impact.

2. Develop Risk Assessment Standards


2.1. Create standardized protocols for
assessing risks, including bias
detection, data security, and system
reliability.
2.2. Mandate scenario testing, stress
testing, and fail-safe mechanisms for
high-risk systems.

3. Mandate Mitigation Measures
3.1. Require safeguards like human-in-the-loop mechanisms, explainability features, and real-time monitoring (a minimal human-in-the-loop sketch follows these measures).
3.2. Ensure systems are designed to
minimize harm and allow for quick
intervention in case of failures.

4. Enforce Regulatory Compliance


4.1. Establish legal requirements for risk
assessments and documentation
before deployment.
4.2. Introduce penalties or suspension for
non-compliance.

5. Monitor and Audit Systems


5.1. Implement regular auditing and post-
deployment monitoring to evaluate
safety and compliance.
5.2. Use independent third parties to
review systems objectively.

6. Provide Training and Resources


6.1. Offer guidelines, tools, and training
to developers and operators for
implementing safety protocols
effectively.

7. Continuously Update Protocols


7.1. Adapt protocols based on
technological advancements and
feedback from stakeholders.
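As referenced in measure 3.1, the following is a minimal illustrative sketch (in Python) of a human-in-the-loop safeguard in which low-confidence outputs of a high-risk system are escalated to a human reviewer; the threshold, example decisions, and confidence values are hypothetical.

CONFIDENCE_THRESHOLD = 0.90  # hypothetical policy value for a high-risk use case

def route_decision(prediction: str, confidence: float) -> str:
    """Decide whether a model output may be acted on automatically."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-process: {prediction}"
    # Below the threshold: defer to a human reviewer and keep a record for audit.
    return f"escalate to human review: {prediction} (confidence={confidence:.2f})"

print(route_decision("benefit claim approved", 0.97))
print(route_decision("benefit claim rejected", 0.62))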

Target 4. Regular Audits and Reporting: Require organizations to conduct regular audits of their AI systems and report on performance, safety, and ethical compliance.
Responsible institution: AI regulator
Timeline: Planning: 2027 Q4 – 2028 Q2; Implementation: 2028 Q3 – continuous
Measures:
1. Establish Audit Standards
   1.1. Define clear criteria for evaluating AI systems, including performance, safety, fairness, and ethical compliance.
   1.2. Tailor standards to industry-specific requirements and risks.

2. Mandate Audit Frequency


2.1. Require periodic audits (e.g.,
annually or bi-annually) for all
operational AI systems.
2.2. Increase audit frequency for high-risk
or critical systems.

3. Require Transparent Reporting


3.1. Develop templates for organizations to report audit findings, highlighting compliance, risks, and mitigation efforts (an illustrative report record is sketched after these measures).
3.2. Make key results accessible to
stakeholders and regulators.

4. Engage Independent Auditors


4.1. Encourage the use of third-party
auditors to ensure objectivity and
credibility.
4.2. Establish accreditation standards for
audit firms specializing in AI.

5. Implement Regulatory Oversight

5.1. Create a central regulatory body to
review audit reports, track
compliance, and enforce corrective
actions.
5.2. Impose penalties for non-compliance
or incomplete reporting.

6. Support Organizations
6.1. Provide resources, tools, and training
to help organizations conduct
effective audits.
6.2. Share best practices to improve audit
quality across industries.

7. Update Processes Regularly


7.1. Revise audit and reporting
requirements based on technological
advancements and feedback.
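As referenced in measure 3.1, the following is a minimal illustrative sketch (in Python) of a machine-readable audit report record of the kind a reporting template might capture; every field name and value is hypothetical and does not represent a prescribed national format.

import json

# Hypothetical audit report record; field names and values are illustrative only.
audit_report = {
    "system_name": "eligibility-screening-model",
    "audit_period": "2028 H1",
    "risk_category": "high",
    "findings": {
        "performance": "accuracy within agreed tolerance",
        "safety": "fail-safe mechanisms tested, no critical issues",
        "ethical_compliance": "one bias finding, mitigation plan attached",
    },
    "corrective_actions_due": "2028-09-30",
    "auditor": "accredited third party",
}

print(json.dumps(audit_report, indent=2))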
Target 5. Bias Mitigation Programs: Develop initiatives to identify and reduce biases in AI models, particularly to prevent discrimination and promote equity.
Responsible institutions: AI regulator; AI policy maker
Timeline: Planning: 2026 Q4 – 2027 Q2; Implementation: 2027 Q3 – continuous
Measures:
1. Establish Guidelines
   1.1. Define standards for identifying and addressing bias in AI models.
2. Implement Bias Audits
   2.1. Mandate regular evaluations of AI systems for fairness and equity.

3. Develop Tools and Resources


3.1. Provide open-source tools for bias detection and mitigation (a minimal bias-metric sketch follows these measures).

4. Offer Training Programs

4.1. Educate developers and stakeholders
on bias prevention techniques.

5. Monitor and Adapt


5.1. Continuously improve programs based
on advancements and feedback.
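As referenced in measure 3.1, the following is a minimal illustrative sketch (in Python) of one simple bias check, the demographic parity difference between two groups; the outcome data are hypothetical, and a real audit would combine several complementary metrics and established open-source toolkits.

def favourable_rate(outcomes):
    """Share of favourable (True) outcomes for one group."""
    return sum(outcomes) / len(outcomes)

# Hypothetical outcomes of an AI-assisted decision for two demographic groups.
group_a = [True, True, False, True, True, False, True, True]
group_b = [True, False, False, True, False, False, True, False]

parity_gap = favourable_rate(group_a) - favourable_rate(group_b)
print(f"Demographic parity difference: {parity_gap:.2f}")
# A gap far from zero signals that the model or process warrants closer review.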
Target 6. Public Input Mechanisms: Create channels for public participation in discussions about AI deployment and its societal impacts.
Responsible institution: AI regulator
Timeline: Planning: 2026 Q2 – 2026 Q3; Implementation: 2026 Q3 – continuous
Measures:
1. Establish Engagement Platforms
   1.1. Create online portals and community forums for public feedback on AI initiatives.
2. Host Public Consultations
2.1. Organize town halls, webinars, and
workshops to discuss AI deployment
and societal impacts.

3. Use Surveys and Polls


3.1. Collect public opinions through
targeted surveys on AI-related
policies and applications.

4. Promote Awareness
4.1. Educate the public on AI technologies
to facilitate informed participation.

5. Incorporate Feedback
5.1. Regularly review and integrate public
input into AI policy-making and
deployment plans.
Objective 7: Promoting International Alignment and Sustainability

Target 1. Global Standards Alignment: Ensure that Lesotho's regulations are in line with international best practices to facilitate trade, collaboration, and data-sharing agreements.
Responsible institutions: AI policy maker; AI regulator
Timeline: Planning: 2026 Q4 – 2027 Q2; Implementation: 2027 Q3 – continuous
Measures:
1. Review International Standards
   1.1. Study global AI regulations, such as the GDPR, OECD AI Principles, and UNESCO guidelines, to identify relevant frameworks.

2. Adapt Local Regulations


2.1. Align Lesotho’s AI policies and
regulations with international best
practices, ensuring compatibility for
trade and collaboration.

3. Engage with Global Bodies


3.1. Participate in international AI forums
and working groups to stay updated
and contribute to global standard
development.

4. Establish Data-Sharing Agreements


4.1. Create legal frameworks that
facilitate secure cross-border data
exchange in line with international
norms.

5. Monitor and Update Regulations


5.1. Continuously review and update local
regulations based on evolving
international standards and emerging
technologies.
Target 2. Participation in Global Forums: Engage in international AI policy discussions to stay informed on emerging trends and adapt to global norms.
Responsible institutions: AI policy maker; AI regulator
Timeline: Planning: 2026 Q4 – 2027 Q2; Implementation: 2027 Q3 – continuous
Measures:
1. Identify Key Forums
   1.1. Select relevant global AI policy forums, such as the OECD, G7, or AI-related UN initiatives, for engagement.
2. Delegate Representation
2.1. Appoint government representatives,
experts, and stakeholders to actively
participate in discussions and
knowledge exchange.

3. Monitor Trends
3.1. Regularly track emerging AI trends,
policies, and best practices shared in
these forums.

4. Adapt National Policies


4.1. Align local AI policies with global
norms and trends based on insights
gained from international
participation.

5. Foster Global Collaboration


5.1. Build partnerships with international
organizations to collaborate on AI
research, regulation, and ethical
standards.
Target 3. Regional AI Partnerships: Work with neighbouring countries and regional bodies to share knowledge, resources, and best practices.
Responsible institutions: AI policy maker; AI regulator
Timeline: Planning: 2026 Q4 – 2027 Q2; Implementation: 2027 Q3 – continuous
Measures:
1. Identify Regional Stakeholders
   1.1. Engage neighbouring countries, regional bodies, and local institutions to form AI collaboration networks.
2. Establish and Join Knowledge-Sharing Platforms
2.1. Create forums, workshops, and joint
research initiatives to exchange AI
knowledge and expertise.

3. Collaborate on Policy Development


3.1. Develop shared AI policies and best
practices that address regional
challenges and opportunities.

4. Pool Resources and Infrastructure


4.1. Collaborate on AI infrastructure,
funding, and research to maximize
regional impact.

5. Foster Cross-Border Projects


5.1. Launch joint AI initiatives to solve
common regional issues and improve
socio-economic development.
Target 4. Global AI Initiatives: Engage in international AI summits and collaborations to stay updated on global standards and advancements.
Responsible institutions: AI policy maker; AI regulator
Timeline: Planning: 2026 Q4 – 2027 Q2; Implementation: 2027 Q3 – continuous
Measures:
1. Identify Key Summits and Collaborations
   1.1. Select major international AI summits, conferences, and partnerships to participate in, such as the Global AI Summit or World Economic Forum.

2. Send Delegates and Experts


2.1. Ensure representation from
government, industry, and academia
to actively engage in discussions and
partnerships.

3. Track Global Advancements

3.1. Stay informed on cutting-edge AI
trends, regulatory developments, and
technological breakthroughs.

4. Contribute to Global Dialogue


4.1. Actively share local insights and
innovations to influence global AI
discussions and policies.

5. Implement Learnings Locally


5.1. Adapt and integrate international
best practices, standards, and
advancements into national AI
strategies.

Key Indicators (Baseline 2025 | Intermediate Targets 2028 | End Target 2030):
1. AI governance establishment (AI policy maker, AI regulator, Data and AI Committee): 0 | 3 | 3
2. Research funding: to be assessed | 30% increase | 50% increase
3. AI research groups in universities: 0 | 1 | 2
4. Trained policymakers and public officials: 10 | 100 | all policy makers and public officials
5. Updated school curricula: 0 | 5 | all
6. Updated university programs: 0 | 50% | 100%
7. Participants of upskilling and reskilling programs: 0 | 100 | 500
8. Educational portal: 0 | 1 | 1
9. TV programs: 0 | 1 | 1
10. Social media campaigns (channels): 1 | 3 | 3
11. Community events per year: 2 | 4 | 5
12. Research exchanges per year: 0 | 3 | 5
13. Innovation fund (establish and increase funding): 0 | 1 (20%) | 1 (50%)
14. Regulatory sandbox: 0 | 1 | 1
15. R&D hubs: 0 | 0 | 1
16. Channel for public input: 0 | 1 | 1
17. Participation in the global forums (events) per year: 2 | 4 | 4
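To illustrate how progress against the indicators above could be monitored, the following is a minimal sketch (in Python) comparing a few intermediate 2028 targets from the table with current values; the current values are hypothetical placeholders, not reported figures.

# Intermediate (2028) targets taken from the indicator table above.
targets_2028 = {
    "AI governance bodies established": 3,
    "AI research groups in universities": 1,
    "Trained policymakers and public officials": 100,
    "Participants of upskilling and reskilling programs": 100,
}

# Hypothetical monitoring data for an interim review.
current_values = {
    "AI governance bodies established": 2,
    "AI research groups in universities": 1,
    "Trained policymakers and public officials": 40,
    "Participants of upskilling and reskilling programs": 55,
}

for indicator, target in targets_2028.items():
    value = current_values[indicator]
    status = "on track" if value >= target else f"{target - value} short of target"
    print(f"{indicator}: {value}/{target} ({status})")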

6.3 IMPLEMENTATION RISKS AND MEASURES FOR MITIGATION
Note on challenges related to risk assessment and mitigation measures for the Policy Implementation.21

Acknowledging the importance of each Objective and of the measures to be taken in line with the guiding principles, the key challenges should be identified in order to evaluate the risk of the Policy not being implemented within the agreed measures or timeline. The key challenges are:

1. Stakeholder coordination: insufficient coordination might result in potential delays in aligning priorities and ensuring consensus among all involved parties.
2. Resource limitations: ensuring sufficient financial, human, or technical resources to support timely
and effective implementation. Lack of any of these resources might result in an inability to achieve
the intended Objectives within the agreed timeline.
3. Regulatory and market situation: Unforeseen changes or delays in the regulatory framework or
market dynamics might affect the planned content or timeline.
4. External circumstances: Geopolitical, technological, or economic factors might cause unforeseen
delays or need for readjustments.

Risk assessment is vital to properly monitor the Policy and Implementation plan. Planned mitigation
measures enable a timely response to ensure that the Objectives are reached without significant
deviation. Mitigation measures are usually divided into preventive actions to reduce the likelihood of
risks, comprehensive mechanisms to identify issues early, corrective steps to address problems as they
arise, risk transfer where the responsibility is passed to other parties, and contingency plans to minimize
their impact and ensure continuity. In line with the Policy, the following mitigation measures should be considered:

• Collaborative approach: foster close collaboration and information distribution among stakeholders to ensure the commitment to the target and respective measures.
• Proactive monitoring: implement a robust progress monitoring framework with regular reporting
and assessment milestones.
• Timely identification of issues: establish mechanisms to detect insufficient progress early,
taking into account agility, and enabling corrective actions to be taken without undue delays.
• Clear accountability and transparency: define roles and responsibilities clearly to ensure
accountability at every stage of implementation. The Policy encourages that relevant authorities from different frameworks be involved from the very beginning of the process (even before legislative initiatives) to ensure a proper level of collaboration and capacity building during and after the adoption of particular requirements.

21 The consultant proposes for the MICSTI to consider including this note - although it might sound repetitive, considering the principles that risk assessment stems from, the key point is to acknowledge risk and take measures accordingly.

7. ANNEXES

7.1. ANNEX NO. 1 INTERNATIONAL CONTEXT


1. Lessons from the best practices worldwide
1.1. European Union
The European Union (EU) has developed one of the world's most comprehensive approaches to artificial
intelligence regulation, primarily through the AI Act, which officially entered into force on August 1,
2024. This legislation establishes a risk-based regulatory framework to govern AI development and
deployment while safeguarding citizens' rights, health, and safety.
Key Aspects of the EU AI Act:
1. Risk-Based Approach (illustrated in the sketch after this list):
1.1. Minimal Risk: Applications like spam filters or AI-enabled games have no mandatory
obligations.
1.2. Limited Risk: Systems such as chatbots must disclose that users are interacting with AI.
1.3. High Risk: Areas such as healthcare or recruitment require stringent oversight, including
quality data standards and human oversight mechanisms.
1.4. Unacceptable Risk: Practices like AI-driven “social scoring” or harmful biometric
surveillance are banned outright.
2. General-Purpose AI (GPAI):
2.1. The Act includes provisions for GPAI systems like large language models. A Code of Practice
is under development to ensure transparency, copyright compliance, and risk management.
This will be enforced by the EU's newly established AI Office.
3. Timeline:
3.1. Full compliance for most provisions begins in 2026, though bans on prohibited uses and initial
rules for GPAI are effective sooner.
4. Broader Goals:
4.1. Promote innovation while ensuring AI aligns with EU values such as privacy, human rights,
and fairness.
4.2. Create a harmonized regulatory environment to foster cross-border AI deployment within
the EU.
5. The EU AI Act is a model for balancing technological progress with ethical concerns and is seen
as a global benchmark for AI regulation.
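Purely as an illustration of the risk-based logic described in item 1 above (and not as part of the Act itself), the following Python sketch encodes the four tiers and the kind of obligation attached to each; the example systems and their tier assignments are simplified and hypothetical.

# Obligations per tier, paraphrased from the description above.
OBLIGATIONS = {
    "minimal": "no mandatory obligations",
    "limited": "transparency duties (e.g., disclose that users interact with AI)",
    "high": "stringent oversight: quality data standards and human oversight",
    "unacceptable": "prohibited practice; may not be deployed",
}

# Hypothetical, simplified tier assignments for a few example systems.
example_systems = {
    "spam filter": "minimal",
    "customer-service chatbot": "limited",
    "recruitment screening tool": "high",
    "social scoring system": "unacceptable",
}

for system, tier in example_systems.items():
    print(f"{system}: {tier} risk -> {OBLIGATIONS[tier]}")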
Key references to European Union AI policies and regulations:
1. Overall EU approach to AI22, which provides a comprehensive compendium of diverse EU policies
and documents.
2. The EU AI Act23,24.

22 European Commission, “European approach to artificial intelligence,” // https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence.
23 Directorate-General for Communication, “AI Act enters into force,” 1 8 2024 // https://commission.europa.eu/news/ai-act-enters-force-2024-08-01_en.
24 European Parliament, “Artificial Intelligence Act,” European Parliament, 2024 // https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf.

2.1. Risk-Based Framework: The Act categorizes AI systems into four risk levels: minimal risk,
limited risk, high risk, and unacceptable risk. High-risk systems face strict compliance
requirements, while unacceptable-risk AI practices, such as social scoring, are banned.
2.2. General-Purpose AI (GPAI): Includes provisions for large models like ChatGPT to ensure
transparency, safety, and ethical use.
2.3. Implementation Timeline: Entered into force in August 2024, with full application expected
by 2026, after a phased rollout.
3. Ethical Guidelines for Trustworthy AI by the High-Level Expert Group on AI: these guidelines
focus on promoting human-centric AI by emphasizing fairness, transparency, and accountability
in AI applications25.
4. Related policies, such as GDPR26 and Digital Services Act (DSA)27.
5. The OECD.AI Policy Observatory and the Global Partnership on AI28.
1.2. Lithuania

Lithuania has adopted a strategic yet evolving approach to AI policy and regulation, leveraging national
and European frameworks to align with technological advancements and address potential risks. The
key elements of Lithuania's AI policies and regulatory initiatives are the following:
1. Strategic Objectives.
1.1. Research and Innovation: Lithuania focuses on strengthening its AI research ecosystem
through initiatives like the establishment of national research centres, fostering AI adoption
in sectors such as manufacturing, healthcare, and agriculture, and encouraging collaboration
between public and private stakeholders29, 30.
1.2. Education and Skills Development: AI education is emphasized at all levels, from early
schooling to advanced degrees. Initiatives include updating STEM education, promoting AI in
vocational training, and offering lifelong learning opportunities31.
2. Regulatory Framework.
2.1. Ethics and Trust: Lithuania is working on establishing an AI ethics committee and developing
ethical guidelines to ensure transparency, fairness, and safety. The regulatory framework is
designed to align with the upcoming EU AI Act and other European standards32.
2.2. Testing Environments: Regulatory sandboxes are being created to allow testing and piloting
of AI solutions within controlled conditions, fostering innovation while managing risks.
The Innovation Agency is responsible for this topic in Lithuania.

25 European Commission, “Ethics guidelines for trustworthy AI,” European Commission, 2019 // https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
26 European Parliament, “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR),” 27 4 2016 // https://eur-lex.europa.eu/eli/reg/2016/679/oj.
27 European Parliament, “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR),” 27 4 2016 // https://eur-lex.europa.eu/eli/reg/2016/679/oj.
28 OECD, “Policies, data and analysis for trustworthy artificial intelligence,” // https://oecd.ai/en/.
29 A. Macijauskienė, V. Stančikė and R. Jankauskytė, “AI, Machine Learning & Big Data Laws and Regulations 2024 – Lithuania,” Global Legal Insights, 2024 // https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/lithuania/.
30 European Commission, “National strategies on Artificial Intelligence: Country Report - Lithuanian,” The European Commission’s Science and Knowledge Service, 2020 // https://ai-watch.ec.europa.eu/countries/lithuania/lithuania-ai-strategy-report_en/.
31 European Commission, “National strategies on Artificial Intelligence: Country Report - Lithuanian,” The European Commission’s Science and Knowledge Service, 2020 // https://ai-watch.ec.europa.eu/countries/lithuania/lithuania-ai-strategy-report_en/.
32 AI Watch, “Lithuania: Public Sector dimension of AI Strategy,” 2019 // https://ai-watch.ec.europa.eu/topics/public-sector/public-sector-dimension-ai-national-strategies/lithuania-public-sector-dimension-ai-strategy_en.

3. Public Sector Integration.
3.1. AI is being integrated into public administration to enhance efficiency and citizen services.
This includes building an AI-friendly data environment, improving data management, and
enabling public-private partnerships. GovTechLab33 initiative is providing services and
support for the different public entities interested in deploying and using AI.
4. Data and Infrastructure.
4.1. Lithuania has initiated a centralized open data hub to improve the accessibility and usability
of data for AI systems, adhering to FAIR (Findable, Accessible, Interoperable, Reusable)
principles34.
5. Alignment with EU Policies:
5.1. The country is actively preparing for the implementation of the EU AI Act, which will provide
a uniform regulatory structure across member states while addressing local needs35,36.
6. Lithuania has certain elements of enabling policies and regulations, such as GDPR, Information
systems, etc.
Lithuania is still working on standalone national AI legislation; nevertheless, these strategies and recommendations position it as a forward-thinking participant in the global AI ecosystem.
1.3. Compendiums of the best international practices

Based on today’s best AI practices, the following items are of the utmost importance in developing AI policies.
1. Adopt a Risk-Based Framework
Categorize AI systems based on their risk to safety, privacy, and societal impact. For
example, the EU AI Act classifies AI into minimal, limited, high, and unacceptable risk
categories, applying stricter regulations to higher-risk systems.
2. Ensure Transparency and Explainability
Mandate that AI systems provide clear explanations of how decisions are made,
especially in sensitive areas like healthcare or legal settings. Transparency builds public
trust and accountability.
3. Prioritize Ethical AI Development
Implement guidelines to promote fairness, prevent bias, and ensure AI respects
fundamental rights. The OECD AI Principles emphasize human-centric AI, benefitting
society without causing harm.
4. Protect Data Privacy and Security
Integrate data protection laws like the GDPR into AI policies to ensure ethical handling
of data, protect user privacy, and prevent unauthorized exploitation.

33 GovTech Lab, “GovTech Lab,” 2024 // https://govtechlab.lt.
34 AI Watch, “Lithuania: Public Sector dimension of AI Strategy,” 2019 // https://ai-watch.ec.europa.eu/topics/public-sector/public-sector-dimension-ai-national-strategies/lithuania-public-sector-dimension-ai-strategy_en.
35 GovTech Lab, “GovTech Lab,” 2024 // https://govtechlab.lt.
36 A. Macijauskienė, V. Stančikė and R. Jankauskytė, “AI, Machine Learning & Big Data Laws and Regulations 2024 – Lithuania,” Global Legal Insights, 2024 // https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/lithuania/.

5. Establish Clear Governance Structures
Create independent oversight bodies to monitor AI use, enforce compliance, and provide
recommendations. For example, the EU AI Act introduces an AI Board to ensure
consistent implementation across member states.
6. Promote Responsible Innovation
Provide incentives and regulatory sandboxes for testing AI in controlled environments to
encourage innovation without compromising safety.
7. Engage Stakeholders
Involve diverse stakeholders, including governments, industry, academia, and civil
society, in policymaking to address broad perspectives and ensure policies are practical
and inclusive.
8. Address Socio-Economic Impacts
Develop strategies for workforce reskilling, upskilling, and social protections to mitigate
the economic impacts of AI-driven automation.
9. Encourage International Collaboration
Work with global organizations like the UN, OECD, or GPAI to align AI regulations and
standards, facilitating cross-border innovation and mitigating risks like AI misuse. In the
case of Lesotho, cooperation with neighbouring countries and regional unions such as
South Africa, The African Union37, the Southern African Development Community38, the
Commonwealth39, and the African Development Bank40 could be beneficial.
10. Focus on Education and Awareness
Promote AI literacy through public campaigns and education systems to ensure citizens
understand and responsibly engage with AI technologies.
7.2. ANNEX NO. 2 INSTITUTIONAL ARRANGEMENTS
1. Roles and responsibilities of the main stakeholders
Roles and responsibilities of the main stakeholders are provided in Table 1 (Roles and Responsibilities of the Main Stakeholders in AI Governance). However, it is important to mention that tight cooperation of all the stakeholders is a must to secure the smooth implementation of responsible AI governance in Lesotho; for example, only joint groups can ensure activities such as the following:
1. Data Sharing Agreements: Establish frameworks for securely sharing data across sectors.
2. Joint Research Initiatives: Combine resources from academia, government, and private sectors
for impactful AI research.
3. Public AI Committee, consisting of actors outlined in the table below:

37 The African Union, “The African Union,” // https://au.int/en/.
38 The Southern African Development Community, “The Southern African Development Community,” // https://www.sadc.int.
39 Commonwealth Secretariat, “The Commonwealth,” // https://thecommonwealth.org.
40 African Development Bank, “African Development Bank,” // https://www.afdb.org/en.

70
Table 1. Roles and Responsibilities of the Main Stakeholders in AI Governance

1. Government (MICSTI as a representative of the government)
Description: The government is the primary driver of AI governance, responsible for policymaking, regulation, and implementation.
Responsibilities:
1. Policy Development: Draft and implement AI policies and national strategies that align with Lesotho's development goals.
2. Legislation: Enact laws on AI ethics, data protection, privacy, and intellectual property.
3. Funding and Support: Allocate resources for AI research, innovation, and public sector adoption.
4. Regulatory Oversight: Establish regulatory bodies to oversee AI deployment, ensure compliance with laws, and mitigate risks.
5. Public Services: Use AI to improve healthcare, education, agriculture, and public administration.
6. Collaboration: Partner with international organizations, regional bodies, and private companies to ensure best practices and resource-sharing.

2. Regulatory Authorities
Description: Independent or semi-autonomous regulatory bodies ensure compliance with AI policies and address ethical, legal, and technical challenges.
Responsibilities:
1. Ethics Oversight: Monitor the ethical use of AI, preventing discrimination, bias, and misuse.
2. Data Governance: Oversee the secure and fair use of data, ensuring adherence to privacy regulations.
3. Standards Development: Define technical and operational standards for AI systems.
4. Risk Management: Identify and mitigate risks associated with AI deployment in critical sectors.
5. Dispute Resolution: Address conflicts or complaints related to AI systems and their impact.

3. Academia and Research Institutions
Description: Universities and research organizations contribute to capacity building, innovation, and critical analysis of AI governance frameworks.
Responsibilities:
1. Research and Development: Conduct AI-related research in areas like natural language processing, predictive analytics, and ethical AI design.
2. Capacity Building: Develop curricula and training programs to upskill the workforce in AI technologies.
3. Policy Advisory: Provide evidence-based recommendations to inform policymaking and governance frameworks.
4. Collaboration: Partner with international institutions to enhance local research capabilities and knowledge exchange.
5. Ethical Studies: Explore the social, economic, and ethical implications of AI in Lesotho's context.

4. Private Sector
Description: Companies, startups, and tech firms play a pivotal role in driving innovation, investing in AI technologies, and creating practical applications.
Responsibilities:
1. Innovation: Develop AI solutions tailored to local challenges (e.g., agriculture, healthcare, financial inclusion).
2. Adherence to Standards: Ensure compliance with national and international AI regulations and ethical standards.
3. Data Sharing: Collaborate with public and research sectors to provide datasets for training AI models, while respecting privacy laws.
4. Investment: Invest in AI research, infrastructure, and workforce development.
5. Public-Private Partnerships (PPPs): Partner with the government on AI-driven projects to improve public services.

5. Civil Society Organizations (CSOs) and Non-Governmental Organizations (NGOs)
Description: CSOs and NGOs advocate for the ethical use of AI and ensure that vulnerable populations are not left behind in the AI revolution.
Responsibilities:
1. Advocacy: Promote the ethical and inclusive use of AI while highlighting risks such as bias, inequality, or exclusion.
2. Awareness Campaigns: Educate citizens about AI, its benefits, and potential risks.
3. Monitoring and Accountability: Act as watchdogs to hold governments and private entities accountable for the responsible use of AI.
4. Community Engagement: Ensure the voices of marginalized communities are included in AI policymaking.
5. Capacity Building: Provide training and resources for communities to understand and benefit from AI technologies.

6. International Organizations and Development Partners
Description: Global organizations and partners provide technical, financial, and policy support to help Lesotho establish its AI governance framework.
Responsibilities:
1. Technical Assistance: Provide expertise and resources for drafting policies, setting standards, and building infrastructure.
2. Capacity Building: Support training programs and workshops to enhance local expertise in AI.
3. Funding: Offer financial aid or grants for AI research, innovation hubs, and governance projects.
4. Knowledge Sharing: Share international best practices and facilitate collaboration with other countries and regions.
5. Policy Alignment: Help Lesotho align its AI governance framework with international standards, such as the OECD AI Principles, UNESCO's AI Ethics Guidelines, African Union policies, etc.

7. Media and the Public
Description: The media and citizens play a crucial role in shaping public discourse, promoting transparency, and holding stakeholders accountable.
Responsibilities:
1. Media:
1.1. Educate the public about AI and its implications.
1.2. Investigate and report on potential misuse or ethical breaches in AI systems.
1.3. Facilitate dialogue between stakeholders and the public.
2. Citizens:
2.1. Provide feedback on AI systems and governance policies.
2.2. Advocate for transparency, fairness, and inclusivity in AI applications.
2.3. Participate in public consultations and discussions to shape AI policies.