
CHAPTER 3: THE POLICIES OF AI AND TRADE


(a) AI and trade: key policy considerations

The discussion of how AI might reshape international trade raises important policy questions. The future of AI and international trade hinges on the policy choices of governments and on the strategies and priorities of industries and businesses.

(i) Addressing the growing AI divide

To leverage the opportunities of AI, the digital divide between economies, in terms of both digital infrastructure and skills, must be addressed. As discussed in Chapter 2, ensuring that workers and firms are prepared to adopt AI involves robust digital infrastructure and trained human capital. Digital infrastructure is a crucial determinant of information and communications technology (ICT) adoption, and can lay the foundation for the diffusion and localized application of AI technology (Nicoletti et al., 2020). Nonetheless, such infrastructure is of limited use without a skilled workforce capable of leveraging digital platforms for innovative workplace applications.

Wealthier economies, including advanced and some emerging market economies, are generally better prepared than low-income economies to adopt AI. As illustrated in Figure 3.1, both the Digital Infrastructure Index and the Human Capital and Labor Market Policies Index — components of the IMF's AI Preparedness Index — are positively correlated with income levels. Higher-income economies tend to have stronger digital infrastructure and more trained human capital, making them more equipped to adopt AI technologies.

To address the AI divide, it is crucial to invest in digital infrastructure to ensure that low-income economies have the necessary technological foundation to support AI adoption. Governments and the private sector could collaborate to expand high-speed internet access, improve electricity infrastructure, particularly through renewable energy generation, enhance data storage capabilities, and develop robust cybersecurity measures. Public policies need to incentivize infrastructure development in underserved areas, and international cooperation should focus on providing technical and financial assistance to developing economies. In addition to infrastructure, bridging the AI divide requires a substantial investment in human capital to equip individuals with the skills they need to utilize AI technologies effectively. Education and training programmes should include AI literacy, coding, data analysis and other relevant skills. Public-private partnerships can play a key role in these efforts, as companies can offer practical training and resources, while governments can provide the necessary regulatory support, access to affordable devices and connectivity, and funding (see the opinion piece by James Manyika).

Figure 3.1: AI preparedness is higher in advanced economies

[Two scatter plots showing the Digital Infrastructure Index and the Human Capital and Labor Market Policies Index (both rescaled to range from 0 to 1, vertical axes) plotted against GNI per capita (current US$, 0 to 120,000, horizontal axes), distinguishing low-income economies, emerging market economies and advanced economies.]

Source: International Monetary Fund (IMF) AI Preparedness Index (Cazzaniga et al., 2024). The indices are rescaled to range between 0 and 1.


Opinion piece

Harnessing technology to advance shared prosperity

James Manyika
Senior Vice President of Research, Technology and Society at Google; Co-Chair of the United Nations Secretary-General's High-Level Advisory Body on Artificial Intelligence

Artificial intelligence is the most important technology of the present era, offering the potential to make people's everyday lives easier, power economic growth, help middle-class and lower-income workers, drive scientific and health advances, and address longstanding development challenges. In a world that is on track to meet only 15 per cent of the UN's Sustainable Development Goals (SDGs), AI provides an opportunity to reverse that trendline and contribute to progress on 79 per cent of our shared global goals (Hoyer Gosselink et al., 2024).

However, while the economic and societal opportunity offered by AI is immense, we must always remember that the benefits of new technologies are not automatic. At this moment of excitement, it is important to take a step back and consider the history of trade and technology – and make a concerted effort to build an inclusive trading system around AI that avoids the creation of an "AI divide."

The combined forces of technology and trade helped lift over a billion people out of extreme poverty – an achievement unparalleled in human history.1 This dramatically changed global development, speeding up the flow of data and enabling smaller companies and economies to participate in trade. By 2016, digital flows – often a key aspect of other global flows such as manufacturing, services and financial flows and other intangibles – had begun to exert a larger impact on GDP growth than the centuries-old trade in goods (Manyika et al., 2016), with a surprising share of the benefits from digital trade going to the services, manufacturing and retail sectors (The White House, 2024). These trends on digital and digitally enabled global flows have only accelerated since 2016, with some estimating that up to 40 per cent of GDP now depends on global flows (Seong et al., 2022).

Now with AI, economies stand on the verge of an even more profound economic and scientific transformation that may fundamentally shift and reshape trading patterns. But it is critical to avoid creating an "AI divide" (Ossa, 2023). As of 2023, 93 per cent of people in high-income economies use the internet, compared with only 27 per cent of people in low-income economies (ITU, 2023). The UN's AI Advisory Body has rightly concluded that these divides cannot persist into the AI era (United Nations, 2024a).

We must work together across companies, governments and civil society to harness AI to advance a vision of shared prosperity. At a national level, this means providing access to AI-capable cloud infrastructure, computing capacity, developer tools and datasets relevant to AI development, while equipping workers and students with foundational AI skills that provide pathways to the modern workforce. Most crucially, we must place small businesses and traditional industries like manufacturing and agriculture at the forefront of AI leadership.

At the international level, driving a vision of shared prosperity will require expanding what we think of as "trade" – not just removing barriers to cross-border goods and services, but advancing a global strategy to drive alignment on AI governance and security, support trusted data flows, enable economic integration, and build solutions to cross-border challenges. An inclusive AI strategy must also drive investment in the subsea and terrestrial cables that enable participation in the modern economy (Quigley, 2024).

At its heart, this is a modern form of capacity building, with governments, industry and civil society working together to invest in AI infrastructure, build a global resource for AI research and develop training programmes that promote AI diffusion across sectors and geographies.

In contrast, if economies cannot align trade with the mission of shared prosperity, there is a risk that AI will only be adopted by wealthier economies, and by the wealthiest industries within those economies. This would be harmful not just from an equity perspective but also from an economic perspective – the trillions in potential economic benefits from AI are conditional on broad-based adoption of these technologies, not usage by the privileged few.

The choice is ours. Together, let's build a trade agenda that harnesses the transformative power of AI for all people, regardless of geographical location or economic status.

Disclaimer
Opinion pieces are the sole responsibility of their authors. They do not necessarily reflect the opinions or views of WTO members or the WTO Secretariat.


Figure 3.2: Number of AI patent filings by geographic regions, 2013-22

[Chart of annual AI patent filings, 0 to 140,000, for each year from 2013 to 2022, for Australia, Canada, China, the European Union and the United Kingdom, India, Japan, the Republic of Korea, the United States and other economies.]

Source: Centre for Security and Emerging Technology (2024).

In addition to differences in the ability to adopt AI technologies, the AI divide across economies, reflected in AI research and development (R&D), investment and expertise, highlights the need to address gaps in AI capabilities. There is a significant divide between economies leading in AI R&D and the rest, especially developing economies and least-developed countries (LDCs). As illustrated in Figure 3.2, China is by far the leading economy in terms of the number of patents registered, with 86,663 AI patent applications and 15,869 patents granted in 2022, followed by the Republic of Korea and the United States.2 This disparity reflects the underlying technological differences between economies and underscores the importance of facilitating technology dissemination and technical assistance to bridge the gap globally. There is a marked division between where the research, patents and investments in AI are located and where they are lacking, and there is a growing risk of further exacerbating this division, which exists both between and within economies and between urban and rural, less digitally connected areas.

The number of published articles on AI has increased steadily, with industry taking the lead. The number of published articles on AI has increased steadily, except in the United States, which saw a drop in 2022. Although China is leading in terms of the volume of published scholarly articles, it is worth noting that the United States ranks first in terms of the number of citations, an indicator of the literature's influence.3 As illustrated in Figure 3.3, the affiliation of teams researching and publishing on AI systems that "demonstrate the ability to learn, show tangible experimental results, and contribute advancements that push the boundaries of existing AI technology" showed an important switch in the second part of the 2010s: whereas most of the research was led by academia prior to 2016, industry took the lead in the number of publications afterwards.

Investment in AI is accelerating rapidly, with the United States leading in private investment. AI funding stems from a variety of sources, including private companies, venture capital firms, government funding, academic institutions, corporate partnerships and angel investors. The United States leads in terms of total AI private investment. In 2023, the US$ 67.2 billion invested in AI in the United States was roughly 8.7 times greater than the amount invested in the next highest country, China (US$ 7.8 billion), and 17.8 times the amount invested in the United Kingdom (US$ 3.8 billion). Since 2022, the United States has experienced a notable increase in private investment in AI (22.1 per cent) (Maslej et al., 2024).

The disparity in private investment in AI becomes particularly pronounced in generative AI. Despite a recent decline in overall AI private investment, funding for generative AI has surged, reaching US$ 25.2 billion in 2023 (Maslej et al., 2024). However, this surge is heavily concentrated in a few economies, with the United States taking the lead. In 2023, the United States surpassed the combined investments of the European Union plus the United Kingdom in generative AI by approximately US$ 21.1 billion. Venture capital investments in generative AI have also been led by the United States, with a steep jump in 2023 to over US$ 16 billion going towards generative adversarial networks (machine learning models that generate new data mimicking a given dataset) for AI training and generative AI for text, image and audio.4

The demographics of professionals with AI skills skew heavily male and towards Europe and North America. According to a developer survey by Stack Overflow, a question-and-answer platform for programmers,5 94.24 per cent of data scientists and machine learning professionals are male, and the majority are located in Europe and North America (OECD.AI, 2024). Moreover, data from the Computing Research Association,6 although limited to the United States and Canada, reveal that the representation of women among new AI and computer science PhDs has remained stagnant at approximately 20 per cent since 2010. This persistent gender gap underscores an ongoing challenge within the field.

This imbalance may be further exacerbated by the race to nurture AI through government subsidies. As discussed in Chapter 3(b), a number of governments are launching domestic initiatives to promote AI, backed by generous state support. However, as most of this support is being provided by high-income economies, it may exacerbate the AI divide among economies. The relative concentration of AI supply chains can also result in trade imbalances in AI-related goods and services.


Figure 3.3: Affiliation of publication research teams, 1950-2022

[Chart of the number of notable AI publications per year, 0 to 120, from 1950 to 2023, by affiliation of the research team: academia, academia and industry collaboration, industry, and other.]

Source: Epoch (2024).

Beyond the digital divide across economies, industrial concentration is prevalent in AI within economies due to increasing returns and network effects. As discussed in Section 2(b)(iv), as the development of AI models progresses and their development costs escalate, only large firms can afford the substantial up-front investments required. This creates a significant barrier for newcomers and smaller enterprises, making it difficult for them to compete. The high initial costs of developing cutting-edge AI models and the necessity for extensive data and computational resources further consolidate the dominance of established players.

The widespread adoption of AI in markets can heighten the risk of collusion between companies. AI systems integrated into pricing strategies and market analysis can enable companies to monitor competitors' pricing behaviour and adjust their own prices accordingly. While this may optimize profits individually, it can collectively lead to tacit agreements or collusion among competitors to maintain higher prices (Assad et al., 2024; OECD, 2021a). Moreover, AI's ability to process vast data and predict market trends may enhance firms' coordination in pricing strategies, exacerbating market concentration.7

The special features of AI present challenges for competition authorities. The opacity of AI algorithms and the sheer volume of data they process can obscure anti-competitive practices such as price collusion, exclusionary behaviour and discriminatory practices. Moreover, AI-driven mergers and acquisitions may raise concerns about market dominance and barriers to entry, as algorithms and data assets become pivotal assets for competitive advantage. Traditional antitrust frameworks may struggle to adapt to the dynamic nature of AI-driven markets, requiring competition authorities to develop new analytical tools, data access mechanisms and regulatory frameworks to effectively safeguard competition and consumer welfare in the AI era (see the opinion piece by Shin-yi Peng).

There is growing scrutiny of mergers in the AI market and growing interest in better understanding the implications of AI on competition.8 Traditional antitrust policies, which apply after the fact, when market competition has already been impacted, are slow and focus on prices, and are not sufficient to address competition issues raised by AI. The competition challenges raised by AI have led to renewed calls for a collective international approach to regulation and for enforcement of competition in digital markets.

(ii) Preventing further digital trade barriers

Cross-border data flows are essential to AI. As discussed in Section 2(a)(i), amassing vast datasets is vital in order to train algorithms, and data flows are integral to the real-time use of AI technologies. Breadth and variety of data are as important as volume.9 For AI to be effective and deliver accurate predictions that are not susceptible to bias and discrimination, algorithms need to be built on high-quality, accurate and representative data.
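The point above, that predictions are only as reliable as the representativeness of the data behind them, can be illustrated with a toy model. Everything in this sketch is invented for illustration: the two consumer groups, their behavioural cutoffs and the one-parameter threshold "model" are stand-ins, not anything from the report or its sources.

```python
import random

random.seed(42)

def simulate(group, n):
    """Synthetic data: the outcome cutoff differs by (invented) group."""
    cutoff = 0.6 if group == "A" else 0.3
    return [(x, int(x > cutoff)) for x in (random.random() for _ in range(n))]

def best_threshold(sample):
    """A one-parameter 'model': the threshold with the best sample accuracy."""
    return max(
        (i / 100 for i in range(101)),
        key=lambda t: sum(int(x > t) == y for x, y in sample),
    )

def accuracy(t, sample):
    return sum(int(x > t) == y for x, y in sample) / len(sample)

# Train only on group A: the data are high quality but not representative.
model = best_threshold(simulate("A", 2000))

# The model is accurate for the group it saw, and systematically worse
# for the group it never saw.
print(f"accuracy on group A: {accuracy(model, simulate('A', 2000)):.2f}")
print(f"accuracy on group B: {accuracy(model, simulate('B', 2000)):.2f}")
```

On this synthetic population the model scores close to 1.0 on the group it was trained on but only around 0.7 on the unseen group, whose behaviour in the 0.3-0.6 range it systematically misclassifies: high-quality but unrepresentative data yield biased predictions.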


Cross-border data flow restrictions can negatively impact AI innovation and development, and can increase costs for firms. Such restrictions have a general negative impact on productivity, economic growth and innovation domestically and globally (Aaronson, 2019; Goldfarb and Tucker, 2012; Luintel and Khan, 2009; Maskus and Reichman, 2004; OECD, 2016), but are of particular concern for AI innovation and development. Because AI requires vast amounts of good quality data in order to be trained, and this often involves merging different data sources together, cross-border data flow restrictions are likely to affect the quality and accuracy of AI models and the scalability of AI applications significantly. By limiting the ability of foreign firms to access data from a given jurisdiction, such measures could favour domestic firms, but may do so at the expense of overall quality, thereby undermining innovation and the full potential of AI (Goldfarb and Trefler, 2018).10 Cross-border data flow restrictions also impose extra costs on firms wanting to do business internationally. A recent study on the implications of data flow restrictions on global GDP and trade finds that if all economies fully restricted their data flows, it could result in a 5 per cent reduction in global GDP and a 10 per cent decrease in exports (OECD and WTO, 2024). To comply with data flow restrictions, firms may need to establish a presence and duplicate activities across various jurisdictions and devise a system to ensure that data are not routed internationally. While technically feasible, doing this can be particularly costly, especially for small businesses (Goldfarb and Trefler, 2018).
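As a back-of-the-envelope illustration of what reductions of that order imply for a hypothetical economy (the GDP and export levels below are invented placeholders; only the 5 per cent and 10 per cent rates come from the study cited above):

```python
# Hypothetical economy; only the 5% and 10% rates come from the cited study.
gdp = 1_000.0      # GDP in US$ billion (invented placeholder)
exports = 300.0    # exports in US$ billion (invented placeholder)

# OECD and WTO (2024): if all economies fully restricted their data flows,
# roughly a 5 per cent reduction in GDP and a 10 per cent decrease in exports.
gdp_after = gdp * (1 - 0.05)
exports_after = exports * (1 - 0.10)

print(f"GDP: {gdp:.0f} -> {gdp_after:.0f} (US$ billion)")        # 1000 -> 950
print(f"Exports: {exports:.0f} -> {exports_after:.0f} (US$ billion)")  # 300 -> 270
```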

Box 3.1: AI and consumer protection

AI opens significant opportunities for consumers, but also increases the possibilities of covert influence, raising significant concerns over the exploitation of personal information and violation of privacy, manipulation and disinformation.

For consumers, AI can provide major benefits, such as individualized recommendations and time-saving (e.g., AI voice assistants can order groceries instantly, saving consumers hours of shopping time). The ability of AI models to establish correlations between consumers' data and possible responses to advertisements in order to predict consumers' behaviour provides firms using AI with the unprecedented ability to trigger specific reactions through individualized advertisements and communications. However, this can also exacerbate asymmetry of information between companies and consumers, and can lead to manipulation and exploitation of consumer behaviour.

The use of algorithms to fix prices can lead to price efficiencies passed on to customers, but can also be used to exploit consumers' willingness to pay a certain price in the interest of the firm.

In addition, while AI technologies can contribute to effective moderation for the benefit of consumers, they can also deliver inaccurate, biased or discriminatory responses that can also harm consumers.

Finally, sellers may not fully take into account potential harm caused to consumers as a result of consumer data misuse due to the difficulty in tracing that harm back to the original data collector. Consumers may not, therefore, challenge data use after the data is collected (Agrawal et al., 2019).

Notwithstanding the fact that traditional consumer protection laws may apply to most scenarios of AI use cases, often providing adequate legal remedies without the need for new regulations, measures to regulate algorithmic harm and protect consumers have emerged in recent years in various jurisdictions. Under the EU General Data Protection Regulation, for example, individuals have the right to contest decisions made by algorithms, request human oversight, and withdraw from personalized advertising driven by algorithmic methods. China has also developed comprehensive regulations to govern algorithm use. Further legislation to harness algorithms and protect consumers is being discussed in various jurisdictions, including the European Union12 and the United Kingdom (Holmes, 2024).

However, national approaches do not adequately protect consumers in the case of cross-border transactions (Jones, 2023). Obtaining redress in case of harm remains particularly challenging in the event of international transactions. Although some level of international collaboration and regulatory discussions exists among national bodies, this cooperation remains fragmented and does not establish an effective, transparent framework for enforcing consumer rights across borders (Goyens, 2020), leading some experts to call for new forms of international regulation and cooperation to protect consumers, especially against AI harm (Jones, 2023).

Besides potential violations of privacy and personal integrity, disinformation and manipulation, the difficulty of assessing the safety and security of AI-enabled products and services adds to the complexity of protecting consumers in an AI-driven age (see Chapter 3(a)(iii)).


Opinion piece

AI: Amplifying the digital trade issues

Shin-yi Peng
Distinguished Professor of Law, National Tsing Hua University

Many of the challenges brought by digital technologies that the WTO has faced in the past decades are now amplified by AI, including issues associated with classification, non-discrimination, data governance and competition policy.

First of all, AI acts as a facilitator for complex products that bundle goods and services, which calls for further thinking about how to adjust the goods/services legal silos under the WTO to address issues stemming from the merging of physical and digital realms. It seems likely that the goods/services dichotomy in applying trade rules will increasingly trigger new levels of inconsistency and legal uncertainty.

Second, more and more AI-based services will be able to compete directly with or substitute human professionals. Questions such as to what extent automated legal advice tools and human attorneys should be considered to be "like services suppliers" may emerge sooner than expected. It remains to be seen how far concepts such as "technological neutrality" or "evolutionary interpretation" can serve to clarify the scope of the GATS commitments of market access and national treatment.

Third, AI presents new challenges for data governance. Digital platforms' advertising algorithms, or, more generally, their overall business models, intensify the perils associated with the data-driven economy. WTO rules can play a more active role in reducing such perils. Data governance in the age of AI requires perspectives that safeguard social values, including privacy, security, free speech, cultural expression and algorithmic ethics.

Finally, competition authorities worldwide are increasingly taking or considering approaches that impose additional obligations on AI-powered big tech companies. Their potential abuses of market power, including self-preferencing practices that promote their own services within search results, algorithmic cartels and other cross-border collusive arrangements, can be more meaningfully addressed through competition disciplines at the international level. To what extent does algorithmic practice constitute a trade barrier to goods or services, and how do anti-competitive market concentrations exclude foreign suppliers from a market? The reactivation of the WTO Working Group on Trade and Competition is more urgent than ever. If a set of general or sector-specific competition disciplines could be established at the WTO, it would be less necessary for competition authorities in developing countries and LDCs to enforce competition law after the fact.

Disclaimer
Opinion pieces are the sole responsibility of their authors. They do not necessarily reflect the opinions or views of WTO members or the WTO Secretariat.

However, the large datasets required by AI models raise significant privacy concerns. AI introduces new privacy issues for individuals and consumers, leading to a trade-off between the necessity of accessing large amounts of data to train AI models and privacy protection. The continuous tracking and profiling of individuals' online and offline interactions by AI algorithms raise significant concerns about data privacy, consent and control over personal information. Furthermore, as AI algorithms become increasingly sophisticated in their ability to infer insights and predict behaviours based on user data, there is a pressing need for robust privacy regulations, transparent data practices and enhanced user control mechanisms to safeguard individuals' privacy rights and ensure ethical and responsible AI deployment.11 AI also introduces new privacy concerns for consumers (see Box 3.1), and the use of data as inputs into AI models also raises IP concerns (see Section 3(a)(iv)). As a result, a delicate balance needs to be found between privacy concerns and the need to access large amounts of data to train AI models (see also the opinion piece by Shin-yi Peng).

Restrictions on cross-border data flows also negatively impact trade in AI-enabled products. While there is empirical evidence that AI significantly enhances international trade in digital services, cross-border data regulation can impede such trade. Sun and Trefler (2023) find that restrictions on data flows can reduce the value of AI-enabled apps, making them less attractive to international users. While AI leads to a 10-fold increase in the number of foreign users, the impact of AI on foreign users is halved if the foreign users are in an economy with strong restrictions on cross-border data flows. Thus, economies with strict data regulations may lose out on AI-driven trade opportunities. Striking the right balance between protecting privacy and fostering innovation is therefore crucial for maximizing the benefits of AI for international trade. However, cross-border data flow measures, when aimed at protecting privacy, can help to build trust in AI systems and promote their wider use. A study by OECD and WTO (2024) on the implications of data flow restrictions finds that, although removing data flow regulations across all economies would reduce trade costs, it would also undermine trust, leading to reduced consumer willingness to pay for products and a negative effect on GDP.
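The Sun and Trefler (2023) magnitudes discussed above can be made concrete with a small worked example. The baseline user count is an invented placeholder, and reading "the impact of AI is halved" as halving the AI-driven gain in users is one possible interpretation, not the paper's own formulation:

```python
# Hypothetical app; only the 10-fold and "halved" figures come from the text.
baseline_users = 1_000            # foreign users before AI adoption (invented)

# AI adoption: roughly a 10-fold increase in foreign users.
users_open = baseline_users * 10

# Under strong cross-border data flow restrictions the AI effect is halved;
# here that is read as halving the AI-driven gain (an interpretive choice).
ai_gain = users_open - baseline_users
users_restricted = baseline_users + ai_gain // 2

print(users_open)        # 10000
print(users_restricted)  # 5500
```

Even in this stylized reading, roughly half of the AI-driven gain in foreign users is forgone in restrictive jurisdictions, which is the sense in which strict data regulation trades away AI-driven export opportunities.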


(iii) Ensuring the trustworthiness of AI without hindering trade

Standards and technical regulations play a key role in ensuring that AI is trustworthy and, through this, in promoting trade in AI-enabled products. There is growing consensus concerning the pivotal role that regulations, standards and other government interventions can play in ensuring that AI is trustworthy, i.e., that it meets expectations in terms of criteria such as reliability, security, privacy, safety, accountability and quality in a verifiable way.13 Ultimately, this means striking a regulatory balance, whereby the benefits of AI are harnessed while its risks are mitigated. Ensuring trustworthiness is not only important for what happens within economies. It is also relevant for what happens outside economies and between borders. Indeed, the internal regulations that governments adopt to protect their consumers can help to build consumers', importers' and other stakeholders' trust in AI-enabled products, thereby fostering trade in such products.

Striking a balance between regulating AI for legitimate policy reasons and enabling trade to flow as smoothly as possible can be particularly challenging. While the challenge of striking the right balance between regulation and free trade is not new, AI's evolving, opaque and multifaceted nature, and the new types of risks associated with it, are making this balancing act in AI regulation and governance particularly complex (see also the opinion piece by Eduardo Paranhos).

Regulating AI requires regulating a product's "behaviour". As mentioned in Chapter 2, "autonomy" is one of the unique attributes of AI. The fact that AI systems can imbue products with various degrees of "autonomy" means that they may generate new forms of risks stemming, not from problems related to the physical components of the product, but instead from the way AI can make the product "behave".14 Such risks are not easy to foresee, control or even quantify.15 As Judge et al. (2024) note, a unique, defining technical characteristic of AI is that, unlike all other engineered systems, AI's "behaviour" is not dictated or pre-determined by its programme code; it is an "emergent" property. Therefore, AI-enabled products may generate risks for reasons other than […] which has been described as presenting two dimensions, legal and technical. Addressing the legal dimension requires accessing the source code. This may prove difficult as source codes are normally proprietary, i.e., protected by IP, normally in the form of trade secrets. There are, however, regulatory ways to deal with this challenge, for instance, by allowing forced source code disclosure for regulatory or law enforcement purposes,18 even if in practice this may not be easy or warranted.19 To some, the technical dimension of the black box problem may be even more significant, as the opacity of an AI system may persist even when access to the source code is free or has been voluntarily or mandatorily granted. Indeed, there may be instances when AI applications are so complex that even programmers themselves are not able to divine an intelligible explanation from the source code and other proprietary information and data as to why and how certain decisions and classifications were reached by the AI system. For some, this means that, until this technical challenge is satisfactorily addressed, regulatory solutions based on open-source disclosure may be "significantly frustrated" (Lin, 2021; Mitchell et al., 2023; Pasquale, 2015).

Adding to the difficulty in pinpointing the source of vulnerability of an AI-enabled product is the fact that its evolving nature may be also triggered by external factors. Such factors include customization: the ability of millions of individuals to "personalize" their AI-enabled products in almost infinite different ways, posing a challenge for regulators to anticipate potential risks associated with each unique customized product. Another factor is connectivity, which may render products vulnerable to cyberattacks or cyberthreats by bad actors that can be located anywhere in the globe. These factors further increase the difficulty for regulators in anticipating and addressing a wide range of possible unforeseeable and unintended risks over the lifecycle of these products (Lund et al., 2023).

AI's dual-use potential may add another layer of complexity. As noted in Chapter 2(a), AI's dual-use nature means that it can be employed for both civil and military purposes. This may add a domestic security and geopolitical dimension to AI's governance, making regulatory interventions and cooperation even more complex (Csernatoni, 2024; Klein and Stewart, 2024; Pouget, 2023; Raul and Mushka, 2024). A related issue concerns policy and regulation in the area of AI and cybersecurity (see Box 3.2).

For goods, "traditional" regulations and standards that
those inherent to the tangible elements in the products normally focus on tangible, visible, static product
themselves. For instance, some consider that the “behaviour” requirements may not be able to address risks
of AI-enabled co-bots (i.e., collaborative or companion stemming from the integration of AI into “traditional”
robots), if unchecked, could provoke mental health problems products. The changeability of AI-enabled products, resulting
in the humans they accompany.16 from the evolutionary nature of AI, makes regulation a
perennial moving target. AI systems confer new properties
The opacity of the behavioural nature of AI can make and functions to the products into which they are embedded.
regulation even more challenging. Risky “behaviours” of As stressed in Chapter 2, these products’ properties and
AI-enabled products may be linked to the way their algorithms functions can be described as “dynamic”, i.e., they change
are designed. AI algorithms are notoriously opaque (Lim, 2021; overtime as a consequence of constant changes occurring
Lund et al., 2023). As noted in Chapter 2, transparency and throughout the AI system’s lifecycle via software updates
explainability are critical for understanding how and why AI or other self-improvements resulting from the algorithmic
systems work and behave the way they do.17 This challenge “learning” process. This contrasts with the “static” properties
is commonly referred to as the AI “black box” problem, of more traditional products, which normally remain

42
CHAPTER 3: THE POLICIES OF AI AND TRADE

Box 3.2:
AI, cybersecurity and technical barriers to trade (TBT)

AI's ability to analyse large datasets can help in countering cyber threats and responding to malicious cyber-attacks. However, there are concerns related to potentially biased decision-making, the lack of transparency and explainability of AI systems, and potential misuse or abuse. Bad actors can use AI to create new malware, to design new, sophisticated, or targeted phishing attacks, to identify new avenues of attack, and to create deep fakes. Unsurprisingly, cybersecurity is a core concern expressed not only in domestic AI policies but also in international AI principles and governance discussions.

Cybersecurity vulnerability risks are growing as digital technologies are permeating more and more societies and economies. In response, governments are increasingly adopting cybersecurity-related measures and policies, many in the form of TBT measures, i.e., technical regulations, standards and conformity assessment procedures.

Indeed, cybersecurity-related TBT measures have recently become one of the most prominent digital-technology-related issues discussed in the WTO TBT Committee. To date, more than 90 cybersecurity-related TBT measures have been notified to the Committee, around 65 per cent of these in the last three and a half years. Members have also increasingly raised specific trade concerns (STCs) in the TBT Committee against cybersecurity-related TBT measures: of the 29 STCs raised since 1995, 38 per cent were raised in the last three and a half years alone.

Cybersecurity was the focus, for the first time, of a specific thematic session of the TBT Committee organized in 2023. Given the global nature of the problem, it was argued in that session that unilateral government interventions in this area should be avoided, as they could ultimately undermine global cybersecurity efforts. The need for governments and the private sector to work in a more coordinated and collaborative manner to address rising regulatory fragmentation and divergence in this area and find better ways to fight increasing cybercrime and cyber incidents was also underscored. In this respect, efforts to develop ambitious, fair and inclusive cybersecurity international standards were highlighted.

essentially the same throughout their lifecycle. Many of the constant changes to properties and functions in AI-enabled products are meant to be beneficial improvements (some even call this "evolution").20 However, this dynamic process means that known risks and concerns may also be constantly changing, or new ones may be emerging. For AI-enabled products, as is the case for most other products, specifications and requirements will continue to be needed to address risks associated with their "physical" aspects (e.g., hazards from defective mechanical components of an autonomous vehicle). However, for some, such "traditional" specifications and requirements may be ill-suited or insufficient to address situations where the root cause of a risk is not a mechanical or "physical" failure, but an algorithmic design flaw or other problem with the AI system embedded in the product and which may cause it to display risky "behaviour" (e.g., an autonomous vehicle that causes injuries to people or damage to property).

AI-enabled products may cause not only material but also immaterial risks. An AI-enabled product may present both material risks, which are easy to quantify and measure (e.g., physical injuries or damage) and immaterial risks (e.g., privacy or other fundamental rights), which are more difficult to quantify and measure. Material and immaterial risks can sometimes even stem from the same situation. For instance, in an AI-enabled autonomous vehicle, mechanical malfunctions and/or algorithmic flaws in its internal and external cameras can both cause injuries (material) and affect the privacy of passengers or pedestrians (immaterial).21 Likewise, product specifications laid down in one-size-fits-all regulations and standards may be ill-suited for regulating AI-enabled products with different customized solutions (Lund et al., 2023). To address such regulatory challenges, while supporting the deployment of, and trade in, trustworthy AI-enabled products, it has been proposed that regulators think of creative ways to ensure that product requirements and specifications are dynamic and adaptable to the behavioural and evolutionary nature of these technologies, to ensure that they do not become obsolete as AI characteristics, risks and vulnerabilities evolve throughout the product lifecycle.22

The constantly evolving nature of AI-enabled products may also necessitate new approaches to certify their compliance with regulatory requirements. Indeed, if an AI-enabled product has successfully undergone testing, verification or other certification procedures prior to being placed on the market, this may not necessarily mean that the product will remain certifiable throughout its lifecycle. AI-enabled products, in particular internet-connected IoTs or robotics, may generate new risks after their deployment due to "mutability" factors such as new updates, new data,


unforeseen changes of attributes and functions due to user customization, or unforeseen autonomous behaviours (see Box 3.2).23 As already discussed, assessing the conformity of some AI-enabled products with underlying technical regulations and standards may also require access to source code, which raises IP-related issues (see Chapter 3(a)(iv)). Regulators may also face challenges in assessing the compliance of AI-enabled products with various novel regulatory requirements that aim, for example, to assess the quality of data used in such products. In light of such a multiplicity of challenges, some consider that regulators may need to re-evaluate their conformity assessment approaches and come up with methods of ensuring effective continuous compliance of ever-changing AI products with underlying technical regulations and standards (Lund et al., 2023; Meltzer, 2023).24

The integration of AI in goods and services has also broadened the scope, number and nature of risks and concerns that regulations and standards need to address. As mentioned above, in addition to "traditional" regulatory concerns, such as interoperability, safety, security, quality, and the protection of human life or health, the use and deployment of AI may also create various "non-typical" risks, that some even qualify as "existential" (UNDRR, 2023), and may raise complex ethical and societal questions affecting public morals and human dignity.25 If AI is trained on biased and skewed datasets, it may perpetuate or exacerbate biases or discrimination against minority groups and infringe upon individual rights and freedoms (see Chapter 2). AI-enabled goods and services are also a cause for significant concern with regard to data privacy, as they involve the collection, processing and storage of vast amounts of user data (see Chapter 3(a)). In addition, as mentioned above, AI is a technology prone to dual use, which may raise complex geopolitical and domestic security issues and lead to further regulatory fragmentation. Finally, both AI "inputs" and "outputs" raise new and complex issues of IP protection and ownership (see Chapter 3(a)(iv)).

These concerns render it challenging to design proper regulatory solutions to ensure the trustworthiness of and support trade in AI and AI-enabled products. As already mentioned, AI raises societal and ethical concerns ("immaterial" risks) that, unlike "traditional" concerns such as health and safety ("material" risks), are not typically a subject for technical regulations and standards. Such "non-technical" concerns are more difficult to regulate, monitor and enforce compared to more traditional regulatory objectives, such as product safety or the protection of human health or life, which can be addressed in more "technical" and objective ways. It has been argued that AI governance and regulatory frameworks may require norms, regulations and standards that are perhaps better described not as purely "technical", but instead as "socio-technical" instruments, i.e., combining technical issues with broader societal considerations (Dentons et al., 2023; Kerry, 2024; Meltzer, 2023). Pouget (2023) argues that developing socio-technical regulations is challenging in situations where both the technology and the harms it can cause are "so complex that it becomes difficult to separate value judgements from technical detail". Some have even questioned whether this could ever be done in practice.26

Such non-typical or immaterial AI-triggered risks and concerns may also be intrinsically prone to regulatory fragmentation, which could hinder trade. Indeed, it might be difficult for legislators to agree on common international denominators with respect to some AI-related societal values and concerns such as ethics, privacy or human rights, the relative importance of which may vary across economies.27 Unnecessary or avoidable regulatory fragmentation could, in turn, hamper the opportunities and benefits associated with AI (Bello Villarino, 2023; OECD, 2022a). In particular, it could result in high regulatory compliance burdens and costs, and consequently create non-tariff barriers to trade for AI businesses.

(iv) How AI is shaped by and may reshape IP

AI poses new conceptual challenges for the traditional, human-centric approach to IP rights. Balanced IP rights and their enforcement have an important role to play in ensuring both equitable access to AI technology and a fair distribution of economic gains from its use. AI raises several important questions in this respect.

A first question concerns what form of IP protection AI algorithms are granted. If the IP protection is based on the fact that these algorithms are trade secrets – and thus that secrecy is an essential requirement on which to establish IP protection – this raises issues concerning a lack of transparency. Alternatively, new and inventive algorithms may be protected by patents in some jurisdictions, with the patent system's mandatory disclosure mechanism yielding extensive information about AI technologies, which directly passes into the public domain in many economies (WIPO, 2024). However, patent protection may constrain development of algorithms in economies in which patents have been taken out. Copyright, another type of IP protection, can be automatically extended to both source and object code, which may constrain analysis and use of algorithms. As envisaged by the objectives of the international IP system, appropriate exceptions and limitations to IP rights protection are needed to balance the different interests and to ensure appropriate access and dissemination of AI technology. These regulatory tools may have to be adapted for this specific context, and some jurisdictions have taken legislative steps or developed policies to encourage the development of open-source AI technologies.28

A second question concerns the use of copyright-protected data as AI inputs. Under the current international IP legal framework, materials such as original texts, images and compilations of data may be subject to copyright protection. This may raise the question of whether


Opinion piece

Navigating AI regulation: balancing innovation, risks and regulatory defragmentation

Eduardo Paranhos
Lawyer, LL.M – University of London (LSE); Chevening Scholar; Head of AI Working Group at Brazilian Software Association (ABES)

It is not a simple task to determine when a new social or economic phenomenon warrants regulation. Those challenges can be further amplified when the new scenarios take the form of innovative technologies, posing both risks and opportunities. In addressing this, regardless of the nature of the changes, it is important to reflect on a few foundational questions: (i) what risks and opportunities are at stake; (ii) how well understood those new technologies are, so that the tools to tackle the possible risks can be properly balanced; and (iii) which aspects of the technological progress indeed require new rules, vis-à-vis the existing laws.

AI is transforming the way we work, communicate and create content faster than ever before. Fostering the development and implementation of AI solutions has the potential to increase efficiency and job quality, as noted, for example, in recent studies by the International Labour Organization and the consultancy firm McKinsey. Yet, for these benefits to materialize, we should consider which traits of the AI systems should really be regulated and aim to establish a model that, at the same time, is capable of protecting society and promoting – rather than discouraging – research and development.

It is an oversimplification to say that economies are mostly weighing up either context-principle based formats to regulate AI, or prescriptive models with a more detailed set of obligations and sanctions.

In Brazil, the debates around AI regulation have so far examined aspects from each of those possible structures: a prescriptive model, openly inspired by the European regime, and another proposal for a context-based framework anchored on widely recognized principles for governance and risk mitigation – e.g., those of the United Nations Educational, Scientific and Cultural Organization (UNESCO), the Organization for Economic Co-operation and Development (OECD) – reaffirming the role of existing legislation.

Indeed, it is possible that most situations raising concerns about the deployment of AI could be dealt with through existing federal laws, particularly in relation to privacy, product safety, consumer protection and internet regulations, as well as some downstream regulations, such as Brazil's health authority ordinance on software as a medical device (SaMD).29 For instance, a data breach that occurs as a result of using an AI system is subject to the same controls and remedies provided for in Brazil's privacy law as other breaches that occur without the use of AI. The privacy law also covers the potential misuse of sensitive personal data that generates biased outputs in the same way as similar misuse in offline settings.

However, it is of pivotal importance to map the gaps in the current legislation clearly, so that fresh AI regulations can address that very gap, avoiding overlaps and the resulting legal uncertainty. Another critical point which could make prescriptive models problematic is the emphasis on regulating the "development" of AI, instead of focusing on high-risk "uses" of the technology.

One final consideration refers to the level of preparedness that the upcoming regulations should display to be able to evolve with the technology. It seems more realistic that context-based regulations should gradually progress into stricter forms "if" and "when" needed, than the other way round. Similarly, in the context of trade-related concerns over the adverse effects of regulatory fragmentation, AI regulations that concentrate on high-risk uses – not on the development of the technology – could facilitate the pursuit of a more harmonized approach across markets. Notably, some economies that are global protagonists in investments and implementation of AI30 have been leaning towards evolutive regulatory formats, balancing AI governance and risk mitigation goals, while helping to raise living standards, creating quality jobs and improving people's lives through responsible innovation.

Disclaimer
Opinion pieces are the sole responsibility of their authors. They do not necessarily reflect the opinions or views of WTO members or the WTO Secretariat.

their use in training AI amounts to copyright infringement. This question translates into whether such use is now, or should be, automatically permissible under exceptions to exclusive copyright (e.g., for educational use of copyright-protected materials). The application of such limited exceptions to generative AI, in particular, is more complex than in traditional cases, due to factors like the scale of data used and the purpose of the use.

A third set of questions concerns whether AI outputs generated autonomously can be subject to IP protection. An AI output is created based on the patterns and rules learned during the training process, by means of the input data. However, this output (including in the form of content) is not a mere reproduction or recombination of this material. Rather, the output can take the form of novel and creative material, reflecting the AI system's ability to understand and mimic the complexities of human-generated content. Thus, AI output may encompass a wide range of creations and innovations,31 including artwork, literary works, music, design, films, video games and inventions. As AI is increasingly capable of producing outputs autonomously, the lines between human and AI contributions to creation or inventions are increasingly becoming blurred, making the question of inventorship and authorship more pressing and complex.

Various approaches have been taken, or proposed, for finding balanced and equitable answers to some of the above questions, both in terms of AI inputs and outputs. In terms of AI inputs, proponents of the use of copyrighted material to train AI argue that this constitutes a "transformative" use, as the model does not replicate the copyrighted works, but instead generates new content inspired by the learned patterns. They also argue that such use does not negatively affect the market for the original works and that, in some cases, it could potentially complement that market. This is still an ongoing debate. A balanced approach taking into account both the moral and the economic interests of creators of original works and users of AI needs to be found, and the legal community continues to explore these issues. Approaches to this issue differ significantly across jurisdictions (see Chapter 3(b)).

The question of the protection and ownership of AI-generated outputs necessitates a re-evaluation of existing IP legal frameworks. As noted above, AI may generate outputs in the form of a work or invention. This in turn raises questions about whether and/or under which circumstances IP rights can be granted for AI-generated creations or innovations, and if so, who, if anyone, owns the resulting IP, or who is liable if the output violates the IP of others. Is it the developer of the AI tool, the user who prompted the AI output, or neither, since the creator or inventor is not human? Current IP laws attribute authorship and inventorship, as well as resulting economic rights, to humans. In some jurisdictions, AI itself is not recognized as the creator or inventor within the current IP legal framework.32 In other jurisdictions, the issue is open to judicial interpretation based on existing laws.33 With the widespread deployment of AI, which can now be used by individuals across the globe, the question of protection and ownership of AI-generated outputs becomes a constant and global concern. This question not only challenges the traditional understanding of creativity and ownership, but also urges a re-evaluation of existing IP legal frameworks in the age of AI (see also Chapter 4(f)).

(b) The global race to promote and regulate AI and the risk of fragmentation

The immense potential of AI has prompted governments around the globe to take action to promote its development and use while mitigating its potential risks. However, the increasing number of domestic, regional and international initiatives and their design are fragmenting the policy landscape, with possibly negative consequences for companies trading internationally. The economic costs of fragmentation highlight the importance of mitigating regulatory heterogeneity.

(i) Domestic initiatives

Governments are using a variety of instruments to promote AI and to address and mitigate its risks. These range from AI-specific strategies and policies to sector-specific legislation (including data regulations) and trade policy measures. However, there are already signs that heterogeneity in the design of these measures may be leading to regulatory fragmentation. The sheer number of domestic strategies and policy initiatives related to AI indicates that AI is an area of priority and that a sustained high level of intervention can be expected in the near future, with a potential risk of growing regulatory fragmentation.

AI strategies and policies

An increasing number of jurisdictions are putting in place AI strategies and policies at the domestic level. The number of economies that have implemented AI strategies34 increased from three in 2017 (Canada, China and Finland) to 75 in 2023. Canada initiated the first domestic AI strategy in March 2017. In 2023 alone, eight new strategies were added by economies in the Middle East, Africa and the Caribbean, showcasing the worldwide expansion of AI policymaking (Maslej et al., 2024). In addition, or as part of domestic AI strategies, governments around the world have taken over 1,000 AI policy initiatives.35 The majority of AI policy initiatives are concentrated in Europe, followed by Asia, the Americas and Africa (see Figure 3.4). Most AI-related legislation passed since 2016 aims to enhance an economy's AI capabilities, such as establishing a network of publicly accessible supercomputers, as opposed to restrictive legislation, which imposes conditions or limitations on AI deployment or usage (Maslej et al., 2024).


Figure 3.4: Share of AI policy initiatives by region (2024)
[Pie chart. Legend: Africa, Asia, Australia and New Zealand, Americas, Europe. Data labels: 47%, 23%, 14%, 13% and 3%; per the surrounding text, the largest share corresponds to Europe, followed by Asia, the Americas and Africa.]
Source: OECD database of domestic AI policies (https://oecd.ai).

The European Union, a WTO member in its own right,36 has been particularly active, with the adoption of a series of policy measures to support the development of trustworthy AI at the EU level. Policy measures to support the development of trustworthy AI include the AI Innovation Package,37 the Coordinated Plan on AI,38 the "Proposal for standard contractual clauses for the procurement of Artificial Intelligence (AI) by public organisations", and the EU AI Act (AIA) (European Union, 2024). The AIA, which was formally adopted in 2024, is the world's first comprehensive horizontal legal framework on AI.39 The main stated objective of the AIA is to ensure that AI systems within the EU are safe and comply with existing laws on fundamental rights, norms and values.40 The AIA adopts a risk-based approach to regulating AI systems.41

Most domestic AI policy initiatives are implemented by developed economies, reflecting the growing AI divide. While a reasonable share (around 30 per cent) of developing economies have put AI policy measures in place, only one LDC, Uganda, has done so, with two sector-specific policies and one general policy on AI governance (see Figure 3.5). The increasing attention being paid to AI in policymaking can also be seen in references to AI in legislative proceedings, which have increased almost tenfold across the globe since 2016, and nearly doubled between 2022 and 2023 (Maslej et al., 2024).

Figure 3.5: Share of AI policy initiatives by level of development (2024)
[Pie chart, by category of country. Legend: Developed, Developing, Least-developed country. Data labels recoverable from the source layout: 40.57%, 29.42% and 1.1%.]
Source: OECD database of domestic AI policies (https://oecd.ai).

Domestic AI policy initiatives can be classified into four broad categories: governance, financial support, guidelines and regulations, and AI enablers. As of March 2024, more than a third (36 per cent) of AI policy initiatives listed by the OECD Artificial Intelligence Policy Observatory42 concerned governance of AI. Governance aspects usually focus on the establishment of frameworks for AI development and deployment, including vertical and horizontal coordination, AI's integration into the public sector, public consultation and evaluation mechanisms, and the creation of regulatory bodies or committees to oversee AI-related activities. Close to 19 per cent of domestic AI policy initiatives aim to provide financial support and incentives for AI research, development and adoption.43 Around 18 per cent of these initiatives include the development of guidelines and regulations (on issues such as data privacy, algorithmic transparency, bias mitigation and safety standards to promote the responsible and ethical development and use of AI technologies), the establishment of regulatory oversight and ethical advice bodies to provide guidance and supervision in navigating these regulations effectively, and the development of standards and certification processes to facilitate the development and adoption of AI technologies in compliance with regulatory requirements and ethical principles. Finally, a large number of domestic AI policy initiatives (around 27 per cent) focuses on fostering an environment conducive to AI innovation and adoption (i.e., AI enablers). These include initiatives to enhance AI-related skills and education to attract talent, public awareness campaigns to raise awareness about AI, and the establishment of collaborative platforms, to bring together stakeholders within the innovation ecosystem, and business advisory services, to support innovation and entrepreneurship (Maslej et al., 2024). Some of these initiatives, like the 2023 US "Executive Order on the


Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence", contain measures to support AI-related hardware, such as computing infrastructure, as well as competition and innovation in the semiconductor industry.

Governments seem to be preparing or adopting an increasing number of detailed rules and regulations related to implementing and enforcing AI legislation. According to Stanford University's 2024 "AI Index", the number of AI-related regulatory measures has risen significantly in the United States and the European Union over the past few years. There were 25 AI-related regulatory measures adopted in the United States in 2023, including three related specifically to international trade and international finance, compared to just one in 2016. The total number of AI-related regulatory measures grew by 56.3 per cent in 2023 alone to reach 83. As for the European Union, it has passed almost 130 AI-related regulatory measures since 2017, including 13 led by the Directorate-General for Trade and the Directorate-General for Competition (Maslej et al., 2024). Several economies are also developing strategies or putting in place specific initiatives to develop AI standards (see Box 3.3).

Environmental concerns are currently high on the policy agenda. Governments are therefore also starting to draft regulatory frameworks to address the potential negative environmental impacts of AI and to harness its many benefits. For example, one of the EU's AIA objectives is to assure the "environmental protection against harmful effects of [AI] systems in the Union and supporting innovation."54 See also Box 2.1 on AI's environmental impacts.

An increasing number of jurisdictions are also putting in place AI-related "sandboxes". The objective of these is to test new economic, institutional and technological approaches and legal provisions under the supervision of a regulator for a limited period of time.55 About a dozen jurisdictions, including Colombia, Estonia, the European Union, France, Germany, Lithuania, Malta, Norway, Singapore and the United Kingdom have such structures in place (OECD, 2023).

Some jurisdictions are also developing "GovTech" tools (digital tools used to optimize public services) to address the new regulatory challenges raised by AI and to promote trustworthy AI. A notable example is Singapore's "AI Verify" tool, developed by the Infocomm Media Development Authority and Personal Data Protection Commission.56 AI Verify is an open-source software tool to assess the trustworthiness of AI systems according to a set of criteria and factors. The tool, which is at minimum-viable-product stage,57 aims to automate transparency assessment of AI systems, which would allow companies to see whether new AI systems comply with relevant international standards and regulations (see the case study on Singapore's approach to AI in Box 3.4).

Box 3.3: Domestic standards on AI44

Standards play an important role in domestic AI policy approaches and several economies are developing strategies or putting in place specific initiatives to develop AI standards.45 Some economies even recognize AI as one of the priority areas in their general standardization strategies.46

As of July 2024, almost 170 standards are being developed or have already been published by various domestic standards-setting bodies (such as BSI, CEN, CENELEC, NIST).47 Most of such domestic standards seem to be of horizontal application, while others seem to be sectoral, i.e. only covering specific industries and sectors such as transportation, healthcare or energy. AI-related standards cover a variety of broad topics, most often those related to data management, quality, processing and protection, as well as risk management, safety and security, interoperability, and organizational governance. These standards address specific technical requirements such as process, management and governance, measurement and test methods, terminology, interface and architecture specifications, and product and performance requirements.48

One common feature across various domestic standardization approaches is the recognition of the importance of engagement and cooperation on AI standardization at the international level (Kerry, 2024).49 For instance, Australia's AI Action Plan reflects Australia's intention to participate in international standards-setting processes,50 while China's Global AI Governance Initiative encourages international cooperation for developing AI standards based on broad consensus.51 In the same vein, the US Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence mandates relevant agencies to cooperate with standards development organizations to drive the development of AI-related consensus standards.52

In this respect, as in other regulatory areas, domestic standardization efforts on AI will tend over time to rely on international standards-setting work.53

48
CHAPTER 3: THE POLICIES OF AI AND TRADE

Box 3.4: Case study: Singapore's approach to AI

For Singapore, AI is a necessary means to overcome natural constraints, such as a small labour force on a small landmass, and to raise the productivity and strengthen the competitiveness of its industries, both in globally tradable sectors, such as trade and finance, and in domestic services, such as retail and food and beverages.

For example, Singapore has leveraged AI in order to continue to act as a global hub facilitating trade and connectivity. Singapore's Changi Airport, which handled more than 59 million travellers last year, uses AI to screen and sort baggage, and to power facial recognition technology for seamless immigration clearance. The Port of Singapore, which handled cargo capacity of 39 million twenty-foot equivalent units (TEUs) in 2023, uses AI to direct vessel traffic, map anchorage patterns, coordinate just-in-time cargo delivery, process registry documents, and more. To facilitate communications and business exchanges across a linguistically diverse region of 680 million people who speak over 1,200 different languages, Singapore has also invested in developing the world's first large language model tailored to Southeast Asia's languages and cultures; this open-source model is dubbed SEA-LION, short for Southeast Asian Languages in One Network.

In 2019, Singapore issued a framework for responsible AI use. The Model AI Governance Framework provides detailed and practical guidance to address key ethical and governance issues when deploying AI solutions. In 2024, Singapore further extended the Model Framework beyond traditional AI to address generative AI and the novel risks it poses. Within the Association of Southeast Asian Nations (ASEAN), Singapore has spearheaded the development of an ASEAN Guide on AI Governance and Ethics. At the United Nations, Singapore convenes the Forum of Small States (FOSS), a grouping of 108 small economies, and introduced a Digital Pillar in 2022, which provides baseline capacity-building for issues including AI, most recently through the AI Playbook for Small States, which was co-developed with Rwanda and launched at the UN Summit of the Future in September 2024.

Singapore works closely with a range of partners, bilaterally and in various groupings, on guidelines for AI developments and innovations. With the United States, Singapore has deepened information-sharing and consultations on international AI security, safety, trust and standards development through collaborations in AI, including the US-Singapore Critical and Emerging Technologies Dialogue. With China, Singapore is enhancing mutual understanding of approaches to AI governance, such as under the inaugural Singapore-China Digital Policy Dialogue. Singapore also participates in the G7 Hiroshima Process, the AI Safety Summit series, the OECD AI Principles, the Global Partnership on AI (GPAI) and the World Economic Forum's AI Governance Alliance.

In 2022, Singapore launched AI Verify, an AI governance testing framework and a software toolkit, which contains baseline standardized tests covering the core principles of fairness, explainability and robustness. In 2024, Singapore launched AI Verify Project Moonshot, which broadens the original toolkit to cover generative AI and returns intuitive results on the quality and safety of large language models. Given that the science of AI testing and governance is still nascent, Singapore has also set up the AI Verify Foundation to harness the collective power and contributions of the global open-source community to jointly develop AI Verify testing tools. The Foundation has grown to more than 110 members and includes companies such as Google, IBM, Microsoft, Red Hat, Meta and Salesforce.

Source: Based on inputs from the Ministry of Digital Development and Information, Singapore.
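To make concrete the kind of baseline standardized test that governance toolkits such as AI Verify automate, the sketch below computes a demographic parity gap, one common fairness measure. This is an illustration only: the group names, decision data and 0.1 threshold are hypothetical, and the metric shown is a generic fairness check, not AI Verify's actual test suite or API.

```python
# Minimal sketch of a demographic-parity fairness check, the kind of
# baseline test AI governance toolkits automate. All names, data and
# the 0.1 threshold below are illustrative assumptions.

def positive_rate(outcomes):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-decision rates across groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = approved) for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 approved = 0.625
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved = 0.375
}

gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")  # 0.625 - 0.375 = 0.250
print("pass" if gap <= 0.1 else "fail")      # fails the illustrative 0.1 threshold
```

A real toolkit would run many such metrics (plus explainability and robustness tests) against a deployed model and report the results against thresholds chosen by the regulator or the deploying firm.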

The heterogeneity of domestic initiatives may lead to unintended fragmentation. Analysing eleven AI rulebooks from seven jurisdictions (i.e., Argentina, Brazil, Canada, China, the European Union, the Republic of Korea and the United States), Fritz et al. (2024) find that governments prioritize different objectives with their AI regulation, use substantially different regulatory requirements to achieve the same priorities, and choose different scopes and formulations to achieve the same regulatory requirement for a shared priority, leading to unintended fragmentation at the level of priority, requirement and scope.

Unintended fragmentation extends to non-AI-specific, sector-specific legislation, such as AI-relevant IP and data regulations. Approaches to copyright "fair use", for example, differ significantly across jurisdictions (see Chapter 3(a)(iv)). While Japan modified its Copyright Act in 2018 to allow machine learning models to use copyrighted works for any purpose, including commercial use, without needing explicit permission from copyright holders,58 the EU AIA is much less permissive. According to the EU AIA, the provider of a generative AI model, whether open-source or closed, must establish a policy to respect EU copyright law,


Box 3.5: The challenge of navigating AI regulations: The case of Canvass AI

Invited to speak at a WTO workshop on regulatory cooperation on digital products, Humera Malik, CEO of Canvass AI, a startup that provides industrial AI solutions to enhance operational efficiency, profitability, and sustainability, explained that the diverse approaches to regulate AI make achieving global reach difficult. Divergent regulations strain resources, make the navigation of rules without specialized knowledge difficult, and impact market entry. Moreover, data protection regulations impose additional restrictions, affecting AI development and cross-border market entry. She added that "minimizing complexity and promoting convergence would greatly ease compliance efforts".

Source: https://www.wto.org/english/tratop_e/tbt_e/tbt_2006202310_e/tbt_2006202310_e.htm.

including the EU Directive 2019/790 on Copyright and Related Rights in the Digital Single Market (CDSM). Under the CDSM, research organizations are permitted to reproduce and extract copyrighted works for text- and data-mining purposes without requiring the authorization of the copyright-owner, provided that these research organizations have lawful access to the works, and that the use is for the purposes of scientific research. The use of copyrighted materials for text- and data-mining for any reason is also permitted beyond scientific research, but in this context, copyright-owners have the option explicitly to reserve their rights and thereby prevent the use of their works for text- and data-mining without their approval (European Parliament, 2024). Similarly, the AI Bill pending adoption in the Brazilian Congress provides, for example, for a limited copyright exception when the extraction, reproduction, storage and transformation taking place in data- and text-mining processes are carried out by research and journalism organizations and institutions, museums, archives and libraries. As for the United States, while there is still no legislation or regulation on this issue, a high-profile ongoing litigation case was filed by the New York Times against OpenAI for the unauthorized use of its content in December 2023. Another example is the diverging approaches to algorithmically authored works (see Chapter 3(a)(iv)). While the United Kingdom protects algorithmic creations, albeit without recognizing AI itself as an author,59 Australia and the United States make it clear that a human author is needed (Liu and Lin, 2020). Finally, some jurisdictions provide expansive protection to trade secrets, applying proprietary protection to source code, algorithms, training materials and datasets used to train AI models, while others do not provide them with exclusive IP protection (Kilic, 2024). Beyond IP, data regulations are also marked by a high level of fragmentation.

The design of some measures may affect market competitors in other economies and have trade-distortive effects, leading to further fragmentation. The significant economic potential of AI is leading political authorities in various jurisdictions to put in place measures to promote the development of AI. These include the creation of "AI factories", to give AI start-ups and small businesses access to supercomputers on which to build their own models, research initiatives to connect researchers and educators to computational, data and training resources to advance AI research and research that employs AI, and subsidies for firms that purchase domestically produced AI chips. Some of these measures appear to limit opportunities to domestic entities or to provide incentives on the condition that domestic products are used (Aaronson, 2024b).

The economic costs of regulatory fragmentation highlight the importance of mitigating regulatory heterogeneity. The impact of fragmentation can be felt at various levels, including lost trade opportunities, diminished productivity gains and stifled innovation, with potentially important economic consequences for vendors of AI-enabled goods and services (Fritz and Giardini, 2024). As AI technologies become increasingly embedded in goods and services across a wide range of sectors, in the absence of efforts to mitigate regulatory heterogeneity, the resulting costs and other negative impacts are likely to grow significantly. The impact is likely to be particularly important for small businesses, which are already struggling to navigate through divergent regulatory approaches on AI (see Box 3.5).

Data regulations

Regulating data stands high on policy agendas. With the rise of digital technologies, including AI, initiatives promoting access to data to foster domestic innovation and competition, protecting privacy and controlling the flow of data across borders stand high on policy agendas. However, what is emerging is a landscape of measures that is not only fragmented, but that may also have trade-distortive impacts beyond fragmentation.


Open government data and data-sharing initiatives to foster innovation and competition

An increasing number of jurisdictions are taking initiatives to promote open government data to foster business creation and innovation and to increase competition in domestic markets. Recognizing the value of data as a public good, some jurisdictions, both in developed and developing economies,60 are pursuing open government data initiatives to promote business creation and innovation and stimulate the domestic digital and AI economy by encouraging the use, reuse and free distribution of government datasets under open data licences. Examples include the EU's Open Data Directive, India's Open Government Data platform61 and Singapore's "Smart Nation" initiative.62 These initiatives come in addition to ex ante competition regulations put in place in some markets to better address competition issues raised by the digital economy.63 Open government data is a goal that is also being pursued at the regional and international levels, including in the context of the WTO Joint Statement Initiative on E-commerce (see Chapter 4(a)(v)).

Other approaches aim to promote data-sharing across sectors to foster innovation or to mandate it to counterbalance winner-takes-all dynamics in the digital economy. The EU Data Governance Act (DGA), for example, seeks to increase trust in data-sharing and data availability. It entered into force in 2023 and supports the setup of trustworthy data-sharing systems – called Common European Data Spaces – in strategic domains, involving both private and public players. Training AI systems is listed as one of the key benefits of the initiative (European Commission, 2024b). The EU Data Act, which entered into force in January 2024, complements the DGA and creates the processes and structures to facilitate data-sharing by companies, individuals and the public sector (European Commission, 2024a). The Act protects EU businesses in data-sharing contracts from unfair contractual terms that may be imposed unilaterally by one contracting party on another; the aim is to enable small businesses, in particular, to participate more actively in the data market. Other economies that have put in place data-sharing initiatives include Colombia, Japan and the Republic of Korea. Some jurisdictions, such as Australia and the European Union, are also experimenting with legally mandated data-sharing to foster a competitive environment in which AI startups also have access to large datasets (Mayer-Schönberger and Ramge, 2018; Prüfer, 2020).

The extent to which open government data and data-sharing initiatives support innovation and level the playing field both within and across economies remains unclear. There are concerns that such initiatives may in fact disproportionately benefit large AI firms, as these have the capacity to collect open data and to correlate it with the "closed data" they possess and control to generate new data. As a result, large AI firms stand to gain more than those who lack such capabilities and have to rely on open data entirely, which could amplify the growing AI divide

Figure 3.6: Data localization is growing and becoming more restrictive

(number of measures, by year, 1967-2022 plus draft measures; categories: storage only; storage and flow condition; storage and flow prohibition)

Source: Del Giovane et al. (2023).


between companies (see Chapter 3(a)). Such policies could also have geopolitical implications, as those operating out of relatively big, closed digital economies are able to capture open data elsewhere in addition to the data they collect domestically without much external competition, which could result in further imbalances across economies (Streinz, 2021).

In addition, while some data-sharing initiatives are clearly open to foreigners, uncertainty remains concerning other initiatives. These could raise potential most-favoured-nation (MFN) issues and result in trade-distortive effects. Japan, for example, announced in 2024 that its data spaces would be open to foreigners, but the programmes of some other jurisdictions seem designed to support data-sharing within the jurisdiction concerned, which could have a trade-distortive effect (Aaronson, 2024).

Privacy and data protection

Over the last decades, many governments have enacted regulations for personal data protection to address growing concerns over privacy. According to UN Trade and Development (UNCTAD), more than 70 per cent of jurisdictions – 137 out of 194 – had adopted legislation to secure the protection of data and privacy as of 2021, with significant differences across levels of development (UNCTAD, 2021a). The share of jurisdictions having passed such legislation is lowest in LDCs (48 per cent). The most well known of these is the EU's General Data Protection Regulation, which became effective in May 2018.

AI raises new privacy concerns for individuals and consumers. This is leading to an increasingly complex trade-off between the need to access large amounts of data to train AI models and privacy concerns. As seen in Chapter 2, AI's reliance on large amounts of data, including personal data, and its capacity to process and analyse vast datasets and to correlate data can lead to privacy breaches and information spillovers, introducing new privacy challenges.

Privacy and personal data protection regulations differ markedly across jurisdictions, affecting the flow of data. Most governments have introduced data protection laws, but these regulations vary significantly from one jurisdiction to another. Whereas some economies, like the United States, primarily rely on the industry to self-regulate the protection of personal data, others follow different approaches that focus on state intervention to defend state sovereignty, citizens' rights, security or domestic development (Bradford, 2023; Jones, 2023; Mitchell and Mishra, 2018; UNCTAD, 2021b). These include limitations on the international transfer of personal data, aimed at maintaining jurisdictional oversight. These different approaches to data governance are creating distinct "data realms" that are fostering a new digital divide between these jurisdictions and others that are rule-takers, creating regulatory uncertainty and barriers to the flow of data across borders (Aaronson and Leblond, 2018; Jones, 2023). The divergence in regulatory approaches between the European Union and the United States has been a particular case in point, with two data privacy agreements brought down by the Court of Justice of the European Union.64

Cross-border data flow restrictions and data localization requirements

Cross-border data flow restrictions aim to limit the flow of data, and measures to control where data is stored or processed are on the rise. Motivations behind cross-border data flow restrictions and data localization requirements (i.e., explicit requirements that data be stored or processed domestically) vary, ranging from concerns over sensitive data related to national security, to privacy considerations. Such measures are sometimes seen as an incentive to boost local competitiveness (Aaronson, 2024b; McKinsey, 2022). By early 2023, there were 96 data localization measures across 40 economies in place, with nearly half of the identified measures having emerged after 2015 (see Figure 3.6). Not only has the number of data localization measures increased, but the measures themselves are also becoming more restrictive, with more than two-thirds of identified measures involving not only a storage requirement but also a prohibition for data to flow from one economy to another (Del Giovane et al., 2023). These data regulations apply to different types of data, including personal data, and to different sectors. As noted in Chapter 3(a), striking the right balance between fostering AI innovation through access to data and protecting privacy is crucial for maximizing the benefits of AI for international trade.

The global fragmentation of data flow regulations underscores the need for increased international cooperation. While there are legitimate reasons for diversity in regulation, the current landscape is increasingly complex and fragmented, imposing additional costs on firms, especially those located in small markets, creating uncertainty, and hindering the cross-border flow of data that plays such an essential role in AI development and innovation, in particular for small economies. The economic costs of the fragmentation of data flow regimes along geo-economic blocks are potentially sizeable, amounting to a loss of more than 1 per cent of real GDP, according to an OECD-WTO study (OECD and WTO, 2024). A global approach that balances the need for robust data oversight and protection of privacy, while ensuring that data can be accessed and can flow freely across borders, is needed (Jones, 2023).

Border measures

Many of the hardware components and raw materials crucial to AI systems face increasing export restrictions. Export restrictions applied to industrial raw materials, many of which play a critical role in the manufacturing of advanced chips needed to power AI systems and in communications equipment, increased more than five-fold between 2009 and 2020 (OECD, 2023b). More recently, the race to dominate AI development, combined with broader economic, geopolitical and security considerations linked to the dual-use nature of AI systems, has led a growing number of advanced economies to impose export restrictions on


advanced chips central to AI systems and on the tools used to manufacture them.65 In reaction, China, one of the main targets of these measures, requested consultations under the WTO Dispute Settlement Understanding (DSU) in December 202266 and imposed export restrictions on two metals used in chipmaking and communications equipment in July 2023.

There is a risk that these restrictions will affect the global development and deployment of AI technologies and increase economic and, potentially, technical fragmentation. In the short term, or when limited alternatives are readily available, restrictions can impact access to the technology by importing economies. A longer-term effect may be that new technological developments will be postponed due to a lack of access to advanced technology, compounding risks of economic and technical fragmentation.

(ii) Bilateral and regional cooperation initiatives to address AI

The increasing number of bilateral and regional cooperation initiatives on AI governance focusing on different priorities adds to the risk of creating multiple fragmented approaches.

Bilateral cooperation initiatives that touch upon issues relevant to AI and trade prioritize different issues. Cooperation between the United States and the European Union in the context of the Trade and Technology Council (TTC), which was established to promote EU-US cooperation, focuses primarily on aligning terminology and taxonomy and on monitoring and measuring AI risks (NIST, 2021). The first AI-related outcome of the TTC was the launch, in December 2022, of a Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management. The Joint Roadmap aims to guide the development of tools, methodologies and approaches to AI risk management and trustworthy AI, to develop a common understanding of key terms, to support and lead development of international standards, and to monitor and measure existing and emerging AI risks. In May 2023, the TTC adopted the "EU-US Terminology and Taxonomy for Artificial Intelligence – First Edition", which builds on existing standards such as International Organization for Standardization (ISO) standards to define key AI-related terms, but does not define AI.67 In April 2024, the European Union and United States launched a Research Alliance in AI for the Public Good. The Research Alliance aims to foster scientific cooperation to better harness AI for the benefit of the environment, energy optimization, disaster reduction and emergency responses.68

Other bilateral initiatives in which the United States is involved focus more on collaboration to promote alignment in general terms. In October 2023, the United States and Singapore launched a Critical and Emerging Technology Dialogue,69 which establishes a bilateral AI governance working group focused on advancing shared principles for safe, trustworthy and responsible AI innovation, and calls for strengthened collaboration through joint research and educational funding and for exploring reciprocal certification programmes for American and Singaporean AI professionals on the basis of shared standards, tests and benchmarks. The Dialogue includes cooperation on standard development and a mapping exercise between domestic standard-setting bodies to align approaches. And in April 2024, the United States and Uruguay signed a Memorandum of Understanding (MoU) to foster cooperation on certain critical and emerging technologies, such as semiconductors, AI, data flows, telecommunications and cybersecurity, including by identifying opportunities to support the development and use of relevant international standards and by encouraging interoperability and global compatibility, as well as greater cooperation in multilateral and international organizations.

China's bilateral initiatives prioritize AI safety and governance, as well as development issues. Dialogue between China and the United States primarily focuses on AI safety and governance. Announced in November 2023, the first dialogue took place in May 2024 (The White House, 2023b; Murgia, 2024). Global governance of AI is also of high importance in China's discussions with African leaders in the context of the China-Africa Internet Development and Cooperation Forum. The last forum, which took place in April 2024, called for more representation of developing economies in the regulation of AI. Beyond the above-mentioned examples, agreements to maintain bilateral dialogues on AI have also been included in some regional trade agreements (RTAs) and digital economy agreements (see Chapter 3(b)(iii)).

Approaches to AI governance initiated at the regional level take different forms. Various regional initiatives have emerged in Africa, Asia and Latin America. Some of these take the form of ministerial declarations, such as the November 2023 Southern Common Market (MERCOSUR) Ministerial Declaration on the principles of human rights in the field of artificial intelligence,70 the October 2023 Santiago Declaration to Promote Ethical Artificial Intelligence in Latin America and the Caribbean,71 and the May 2018 Declaration on AI in the Nordic-Baltic Region by the Nordic Council of Ministers.72 Others take the form of guides, such as the February 2024 Association of Southeast Asian Nations (ASEAN) Guide on AI Governance and Ethics (ASEAN, 2024), or strategy documents, such as the 2024 African Union Development Agency-New Partnership for Africa's Development (AUDA-NEPAD) White Paper,73 which led to the adoption, on 17 June 2024, of the African Continental Artificial Intelligence Strategy.74

Some regional initiatives prioritize human rights and ethics, while others focus on economic development and growth. The MERCOSUR Declaration strongly emphasizes human rights and transparency, stressing the importance of avoiding discrimination, and of privacy and the integrity of information for democracy and the preservation of culture, while the Santiago Declaration focuses on human rights and ethics. The ASEAN Guide encourages alignment


on AI governance and ethics standards based on seven guiding principles,75 but does not list inclusive growth, sustainable development and well-being – as per Principle 1 of the OECD AI Principles76 – as a key principle. The ASEAN Guide includes recommendations for both domestic and regional initiatives77 that governments in the ASEAN region can take to ensure the responsible design, development, and deployment of AI systems. Meanwhile, the AUDA-NEPAD White Paper and the African Union Continental Artificial Intelligence Strategy focus mainly on harnessing the potential of AI for economic development and growth, while promoting ethical use, minimizing potential risks and leveraging opportunities. The white paper stresses the importance of promoting innovation and building African multilingual tools through AI to support a "pan-African renaissance with AI" and lists five pillars of action: human capital development for AI, infrastructure and data, enabling environments for AI development and deployment, AI economy and encouraging investment in AI, and building sustainable partnerships. The Continental Strategy, adopted in June 2024, identifies four priority sectors: agriculture, healthcare, education and climate change adaptation. Likewise, the Arab AI Working Group focuses primarily on cooperation to reduce the digital divide and encourage capacity-building.

(iii) Regional trade agreements and digital economy agreements

AI-specific provisions have started to be incorporated into regional trade agreements (RTAs) and digital economy agreements,78 but mainly take the form of soft – i.e. non-binding – provisions. While their incorporation into these agreements is positive, such provisions will not be sufficient to prevent regulatory fragmentation. Six agreements include AI-specific provisions. These are the United Kingdom-Australia Free Trade Agreement, the United Kingdom-New Zealand Free Trade Agreement, and the recently signed digital economy agreements between Australia and Singapore (SADEA), between Chile, New Zealand and Singapore (DEPA), between Singapore and the United Kingdom (UKSDEA), and between the Republic of Korea and Singapore (KSDPA), as well as a recently signed free trade agreement between Ukraine and the United Kingdom, which has not yet come into force.

AI provisions essentially take the form of best-endeavour clauses (i.e., clauses which require parties to do everything possible to achieve the desired result). AI-specific provisions typically recognize the increasing importance of AI within the global economy and include best-endeavour clauses to either "collaborate and promote"79 […] in particular those signed by the United Kingdom, also recognize the importance of a risk-based and outcome-based approach and of the principles of technological interoperability and technological neutrality,81 and include various cooperation provisions on exchanging information and sharing experiences and good practices on laws, regulations, policies, enforcement and compliance;82 ethical use, human diversity and unintended biases, industry-led technical standards and algorithmic transparency;83 research;84 and playing an active role in international fora,85 with the UK-Australia and UK-Ukraine agreements explicitly referring to cooperation in the development of international standards, regulations and conformity assessment procedures.

Several AI-specific provisions explicitly refer to trade. Three agreements – United Kingdom-Ukraine,86 United Kingdom-Singapore87 and United Kingdom-Australia88 – explicitly recognize the role of AI in promoting competitiveness and facilitating international trade. The United Kingdom-Australia agreement also encourages activities aimed at facilitating and promoting trade in emerging technologies, and the agreements between the United Kingdom and Ukraine and between the United Kingdom and Singapore encourage active participation in international fora "on matters concerning the interaction between trade and emerging technologies".

Digital trade provisions included in RTAs are also important for AI development and use. The number of RTAs with digital trade provisions has been growing steadily since the early 2000s. The first digital trade provision can be found in the 2000 Jordan-United States Free Trade Agreement. By the end of 2022, 116 RTAs – representing 33 per cent of all existing RTAs – had incorporated provisions related to digital trade (López-González et al., 2023). These provisions typically include provisions on data flows, data localization, protection of personal information and access to government data, which, as seen in previous sections, play an important role in determining access to data needed to train AI models. Provisions that ban measures mandating disclosure of source code, software and algorithms have also been included in a number of trade agreements, most notably agreements led by the United States. Such provisions typically aim to protect technology firms from government measures requiring trade secrets to be disclosed as a prerequisite for operating in certain industries (Jones et al., 2024). Access to source code can, however, be important to assess the trustworthiness of AI systems (see Chapter 3(a)(iii)). In addition, prohibitions on disclosure of source code can impact technology access and market competition, and limit the availability of open-source software (Jones et al., 2024). Provisions on source code can, therefore, have a significant impact on the development and use of AI and on promoting AI trustworthiness. Other provisions related to the adoption of standards and conformity assessment can also play a critical role in promoting
the development of governance frameworks to promote trustworthy AI (see Chapter 3(a)(iii)), while provisions on
trusted, safe and responsible use of AI or “to develop”80 competition in the digital market are important to address the
such frameworks taking into account international guidelines, market concentration power of AI (see Chaper3(a)(i)). Finally,
with the UK-Australia and UK-New Zealand agreements provisions on customs duties on electronic transmissions
specifically referring to the 2019 OECD Principles have been important in fostering an environment conducive
(OECD, 2019a) (see Chapter 3(b)(iv)). Some agreements, to digital trade (IMF-OECD-UN-WBG-WTO, 2023).


The depth of digital trade provisions included in RTAs varies significantly, reflecting diverging approaches. Analysing the digital trade provisions of 12 agreements concluded between March 2018 and January 2023, Jones et al. (2024) find a high degree of heterogeneity between the agreements (see Figure 3.7). For example, while most agreements contain binding obligations on the free flow of data, the United Kingdom-European Union RTA does not contain any provision on non-financial data flows. Regarding personal data protection, agreements led by the United States consider voluntary undertakings by private companies as sufficient to safeguard personal data, which contrasts with the European Union’s comprehensive approach to data protection under the EU General Data Protection Regulation. Language on open government data takes the form of best-endeavour language in agreements led by the United States and digital economy agreements, while the United Kingdom-New Zealand Free Trade Agreement includes binding but non-specific language. As for disclosure of source code, agreements led by the United States and digital economy agreements include extensive and binding protection of source code, although digital economy agreements do not mention algorithms. In contrast, agreements signed by New Zealand and the Regional Comprehensive Economic Partnership (RCEP) do not include such provisions.89

Few developing economies and LDCs have negotiated digital trade provisions. The inclusion of detailed digital trade provisions tends to be more common in RTAs negotiated by high-income and certain middle- to upper middle-income economies. Only a handful of LDCs have engaged in RTAs that contain provisions related to digital trade (IMF-OECD-UN-WBG-WTO, 2023).

Figure 3.7: The depth of digital trade provisions included in RTAs varies significantly

[Radar chart comparing the depth of commitments in 12 agreements across the following dimensions: free flow of financial data; protection of personal information; localization of financial data; data localization (non-financial); access to government data; free flow of data (non-financial); supporting data innovation; moratorium on customs duties on e-transmissions; mandatory disclosure of source code; facilitating digital inclusion; adoption of standards and conformity assessment; cooperation on cybersecurity matters; governance of AI and emerging technologies; competition policy in digital markets.]

Agreements covered: CPTPP (US) 03/2018, USMCA 11/2018, JPN-US 10/2019, DEPA (SG) 06/2020, AUS-SG DEA 08/2020, SG-UK DEA 02/2022, KOR-SG 01/2023, JPN-UK 10/2020, EU-UK 12/2020, AUS-UK 12/2021, NZ-UK 02/2022, RCEP 11/2020.

Source: Authors’ visualization based on Jones et al. (2024).


Notes: The value “0” (inner circle) means that the agreement does not contain a provision on a given issue. The value “1” means that
the agreement contains a provision couched in purely hortatory, or exhortatory, language. The value “2” means that the commitment
made is binding but non-specific. The value “3” means that commitments are binding and specific, with actions to be taken (or not
taken) described in clear and precise language. The value “4” means that the commitments are binding and specific, with obligations
that are more extensive in scope and very detailed. Points located between two lines correspond to the higher value in terms of
commitments, but with flexibilities. The greater the flexibilities, the closer the point is to the lower-value inner circle.


Disciplines on trade in services in RTAs are also an important channel through which governments’ trade policies and trade obligations can affect the policy environment for AI. However, the level of commitments undertaken differs significantly across economies. Services RTAs provide significantly higher levels of market access and national treatment commitments than under the WTO General Agreement on Trade in Services (GATS) for different modes of supply and services sectors, including for digital and AI-related services. For example, in the context of computer services, all WTO members from Europe, the Middle East and North America have undertaken some market access commitments on data processing services under the GATS and/or RTAs, and most WTO members have done so in Latin America and the Caribbean (88 per cent) and in Asia (91 per cent). However, in Africa, 26 per cent of WTO members have market access commitments on data processing services, whether under the GATS or RTAs, although that proportion will increase when the services commitments of the African Continental Free Trade Area (AfCFTA) enter into force and are notified to the WTO (Roy and Sauvé, forthcoming).90

(iv) International initiatives to address the challenges raised by AI

Policy initiatives

The last few years have witnessed a wave of various initiatives related to AI. This impetus has been driven by the realization that the inherently international nature of the risks and benefits associated with AI requires discussion, cooperation and solutions that are also international in nature (see Figure 3.8).91

These initiatives involve different stakeholders and take different forms. International initiatives involve a broad range of stakeholders, including governments, intergovernmental organizations, international standard-setting bodies and businesses. Most of these initiatives take the form of high-level principles, guidance, voluntary recommendations, scientific reports, codes of conduct, or lists of policy examples, while others take the form of international standards. In May 2024, the first binding treaty on AI – the Council of Europe Framework Convention on Artificial Intelligence and human rights, democracy and the rule of law92 – was adopted.

There are elements of complementarity among such initiatives, and alignment on core principles, but different initiatives prioritize different aspects of AI governance. Some of the common themes across various international initiatives include, for instance, promoting safe, secure, trustworthy, “human-centric”, ethical, transparent, accountable and interoperable AI, and identifying and mitigating AI-triggered risks through various actions, domestic policies and international cooperation.93 However, in certain instances, international initiatives appear to prioritize these themes differently. Some initiatives focus on issues like human rights and the ethics of AI, such as the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of AI, while others are centred around safety, security, the trustworthiness of AI or its interoperability, such as the Bletchley Declaration on AI Safety.

A number of initiatives also contain various common elements that have an important trade and WTO angle. These include:

• the recognition of the role of regulations and standards (including certification procedures) in governing AI and the importance of interoperability between such tools;
• the need to avoid regulatory fragmentation by using international standards to govern AI;
• the importance of an appropriate and balanced approach to protecting and enforcing IP rights;
• the importance of privacy, personal data protection and data governance;
• the importance of international cooperation, coordination and dialogue.

Importantly, explicit references to the WTO were included in the Final Report of the UN AI Advisory Body. The Final Report stresses the need for “proper orchestration” and coordination among the many international processes and organizations producing key documents related to AI governance, to enable a “shared normative foundation for all AI-related efforts”, expressly referring to various WTO agreements, such as the General Agreement on Tariffs and Trade (GATT), the General Agreement on Trade in Services (GATS), the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), the Technical Barriers to Trade (TBT) Agreement, the Information Technology Agreement (ITA) and the Trade Facilitation Agreement (TFA) (paragraph 76 and Figure 9 of the Final Report). The Final Report also notes the pivotal role of international standards and regulatory cooperation and recognizes the key role of the WTO in this area (paragraph 121).94 Finally, it recommends the creation of a “Global AI Data Framework” involving a variety of key actors, including economies and relevant international organizations, including the WTO (paragraph 170). More detail can be found in Annex 3.

Several of these initiatives also address the environmental impacts of AI. This is the case, for example, for the OECD AI Principles, the G20 AI Principles, the New Delhi Leaders’ Declaration,95 the UNESCO AI Recommendation, the G7 Guiding AI Principles and the G7 AI Code of Conduct, the Bletchley Declaration on AI Safety, and the UN AI Advisory Body. Further, the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Joint Technical Committee 1, Subcommittee 42 (ISO/IEC JTC 1/SC 42) is currently developing an international standard specifically about “environmental sustainability aspects of AI systems”. AI policy was a key issue at the G20 Summit in Rio de Janeiro in November 2024, with a focus on the use of AI for sustainable development; in the G7 Trieste Ministerial Declaration adopted in 2024, G7 economies also expressed their desire to participate in the G20 AI and sustainability discussions.


However, there is still no global alignment on AI terminology. Global agreement over key AI terminology and definitions may be a particularly important trade-related element, as it may help to ensure coherence and interoperability and to avoid fragmentation across various domestic AI regulatory regimes (Meltzer, 2023). As explained in this report, regulatory fragmentation can itself represent an important trade barrier, in particular for developing economies and micro, small and medium-sized enterprises (MSMEs). In this respect, the OECD AI Principles96 contain various AI definitions, of which the definitions of an “AI system”97 and an “AI system lifecycle”98 are key for the implementation of any domestic AI strategy or policy and, in particular, for regulation. The Council of Europe Framework Convention on Artificial Intelligence and human rights, democracy and the rule of law99 also contains a definition of an “AI system” which is virtually identical to that in the OECD Principles.100 The ISO/IEC JTC 1/SC 42, which is dedicated to AI standard-setting, adopted in 2022 a document101 containing a wide range of detailed definitions and terminology in the field of AI. It included a definition of an “AI system”, which shares some similarities but also includes some differences with the definition in the OECD Principles. Finally, while the G20 AI Principles102 have more or less integrated all of the OECD Principles, they do not expressly endorse the definitions, including that of an “AI system”. Unlike the OECD and ISO/IEC, the UNESCO Recommendation on AI Ethics does not define AI.103

Some initiatives seem to be moving beyond general principles or guidance into implementing more targeted or specific actions. For instance, in order to foster their knowledge on existing approaches and practices, the G20 launched the “Examples of National Policies to Advance the G20 AI Principles”104 and the G20 “Policy Examples on How to Enhance the Adoption of AI by MSMEs and Start-ups”.105 In 2024, the G7 announced plans to advance its 2023 Hiroshima AI process. The planned actions include expanding outreach to partner governments to broaden support for the G7 AI Guiding Principles and Code of Conduct, intensifying efforts to encourage adherence to these two instruments, and intensifying cooperation across multilateral forums to promote the G7 vision for advanced AI systems.106 In addition, following up on the 2023 Bletchley Declaration on AI Safety, governments have agreed to convene a panel of experts to produce an Intergovernmental Panel on Climate Change (IPCC)-like “State of the Science” Report,107 which will aim to review the latest cutting-edge research on the risks and capabilities of frontier AI models. The interim International Scientific Report on the Safety of Advanced AI was published in May 2024108 and summarizes the best of existing research, while identifying areas of research priority. It does not make policy or regulatory recommendations, but instead aims to inform both domestic and international policymaking. The final report is expected to be published ahead of the next AI summit, expected to be held in France in February 2025 (see also Annex 3).

The significant overlap between initiatives, the differing priorities and the lack of agreement on key terminology could create implementation challenges. This may limit efforts to prevent fragmentation. Alignment on core principles does not guarantee alignment on how such principles can be implemented in practice. In the absence of strong coordination, current international initiatives may not be sufficient to prevent regulatory fragmentation at the global level. The need to improve coordination was acknowledged in the Final Report (2024) of the UN AI Advisory Body (AIAB) published in September 2024 and the Global Digital Compact adopted by the UN General Assembly in September 2024. The Final Report identifies three “global AI governance gaps” to be addressed: a “representation” gap, a “coordination” gap and an “implementation” gap. The WTO is relevant for all three, and as noted above, specific references to the WTO are included in various places of the Final Report. As for the Global Digital Compact, it includes a commitment by UN members to initiate a Global Dialogue on AI governance involving governments and all relevant stakeholders (paragraph 56).

International initiatives to close the AI divide

Increasingly, international organizations are developing courses on AI and are integrating AI in their technical assistance activities, some of which have a trade component. The International Telecommunication Union (ITU), for example, offers an online course titled “The governance of artificial intelligence” and, in partnership with 40 other UN agencies, the ITU launched “AI for Good”, an action-oriented global platform on AI to identify practical applications of AI to advance the UN Sustainable Development Goals (SDGs).109 AI for Good includes a year-round online programme of webinars, with an annual in-person AI for Good Global Summit. Other specialized UN agencies have developed projects focused on their own areas of expertise. UNESCO, for example, has developed a Readiness Assessment Methodology to support its members in their implementation of the UNESCO Recommendation on the Ethics of AI, and is providing targeted technical assistance in this context through projects such as its “AI needs assessment in African countries” programme.110 Meanwhile, the United Nations Industrial Development Organization (UNIDO) has been organizing dialogues on “Empowering SMEs in Developing Countries through Artificial Intelligence”111 to promote AI adoption by MSMEs in developing economies and to enhance their competitiveness and sustainability through shared conversations. A related publication by UNIDO includes practical recommendations and tools to help MSMEs navigate challenges and leverage AI for various business functions and production areas. As for the World Bank, two notable projects with an AI dimension are the “Machine learning in Algeria” project, which aims to enhance efficiency and integrity in customs operations using machine learning, and “Fraud analytics in Kenya using AI applications”, which aims to improve revenue collection through anti-fraud measures.112 And the United Nations Interregional Crime and Justice Research Institute (UNICRI) has developed a course for law enforcement agencies to equip them with the necessary resources to institutionalize responsible AI, ensuring its alignment with human rights and ethics.113


Figure 3.8: Key international policy initiatives in the area of AI

May 2019 – OECD, AI Principles
June 2019 – G20, AI Principles
November 2021 – UNESCO, Recommendation on the Ethics of AI
May 2023 – G7, Hiroshima Process on Generative AI
October 2023 – G7, AI Guiding Principles, AI Code of Conduct
November 2023 – AI Safety Summit, “Bletchley Declaration” on AI Safety
March 2024 – UN General Assembly, AI Resolution
May 2024 – International Scientific Report on the Safety of Advanced AI (interim report)
May 2024 – Council of Europe, Framework Convention on AI, Human Rights, Democracy and the Rule of Law
May 2024 – Seoul Summit, agreement to launch an international network of AI Safety Institutes*
September 2024 – Publication of the Final Report of the UN AI Advisory Body
September 2024 – Adoption of the UN Global Digital Compact

* Signatories include Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, Singapore, the United Kingdom and the United States.


The UN AI Advisory Body (AIAB) has called for the establishment of a global fund for AI. Published in September 2024, the Final Report of the UN AIAB recommends the creation of a global fund for AI to “put a floor under the AI divide”. Managed by an independent governance structure, the fund would receive financial and in-kind contributions from public and private sources to facilitate access to AI enablers, such as shared computing resources for model training, sandboxes and curated datasets “to catalyse local empowerment for the SDGs”.114 The Global Digital Compact,115 which was adopted by the United Nations General Assembly in September 2024 after the publication of the UN AIAB report, calls for “innovative voluntary financing options for artificial intelligence capacity-building that take into account the recommendations of the High-level Advisory Body on Artificial Intelligence on a Global Fund on AI”.

Endnotes

1 See https://ourworldindata.org/extreme-poverty-in-brief.

2 See https://eto.tech/.

3 See https://eto.tech/.

4 See https://oecd.ai/en/data.

5 See https://stackoverflow.com/.

6 See https://ourworldindata.org/grapher/share-new-artificial-intelligence-cs-phds-female.

7 At the same time, AI also holds procompetitive potential. For instance, it empowers consumers to utilize abundant data for personalized products and transactions, and guides them in navigating complex or uncertain markets to select the best offers based on preferences. This may lead to the emergence of “algorithmic consumers”, whose decision-making is partially automated through algorithms (Gal and Elkin-Koren, 2017).

8 Various competition enforcement cases were recently launched against AI companies. For example, the US Federal Trade Commission (FTC) went to court to block a proposed acquisition of Arm Ltd. by Nvidia, one of the leading producers of advanced chips powering AI, which resulted in the latter abandoning the deal (see https://www.ftc.gov/news-events/news/press-releases/2022/02/statement-regarding-termination-nvidia-corps-attempted-acquisition-arm-ltd). The European Commission, like the UK Competition and Markets Authority and the FTC, also started looking into whether the investment of Microsoft in OpenAI constituted a merger (European Commission, 2024a). Cognizant of the risks that AI poses for competition, the competition authorities of the European Union, the United Kingdom and the United States of America issued in July 2024 a Joint Statement on Competition in Generative AI Foundation Models and AI Products laying out various principles for protecting competition in the AI ecosystem (see https://www.gov.uk/government/publications/joint-statement-on-competition-in-generative-ai-foundation-models-and-ai-products/joint-statement-on-competition-in-generative-ai-foundation-models-and-ai-products).

9 An example of this is the fact that the Google search engine can outperform that of Microsoft because the former has wider access to rarer queries. Having a variety of data and, in particular, the ability to capture more rare events are also important for making better predictions (Goldfarb and Trefler, 2018).

10 However, advancements such as federated learning, which allows entities in various locations to build machine learning models collaboratively, without exchanging data (it is the algorithm that is transferred, not the data itself), and data trusts, a system and legal entity that manages someone’s data on their behalf, could mitigate the challenges linked to cross-border data flows (Bonawitz et al., 2019; World Economic Forum, 2020a).

11 AI can turn even non-personal enterprise and operational data, such as stock inventory, into privacy risks. These data, stored within corporate networks, can inadvertently reveal information about personnel involved in data collection or analysis. In addition, metadata in online communications, such as phone numbers, emails or IP addresses, can make users identifiable even if the data do not directly reveal personal identities (Lee-Makiyama, 2018).

12 See https://www.consilium.europa.eu/en/press/press-releases/2024/10/10/eu-brings-product-liability-rules-in-line-with-digital-age-and-circular-economy/#:~:text=The%20EU’s%20product%20liability%20regime,caused%20the%20injury%20or%20damage and https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52022PC0496.

13 Trustworthiness is mentioned in international AI principles and declarations as a key attribute that an AI system should possess. See, for example, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023, https://www.mofa.go.jp/policy/economy/g20_summit/osaka19/pdf/documents/en/annex_08.pdf, Organisation for Economic Co-operation and Development (2019a) and United Nations Educational, Scientific and Cultural Organization (2021). In AI terminology, “trustworthiness” means the “ability to meet stakeholder … expectations in a verifiable way” (e.g., via certification against technical specification in a regulation or standard). More specifically, the trustworthiness of an AI system relates to its ability to meet various expectations, for example in terms of its “reliability”, “availability”, “resilience”, “security”, “privacy”, “safety”, “accountability”, “transparency”, “integrity”, “authenticity”, “quality” and “usability”. See ISO/IEC standard 22989:2022, subclause 3.5.16 (trustworthiness – definition) and clause 5.15 (trustworthiness – concept). See also ISO/IEC TR 24028:2020. While the composite term “safe and trustworthy” AI is frequently used, given that “safety” is subsumed into the above definition of trustworthiness, in this report, for simplicity, we will only refer to trustworthy AI.

14 For instance, certain risks may be associated with AI-enabled autonomous vehicles that stem not from the physical components of the vehicle. Instead, the AI algorithm (and how it has been trained) may lead the vehicle to “behave” in a risky manner, causing not only material harms (e.g., physical injuries to the driver, passengers or pedestrians) but also, uniquely, immaterial harms (e.g., privacy, cybersecurity, etc.). See UK Parliament House of Commons’ Report on Self-Driving Vehicles (HC 519, 15 Sep 2023), paragraph 66 (noting studies warning that “fleets or models of self-driving vehicles could be targeted by ‘malicious, possibly terrorist, systemic hacking’”). Regulatory solutions to such immaterial risks may also present complex ethical questions, e.g., the famous “trolley problem”, whereby an autonomous vehicle has to “choose”, for example, between colliding with an elderly person and colliding with a mother and her young child (e.g., Wells (2023); Lin (2021)).


15 “Injury to pedestrians due to the malfunction of an autonomous vehicle AI system would be tangible physical harm. Some harms, however, such as psychological harms, may not be as tangible or quantifiable. Other aspects of harm that may be intangible or difficult to directly observe include bias or discrimination that may disproportionately and negatively impact particular communities but be difficult to observe at the level of the individual. Violations of the fundamental right to privacy may also be intangible, such as the non-transparent use of an employee monitoring AI system.” OECD Working Party on Artificial Intelligence Governance: Stocktaking for the development of an AI incident definition, document EP/AIGO(2022)11/FINAL (21 Oct 2023).

16 See, e.g., Report from the European Commission to the European Parliament, the Council and the European Economic and Social Committee on “The Safety And Liability Implications of Artificial Intelligence, the Internet of Things and Robotics”, COM(2020) 64 final (19 February 2020), page 8. See https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0064.

17 See https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0.

18 The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement) addresses inter alia the protection of trade secrets, including imposing certain conditions when proprietary information (“undisclosed test and other data”) is accessed and used by governments for regulatory purposes, albeit only in the context of “marketing approval” (e.g., conformity assessment procedures such as product certification and approval) of pharmaceuticals and agricultural chemical products (Article 39.3). Similarly, the Technical Barriers to Trade (TBT) Agreement requires that WTO members ensure that the confidentiality of information in the context of conformity assessment procedures (e.g. product certification and approval) is (i) respected for imported and national products “in the same way” and (ii) respected “in such a manner that legitimate commercial interests are protected.” (Article 5.2.4).

19 Mitchell et al. (2023) contains a detailed analysis of circumstances when regulating AI can be performed without need to access source code (“black box” testing for low-risk AI systems), and of circumstances when a deeper understanding and explanation of the AI system’s decisions and recommendations is needed (high-risk AI systems) and justifies requiring access to the code (“white box” testing).

20 “Evolution” in the sense that some AI systems allow the product to better perform, adapt and fine-tune over time for a given circumstance or for a given user, as it works in practice and receives and crunches more data; a sort of “personalized AI product” similar to the idea of “personalized medicine” (e.g., using knowledge of a patient’s genetic profile to select “the proper medication or therapy and administer it using the proper dose or regimen” – see https://www.genome.gov/genetics-glossary/Personalized-Medicine). In fact, AI can be a driver and enabler for advancing personalized medicine in the area of genomics medicine. See Cesario et al. (2023); World Health Organization (2021).

21 In this respect, the EU AI Act (2024a), for instance, notes in its preamble (recital 5), that “AI may generate risks and cause harm to public interests and fundamental rights” and that “[s]uch harm

The strength and advantages with AI/ML are the ability to train and improve the system based on new real-world data. However, the system also needs to be continuously safe for patients and other users, as well as comply with the applicable regulations regarding, for example, validation”.

23 For instance, “[c]ustomisation makes traceability and enforcement of product safety and cybersecurity more challenging – many products (or properties) are changing constantly” (Lund et al., 2023).

24 The EU AI Act (2024a), for instance, seems to contain certain provisions on this issue, as it requires that AI systems be re-certified if, after being deployed, they present unforeseen “substantial modifications” (as defined in Article 3(23)), i.e., “… whenever a change occurs which may affect the compliance of a high-risk AI system with this Regulation (e.g. change of operating system or software architecture), or when the intended purpose of the system changes, that AI system should be considered a new AI system which should undergo a new conformity assessment”. However, “changes occurring to the algorithm and the performance of AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e., automatically adapting how functions are carried out) should not constitute a substantial modification, provided that those changes have been pre-determined by the provider and assessed at the moment of the conformity assessment”. The EU AI Act also foresees that ex post marketing surveillance over AI products may: “ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed”.

25 ISO/IEC TR 24368 (2022) gives examples of areas in which there is an “increasing risk for undesirable ethical and societal outcomes and harms”, e.g.: “financial”; “psychological”; “physical health or safety”; “intangible property (for example, IP theft, damage to a company’s reputation)”; “social or political systems (for example, election interference, loss of trust in authorities)”; and “civil liberties (for example, unjustified imprisonment or other punishment, censorship, privacy breaches)”.

26 Commenting on the fact that the AI Act’s implementation may involve the adoption of technical standards for addressing both material (e.g., health) and immaterial risks (e.g., fundamental rights), Smuha and Yeung (2024) observe that: “… unlike risks to safety generated by chemicals, machinery or industrial waste, all of which can be materially observed and measured, fundamental rights are, in effect, political constructs. These rights are accorded special legal protection so that an evaluation of alleged interference requires close attention to the nature and scope of the relevant right and the specific, localized context in which a particular right is allegedly infringed. We therefore seriously doubt whether fundamental rights can ever be translated into generalized technical standards that can be precisely measured in quantitative terms, and in a manner that faithfully reflects what they are, and how they have been interpreted under the European Charter on Fundamental Rights and the European Convention on Human Rights”.

27 See also WTO official document number G/TBT/GEN/356, available at https://docs.wto.org/.

28 See, for example, the European Union’s Artificial Intelligence Act (AIA) (https://artificialintelligenceact.eu/), under which all
might be material or immaterial, including physical, psychological, providers of “general-purpose AI models” are mandated to create
societal or economic harm.” (italics added). technical documentation that details the training and testing
processes, establish a copyright policy, and provide a sufficiently
22 Lund et al. (2023) present a useful example of this tension
detailed summary of the content used for training. In contrast,
with respect to medical devices, explaining that one of the “main
free and open AI models are only obliged to establish a copyright
obstacles of using AI in healthcare, and therefore AI-based medical
policy and submit a summary of training content.
software”, is that “how to address continuous change i.e., locked
algorithms vs non-locked autonomous systems is a challenge.

29 RDC nº 657/2022 – Anvisa: https://antigo.anvisa.gov.br/documents/10181/5141677/RDC_657_2022_.pdf/f1c32f0e-21c7-415b-8b5d-06f4c539bbc3.

30 Such as Israel, Japan, the Republic of Korea, Singapore, the United Kingdom and the United States. See the Global AI Index 2024 published by Tortoise Media, which ranks economies by their AI capacity at the international level: https://www.tortoisemedia.com/intelligence/global-ai/.

31 While copyright protects creations of the (human) mind, patent protection is available for technical innovations (by humans).

32 US Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, available at https://copyright.gov/ai/ai_policy_guidance.pdf; US Court of Appeals for the 9th Circuit, Naruto v. Slater, https://cdn.ca9.uscourts.gov/datastore/opinions/2018/04/23/16-15469.pdf; US District Court for the District of Columbia, Thaler v. Perlmutter, https://ecf.dcd.uscourts.gov/cgi-bin/show_public_doc?2022cv1564-24.

33 CJEU, Infopaq International A/S v Danske Dagblades Forening, Case C-5/08 (Intellectual Property Repository, 2023; Zhou, 2019).

34 Strategies articulate the government’s vision regarding the contribution of science, technology and innovation (STI) to the social and economic development of an economy. They set priorities for public investment in STI and identify the focus of government reforms, for instance in areas such as funding public research and promoting business innovation (OECD, 2016b).

35 See the OECD AI database: https://oecd.ai/en/dashboards/overview.

36 Although not a country, the European Union has the power to adopt EU-wide trade-related legislation within the parameters set by its founding treaties.

37 See https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383.

38 See https://digital-strategy.ec.europa.eu/en/policies/plan-ai.

39 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). The AIA was published in the EU Official Journal on 12 July 2024 and entered into force 20 days later. However, most of the AIA’s rules become applicable only 24 months after its entry into force, although it provides for shorter applicability periods with respect to certain rules (e.g., bans on “prohibited practices” that are listed as posing “unacceptable risks” will already apply six months after entry into force), as well as longer periods for others (e.g., 36 months for certain “high-risk systems” covered by existing EU harmonization legislation and for general-purpose AI systems on the EU market before the Act applies to them).

40 In the notification in 2021 of a draft of the AIA (document G/TBT/N/EU/850), the European Union explained that this proposal was meant to provide: “… a set of recommendations intended to help the organization develop, provide, or use AI systems responsibly in pursuing its objectives and meet applicable requirements, obligations related to interested parties and expectations from them. It includes the following: approaches to establish trust in AI systems through transparency, explainability, controllability, etc.; engineering pitfalls and typical associated threats and risks to AI systems, along with possible mitigation techniques and methods; and approaches to assess and achieve availability, resiliency, reliability, accuracy, safety, security and privacy of AI systems. This document is applicable to any organization, regardless of size, type and nature, that provides or uses products or services that utilize AI systems”.

41 However, some experts argue that some important provisions of the AI Act do not follow a purely risk-based approach (Ebers, 2024).

42 See https://oecd.ai/en/.

43 For example, the UK has committed £100 million toward building a “public foundation model” to support academic, small business and public sector applications (see https://www.gov.uk/government/news/initial-100-million-for-expert-taskforce-to-help-uk-build-and-adopt-next-generation-of-safe-ai), and the US National Artificial Intelligence Research Resource is working in a similar direction (see https://new.nsf.gov/focus-areas/artificial-intelligence/nairr).

44 Standards are one of the three types of technical barriers to trade (TBT) measures that establish product specifications. They differ from technical regulations, however, as standards are voluntary documents. It is also not uncommon for standards adopted by governments to be made mandatory later on, thus becoming technical regulations. Standards can be developed by different entities within WTO Members, including both governmental and non-governmental bodies (WTO, 2021).

45 See, e.g., Kerry (2024). For instance, in 2024, China issued draft Guidelines for AI Standardisation, which propose to form more than 50 national and industry-wide standards and more than 20 international standards for AI by 2026 (see https://www.reuters.com/technology/china-issues-draft-guidelines-standardising-ai-industry-2024-01-17/ and https://mmlcgroup.com/miit-ai/). The European Commission mandated European standardisation organizations to develop AI-related standards, taking into account that standards will play an important role in fulfilling requirements under the EU AI Act (https://ec.europa.eu/transparency/documents-register/detail?ref=C(2023)3215&lang=en).

46 For example, China’s standards strategy of 2021 identified AI as one of the key areas. See Kerry (2024).

47 This is based on data from the AI Standards Hub and is provided for illustration purposes only. The data is presented without prejudice to, and should not be understood as a position on, whether these documents are “standards” within the definition of Annex 1 of the TBT Agreement. See AI Standards Search – AI Standards Hub.

48 See the AI Standards Hub at https://aistandardshub.org.

49 For example, the WTO Technical Barriers to Trade (TBT) Agreement requires that WTO members use relevant international standards as a basis for their domestic standards, technical regulations and certification procedures (see Chapter 4).

50 See https://webarchive.nla.gov.au/awa/20220816053410/https://www.industry.gov.au/data-and-publications/australias-artificial-intelligence-action-plan.

51 See http://gd.china-embassy.gov.cn/eng/zxhd_1/202310/t20231024_11167412.htm.

52 See https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/. See also NIST, “A Plan for Global Engagement on AI Standards” (final, July 2024 – available at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-5.pdf), and the US Government National Standards Strategy for Critical and Emerging Technology, presented at the TBT Committee meeting held on 21-23 June 2023 (https://docs.wto.org/dol2fe/Pages/SS/directdoc.aspx?filename=q:/G/TBT/M90.pdf&Open=True, paragraph 6.32).
53 For example, Australia has been actively engaged in the work of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) Joint Technical Committee (ISO/IEC JTC1/SC42) and, in 2024, Australia announced the adoption of one of the ISO/IEC JTC1/SC42 standards (see https://www.standards.org.au/news/standards-australia-adopts-the-international-standard-for-ai-management-system-as-iso-iec-42001-2023).

54 EU AI Act, Preamble, Recital (176).

55 While there is no globally agreed definition, the European Parliament Research Service notes in its paper on “Artificial Intelligence Act and Regulatory Sandboxes” (https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733544/EPRS_BRI(2022)733544_EN.pdf) that “regulatory sandboxes generally refer to regulatory tools allowing businesses to test and experiment with new and innovative products, services or businesses under supervision of a regulator for a limited period of time. As such, regulatory sandboxes have a double role: 1) they foster business learning, i.e. the development and testing of innovations in a real-world environment; and 2) support regulatory learning, i.e. the formulation of experimental legal regimes to guide and support businesses in their innovation activities under the supervision of a regulatory authority”.

56 See https://aiverifyfoundation.sg.

57 I.e., the initial version of a product that includes only the core features necessary to meet basic user needs and gather feedback for future improvements.

58 WTO official documents IP/N/1/JPN/36, IP/N/1/JPN/C/6 and IP/C/M/92/Add.1, available at https://docs.wto.org/.

59 The UK Copyright, Designs and Patents Act 1988 provides that authorship is attributed to “the person by whom the arrangements necessary for the creation of the work are undertaken”, paragraph 9(3). Other common law jurisdictions such as India (Copyright Act 1957, paragraph 2(d)), Ireland (Copyright and Related Rights Act 2000, paragraph 21), New Zealand (Copyright Act 1994, paragraph 5(1)) and South Africa (Copyright Act 1978, paragraph 1(iv)) follow the UK approach.

60 For more information on developing economies pursuing open government data policies, see Verhulst and Young (2017).

61 See https://data.gov.in/.

62 See https://perma.cc/JAX3-55U8. The OECD recently launched an Open Government Data project to map practices across economies and assess the impact of open government data (OECD, 2019b).

63 Unlike anti-trust policies, ex ante regulations apply at an industry or sectoral level and attempt to define how the largest companies must compete in the market. One such set of regulations is the European Union Digital Markets Act (DMA), which entered into force in November 2022 and became applicable, for the most part, on 2 May 2023. The DMA is designed to address the market power of major digital platforms, referred to as “gatekeepers”. It aims to ensure fair competition and innovation in the digital market by preventing gatekeepers from imposing unfair conditions on businesses and consumers (European Commission, 2022). The DMA includes specific obligations for these gatekeepers, such as allowing third parties to interoperate with their services and prohibiting them from favouring their own services. The UK Digital Markets, Competition and Consumers Bill is another example of a new ex ante approach to digital markets. The Bill encourages the most powerful firms in dynamic digital markets to work with regulators to ensure that competition is maintained on an ongoing basis. See https://www.gov.uk/government/news/changes-to-digital-markets-bill-introduced-to-ensure-fairer-competition-in-tech-industry#:~:text=The%20Digital%20Markets%2C%20Competition%20and,in%20and%20innovate%20new%20technology.

64 The Safe Harbour Privacy Principles, which were developed between 1998 and 2000 to prevent private organizations within the European Union or United States that store customer data from accidentally disclosing or losing personal information, were brought down by the European Court of Justice (ECJ) in 2015 after Max Schrems, an Austrian activist, lawyer and author, brought a case against Facebook for its privacy violations, including violations of European privacy laws and the alleged transfer of personal data to the US National Security Agency (NSA) as part of the NSA’s PRISM data-mining programme. The Safe Harbour Privacy Principles were replaced with the Privacy Shield until 2020, when the ECJ once again brought it down. A new agreement was reached in July 2023 to allow data flows based on the “adequacy decision” mechanism of the EU General Data Protection Regulation.

65 The United States initiated export controls on semiconductors in 2022, and these restrictions were broadened over time. In 2023, the Netherlands imposed restrictions on high-end chipmaking. The United Kingdom, Canada and Japan followed with their own restrictions. See Financial Times (2022) and Wolff (2022).

66 See https://www.wto.org/english/tratop_e/dispu_e/cases_e/ds615_e.htm.

67 See https://digital-strategy.ec.europa.eu/en/library/eu-us-terminology-and-taxonomy-artificial-intelligence.

68 See https://digital-strategy.ec.europa.eu/en/library/ai-public-good-eu-us-research-alliance-ai-public-good.

69 See https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/12/u-s-singapore-critical-and-emerging-technology-dialogue-joint-vision-statement/.

70 See https://www.raadh.mercosur.int/wp-content/uploads/2024/04/DECLARACION-SOBRE-LOS-PRINCIPIOS-DE-DERECHOS-HUMANOS-EN-EL-AMBITO-DE-LA-INTELIGENCIA-ARTIFICIAL.pdf.

71 See https://minciencia.gob.cl/uploads/filer_public/40/2a/402a35a0-1222-4dab-b090-5c81bbf34237/declaracion_de_santiago.pdf.

72 See https://www.stjornarradid.is/library/04-Raduneytin/ForsAetisraduneytid/Framtidarnefnd/AI%20in%20the%20Nordic-Baltic%20region.pdf.

73 See https://dig.watch/resource/auda-nepad-white-paper-regulation-and-responsible-adoption-of-ai-in-africa-towards-achievement-of-au-agenda-2063.

74 See https://au.int/en/pressreleases/20240617/african-ministers-adopt-landmark-continental-artificial-intelligence-strategy#:~:text=The%20Continental%20AI%20Strategy%20provides,potential%20risks%2C%20and%20leveraging%20opportunities.

75 The seven guiding principles are transparency and explainability, fairness and equity, security and safety, robustness and reliability, human-centricity, privacy and data governance, and accountability and integrity.

76 See https://www.oecd.org/en/topics/sub-issues/ai-principles.html.

77 National recommendations include nurturing AI talent and upskilling the workforce, supporting the AI innovation ecosystem and promoting investment in AI start-ups, investing in AI research and development, promoting adoption of useful tools by businesses to implement the ASEAN Guide on AI Governance and Ethics, and raising awareness among citizens on
the effects of AI in society. The regional recommendations are: to establish an ASEAN Working Group on AI Governance, consisting of representatives from member states, to drive and oversee AI governance initiatives in the region; to adapt the AI Guide to address the governance of generative AI; and to compile a compendium of use cases demonstrating practical implementation of the AI Guide by organizations operating in ASEAN.

78 Digital economy agreements are a new type of agreement. They aim to regulate digital trade, data flows and emerging technologies like AI. Digital economy agreements reflect governments’ response to the need for regulatory frameworks tailored to the complexities of digital trade and the digital economy. To date, four digital economy agreements have been signed and have entered into force: the Singapore-Australia Digital Economy Agreement (SADEA), signed in 2020; the Digital Economy Partnership Agreement (DEPA) between Chile, New Zealand and Singapore, signed in 2020; the United Kingdom-Singapore Digital Economy Agreement (UKSDEA), signed in 2022; and the Republic of Korea-Singapore Digital Partnership Agreement (KSDPA), signed in 2022. Others under negotiation include the ASEAN Digital Economy Framework Agreement (DEFA) and the EFTA-Singapore Digital Economy Agreement.

79 KSDPA, DEPA and United Kingdom-Australia.

80 United Kingdom-Australia, United Kingdom-New Zealand and United Kingdom-Singapore (the latter specifies “where appropriate”).

81 United Kingdom-Ukraine, United Kingdom-Singapore and United Kingdom-New Zealand.

82 United Kingdom-Ukraine, KSDPA, United Kingdom-Singapore and United Kingdom-New Zealand.

83 United Kingdom-Ukraine, United Kingdom-Singapore and United Kingdom-New Zealand.

84 United Kingdom-Ukraine, United Kingdom-Singapore, SADEA and United Kingdom-Australia.

85 United Kingdom-Ukraine and United Kingdom-New Zealand.

86 Article 132-V.

87 Article 8.61-R.

88 Article 20.4.

89 New Zealand decided to exclude provisions on source code from its agreements following a November 2021 decision of the Waitangi Tribunal, which found the source code provision in the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) to be in breach of the Treaty of Waitangi after Māori tech experts argued that there was a risk of biased assumptions in algorithmic design or training data. See Jones (2024).

90 The analysis in Roy and Sauvé (forthcoming) is based on 142 RTAs notified under GATS Article V.

91 See, for example, the Bletchley Declaration (2023b), which states that “[m]any risks arising from AI are inherently international in nature, and so are best addressed through international cooperation”.

92 See https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence.

93 While, so far, AI global governance initiatives have, on the one hand, “yielded similarities in language, such as the importance of fairness, accountability, and transparency”, on the other hand, approaches to defining AI are less coordinated and coherent (https://www.un.org/sites/un2.un.org/files/ai_advisory_body_interim_report.pdf).

94 I.e., calling for an “AI Standards Summit” involving key international standard-setting bodies (e.g., the International Telecommunication Union (ITU), the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE)) and expressly indicating that the WTO should be involved in these discussions.

95 See https://www.mea.gov.in/Images/CPV/G20-New-Delhi-Leaders-Declaration.pdf.

96 See https://www.oecd.org/en/topics/sub-issues/ai-principles.html.

97 See the OECD revised definition of “AI system”: an “AI system” is “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.

98 An “AI system lifecycle” involves the: “i) ‘design, data and models’; which is a context dependent sequence encompassing planning and design, data collection and processing, as well as model building; ii) ‘verification and validation’; iii) ‘deployment’; and iv) ‘operation and monitoring’. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase” – see OECD AI Principles (2019), section 1.I.

99 See https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence.

100 So far, some domestic AI regulations, such as the EU’s AIA and Brazil’s draft Senate Bill n. 2338/2023, seem to have adopted, almost verbatim, the OECD Principles definitions, including that of “AI system”.

101 See ISO/IEC 22989:2022 (available at https://standards.iso.org/ittf/PubliclyAvailableStandards/index.html).

102 See https://wp.oecd.ai/app/uploads/2021/06/G20-AI-Principles.pdf.

103 It states in this respect that it “does not have the ambition to provide one single definition of AI, since such a definition would need to change over time, in accordance with technological developments. Rather, its ambition is to address those features of AI systems that are of central ethical relevance”. Yet, the Recommendation does provide a broad understanding of what “AI systems” mean, i.e., “systems which have the capacity to process data and information in a way that resembles intelligent behaviour, and typically includes aspects of reasoning, learning, perception, prediction, planning or control” (paragraph 2). For UNESCO, such broad understanding is “crucial as the rapid pace of technological change would quickly render any fixed, narrow definition outdated, and make future-proof policies infeasible” (UNESCO, 2023).

104 See https://g20.utoronto.ca/2020/2020-g20-digital-0722.html.

105 See http://www.g20italy.org/wp-content/uploads/2021/11/Annex1_DECLARATION-OF-G20-DIGITAL-MINISTERS-2021_FINAL.pdf.

106 See https://www.digital.go.jp/assets/contents/node/basic_page/field_ref_resources/390de76d-d4f5-4f45-a7b4-f6879c30c389/0fbffe8a/20231201_en_news_g7_result_00.pdf.

107 See https://www.gov.uk/government/publications/ai-safety-summit-2023-chairs-statement-state-of-the-science-2-november/state-of-the-science-report-to-understand-capabilities-and-risks-of-frontier-ai-statement-by-the-chair-2-november-2023.
108 See https://assets.publishing.service.gov.uk/media/66f5311f080bdf716392e922/international_scientific_report_on_the_safety_of_advanced_ai_interim_report.pdf.

109 See https://aiforgood.itu.int/.

110 See https://unesdoc.unesco.org/ark:/48223/pf0000375322/PDF/375322eng.pdf.multi.

111 See https://aiforgood.itu.int/about-ai-for-good/un-ai-actions/unido/.

112 See https://aiforgood.itu.int/about-ai-for-good/un-ai-actions/wbg/.

113 See https://aiforgood.itu.int/about-ai-for-good/un-ai-actions/.

114 See https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf.

115 See https://www.un.org/techenvoy/global-digital-compact.