Technology and Politics
Group Assignment
Integrated Programme in Management
Indian Institute of Management Indore
Global Challenges in Regulating Digital Technologies
Group 5, Section A
Jigyasa Chauhan 2022IPM063
Naga Praneeth Kalisetty 2022IPM087
Nalla Sashank Vimal 2022IPM088
Sanjeev Prasath N M 2022IPM122
Shlok Gupta 2022IPM163
Vedant Anil Patne 2022IPM149
Abstract
The digital revolution has outpaced our regulatory systems, creating significant global
challenges. Emerging technologies like artificial intelligence, blockchain, and decentralized
platforms are evolving so rapidly that they consistently expose critical vulnerabilities in
existing legal frameworks, ranging from data privacy risks to algorithmic biases and complex
jurisdictional disputes. Our assignment explores the intricate landscape of digital technology
regulation, drawing on real-world examples such as the Cambridge Analytica scandal and the
European Union's GDPR implementation. We critically examine the profound ethical
complexities surrounding AI, the concentrated power of technology corporations within data
ecosystems, and the persistent shortcomings of international regulatory cooperation. By
comparing diverse regulatory approaches – including Europe's more stringent data protection
model and the United States' market-oriented strategy – we illuminate the urgent need for
adaptive, nuanced governance models. These frameworks must balance technological
innovation with robust protections for individual privacy and societal interests. We argue for
a forward-looking approach to digital regulation: one that can dynamically
respond to technological change while maintaining core principles of transparency,
accountability, and public trust in an increasingly interconnected digital landscape.
Introduction
Digital technologies have become integral to global economies, political systems, and societal
interactions. However, their transformative nature has outpaced traditional regulatory
frameworks, creating significant challenges for governments, policymakers, and other
regulatory bodies tasked with ensuring ethical use, accountability, and security.
At the intersection of technology and politics, the regulatory landscape is riddled with
challenges. Digital technologies influence elections, public discourse, and policymaking while
simultaneously creating vulnerabilities such as data breaches, misinformation, algorithmic
bias, and cybersecurity threats. The trade-off between fostering innovation and safeguarding
public interest is a challenge faced by policymakers. The global nature of these technologies
further complicates regulation as cross-border data flows, decentralized systems like
blockchain, and jurisdictional conflicts create enforcement gaps. For instance, emerging
technologies have introduced new business models that challenge traditional regulatory
categories, such as decentralized finance (DeFi) platforms or AI-driven tools that automate
decision-making processes. The divergence of privacy laws across jurisdictions, combined
with the ease with which technology and data cross borders, is one of the biggest challenges
regulatory bodies face.
Digital technologies now influence every facet of our lives. For example, AI systems used in
law enforcement raise ethical dilemmas about bias and discrimination; blockchain technologies
challenge traditional financial oversight; and social media platforms redefine political
mobilization while amplifying risks like misinformation. This underscores the urgent need for
adaptive governance models that address the ethical dilemmas posed by these technologies
while fostering international cooperation to manage their cross-border implications.
The cost of weak regulation is evident in the 2018 Cambridge Analytica scandal. The absence
of robust data privacy rules allowed Cambridge Analytica to harvest personal data from over
87 million Facebook users without their direct consent. That data was used to build
psychological profiles of users, who were then targeted with political advertising that
influenced the outcomes of major events. Incidents like this prompted stricter data privacy
laws such as the EU's General Data Protection Regulation, but by then significant harm had
already been done. This underscores the need for proactive regulation of widely adopted
technologies.
The Need for Regulation
Regulation is essential to ensure digital technologies are used ethically, equitably, and securely.
Left unchecked, these technologies can exacerbate inequalities, infringe on privacy rights, and
enable monopolistic practices. The rapid rise of generative AI tools, particularly OpenAI's
ChatGPT, is a prime example of how innovation can outpace regulation, creating challenges
for policymakers. Launched in November 2022, ChatGPT became the fastest-growing consumer
application to date, reaching 100 million users within two months. Its ability to generate
human-like text revolutionised industries such as education, customer service, and content
creation.
However, its swift adoption exposed regulatory gaps that left governments and institutions
struggling to address its ethical, legal and societal implications.
Italy's Data Protection Agency temporarily banned ChatGPT in March 2023 for violating the
GDPR by failing to provide transparency about its data collection practices and adequate
safeguards for minors. Generative AI's tendency to "hallucinate", that is, to generate
convincing but false information, poses a risk of spreading misinformation. The model's
training on copyrighted material has also raised questions about intellectual property rights:
many artists, writers, and publishers have expressed concern about their work being used
without compensation or acknowledgement. In the absence of clear usage guidelines,
generative AI can likewise be turned to unethical ends such as phishing and deepfakes.
A similar case is Tesla's Autopilot system. Tesla markets Autopilot as a driver-assistance
feature and emphasises that drivers must remain attentive; in practice, however, many users
over-rely on the system, leading to accidents. The central debate concerns liability: when the
system fails, is the manufacturer or the driver at fault? Compounding the issue, new features
are introduced through over-the-air updates without prior regulatory approval or extensive
testing.
Telecommunications, Internet Regulation, and Key Debates
Microsoft Antitrust Case
The landmark United States v. Microsoft case (1998) highlighted the risks of monopolistic
practices in the digital era. Microsoft was accused of bundling its Internet Explorer browser
with its Windows operating system to eliminate competition from Netscape Navigator. The
court ruled that such practices violated antitrust laws by restricting consumer choice and
innovation.
Net Neutrality Debates
Net neutrality—the principle that internet service providers (ISPs) must treat all online traffic
equally—remains contentious. Advocates argue that it ensures a level playing field for content
providers and protects free speech. Conversely, opponents claim it stifles investment in
broadband infrastructure. The Sixth Circuit's 2025 ruling in the United States, which struck
down the FCC's net neutrality rules, reignited the debate over whether antitrust law or direct
regulation better serves open-internet principles.
Telecommunications Regulation
Traditional frameworks designed for voice communication now encompass broadband services
and digital platforms. For example, the reclassification of broadband under Title II in the U.S.
sought to enforce net neutrality but faced significant legal pushback.
History of Regulation
Society has come a long way, from the U.S. Radio Act of 1912, enacted to curb the chaos
caused by unregulated frequency usage in radio communications, to the U.S. Blueprint for an
AI Bill of Rights (2022).
One of the earliest instances of government intervention to regulate a technology in the
interest of public safety and orderly operation, the Radio Act required licensing for all radio
operators, mandated that ships at sea maintain 24-hour radio communication, and allocated
specific frequencies to different uses, introducing the concept of spectrum regulation.
Beyond radio, regulations on patent rights and factory conditions developed between the 15th
and 19th centuries were likewise ways of coping with evolving methods of production and
technology. We find ourselves in a similar position as we navigate newer technologies.
Mindful of recent data breaches, whether the Facebook-Cambridge Analytica scandal or the
Marriott International breach, let us turn to more recent regulatory developments.
Data privacy as a fundamental right: Justice K.S. Puttaswamy v. Union of India (2017) was a
landmark judgment in which the Indian Supreme Court declared privacy a fundamental right
under Article 21 of the Constitution. The judgment influenced the drafting of the Digital
Personal Data Protection Act, 2023, which governs data collection, processing, and storage,
including for AI applications. AI systems relying on personal data must now adhere to strict
privacy safeguards. The growing importance of defining privacy is also visible in additional
provisions such as the "right to be forgotten" in the EU's GDPR, discussed further below.
Facebook- Cambridge Analytica Case: Highlighting Challenges
The regulation of digital technologies presents one of the most significant governance
challenges of our time and is perhaps best exemplified by the Facebook-Cambridge Analytica
scandal. In this case, the data of up to 87 million Facebook users was harvested and used to
influence key political events such as the 2016 U.S. presidential election and the Brexit
referendum, revealed vulnerabilities in global regulatory frameworks. It underscored how the
interconnected, fast-evolving nature of digital platforms exposes gaps in oversight and
accountability. Addressing these challenges requires confronting issues such as transnational
jurisdictional conflicts, algorithmic biases, data sovereignty concerns, and the inherent
difficulties of international cooperation.
One of the most immediate issues arises from the global nature of platforms like Facebook.
The data harvested by Cambridge Analytica came from users spread across multiple
jurisdictions, including the U.S., the UK, and the EU, with no single authority clearly
competent to act. In
the U.S., the Federal Trade Commission (FTC) levied a $5 billion fine on Facebook, but critics
argued that the penalty was negligible compared to the company’s revenue and did little to
prevent future violations. Meanwhile, the European Union leveraged its General Data
Protection Regulation (GDPR) to pursue stricter penalties, showcasing the disparity in
enforcement approaches. The fragmented global response allowed significant gaps to remain,
highlighting the inability of national laws to regulate transnational digital platforms effectively.
At the heart of this challenge is the difficulty of aligning diverse regulatory priorities. The
European Union has prioritized data privacy and individual rights, exemplified by the GDPR,
which imposes stringent obligations on companies handling personal data. In contrast, the U.S.
adopts a more market-driven approach, focusing on fostering innovation and minimizing
regulatory burdens. Countries in the Global South, often affected by digital exploitation, may
lack the resources or regulatory infrastructure to enforce meaningful action. This patchwork of
approaches creates an uneven playing field where platforms can exploit jurisdictions with
weaker oversight, further complicating enforcement efforts.
Adding to these jurisdictional conflicts is the relentless pace of technological advancement,
which continually outpaces regulatory adaptation. Cambridge Analytica’s use of
psychographic profiling—an advanced data analysis technique—demonstrated how emerging
technologies can be deployed with little to no oversight. By analyzing Facebook users’ data,
the firm created personality profiles to deliver targeted political advertisements designed to
manipulate emotions and behaviors. These tools, while innovative, were weaponized in ways
regulators were unprepared to address. The lag between the introduction of new technologies
and the implementation of effective regulations leaves societies vulnerable to exploitation, as
seen in the growing influence of generative AI, blockchain, and other emerging technologies.
Another troubling aspect is algorithmic bias and discrimination. Algorithms are increasingly
used in decision-making systems, from hiring and lending to law enforcement and content
moderation. However, these systems often perpetuate existing societal inequalities, as Joy
Buolamwini and Timnit Gebru demonstrate in their "Gender Shades" study. In the Cambridge
Analytica case,
psychographic profiling amplified biases by targeting specific demographics with divisive
political messaging, deepening social and political divides. Addressing algorithmic bias
requires significant technical expertise and comprehensive oversight, both of which are often
lacking in regulatory bodies.
The issue of data sovereignty and localization further complicates regulation. Countries like
China, Russia, and India have enacted strict localization requirements, reflecting their desire to
assert control over information flows. However, these measures fragment the global digital
economy, increasing compliance costs for multinational companies and complicating cross-
border data governance. The Cambridge Analytica scandal highlighted the difficulty of
tracking data dispersed across multiple jurisdictions, emphasizing the need for greater
alignment between national and global data governance policies.
Cybersecurity vulnerabilities also loom large in the regulatory landscape. While the
Cambridge Analytica scandal did not involve a direct data breach, it exposed serious flaws in
Facebook’s data protection practices, particularly regarding third-party access. This lapse in
oversight enabled the unauthorized harvesting of sensitive information, illustrating the broader
risks of inadequate cybersecurity measures. The interconnected nature of digital systems means
that failures in one region can have global repercussions. Yet, the absence of enforceable
international standards for data protection and cybersecurity leaves platforms and users
vulnerable to exploitation.
The interplay of these challenges—transnational jurisdictional conflicts, rapid technological
advancements, algorithmic bias, corporate dominance, data sovereignty concerns, ethical
dilemmas, and weak international cooperation—underscores the immense complexity of
regulating digital technologies. The Facebook-Cambridge Analytica scandal serves as a stark
reminder of the consequences of inadequate oversight. It highlights systemic flaws that demand
urgent attention as digital platforms continue to evolve and expand their influence. Without
robust, adaptive, and inclusive governance frameworks, the risks posed by these technologies
will continue to grow, threatening individual rights, social cohesion, and democratic
institutions.
The GDPR
GDPR is notable for its broad scope and detailed provisions aimed at protecting personal
data. One of its central features is the expansive definition of personal data, which includes
not only traditional identifiers such as names and addresses but also digital identifiers like IP
addresses and cookies. This inclusive approach reflects the evolving nature of data in the
digital economy. Another critical aspect of GDPR is the enhanced set of rights granted to
individuals. These include the right to access personal data held by organizations, the right to
erasure (commonly referred to as the “right to be forgotten”), and the right to data portability,
allowing individuals to transfer their data between service providers.
To ensure accountability, GDPR mandates that data controllers and processors adhere to
stringent requirements, such as conducting Data Protection Impact Assessments (DPIAs) for
high-risk activities and appointing Data Protection Officers (DPOs) in organizations that
process significant volumes of data. Additionally, GDPR introduces severe penalties for non-
compliance, with fines reaching up to €20 million or 4% of a company’s global annual
turnover, whichever is higher.
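The penalty cap is a simple maximum rule. A minimal sketch, assuming a hypothetical helper function (the function name and sample turnover figures are illustrative; only the EUR 20 million and 4% thresholds come from the regulation):

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    the greater of EUR 20 million or 4% of global annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# A firm with EUR 100 million turnover: 4% is EUR 4 million,
# so the EUR 20 million floor applies.
print(gdpr_max_fine(100e6))  # 20000000.0
# A firm with EUR 2 billion turnover: 4% is EUR 80 million.
print(gdpr_max_fine(2e9))    # 80000000.0
```

The rule therefore scales with company size: for any firm with turnover above EUR 500 million, the 4% term dominates, which is why the cap bites hardest for Big Tech.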
Success
One of GDPR’s most significant achievements is its global influence. Its principles have
inspired similar regulations in other regions, such as the California Consumer Privacy Act
(CCPA) in the United States and Brazil’s Lei Geral de Proteção de Dados (LGPD). By setting
a high standard for data protection, GDPR has established itself as a global benchmark.
The regulation has also succeeded in raising public awareness of privacy rights. According to
a 2019 EU Commission report, 69% of Europeans were aware of GDPR and their rights
under the regulation. This heightened awareness marks a cultural shift, emphasizing privacy
as a fundamental right rather than a secondary concern. Additionally, GDPR has prompted
organizations to adopt more robust data protection practices. Tech giants like Microsoft and
Apple have implemented GDPR-compliant measures globally, with Microsoft extending
GDPR protections to users outside the EU.
Failures and Criticism
Despite its successes, GDPR has faced criticism for its implementation and effectiveness.
One major challenge is the compliance burden it imposes on small and medium-sized
enterprises (SMEs). The requirement to appoint Data Protection Officers and conduct DPIAs
often strains the resources of smaller organizations, leading some critics to argue that GDPR
disproportionately favors larger corporations with more resources.
Ambiguities in GDPR’s provisions have also created challenges. For instance, the “legitimate
interests” clause, which permits data processing without explicit consent under certain
conditions, has been criticized for its vagueness, resulting in inconsistent interpretations
across member states. This lack of uniformity undermines GDPR’s goal of harmonizing data
protection laws within the EU.
Another significant issue is the limited enforcement capacity of Data Protection Authorities
(DPAs), particularly in smaller countries. In 2020, only 0.1% of GDPR complaints resulted in
fines, highlighting the need for greater resources and staffing to ensure effective enforcement.
Furthermore, while GDPR has led to high-profile fines, it has not significantly curtailed the
dominance of Big Tech companies like Google, Facebook, and Amazon. Critics argue that
these firms continue to monetize user data on a massive scale, revealing structural issues that
GDPR alone cannot address.
Recommendations for Improvement
To enhance GDPR’s effectiveness, several measures can be considered. Strengthening
enforcement mechanisms is critical. This includes increasing funding and staffing for Data
Protection Authorities to ensure timely investigations and introducing structural remedies for
repeat offenders, such as breaking up monopolistic tech firms. Simplifying compliance for
SMEs is another priority. Providing clearer guidelines and subsidies for small businesses can
help reduce the compliance burden, fostering a more equitable regulatory environment.
Addressing consent fatigue is also essential. The proliferation of cookie banners has led to
user disengagement, undermining the quality of consent. Promoting privacy-enhancing
technologies (PETs) and developing standardized, user-friendly consent mechanisms can
improve user engagement. Expanding international cooperation is crucial for regulating
cross-border data flows. Negotiating new frameworks that prioritize user rights while
facilitating data transfers can mitigate the uncertainty caused by the invalidation of the EU-
U.S. Privacy Shield.
Finally, GDPR should be complemented with antitrust measures to address structural issues
in the data economy. Encouraging the development of decentralized technologies, such as
Tim Berners-Lee’s Solid Project, can empower users to control their data, promoting a fairer
and more privacy-respecting digital ecosystem.
The debate over how best to regulate digital technologies encompasses a wide range of
perspectives. At its core are two contrasting approaches: one represented by Margrethe
Vestager, who advocates for robust antitrust measures to check the power of Big Tech, and
the other championed by Tim Berners-Lee, who envisions a decentralized internet that
restores control to individuals.
Margrethe Vestager: Championing Antitrust to Rein in Big Tech
Margrethe Vestager, the European Commissioner for Competition, is a leading proponent of
antitrust interventions as a tool for digital governance. Her actions have been groundbreaking,
including imposing a €2.42 billion fine on Google for favoring its own services in search
results and a record €4.34 billion penalty for anti-competitive practices tied to Android
devices. Vestager’s emphasis on tackling corporate tax evasion, exemplified by the €13
billion back-tax ruling against Apple in Ireland, further underscores her commitment to
holding Big Tech accountable.
Vestager’s philosophy centers on curbing the monopolistic practices of large technology
companies, which she argues erode consumer choice, stifle innovation, and undermine
democratic institutions. By enforcing behavioral and structural remedies, she seeks to restore
competitive balance in the digital marketplace. However, critics point out that while fines and
regulations may force companies to alter certain practices, they often fail to dismantle the
systemic dominance of tech giants.
Tim Berners-Lee: Advocating for a Decentralized Internet
In contrast to Vestager’s regulatory approach, Tim Berners-Lee—the inventor of the World
Wide Web—proposes a fundamentally different model of internet governance. Through his
Solid Project, Berners-Lee advocates for decentralizing data ownership, allowing individuals
to store their personal data in secure “pods” that they control. This vision aligns with the
principles of data autonomy and user empowerment embodied in GDPR but takes them
further by envisioning a structural overhaul of how digital ecosystems operate.
Berners-Lee’s model addresses the power asymmetries inherent in centralized data
ecosystems. By decentralizing control, he seeks to shift power away from corporations and
back to individuals. However, the vision faces significant challenges: the technical
standardization needed for broad adoption, the inertia of entrenched systems, and the
potential for creating new inequalities in access to decentralized solutions.
A Comparative Analysis: Structural Reform Versus Systemic Redesign
While Vestager and Berners-Lee share a common goal of mitigating the harms associated
with Big Tech dominance, their approaches represent distinct strategies. Vestager’s antitrust
measures focus on reforming the existing system by enforcing legal constraints on corporate
behavior. Her successes, such as high-profile fines and mandated operational changes,
demonstrate the potential for immediate impact. Yet, her critics argue that these measures
often fail to address deeper structural issues, such as data monopolization and market
concentration.
Berners-Lee, on the other hand, offers a visionary approach that seeks to redesign the system
altogether. Decentralization challenges the very foundation of centralized platforms, aiming
for a more equitable distribution of power and control. However, this approach requires
substantial societal and technological shifts, making its realization a long-term endeavor
rather than an immediate fix.
Moving Ahead
The perspectives of Vestager and Berners-Lee resonate with ideas from other scholars.
Shoshana Zuboff, in her work on “surveillance capitalism,” underscores the dangers of data
exploitation by corporations. Carissa Véliz’s concept of “data dignity” complements Berners-
Lee’s vision by emphasizing the ethical treatment of personal information, highlighting the
importance of respecting user rights. Meanwhile, Andrew McLaughlin’s warnings against
overregulation highlight the need to balance innovation with accountability.
By integrating these approaches and drawing on broader insights from thinkers like Zuboff,
Véliz, and McLaughlin, policymakers can craft a balanced framework that addresses both the
immediate and long-term challenges of digital governance.
Critical Perspectives on Digital Technology Regulation
The debate surrounding digital technology regulation often centers on two critical
perspectives: the potential stifling of innovation through over-regulation and the ethical
implications of allowing tech companies to self-regulate.
Over-regulation and Innovation
Critics argue that excessive regulation can impede technological progress and economic
growth. Stringent rules may discourage companies from investing in research and
development, fearing regulatory hurdles or compliance costs. For instance, the European
Union's General Data Protection Regulation (GDPR), while protecting consumer privacy, has
been criticized for potentially hampering the development of AI technologies due to its strict
data processing requirements.
Self-regulation and Ethical Concerns
Conversely, the laissez-faire approach of minimal government intervention raises ethical
concerns. Tech companies, driven by profit motives, may prioritize growth over user welfare.
The Cambridge Analytica scandal exemplifies the risks of inadequate oversight, where
Facebook's self-regulatory measures failed to prevent large-scale data misuse.
Balancing Approaches
A mixed approach, combining elements of government regulation and industry self-
governance, may offer a balanced solution. This could involve:
1. Regulatory sandboxes: Controlled environments where companies can test
innovations under relaxed regulations.
2. Co-regulatory frameworks: Collaboration between government and industry to
develop flexible, adaptive rules.
3. Principle-based regulation: Focusing on broad ethical principles rather than
prescriptive rules, allowing for technological evolution.
The challenge lies in striking a balance that fosters innovation while ensuring ethical
practices and consumer protection. As digital technologies continue to evolve rapidly,
regulatory frameworks must remain adaptable to address emerging challenges without stifling
the potential for groundbreaking advancements.
References
1. Beaumier, G., Kalomeni, K., Campbell‐Verduyn, M., Lenglet, M., Natile, S., Papin,
M., ... & Zhang, F. (2020). Global regulations for a digital economy: Between new
and old challenges. Global Policy, 11(4), 515-522. https://doi.org/10.1111/1758-
5899.12823
2. Bechara, F. R., & Schuch, S. B. (2021). Cybersecurity and global regulatory
challenges. Journal of Financial Crime, 28(2), 359-374.
3. Bradford, A. (2024). Digital governance and regulation. SSRN Electronic Journal.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4723373
4. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy
disparities in commercial gender classification. Proceedings of Machine Learning
Research, 81, 77-91.
5. Dutil, P., & Williams, J. (2017). Regulation governance in the digital era: A new
research agenda. Canadian Public Administration, 60(4), 562-577.
https://doi.org/10.1111/capa.12226
6. European Commission. (2019). The Digital Economy and Society Index (DESI).
https://digital-strategy.ec.europa.eu/en/policies/desi
7. Korneeva, E., Voloshinova, M., & Albaeva, A. (2019). Leading problems and
prospects in the regulation of the digital economy: Regulatory environments in the
digital era. Advances in Social Science, Education and Humanities Research, 359,
171-175. https://www.atlantis-press.com/article/125921020.pdf
8. Papakonstantinou, V., & De Hert, P. (2022). The regulation of digital technologies in
the EU: The law-making phenomena of 'act-ification,' 'GDPR mimesis,' and 'EU law
brutality.' Technology and Regulation, 48-60.
https://doi.org/10.26116/techreg.2022.005
9. Ramachandran, A. (2018, August 28). Analyzing Medium posts to understand the
impact of the Cambridge Analytica scandal. Analytics Vidhya.
https://medium.com/analytics-vidhya/analyzing-medium-posts-to-understand-impact-
of-cambridge-analytica-scandal-5841f46703d6
10. S. 6412, An Act to regulate radio communication (Radio Act of 1912), May 20, 1912.
U.S. Capitol - Visitor Center.
11. Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D'Ambra, J., &
Shen, K. N. (2021). Algorithmic bias in data-driven innovation in the age of AI.
International Journal of Information Management, 60,
102387. https://www.sciencedirect.com/science/article/pii/S0268401221000803
12. Sidley Austin LLP. (2024). Digital technology regulation: The new convergence of
privacy, competition, and digital governance.
https://www.sidley.com/en/insights/resources/digital-technology-regulation-the-new-
convergence-of-privacy-competition-and-digital-governance
13. Taeihagh, A. (2024). Responsible regulation for digital services in India. Journal of
Information Technology & Politics.
https://www.tandfonline.com/doi/full/10.1080/15228053.2024.2340396
14. UNESCO. (n.d.). Guidelines for the governance of digital platforms.
https://www.unesco.org/en/internet-trust/guidelines
15. Zhao, J., Bao, Y., & Huang, N. (2022). Research on the influence of digital
technology and policy restrictions on the development of digital service trade.
Sustainability, 14(16), 10420. https://doi.org/10.3390/su141610420