Internet Regulation
CONCEPT IN INDIA
Author: Sharmistha Mitra
Int. B.A., LL.B (Hons.) with Specialisation in Energy Laws
Roll: R450211091
2011-2016
DISSERTATION
Submitted under the guidance of: Mr. Sujith P. Surendran
Assistant Professor
This dissertation is submitted in partial fulfillment of the
degree of B.A., LL.B. (Hons.)
I declare that the work embodied in this dissertation, entitled INTERNET
REGULATION: CONCEPT IN INDIA, is the outcome of my own work
conducted under the supervision of Prof. Sujith P. Surendran, at College of Legal
Studies, University of Petroleum and Energy Studies, Dehradun.
I declare that the dissertation comprises only of my original work and due
acknowledgement has been made in the text to all other material used.
Date
ACKNOWLEDGEMENT
I, Sharmistha Mitra, student of Int. B.A., LL.B (Hons.) with specialization in Energy
Laws, 10th Semester, College of Legal Studies, The University of Petroleum and
Energy Studies, have made this dissertation on Internet Regulation: Concept in
India.
The research has been collected largely from secondary sources of information and
the method that has been adopted is doctrinal in nature for the collection of
information such as international treaties, websites, books, commentaries, journals
and articles etc.
I would like to thank my mentor Prof. Sujith P. Surendran for his guidance and
support and would even like to thank my friends for their suggestions.
ABSTRACT
Since the introduction of the World Wide Web and browsers in the early 1990s,
there has been an explosion of content available across state boundaries, in
easily duplicable format, through the Internet. This development was at first
interpreted as a formidable chance for democracy and civil liberties. The Internet
has been, and continues to be, perceived as the infrastructure and tool of global free
speech (Mueller, 2010). Many optimists hoped that, free from state intervention
or mainstream media intermediaries, citizens would be better informed about
politics, at lower cost and more efficiently. The need for content control was,
however, discussed as soon as the Internet became accessible to the greater
public. As with the emergence of previous communication and media
technologies, pressure rapidly built up to demand more control of what type of
content is accessible to whom (Murray, 2007). The regulation of content is
linked to a broader discussion about the regulability of the Internet, which is the
focus of section 1, before turning to content regulation per se in section 2.
1 Internet regulation
The role of the nation-state in regulating the digital realm in comparison with
other actors such as corporations or civil society remains disputed. For some scholars,
the nation-state is the main actor capable of directly or indirectly regulating social
behaviour. For others, the state is one among a variety of competing actors and has
lost its dominance. Their perspective is
generally reflected in the terminology used, focusing on governance instead of
regulation when taking into account a broader set of actors and processes than
interactions centred around the state.
The discussion about the regulation of the Internet has shifted from whether
the Internet can be regulated at all to how it is regulated and by whom. The
question pitted the so-called cyber-libertarians, who contested any exterior
assertion of power, be it by states or other actors (section 1.1), against legal and
political scholars arguing that the Internet was in fact regulated, although through
different regulatory modalities (section 1.2). For some
authors, states continue to play a significant role in these regulatory arrangements
(section 1.3) while there is widespread agreement that the Internet has become an
object of political struggle for states and various other actors alike (section 1.4).
However, the narrowly defined state-centric perspective on Internet regulation has
more recently been criticised as cyber-conservatism (Mueller, 2010; DeNardis,
2010) by a third set of scholars interested in the institutional innovations and
broader power dynamics at play in Internet governance (section 1.5).
1.1 Cyberlibertarians
Much has been written about the Internet's open, minimalist and
decentralised architecture that allowed for its rapid success, its integration with any
other computer network and the rapid development of new applications such as
the World Wide Web or email programs. The Internet has been built upon the
end-to-end principle, which stipulates that application-specific functions are
hosted at the endpoints of the network (e.g. servers or personal
1 Barlow, J.-P. (8 February 1996). A Declaration of the Independence of Cyberspace. Available at:
https://projects.eff.org/~barlow/Declaration-Final.html. See also Cyberspace and the American
Dream: A Magna Carta for the Knowledge Age (Dyson et al., 1996) and Birth of a Digital Nation
(Katz, 1997).
2 Quote attributed to the civil liberties advocate and co-founder, together with J.P. Barlow, of the
digital rights platform Electronic Frontier Foundation (EFF), John Gilmore.
computers) instead of intermediary nodes (e.g. routers). Similarly to postal mail
delivery agents, intermediaries route data packages from one endpoint to another
endpoint, without needing to know what the datagram will be used for or contains
in terms of content. The end-to-end principle is central to current net
neutrality debates (see below) that focus on new technological possibilities for
intermediaries to perform certain application-specific functions (e.g.
distinguishing between peer-to-peer file-sharing and video streaming).
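The end-to-end principle described above can be sketched in a few lines of code. This is an illustrative toy, not a real network stack; all names (`forward`, the addresses, the routing table) are hypothetical. The point is that the intermediary reads only the destination address and treats the payload as opaque:

```python
# Toy illustration of the end-to-end principle: an intermediary (router)
# forwards a datagram by destination address alone, never inspecting the
# payload. Application-specific logic lives only at the endpoints.

def forward(datagram: dict, routing_table: dict) -> str:
    """Return the next hop for a datagram, looking only at its destination."""
    # The router never reads datagram["payload"], so it cannot tell
    # peer-to-peer file-sharing from video streaming or email.
    return routing_table[datagram["dst"]]

routing_table = {"203.0.113.5": "gateway-b"}
packet = {"dst": "203.0.113.5", "payload": b"opaque application data"}
print(forward(packet, routing_table))  # prints gateway-b
```

Net neutrality debates turn on intermediaries abandoning exactly this restraint, e.g. by inspecting the payload to throttle particular application traffic.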
The protocols and standards developed during the 1960s-70s by the so-called
Internet founders, academics and government engineers funded by the U.S.
Department of Defense, are still the basis of today's Internet. To protect their
achievements, the founders established a series of institutions, in particular the
Internet Engineering Task Force (IETF) in 1986, to regulate and develop new
standards. These institutions were perceived by cyber-libertarians and many of
the founders as new and better forms of governance under the leitmotiv of
'rough consensus and running code' (Dave Clark,
Internet founder, quoted in Goldsmith and Wu, 2006, 24). Although the
cyber-libertarian perspective was rapidly criticised as technologically deterministic
and contrary to empirical evidence of increased state and corporate control of the
Internet's infrastructure and content, its main tenets and values continue to inform
current policy discussions and the self-regulatory practices that proliferate online.
The argument that the Internet is not unregulable but in fact regulated by its
architecture was expanded upon in U.S. law professor Lawrence Lessig's seminal
1999 book Code and Other Laws of Cyberspace. Lessig
argued that the Internet is not a space free from state intervention and that
computer code, the Internet's underlying infrastructure, provides a powerful
tool for regulating behaviour online (Lessig, 1999, 2006). For Lessig, code is one
of the four modalities of regulation, next to law, social norms and markets. The
latter are institutional constraints, which do not allow for immediate control over
human behaviour and are considered by a large majority of observers as
insufficient to effectively regulate global Internet traffic. Lawsuits are time- and
cost-intensive, often ineffective when dealing with the change of scale brought
about by the Internet,
whilst generating broad negative publicity (see for instance Brown, 2010).
Social norms are easily violated, and can be manipulated. Market constraints can be
circumvented in many ways, and commercial entities are dependent on effective
protection by social norms and the legal system for enforcement (Boas, 2006).
Whether Internet design choices are a regulatory modality in their own right
remains debated (McIntyre and Scott, 2008). Murray and Scott (2001) have for instance
criticised Lessig's modalities as over- or under-inclusive. They propose to speak
instead of hierarchical (law), community-based (norms), competition-based
(market) or design-based controls.
With the turn of the millennium, the discussion has thus clearly shifted from
whether the Internet can be regulated at all to how and by whom it is, and whether
there is anything genuinely new about the phenomenon. Here, scholars remain
divided between those insisting on the dominant role of nation-states in Internet
regulation, pointing to the growing body of state legislation directed towards
the digital realm (e.g. Goldsmith and Wu, 2006), and those who argue that more
attention should be paid to new processes and institutions that are emerging at the
international level and the key role played by private actors in Internet politics (e.g.
DeNardis, 2009; Mueller, 2010).
In their 2006 book Who Controls the Internet? Illusions of a Borderless World,
Goldsmith and Wu (2006) recognise that the Internet challenges state power but
argue that since the 1990s, governments across the world have increasingly
asserted their power to make the global Internet more bordered and subject to
national legislations. They provide numerous examples of interactions in which the
nation-state emerged as the dominant actor, starting with the LICRA v. Yahoo!
Inc. (2000) case in France that led Yahoo! to remove Nazi memorabilia from its
auction website worldwide to comply with French law,3 the long interactions
between the Internet's
3 In the 2000 ruling LICRA v. Yahoo! Inc., the Tribunal de Grande Instance of Paris exercised
territorial jurisdiction on the grounds that the prejudice caused by the content hosted abroad took place on
French territory. The Court required Yahoo! to prevent French users from accessing Nazi memorabilia
on its auction website. The company complied with the judgment even though a U.S. District Court
judge considered in 2001 that Yahoo! could not be forced to comply with the French laws, which were
contrary to the First Amendment. That ruling was reversed in 2006 by a U.S. Court of Appeals. For
Goldsmith and Wu (2006), the French state could exert pressure upon Yahoo! because the company
held several assets for its operation in France that the French state could have acted upon had
Yahoo! refused to comply with the French court order.
founding fathers and the U.S. government over the Internet's domain name
system (referred to as 'the root', see Mueller, 2002) that eventually led to the
establishment of ICANN and, of course, the establishment of the Chinese 'great
firewall' as an illustration of what a government that really wants to control
Internet communications can accomplish (Goldsmith and Wu, 2006, 89). In sum,
'[a] government's failure to crack down on certain types of Internet
communication ultimately reflects a failure of interest or will, not a failure of
power' (ibid.). Similarly, for Deibert and Crete-Nishihata (2012), it was a
conscious decision by the US and other Western states not to directly regulate the
Internet in the early 1990s, leaving operating decisions to the Internet's
engineering community, which functioned on a basis of consensus building and
requests for comments. This was to foster innovation and economic growth at a
time when one could only speculate as to how precisely the Internet would
develop over time.
In fact, the first motivation for Internet regulation was to situate Internet exchanges
into existing legal categories (e.g. is the Internet similar to the telephone or
broadcasting media?) or to create new categories, and in rare
cases, new institutions. However, in the early to mid-1990s, states largely maintained
the status quo ante, either to protect emerging business models or established
governmental practices (Froomkin, 2011, 5).
4 Internet engineers remain the principal decision-makers over the Internet's critical resources, most
notably the domain name system through ICANN but also in the domain of standards setting (e.g. the
IETF). Those resources are heavily contested but, notably due to the protection of the U.S.
government, a far-reaching reform of the U.S.-centred domain name system has until now been avoided,
although concessions have been made to other governments and private actors (Mueller, 2002).
Private actors are increasingly important in shaping the Internet's development.
Some corporations have based their business model on the relatively unregulated
environment of the 1990s and early 2000s, with limited intermediary liability (e.g. for
Internet service providers (ISPs) or online content providers (OCPs)), and
succeeded in monetizing their Internet activities principally through paid, and
increasingly targeted, advertisement (e.g. Google or Facebook). Other actors, for
instance the entertainment industry, have been severely challenged by new practices
developing online, such as the widespread sharing of copyrighted material. Attempts to
roll back piracy have generally led to further technological developments such
as peer-to-peer technologies (Musiani, 2011). Other actors increasingly shift
their activities to the Internet, bringing with them different norms and interests than
those of the early Internet communities (Deibert and Crete-Nishihata, 2012;
Rasmussen, 2007; Castells, 2001). States, which are driven by security and
public order concerns, are by no means in agreement over who should control the
Internet, although all recognise the fundamental importance
of the network as a global communication infrastructure and business
opportunity. Finally, a broad transnational movement of NGOs, associations,
information groups and individuals has emerged over the years, largely in response to
regulatory attempts to introduce more control of the network of networks but also, at
times, to demand that governments intervene in business practices that are considered
to harm the end-to-end principle at the basis of the Internet (Mueller et al.,
2004; Breindl and Briatte, 2013; Haunss, 2011; Löblich and Wendelin, 2012).
States can hardly regulate the Internet without at least relying on private actors to assist them.
Many Internet issues extend beyond national borders, making coordinated
action necessary.
Many people argue that it would be wrong to attempt to regulate the Internet and advance arguments
such as the following:
The Internet was created as a totally different kind of network and should be a free space.
This argument essentially refers back to the origins of the Net, when it was first used by the
military as an open network designed to ensure that the communication always got through,
and then by academics who largely knew and trusted each other and put a high value on
freedom of expression.
The Internet is a 'pull' not a 'push' communications network. This argument implicitly
accepts that it is acceptable, even necessary, to regulate content which is simply 'pushed' at
the consumer, such as conventional radio and television broadcasting, but suggests that it is
unnecessary or inappropriate to regulate content which the consumer 'pulls' to him or her, such
as by surfing or searching on the Net.
The Internet is a global network that simply cannot be regulated. Almost all content
regulation is based on national laws and conventions and of course the Net is a worldwide
phenomenon, so it is argued that, even if one wanted to do so, any regulation of Internet
content could not be effective.
The Internet is a technically complex and evolving network that can never be regulated.
Effectively the Web only became a mass medium in the mid 1990s and, since then,
developments - like Google and blogging - have been so rapid that, it is argued, any attempt
to regulate the medium is doomed.
Any form of regulation is flawed and imperfect. This argument rests on the experience that
techniques such as blocking of content by filters have often been less than perfect - for
instance, sometimes offensive material still gets through and other times educational material
is blocked.
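The imperfection of filtering is easy to demonstrate. The following is a deliberately naive keyword filter (the word list and function name are hypothetical, not any real product's behaviour); it blocks an educational health page while letting a trivially misspelt offensive page through - exactly the twin failures the argument above describes:

```python
# A deliberately naive keyword filter, sketching why blocking "has often
# been less than perfect": it overblocks innocent educational material and
# underblocks content that evades the word list with a misspelling.

BLOCKED_WORDS = {"breast", "sex"}  # hypothetical, minimal word list

def is_blocked(page_text: str) -> bool:
    """Block a page if any of its words appears on the blocked-word list."""
    words = page_text.lower().split()
    return any(w in BLOCKED_WORDS for w in words)

print(is_blocked("advice on breast cancer screening"))  # True: overblocking
print(is_blocked("s3x pics here"))                      # False: underblocking
```

Real filtering products are far more sophisticated, but the same trade-off between false positives and false negatives persists at every level of sophistication.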
The Internet is fundamentally just another communications network. The argument runs: if
we regulate radio, television, and telecommunications networks, why don't we regulate the
Net? This argument suggests that not only is the Internet, in a sense, just another network; as
a result of convergence it is essentially becoming the network, so that, if we do not regulate
the Net at all, effectively over time we are going to abandon the notion of content regulation.
There is a range of problematic content on the Internet. There is illegal content such as child
abuse images; there is harmful content such as advice on how to commit suicide; and there is
offensive content such as pornography. The argument goes that we cannot regulate these
different forms of problematic content in the same way, but equally we cannot simply ignore
it.
There is criminal activity on the Internet. Spam, scams, viruses, hacking, phishing, money
laundering, identity theft, grooming of children ... almost all criminal activity in the
physical world has its online analogue and again, the argument goes, we cannot simply ignore
this.
The Internet now has users in every country, totalling several billion. This argument
implicitly accepts that the origins of the Internet involved a philosophy of free expression but
insists that the user base and the range of activities of the Net are now so fundamentally
different that it is a mass medium and needs regulation like other media.
Most users want some form of regulation or control. The Oxford Internet Survey (OxIS)
published in May 2005 had some typical indications of this. When asked if governments
should regulate the Internet, 29% said that they should. When asked who should be
responsible for restricting children's content, 95% said parents, 75% said ISPs and 46% said
government.
It is a major proposition of this presentation that any sensible discussion of regulation of the Internet
needs to distinguish between illegal content, harmful content, and offensive content. I now deal with
these in turn.
In the UK, illegal content is effectively regulated by the Internet Watch Foundation through a
self-regulatory approach.
What is the nature of the IWF?
It was founded by the industry in late 1996 when two trade bodies - the Internet Service
Providers' Association (ISPA) and the London Internet Exchange (LINX) - together with
some large players like BT and AOL came together to create the body.
It has an independent Chair selected through open advertisement and appointed by the Board.
The Board consists of six non-industry members selected through open advertisement and
three industry members chosen by the Funding Council.
The IWF has no statutory powers. Although in effect it is giving force to certain aspects of the
criminal law, all its notices and advice are technically advisory.
The IWF has no Government funding, although it does receive European Union funding
under the Commission's Safer Internet plus Action Plan.
Although not a statutory body and receiving no state funding, the IWF has strong
Government support as expressed in Ministerial statements and access to Ministers and
officials.
The IWF has a very specific remit focused on illegal content, more specifically:
adult material that potentially breaches the Obscene Publications Act in the UK
The number of reports handled has increased from 1,291 in 1997 to 23,658 in 2005.
The proportion of illegal content found to be hosted in UK has fallen from 18% in 1997 to
0.3% in 2005.
Then Prime Minister Tony Blair described the IWF as 'perhaps the world's best regime for
tackling child pornography'.
There is a 'notice and take down' procedure for individual images which are both illegal and
hosted in UK.
The IWF compiles a list of newsgroups judged to be advertising illegal material and
recommends to members that these newsgroups not be carried. About 250 newsgroups are
'caught' by this policy.
The IWF compiles a list of newsgroups known regularly to contain illegal material and again
recommends to members that these newsgroups not be carried. A small number of additional
newsgroups are 'caught' by this policy.
Most recently and most significantly, ISPs are blocking illegal URLs using the IWF's child
abuse image content (CAIC) database and using technologies like BT's Cleanfeed. The
number of URLs on this list - which is updated twice a day - is between 800 and 1,200.
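In essence, this kind of URL blocking is a lookup of each requested URL against the list. The sketch below is a drastic simplification under stated assumptions: the URLs and names are hypothetical, and BT's actual Cleanfeed is a two-stage design (suspect IP ranges are first redirected to a proxy, which then matches full URLs), collapsed here into a single check:

```python
# Much-simplified sketch of CAIC-style URL blocking: each requested URL is
# checked against a confidential list of known illegal URLs, updated
# regularly. Hypothetical URLs; real systems canonicalise far more carefully.

CAIC_STYLE_LIST = {
    "http://blocked.example/image-1",
    "http://blocked.example/dir/image-2",
}

def should_block(url: str) -> bool:
    """Return True if the (minimally normalised) URL is on the block list."""
    # Strip whitespace and a trailing slash before the lookup; a production
    # filter also normalises host case, default ports and percent-encoding.
    return url.strip().rstrip("/") in CAIC_STYLE_LIST

print(should_block("http://blocked.example/image-1/"))  # True
print(should_block("http://allowed.example/page"))       # False
```

The two-stage design matters in practice: matching full URLs for all traffic would be prohibitively expensive, so only traffic to suspect hosts is diverted to the URL-matching proxy.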
The problem now for the IWF - and indeed for the other such hotlines around the world - is abroad,
more specifically:
Thailand, China, Japan & South Korea - the source of 17% of illegal reports in 2005
In early 2005, a study by the International Centre for Missing and Exploited Children (ICMEC) in
the United States found that possession of child abuse material is not a crime in 138 countries and, in
122 countries, there is no law dealing with the use of computers and the Internet as a means of
distribution of child abuse images. So the UK needs the
cooperation of other governments, law enforcement agencies and major industry players if we are to
combat and reduce the availability of child abuse images in this country and around the world.
Since the IWF's remit is illegal material, there are some possible areas of the law which might be
amended in terms which would suggest a minor extension to the IWF's existing remit, specifically:
A possible review of the law on the test of obscenity in relation to adult pornographic material
However, the IWF has absolutely no intention or wish to engage with harmful or offensive content, so
the proposals that now follow are my personal suggestions for discussion and debate.
It is my view that currently there is Internet content that is not illegal in UK law but would be
regarded as harmful by most people. It is my contention that the industry needs to tackle such
harmful content if it is to be credible in then insisting that users effectively have to protect
themselves from content which, however offensive, is not illegal or harmful. Clearly it is for
Government and Parliament to define illegal content. But how would one define harmful content?
I offer the following definition for discussion and debate: Content the creation of which or the
viewing of which involves or is likely to cause actual physical or possible psychological
harm. Examples of material likely to be caught by such a definition would be incitement to racial
hatred or acts of violence and promotion of anorexia, bulimia or suicide.
Often when I introduce such a notion into the debate on Internet regulation, I am challenged by the
question: How can you draw the line? My immediate response is that, in this country (as in most
others), people are drawing the line every day in relation to whether and, if so, how and when one
can hear, see, or read various forms of content, whether it be radio, television, films, videos & DVDs,
newspapers & magazines. Sometimes the same material is subject to different rules - for instance,
something unacceptable for broadcast at 8 pm might well be permissible at 10 pm, or a film which is
unacceptable for an '18' certificate in the cinema might receive an 'R18' classification in a video shop.
Therefore I propose in relation to Internet content that we consult bodies which already make
judgements on content about the creation of an appropriate panel. Such bodies would include the Ofcom
Content Board, the BBC, the Association for Television On Demand (ATVOD), the British Board of
Film Classification (BBFC), and the Independent Mobile Classification Body (IMCB). I would
suggest that we then create an independent panel of individuals with expertise in physical and
psychological health who would draw up an agreed definition of harmful content and be available to
judge whether material referred to them did or did not fall within this definition.
There should be no requirement on ISPs to proactively monitor content to which they are
providing access to determine whether it is harmful.
Reports of suspected material from the public should be submitted to a defined body.
This body should immediately refer this material to the independent panel which would make
a judgement and a recommendation as to whether it was in fact harmful.
Each ISP should be transparent about its policy in relation to blocking or otherwise of such
content and set out its policy in a link from the homepage of its web site, as many sites do
now in relation to privacy policy.
Once we have effective regimes for illegal and harmful content respectively, one has to consider
material which is offensive - sometimes grossly offensive - to certain users of the Internet. This is
content which some users would rather not access or would rather that their children not access.
Now identification of content as offensive is subjective and reflects the values of the user, who must
therefore exercise some responsibility for controlling access. The judgement of a religious household
would probably be different from that of a secular household. The judgement of a household with
children would probably be different from that of one with no children. The judgement of what a 12
year old could access might well be different from what it would be appropriate for an 8 year old to
view. Tolerance of sexual images might be different from tolerance of violent images.
It is my view that, once we have proper arrangements for handling illegal and harmful content, it is
reasonable and right for government and industry to argue that end users themselves have to exercise
control in relation to material that they find offensive BUT we should inform users of the techniques
and the tools that they can use to exercise such control. What are such techniques and tools? They
include:
Labelling of material through systems such as that of the Internet Content Rating Association
(ICRA) - The ICRA descriptors were determined through a process of international
consultation to establish a content labelling system that gives consistency across different
cultures and languages.
Filtering software of which there are many different options on the market - The European
Commission's Safer Internet Programme has initiated a study aiming at an independent
assessment of the filtering software and services. Started in November 2005, the study will be
carried out through an annual benchmarking exercise of 30 parental control and spam filtering
products or services, which will be repeated over three years.
Search engine 'preferences' which are unknown to most parents - Google, the most used
search engine, has the word 'preferences' in tiny text to the right of the search box and clicking on
this reveals the option of three settings for what is called 'SafeSearch Filtering', yet this
facility is virtually a secret to most parents.
Use of the 'history' tab on the browser which again is unknown to many parents - This is a
means for parents to keep a check on where their young children are going in cyberspace,
although there has to be some respect for the privacy of children.
Looking at Internet content generally, what else would help? Let me make a few final suggestions:
There should be more proactive media comment and debate by spokespersons who can and will
speak for the industry as a whole rather than simply for particular companies or products. Too
often debates in the media are ill-informed or unbalanced because the industry does not
engage as effectively as it could.
There should be a user-friendly, well-publicised web site offering advice on the full range of
Internet content issues. The 'Get Safe Online' initiative is a useful start here.
There should be a UK body like the Internet Rights Forum in France with government,
industry and civil society representation. This could discuss problems of Internet content,
stimulate wider debate, and make recommendations.
The Internet Governance Forum (IGF) set up by the United Nations, following the World
Summit on the Information Society in Tunis, should become a genuinely useful focus for
global discussion of issues of Internet content.
Looking ahead, two developments will complicate content regulation further:
a blurring of linear ('push') and non-linear ('pull') material through developments like video
on demand (VoD), near VoD, the personal video recorder (PVR), BBC's iPlayer and Internet
Protocol Television (IPTV).
more Net users using higher speeds and spending more time and doing more things online.
If we can determine an acceptable regime for regulating Internet content, this will raise the question of
why we regulate broadcasting content so differently. Indeed why do we regulate broadcasting at all?
Historically there have been three reasons:
1. Broadcasting has used scarce spectrum and, in return for free or 'cheap' spectrum,
broadcasters have been expected to meet certain public service broadcasting obligations and
this requirement has provided leverage to regulators to exercise quite strong controls on
content. BUT: increasingly broadcasting is not done using scarce spectrum; it uses satellite,
cable or simply the telephone lines with technologies like ADSL.
2. Broadcasting has been seen as having a special social impact because so many people
watched the same programme at the same time. BUT: increasingly with multi-channel
television and time-shifting devices like the VCR and the PVR, any given broadcast is
probably seen by a relatively small proportion of the population.
3. Broadcasting has been seen as a 'push' technology over which viewers had little control once
they switched on the television set. BUT: increasingly viewers are 'pulling' material through
the use of VCRs, PVRs, video on demand, near video on demand, podcasting, and so on.
Therefore it is possible to argue that the historic reasons for regulating broadcasting in the traditional
ways are fast disappearing. In these circumstances, one could well argue that broadcasting should not
be regulated much differently from how we regulate the Internet or at least how we might regulate
the Internet in the manner proposed in this article.
Therefore regulation of broadcasting would focus on illegal and harmful content, leaving offensive
content as a matter essentially for viewers to block if they thought that appropriate for their family.
This would suggest a convergence of the regulation of broadcasting and the Internet to a model
which, compared to the present situation, would involve a lot less regulation for broadcasting and a
bit more regulation for the Internet.
This could not be done overnight. Consumers have strong expectations regarding
broadcasting regulation and these expectations would have to be managed through some kind
of transitional process.
We could not simply abandon most broadcasting regulation without empowering viewers to
make informed choices by provision of proper tools and better media literacy.
Institutional change and multi-stakeholderism
Authors such as Milton Mueller (2010) consider that the only solution to current
Internet issues such as intellectual property rights, cybersecurity, content regulation
and critical Internet resources that would preserve the open and disruptive
character of the Internet is institutional change in the way communication and
information technology has been regulated so far. He therefore speaks about
Internet governance at the international level to highlight the coordination and
regulation of interdependent actors in the absence of an overarching political
authority (Mueller, 2010, 8, emphasis in original). For Mueller (2010, 4) the
Internet challenges the nation-state because communication takes place at a global
scope, at unprecedented scale,
while control is distributed, meaning that decision-making units over network
operations are no longer closely aligned with political units; new
institutions have emerged to manage the Internet's critical resources (e.g. domain
names, standards and protocols) beyond the established nation-state system, while
dramatically lowering the costs for collective action, thus allowing new
transnational networks of actors and forms of activism to emerge. Similarly, for
Braman (2009), the increasing importance of information policy manifests and
triggers profound changes in the nature of how the state and governance function.
Old categories therefore need to be reassessed in light of Internet governance and
more fluid forms of decision-making.
5 The Geneva meeting adopted the Declaration of Principles: Building the Information Society: A
Global Challenge in the New Millennium (2003); The Tunis meeting led to the Tunis Agenda for the
Information Society (2005).
could be used as legitimation by authoritarian states to filter political content
and crack down on opponents. Several Western and African states refused to sign
the final declaration, the U.S. being the most vocal defender of Internet freedom,
stating that:
The Internet has given the world unimaginable economic and social benefit during
these past 24 years. All without UN regulation. We candidly cannot support an ITU
Treaty that is inconsistent with the multi-stakeholder model of Internet governance.6
The controversial negotiations resulted in a split between Western states, which refused
to sign the treaty, and emerging economies, in particular Russia and China, which
demanded more state control over Internet regulation and signed the final treaty. If
the present system is far from ideal, it is nonetheless perceived by most Western
states as the best possible solution to protect their interests (and the interests of their
IT industries) and to prevent authoritarian states from gaining direct influence over
Internet regulation. Nonetheless, various commentators have rejected the apparent
opposition between a freedom-protecting West and an authoritarian and repressive
East by pointing out that maintaining the status quo perpetuates the dominance of
the U.S. and U.S. business interests in Internet governance (Google, for example,
accompanied the WCIT with a particularly vocal campaign to defend Internet
freedom). Poorer countries can only voice their positions
at intergovernmental conventions such as the ITU. As it stands, they are in effect
excluded from gaining any weight in Internet regulation, with some countries
even arguing that the U.S. uses denial of Internet services, among other forms of
sanctions, for policy leverage. Also, the U.S. defence of Internet freedoms at
the international level stands in sharp contrast to a series of domestic policies
adopted in the name of security that increase control over networks, possibly
reducing citizens' freedoms.7 Not surprisingly, many of these policies deal with
Internet content regulation.
6 Terry Kramer, head of the US delegation to WCIT, quoted in Arthur, Charles (14 December 2012).
Internet remains unregulated after UN treaty blocked, The Guardian, available at:
http://www.guardian.co.uk/technology/2012/dec/14/telecoms-treaty-Internet-unregulated?INTCMP=SRCH
7 See for instance: Powell, Alison (20 December 2012). A sticky WCIT and the battle for control of the
Internet, Free Speech Debate, available at: http://freespeechdebate.com/en/discuss/a-sticky-wcit-and-the-battle-for-control-of-the-Internet/; Mueller, Milton (13 December 2012). What really
2 Digital content as a new policy problem
The early literature on Internet content regulation primarily focused on the
tension between online content regulation and human rights, in particular
freedom of expression and privacy, and constitutional principles such as the rule
of law. State-led initiatives especially were interpreted as censorship and critically
analysed by freedom of expression advocates, computer scientists and legal
scholars. A second wave of literature focused essentially on authoritarian states,
documenting how countries such as China or Iran started to build national firewalls
and sophisticated filtering systems (Deibert et al., 2008; Clayton et al., 2006;
Wright et al., 2011). The spread of information control systems throughout the
world, including in Western democracies (Deibert and Crete-Nishihata (2012,
341) speak of a "norm regression" to designate the transition from the belief
in the Internet as a freedom technology to increasing demands for more
information controls), has more recently led to the emergence of a more
empirical, sometimes apolitical, literature that views Internet blocking not as the
exception but rather as an emerging global norm (Deibert et al., 2010, 2011a;
McIntyre, 2012).
The Internet's role in providing access to information and facilitating global free
expression has been repeatedly underlined by commentators, politicians (e.g.
Clinton's "Freedom to Connect") and institutional reports (e.g. Dutton et al.,
2011; La Rue, 2011; OECD, 2011). However, the borderless nature of
information exchanges conflicts with the body of pre-existing regimes on
information and content regulation that have been established at the national
level. Attempts to harmonise these regulatory regimes often lead to conflicts,
especially as information, and the control thereof, gains strategic and
economic importance (Busch, 2012).
Online content differs from previous types of content in its digital nature.
danah boyd (2008, 2009) distinguishes five "by default" properties of digitised
content: digital content is persistent, replicable, scalable, searchable
and (de)locatable. Online messages are automatically recorded and archived;
once content is placed online, it is not easy to remove. Digital content can be
easily duplicated: over a thousand mirror sites emerged within a week after
WikiLeaks' web hosting contract was terminated by Amazon's cloud services after
the publication of U.S. diplomatic cables in 2010 (Brown and Marsden, 2013;
Benkler, 2011, see below). Digital copies are perfect copies of the original,
contrary to the products of previous recording technologies such as the tape
recorder. The potential visibility of digital content is high: it can be
easily transferred almost anywhere on the globe in a matter of seconds. The
emergence of search engines allows users to access content but also provides a
new opportunity to filter what content is found, depending on the algorithm used.
Finally, mobile technologies dislocate us from physical boundaries while at the
same time locating us in geographical spaces: content is accessible in ever more
places, yet technologies are increasingly constructed to locate us in space and
propose location-based content. These properties are not only socially significant,
as shown in boyd's research, but also politically significant in that they introduce
new social practices and policy responses that may or may not challenge existing
economic, legal and political arrangements.
If the Internet has challenged state sovereignty and oversight over content control,
Internet technologies also offer new possibilities of control by
automating the monitoring, tracking, processing and filtering of large amounts
of digital data. And if the Internet has often been praised for its decentralised nature,
removing the gatekeeping function of intermediaries, certain nodes (e.g. routers
or servers) are increasingly taught to distinguish particular types of content, be
this for reasons of managing network congestion, dealing with security threats,
developing for-profit services or restricting access to certain kinds of content
(Bendrath and Mueller, 2011; Fuchs, 2012). In fact, many of the technologies
used to block Internet content can be used for both legitimate and undemocratic
purposes. They constitute so-called dual-use technologies, often produced by the
cybersecurity industry of Western states, and have been rapidly adopted by
authoritarian regimes (e.g. China built its "great firewall" using technologies
from the U.S. company Cisco, see Goldsmith and Wu, 2006). Surveillance
technologies used for law enforcement or traffic management purposes in Western
democracies, in particular, are invariably exported to authoritarian regimes,
where they are employed against activists and citizens in violation of human rights
protections (Brown and Korff, 2012).
The Open Net Initiative (ONI), a consortium of universities and private
institutions, emerged in 2002 to map content restrictions across the world. Since
2006, it has "mapped content-access control on the Internet in 70 states,
probed 289 ISPs within those states, and tested Web access to
129,884 URLs" (Deibert et al., 2011b, 6). ONI identifies four phases of Internet
access and content regulation, three of which are the titles of ONI's Access books
(Deibert et al., 2008, 2010, 2011a):
THE OPEN COMMONS phase lasted from the 1960s to 2000. Cyberspace was perceived as
distinct from offline activities and either ignored or only lightly regulated.
The period was marked by the belief in an open Internet that enabled global
free speech.
THE ACCESS DENIED phase, from 2000 to 2005, was marked by states
increasingly erecting filters to prevent their citizens from accessing certain
types of content. China in particular emerged as the poster child of content
restrictions by building a highly sophisticated filtering regime
that covers a wide range of content. These controls are based either on
existing regulations or on new legal measures.
THE ACCESS CONTROLLED phase, from 2005 to 2010, saw states develop more
sophisticated forms of filtering, designed to be more flexible and offensive
(e.g. network attacks) in regulating user behaviour, including through registration,
licensing and identity regulations that facilitate online monitoring and
promote self-censorship. The salient feature of this phase is the notion that
there is a large series of mechanisms (including non-technological ones)
at various points of control that can be used to limit and shape
access to knowledge and information (Deibert et al., 2011b, 10). Filtering
techniques are situated at numerous points of the network; controls evolve over
time and can be limited to particular periods such as elections or political
turmoil. Egypt's complete disconnection from the Internet in January 2011
represents the most extreme form and has triggered wide debates about state-
controlled "kill switches" (see for instance Wu, 2010). To achieve
more fine-grained controls, states need to rely on private actors through
informal requests or stricter regulation.
ACCESS CONTESTED is the term used for the fourth phase, from 2010 onwards,
during which the Internet has emerged as a battlefield of competing interests
for states, private companies, citizens and other groups. Democratic states
are increasingly vocal in wanting to regulate the Internet. "The contest over
access has burst into the open, both among advocates for an open Internet
and those, mostly governments but also corporations, who feel it is now
legitimate for them to exercise power openly in this domain", write Deibert
et al. (2011b, 14). A wide variety of groups recognise the growing ubiquity
of the Internet in everyday life and the possible effects of access controls,
with some openly questioning the open standards and protocols that were
thought to have been achieved for good in the 1960s and 1970s. The foundational
principles of an open and decentralised Internet are now open for debate and the
subject of competing interests and values at all stages of decision-making,
both within states and in the international realm.
Conflicts about online information and content are highly diverse, ranging from
privacy to copyright to freedom of expression to security issues. The fight against
child pornography, or child abuse images, has been one of the main reasons for
introducing stricter content controls and blocklists, especially in liberal
democracies.8 Tools often associated with the diffusion of illegal content,
such as peer-to-peer file-sharing or circumvention tools, have equally become the
target of censors (Deibert et al., 2008). States are far from the only actors
with an interest in controlling Internet content, especially since content owners
increasingly invest in network operating facilities. The emerging conflicts generally
oppose consumers and producers of information, who hold diverging interests in
controlling or restricting the propagation of information (Busch, 2012). The
following section offers a short overview of the evolution of content regulation.
The early 1990s, ONI's "open commons" phase, was essentially a period
As stated previously, much attention has been paid to the filtering of
Internet access by authoritarian regimes such as China or Iran (Deibert et al.,
2008, 2010; Boas, 2006). However, liberal democracies, which base their legitimacy
on the protection of civil liberties such as freedom of expression and the rule of
law, are also increasingly considering technological solutions
8 Sexual representations of children are frequently referred to as child pornography, a term rejected by
child protection associations and police forces as hiding the child abuse behind those images.
to content regulation. Automatic information controls have emerged as a new
policy tool in liberal democracies and a global norm for reasserting state
sovereignty in the online realm (McIntyre, 2012; Deibert et al., 2010; Zittrain and
Palfrey, 2008). In fact, Edwards (2009, 39) argues that technological
manipulations of Internet access to content have been widely disparaged in the
West. The Internet Watch Foundation's (IWF) role in blocking content in the UK,
for instance, was unknown to a majority of users until the Wikipedia blocking of
2008.9
Although the Internet's role is to route data packets over the networks without
regard for what content they carry,10 infrastructure is increasingly co-opted to
perform content controls. This can take the form of denying certain actors access
to hosting platforms or financial intermediaries, as in the case of WikiLeaks (see
below). Infrastructure is also increasingly used to enforce intellectual property rights,
through Internet access termination laws for repeated copyright infringement
(so-called three-strikes or graduated
9 In 2008, access to the editing function of Wikipedia was blocked for all UK users after an image of a
naked girl, the cover image of a famous rock band's album that could be officially purchased in UK
stores, was flagged as child pornography and added to the IWF's blocklist. Because ISPs used a
particular type of proxy blocking, this resulted in blocking access to Wikipedia's editing function for all
UK users.
10 This principle has also gained widespread political salience through the net neutrality movement,
particularly in the U.S. and, since 2009, increasingly in Europe. Net neutrality advocates defend the
Internet's end-to-end principle with no traffic discrimination that might prioritise certain types of traffic
over others. For more information, see Marsden (2010).
response mechanisms, as implemented for instance in France, Ireland or the UK,
see Yu, 2010) or through domain name seizures (DeNardis, 2012).
In the West, the rapid development of the mobile Internet has contributed to
increased control and filtering mechanisms being built into mobile access
(Zittrain, 2008). As a result of increased government intervention and corporate
demands, the Internet is increasingly bordered (Goldsmith and Wu, 2006). For
some private actors, the development of technological means of control was a
condition for providing access to content they owned in the first place. This is for
instance the case of the entertainment industry, which uses Geo-ID technologies to
control where in the world users can access its content. Technological
innovation online is largely driven by advertising revenues. Large Internet
corporations such as Google or Facebook are heavily reliant on advertising as
their main source of revenue: over the last three years, about 96% of Google's
total revenue was generated by advertising.11 To increase the effectiveness of
advertisement, contextual or targeted advertising has emerged, proposing
ads based on the assumption that a user interested in one article will be interested in
similar products. Following this logic, the next step is to track user behaviour
across websites, collecting data to establish user profiles and propose products that
fit his or her preferences most closely (Kuehn, 2012). To do this, new technologies
and tools had to be developed to track, collect and process personal data. The
development of Geo-ID technologies, for instance, allows for geographically locating
Internet users with great accuracy, thereby making targeted Internet advertising easier.
Figure 1 provides a summary of the main forms of content control on the Internet,
which will be discussed in more detail below. Since the early 1990s, content
regulation has evolved from enforcement at the source (section 2.2.1) to
enforcement at the destination (section 2.2.2) to enforcement through
intermediaries (section 2.2.3), including through automatic filtering.
Early attempts to deal with problematic content targeted the endpoints of the
network, i.e. the producers and consumers of problematic content (Zittrain,
2003). States could effectively intervene when the content was produced in the
country, by arresting and prosecuting individuals, or when the company hosting
the content held assets in the country concerned. Because the U.S. company
CompuServe had offices and hardware in Munich, Germany, Bavarian
prosecutors succeeded in 1996 in pressuring the group to block access to 200 Usenet
messaging boards containing hard pornography and child abuse images illegal
under German law. Similarly, a Parisian court in 2000 ordered Yahoo
to remove Nazi memorabilia items from its online auction site (see section 1.3).
The company complied, albeit reluctantly, because it held several assets in
France that could be seized by the courts. A similar case is Dow Jones v. Gutnick
(2002), in which the Australian High Court decided that the U.S. company Dow
Jones was liable under Australian defamation laws for an unfounded mention of
Joseph Gutnick in one of its articles that was also published online. All three
examples demonstrate the state's power to effectively regulate what type of
content can be accessed on its national territory. Because the technology of
the late 1990s and early 2000s did not allow content controls to be effectively
limited to particular geographical areas, the result of all three state interventions
was that content illegal in one state was effectively removed from the global
Internet, i.e. also in states where the content was legal. The extraterritorial
effects of the judgements resulted in much controversy, especially in the U.S.,
where commentators condemned the censoring of the U.S. Internet through
foreign speech restrictions (Goldsmith and Wu, 2006; Deibert et al., 2010).
However, the U.S. was among the first to introduce legislation criminalising
the initiation of a transmission of indecent material to minors. The
Communications Decency Act of 1996 (CDA) aimed at introducing content
restrictions to protect minors but was eventually struck down by the courts as
unconstitutional under the First Amendment. Online, it is not
necessarily possible to distinguish between minors and adults; the restrictions
would thus have effectively applied to all Internet users, which was considered
excessively chilling of free speech. All liberal democracies have at their disposal
a more or less broad set of laws that can be used against the source of digital
content, e.g. in cases of defamation, trade secret misappropriation or particular
types of speech (Zittrain, 2003).
Control at the destination includes personal computer filtering software that
makes it possible to monitor and control what type of content is accessed from the
destination's personal computer or network. These filters can be built directly
into computers; sometimes they are also integrated into Web browsers.
These types of filters are used in the corporate environment to prevent employees
from accessing leisure or illegal content from within the company's network.
Parents are also customers of so-called parental control filters, which monitor and
control what type of content their children access.
To avoid state interventions, in the 1990s the World Wide Web Consortium
(W3C) initiated the Platform for Internet Content Selection (PICS) to develop a global
rating system that would enable users to determine their own access to Internet
content (Resnick and Miller, 1996). The idea was
notably supported by important publishing and media conglomerates and
institutionalised through the Internet Content Rating Association (ICRA) in 1999,
whose members were early Internet industry actors, and supported by the
European Safer Internet Action Plan from 1999 to 2002. However, the attempt to
introduce ratings similar to those for motion pictures and television content failed
due to the lack of user adoption and the difficulty of rating highly dynamic and
ever-increasing amounts of Internet content (Mueller, 2010; Brown and Marsden,
2013; Oswell, 1999).
ISPs are not the only intermediaries in a position to enforce content regulations.
Information providers (e.g. search engines), financial intermediaries, DNS
registrars and hardware and software producers are also key
actors. Zittrain and Edelman (2002) noted as early as 2002 that Google filtered its
search results in accordance with local laws, e.g. removing Nazi propaganda and
right-wing extremism in Germany. Goldsmith and Wu (2006) point to the regulation
of financial intermediaries in the U.S. to fight offshore gambling websites: by
forbidding all major credit card institutions to transfer money to offshore
gambling accounts, the U.S. government has effectively impacted user behaviour.
It is still possible for U.S. citizens to engage in offshore gambling, but the
transaction procedure is significantly more cumbersome than before. Also, commercial
owners of the Internet's infrastructure (financial intermediaries, website hosts,
etc.) play an essential role in that they can deny service to controversial speakers,
thus depriving them of the chance to be heard. After whistleblower organisation
WikiLeaks released thousands of U.S. diplomatic cables in 2010, its domain name
was rapidly made unavailable, its data was refused hosting by Amazon's cloud
computing platform and the most popular payment services to WikiLeaks were
interrupted. The organisation, the website and the person of Julian Assange
rapidly came under attack by both
private and public actors (Benkler, 2011). WikiLeaks is an extreme case that
still triggers wide debate. It illustrates nonetheless that the U.S. government could not
directly prevent the website from publishing the controversial cables. The
termination of its hosting platform can also be considered a minor inconvenience,
given that various other actors across the globe offered rapid hosting and mirrored the
cables on hundreds of websites. Removing content from the Internet once it
generates broad public interest is thus
near-impossible.12 The interruption of services by all major global financial
intermediaries is, however, more problematic. It resulted in the loss of 95% of
WikiLeaks' revenue and led WikiLeaks to publicly announce the suspension of
further publications.13 While the group has continued to publish, its
12 This phenomenon is also referred to as the "Streisand effect", following a case in which the U.S. singer
and actress Barbra Streisand used legal means to remove a picture of her villa from the Internet, unwillingly
generating so much publicity that the picture was replicated to such an extent that the legal action had to
be abandoned.
13 Addley, Esther and Deans, Jason (24 October 2011). WikiLeaks suspends publishing to fight
financial blockage, The Guardian, available at:
http://www.guardian.co.uk/media/2011/oct/24/wikileaks-suspends-publishing.
activity is considerably reduced and WikiLeaks continues to face financial
difficulties.14
While it is true that other intermediaries should not be overlooked, ISPs and online
content providers (OCPs) merit particular attention. As gatekeepers of the
Internet, ISPs have the technical capability to monitor their users' activities and are
able to block access to particular types of content through ever more sophisticated
blocking techniques (for an overview see Murdoch and Anderson, 2008). OCPs
such as Facebook or Google attract millions of users on what have been called
quasi-public spheres, spaces that function as the shopping malls or town squares of
the digital realm. However, their content policies are largely defined by their terms
of use and by contract law, which do not benefit from the same constitutional free
speech protections as governmental regulations (York, 2010; MacKinnon, 2012).
Nonetheless, their content policy decisions impact millions of users across the world.
For MacKinnon (2012) these giant Internet companies in fact represent new
corporate sovereigns that make crucial decisions about the type of content
one can access or not. In her 2012 book Consent of the Networked she
demands increased transparency and accountability from corporate and governmental
sovereigns, rejecting however state-led initiatives or stricter legislation. A further
self-regulatory measure that has attracted attention is the Global Network
Initiative, a process set up by Yahoo, Google, Microsoft, human rights groups and
academics in 2006 to reflect on how companies can uphold human rights in the
digital realm, particularly when operating in authoritarian regimes.15 A number of
reports and human rights commitments have resulted from the initiative, which
has however failed to attract further corporations to the effort.
14 Greenberg, Andy (18 July 2012). WikiLeaks reopens channel for credit card donations, dares Visa
and MasterCard to block them again, Forbes, available at:
http://www.forbes.com/sites/andygreenberg/2012/07/18/wikileaks-reopens-channel-for-credit-card-donations-dares-visa-and-mastercard-to-block-it-again/.
15 The European Parliament has for instance demanded sharper export controls of dual-use technologies.
See: European Parliament (27 September 2011). Controlling dual-use exports. Available at:
http://www.europarl.europa.eu/news/en/pressroom/content/20110927IPR27586/html/Controlling-dual-use-exports.
In the mid-1990s, many Internet industry actors in liberal democracies
established private organisations, often supported by public funds, specifically
to deal with sexual images of children. These private bodies set up hotlines to
allow Internet users to flag problematic content and to facilitate takedown and
prosecution by the police. One of the more successful hotlines is run by the
Internet Watch Foundation (IWF), set up in 1996 by the British Internet industry
and part of the broader Inhope network, the International Association of Internet
Hotlines. In the U.S., the National Center for Missing and Exploited Children
(NCMEC) pre-existed the Internet but increasingly focuses on online child abuse.
Hotlines were a response to the fact that the police were not able to deal
effectively with illegal content online (Mueller, 2010). A second reaction was the
rating systems that equally developed in the 1990s but failed, as indicated
previously. The organisations behind the hotlines, such as the IWF, then converted
to supporting the current notice-and-takedown system.
Internet service providers (ISPs) are generally exempt from liability for the content
carried or hosted on their servers as long as they are unaware of its
illegal nature and remove the content swiftly upon notification. This principle
has notably been enshrined in U.S. law16 and in the European "mere conduit"
provisions (e-commerce directive, 2000). The importance of this principle has been
repeatedly underlined by advocacy groups and international organisations (La
Rue, 2011; OECD, 2011). However, the current notice-and-takedown regime
encourages ISPs to remove content as soon as they are notified of its
potentially illegal or harmful nature, in order to avoid liability. This results in so-
called chilling effects on free speech, as content is taken down upon notification
with little or no assessment of whether it is actually illegal. A growing number
of reports suggest that perfectly legal content is being removed under notice-and-
takedown procedures.17 When
16 Section 230 of the Communications Decency Act (CDA) states that "No provider or user of an
interactive computer service shall be treated as the publisher or speaker of any information provided
by another information content provider". The Digital Millennium Copyright Act (DMCA,
1998) also contains safe harbor provisions for copyrighted material.
17 The website http://www.chillingeffects.org/, a project of the Electronic Frontier Foundation (EFF) and
various U.S. universities, aims to inform Internet users about their rights in dealing with
not complying with takedown requests, ISPs or OCPs risk being held liable, as
has recently been the case with Google and Yahoo in two defamation cases in
Australia.18
Furthermore, research by Moore and Clayton (2009) indicates that there are
strong variations in removal times depending on the type of
content being taken down. Despite the lack of an overarching legal framework,
phishing websites19 are removed very rapidly, while child abuse images, which are
illegal across the globe, suffer long removal times. The authors argue that this has
to do with the incentives of the actors involved: banks act very promptly, while
child abuse images are dealt with by the police and encounter many jurisdictional
issues when the content is not hosted in the police force's own country.
notice-and-takedown requests and documents abuses of the DMCA safe harbor provision in chilling
legitimate speech.
18 Holland, Adam (28 November 2012). Google Found Liable in Australian Court for Initial Refusal to
Remove Links, in: Chilling Effects, accessed on 18 December 2012
at: http://www.chillingeffects.org/weather.cgi?WeatherID=684
19 Phishing websites are sites that appear genuine (typically banking sites) in order to dupe Internet users
into entering their passwords and login credentials, which are then used for fraud.
20 Google Transparency Report, Copyright removal Requests, retrieved on January 18, 2013 from:
https://www.google.com/transparencyreport/removals/copyright/.
Figure 2: Copyright removal requests to Google search per week
21 Blocklists have sometimes been leaked on the Internet. The Australian Communications and Media
Authority (ACMA) blocklist was leaked by WikiLeaks in 2009 and several sites were found not to
conform to ACMA's content rating system, for instance the website of a dentist in Queensland. More
recently, in March 2012, more than 8,000 websites, including Google and Facebook, were blocked by
the Danish child pornography filter. See EDRi (14 March 2012). Google and Facebook Blocked by the
Danish Child Pornography Filter, available at: http://www.edri.org/edrigram/number10.5/danish-filter-blocks-google-facebook.
transparency and accountability to be introduced to the existing systems (see for
instance Bambauer, 2009; Edwards, 2009; for a critical stance on this
development see Mueller, 2010).
tiveness (Deibert et al., 2008, 2010). The EU's CIRCAMP blocklist, used by
various national hotlines, relies on DNS filtering, even though it can be easily
circumvented. British Telecom's Cleanfeed system employs a hybrid filter. In the
U.S., the National Centre for Missing and Exploited Children (NCMEC) has provided
voluntary blocklists to ISPs since 2007, many of which use them to filter their
networks. Some companies, such as AT&T, also use the hash values provided by
NCMEC to monitor and filter private communications for known images, thus
including non-Web content (McIntyre, 2012). The computer science literature
focuses on the detailed implementation of these blocking techniques, principally in
authoritarian regimes (for China see, for instance, Wright et al., 2011; but see
Clayton (2005) for an analysis of the UK Cleanfeed system; see also our technical
report about Internet blocking).
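The hash-value approach described above can be illustrated with a short sketch: content is reduced to a digest and checked against a list of digests of known images. The blocklist entry, hash function choice and function names below are illustrative assumptions, not details of NCMEC's actual system.

```python
import hashlib

# Illustrative hash blocklist (a real NCMEC-style list distributes hash
# values of known abusive images, not the images themselves).
BLOCKLIST = {
    "5d41402abc4b2a76b9719d911017c592",  # MD5 of b"hello", as a stand-in entry
}

def is_blocked(payload: bytes) -> bool:
    """Return True if the payload's MD5 digest appears on the blocklist."""
    return hashlib.md5(payload).hexdigest() in BLOCKLIST

print(is_blocked(b"hello"))       # True: digest matches the stand-in entry
print(is_blocked(b"other data"))  # False: unknown content passes through
```

Because matching operates on digests of the data itself rather than on URLs, the same check can be applied to e-mail attachments or file transfers, which is what makes this technique reach beyond Web content.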
The lack of reliable data measuring Internet blocking was already pointed
out in 2003 by Zittrain, who subsequently participated in building up
ONI and Herdict. More recently, the New America Foundation's Open Tech-
nology Institute, the PlanetLab Consortium, Google Inc. and individual
researchers have initiated the Measurement Lab (M-Lab), a Web platform that can
host a variety of network measurement tools for broadband and mobile connections.
While some of the available tests are targeted more specifically at measuring the
quality of broadband connections, the use of deep packet inspection (DPI), a
technology that opens up data packets and examines their content, has recently
come to the centre of attention. DPI is used for a variety of reasons,
including bandwidth management, network security or lawful interception, but can
also be used to regulate content, prioritise certain products over competing
services, target advertising or enforce copyright (Bendrath and Mueller, 2011). As
a result, several teams of researchers have developed new tools to measure and
assess DPI use by Internet service providers, which is unregulated in most
countries (see for instance Dischinger et al., 2010).
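What DPI adds over header-based filtering can be shown in a minimal sketch: a packet payload is classified by matching protocol signatures in its content. The signature table below is a simplified assumption; real DPI engines use far richer pattern sets and per-flow state.

```python
# Signatures matched against packet payloads, not just headers -- the
# defining feature of deep packet inspection.
SIGNATURES = {
    # A BitTorrent handshake begins with the byte 0x13 followed by the
    # string "BitTorrent protocol".
    "bittorrent": b"\x13BitTorrent protocol",
    "http": b"GET ",
}

def classify(payload: bytes) -> str:
    """Return the protocol whose signature the payload starts with."""
    for proto, sig in SIGNATURES.items():
        if payload.startswith(sig):
            return proto
    return "unknown"

print(classify(b"\x13BitTorrent protocol" + b"\x00" * 8))  # bittorrent
print(classify(b"GET /index.html HTTP/1.1\r\n"))           # http
```

Once a flow is classified this way, the operator can throttle, block or prioritise it, which is precisely why DPI raises both net neutrality and content regulation concerns.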
First academic assessments have emerged: Dischinger et al. (2008), for instance,
assessed BitTorrent blocking, finding particularly high values
for U.S. ISPs such as Comcast. More recent research by Mueller and Asghari (2012)
and Asghari et al. (2012b), using the Glasnost test available on M-
Lab, investigates the use of DPI technology for throttling or blocking
peer-to-peer (P2P) applications over three years. They use bivariate analysis to test
possible economic and political drivers of DPI technology and its
implementation by 288 ISPs in 75 countries.22
22 For a critical assessment of methodological issues regarding Internet throttling measurements see Asghari et
al. (2012a).
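A bivariate analysis of this kind reduces to correlating one candidate driver at a time with observed DPI use across ISPs. The sketch below computes a Pearson coefficient on invented data; the variable names and values are illustrative assumptions, not Mueller and Asghari's actual dataset.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Invented per-ISP data: a bandwidth-cost index (candidate economic
# driver) vs. the share of connections showing DPI-based throttling.
cost_index = [1.0, 2.0, 3.0, 4.0, 5.0]
dpi_share = [0.05, 0.10, 0.20, 0.25, 0.40]
print(round(pearson(cost_index, dpi_share), 2))  # 0.98
```

A coefficient near 1 on real data would suggest the economic driver co-varies with DPI deployment, though, as with any bivariate test, it cannot establish causation or control for confounders.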
Interestingly, Mueller and Asghari (2012) find that governmental regulation
in the U.S. and Canada did not impact DPI use in the way one might expect. In both
countries, DPI use resulted in public protests, litigation and the development of new
regulation based on net neutrality principles. The public confrontation clearly
impacted DPI use in the U.S., where ISPs considerably decreased their use of the
technology, even when the FCC ruling was challenged. In Canada, however, the new,
uncontested regulation did not reduce DPI use, which actually increased after the
regulation was passed. Legislation alone is therefore not able to explain this
apparent paradox.
3 Future research
The literature review presented the main research questions and findings on Internet
content regulation as they have evolved since the introduction of the Internet in the early
1990s. Of particular interest is the nature of new regulatory arrangements, ranging
from self- to co- to state regulatory intervention (see also Marsden, 2011), put in
place to respond to growing concerns about a wide range of illegal or harmful content,
such as copyright-infringing material or content deemed harmful to minors or
threatening to public order.
The various techniques and points of control have been discussed to highlight
where states and private actors could intervene to control digital information
flows. Particular attention has been paid to blocking techniques and their
legal and democratic implications. Finally, we have discussed recent
research providing empirical evidence of the amount of blocking carried out in
liberal democracies, identifying several shortcomings.
First, there remains a lack of reliable and longitudinal data about what type of
content is blocked or removed, by which type of actor, where, and through which
process. Recent initiatives such as M-Lab provide first opportunities to gather
and analyse large amounts of data but nonetheless present several
methodological challenges (see for instance Asghari et al., 2012a). Regulatory
authorities such as the U.S. Federal Communications
Commission (FCC) or the Body of European Regulators for Electronic
Communications (BEREC) are in the process of carrying out large broadband
connection tests that might result in relevant data for this research project in the
coming years.
Finally, there has been limited attention to the political drivers and factors
surrounding the adoption and implementation of blocking techniques.
Much of what we know about Internet blocking in liberal democracies
comes from media reports and freedom of expression advocates, with
little systematic analysis. Future research will benefit from a comparative and
systematic perspective on Internet blocking in liberal democracies in particular.
REVIEW OF LITERATURE
Online learning environments are platforms where educational courses are delivered
through the Internet or Web-based instructional systems, either in real time
(synchronously) or asynchronously. Reid (2005) stated that a Web-based instructional
system, or online learning, is easy and inexpensive compared to traditional learning
methods. According to Moore and Kearsley (2005), Web-based instruction can make
extensive use of network technologies to incorporate a variety of organizational,
administrative, instructional, and technological components, offering flexibility
in the new methodology of learning. Online learning is self-managed when an
instructor provides the software programs and resources to transfer new skills while
the learner controls the process to achieve their own objective of acquiring those new
skills (Gravill & Copeau, 2008). The process of online learning is therefore
implemented by the learner, who becomes an active controller instead
of the passive learner that has been the norm in the past. Online learners need to
understand the dynamics of an online setting (Voderwell & Savery, 2004). Learners
need to know how online learning works, including interactions, relations, perceptions,
and the role of learners. But are they ready for online learning environments?
The Internet cannot and should not be regulated like old media.
However, more can and should be done, especially in relation to harmful content.
Most problematic Internet content is neither illegal nor harmful, and users must take
appropriate responsibility while being advised on tools and techniques.
Over time, we should regulate broadcasting and the Internet in a less differentiated manner.
If we do not have a rational debate on the regulation of the Internet and come up with practical
and effective proposals, then many of the one third of UK homes that are still not on the Internet
will be deterred from going online, many of those who are on the Net will be reluctant to use it as
extensively and confidently as they should, and we run the risk that scare campaigns will be
whipped up around particularly harmful or offensive content, tempting politicians and regulators
to intervene in ways that the industry would probably find unhelpful.
References
Asghari, H., Mueller, M. L., van Eeten, M. J. G., and Wang, X. (2012a).
Making Internet measurements accessible for multi-disciplinary research: an
in-depth look at using M-Lab's Glasnost data for policy research. Submitted
to IMC'12.
Asghari, H., van Eeten, M. J. G., and Mueller, M. L. (2012b). Unravelling the
economic and political drivers of deep packet inspection. An empirical study of DPI
use by broadband operators in 75 countries. Paper presented at the GigaNet 7th
Annual Symposium, November 5, Baku, Azerbaijan.
Bendrath, R. and Mueller, M. (2011). The end of the Net as we know it? Deep packet
inspection and Internet governance. New Media & Society, 13(7):1142–1160.
Benkler, Y. (2011). A free irresponsible press: Wikileaks and the battle over the soul of the
networked fourth estate. Working draft. Forthcoming in Harvard Civil Rights-Civil
Liberties Law Review.
Boas, T. (2006). Weaving the authoritarian web: The control of Internet use in non-
democratic regimes. In: Zysman, J. and Newman, A., editors, How
Revolutionary Was the Digital Revolution? National Responses, Market Transitions,
and Global Technology, pp. 361–378. Stanford University Press, Palo Alto, CA.
Boyd, Danah (2008). Taken Out of Context. American Teen Sociality in Networked Publics.
PhD thesis, University of California, Berkeley.
Braman, S. (2009). Change of State: Information, Policy, and Power. MIT Press, Cambridge, MA.
Breindl, Y. and Wright, J. (2012). Internet filtering trends in liberal democracies: French
and German regulatory debates. Paper presented at the FOCI12 workshop, 2nd
USENIX workshop on Free and Open Communications on the Internet, August 6,
2012, Bellevue, WA.
Brown, I. (2008). Internet filtering: be careful what you ask for. In: S. Kirca and
L. Hanson, editors, Freedom and Prejudice: Approaches to Media and Culture, pp.
74–91. Bahcesehir University Press, Istanbul.
Brown, I. and Marsden, C. (2013). Regulating Code: Good Governance and Better Regulation
in the Information Age. MIT Press, Cambridge, MA.
Busch, A. (2012). Politische Regulierung von Information – eine Einführung. In: Busch and
Hofmann (2012), pp. 24–47.
Busch, A. and Hofmann, J., editors (2012). Politik und die Regulierung von Information,
volume 46 of Politische Vierteljahresschrift. Nomos, Baden-Baden, Germany.
Castells, M. (2001). The Internet Galaxy: Reflections on the Internet, Business, and Society.
Oxford University Press.
Clayton, R. (2005). Failures in a hybrid content blocking system. Presented at the Workshop
on Privacy Enhancing Technologies, Dubrovnik, Croatia.
Clayton, R., Murdoch, S. J., and Watson, R. N. M. (2006). Ignoring the great firewall of China.
In: Danezis, G. and Golle, P., editors, Privacy Enhancing Technologies workshop (PET
2006), LNCS. Springer.
Deibert, R. J., Palfrey, J. G., Rohozinski, R., and Zittrain, J. (2008). Access Denied:
The Practice and Policy of Global Internet Filtering. Information Revolution and Global
Politics. MIT Press, Cambridge, MA.
Deibert, R. J., Palfrey, J. G., Rohozinski, R., and Zittrain, J. (2010). Access Controlled:
The Shaping of Power, Rights, and Rule in Cyberspace. Information Revolution and
Global Politics. MIT Press, Cambridge, MA.
Deibert, R. J., Palfrey, J. G., Rohozinski, R., and Zittrain, J. (2011a). Access
Contested: Security, Identity, and Resistance in Asian Cyberspace. Information
Revolution and Global Politics. MIT Press, Cambridge, MA.
Deibert, R. J., Palfrey, J. G., Rohozinski, R., and Zittrain, J. (2011b). Access contested:
Toward the fourth phase of cyberspace controls. In: Deibert, R. J., Palfrey, J. G.,
Rohozinski, R., and Zittrain, J., editors, Access Contested: Security, Identity, and
Resistance in Asian Cyberspace, Information Revolution and Global Politics, pp. 3–20.
MIT Press, Cambridge, MA.
DeNardis, L. (2009). Protocol Politics: The Globalization of Internet Governance. MIT Press,
Cambridge, MA.
DeNardis, L. (2012). Hidden levers of Internet control. Information, Communication & Society,
15(5):720–738.
Dischinger, M., Gummadi, K. P., Marcon, M., Mahajan, R., Guha, S., and Saroiu, S. (2010).
Glasnost: Enabling end users to detect traffic differentiation. Paper presented at the
USENIX Symposium on Networked Systems Design and Implementation (NSDI).
Dischinger, M., Mislove, A., Haeberlen, A., and Gummadi, K. P. (2008). Detecting BitTorrent
blocking. Paper presented at IMC.
Dutton, W. H., Dopatka, A., Hills, M., Law, G., and Nash, V. (2011). Freedom of connection,
freedom of expression: the changing legal and regulatory ecology shaping the Internet.
Technical report, UNESCO.
Dyson, E., Gilder, G., Keyworth, G., and Toffler, A. (1996). Cyberspace and the American
dream: A magna carta for the knowledge age. Information Society, 12(3):295–308.
Edwards, L. (2009). Pornography, Censorship and the Internet. In: Edwards, L. and Waelde, C.,
editors, Law and the Internet. Hart Publishing, Oxford, UK.
Faris, R. and Villeneuve, N. (2008). Measuring global Internet filtering. In: Deibert, R. J.,
Palfrey, J. G., Rohozinski, R., and Zittrain, J., editors, Access Denied: The Practice and
Policy of Global Internet Filtering, Information Revolution and Global Politics, pp. 5–28.
MIT Press, Cambridge, MA.
Freedom House (2012). Freedom on the Net 2012. A Global Assessment of Internet and
Digital Media. Technical report, Freedom House.
Froomkin, M. (2011). Lessons learned too well: The evolution of Internet regulation.
Technical report, CDT Fellows Focus series.
Fuchs, C. (2012). Implications of deep packet inspection (DPI) Internet surveillance for
society. The Privacy & Security Research Paper Series 1, Uppsala University.
Goldsmith, J. and Wu, T. (2006). Who Controls the Internet? Illusions of a Borderless World.
Oxford University Press, Oxford/New York.
Hintz, A. (2012). Challenging the digital gatekeepers: International policy initiatives for free
expression. Journal of Information Policy, 2:128–150.
Hofmann, J. (2012). Information und Wissen als Gegenstand oder Ressource von
Regulierung. In: Busch and Hofmann (2012), pp. 5–23.
Johnson, D. R. and Post, D. G. (1996). Law and borders: the rise of law in cyberspace.
Stanford Law Review, 48(5):1367–1402.
Johnson, P. (1997). Pornography drives technology: Why not to censor the Internet.
Federal Communications Law Journal, 49(1):217–226.
Kuehn, A. (2012). Cookies versus Clams. Tracking Technologies and their Implications for
Online Privacy, Paper presented at the GigaNet 7th Annual Symposium, November 5,
Baku, Azerbaijan.
La Rue, F. (2011). Report of the Special Rapporteur on the promotion and protection of the right to
freedom of opinion and expression. Technical Report A/HRC/17/27, Human Rights Council,
Seventeenth session, Agenda item 3.
Lessig, L. (1999). Code: And Other Laws of Cyberspace. Basic Books, Cambridge, MA.
Lessig, L. (2006). Code: And Other Laws of Cyberspace, Version 2.0. Basic Books, New
York.
Löblich, M. and Wendelin, M. (2012). ICT policy activism on a national level: Ideas, resources
and strategies of German civil society in governance processes. New Media & Society,
14(6):899–915.
MacKinnon, R. (2012). Consent of the Networked: The Worldwide Struggle For Internet
Freedom. Basic Books, New York.
McIntyre, T. (2012). Child abuse images and cleanfeeds: Assessing Internet blocking systems.
In: Brown, I., editor, Research Handbook on Governance of the Internet. Edward Elgar,
Cheltenham.
McNamee, J. (2011a). Internet blocking: crimes should be punished and not hidden.
Technical report, EDRi.
Moore, T. and Clayton, R. (2009). The Impact of Incentives on Notice and Take-down. In:
Managing Information Risk and the Economics of Security, pp. 199–223. Springer US.
Mueller, M., Pagé, C., and Kuerbis, B. (2004). Civil society and the shaping of communication-
information policy: Four decades of advocacy. The Information Society: An International
Journal, 20(3):169.
Mueller, M. L. (2002). Ruling the root: Internet governance and the taming of cyberspace.
Information Revolution and Global Politics Series. MIT Press, Cambridge, MA.
Mueller, M. L. (2010). Networks and States: the global politics of Internet governance.
Information Revolution and Global Politics Series. MIT Press, Cambridge, MA.
Mueller, M. L. and Asghari, H. (2012). Deep packet inspection and bandwidth management:
Battles over BitTorrent in Canada and the United States. Telecommunications Policy,
36(6):462–475.
Murdoch, S. J. and Anderson, R. (2008). Tools and technology of Internet filtering. In:
Deibert, R. J., Palfrey, J. G., Rohozinski, R., and Zittrain, J., editors, Access
Denied: The Practice and Policy of Global Internet Filtering, Information Revolution and
Global Politics, pp. 29–56. MIT Press, Cambridge, MA.
Musiani, F. (2011). Privacy as Invisibility: Pervasive Surveillance and the Privatization of Peer-to-
Peer Systems. tripleC: Cognition, Communication, Co-operation, 9(2):126–140.
OECD (2011). Joint declaration on freedom of expression and the Internet. Technical report,
OECD.
Oswell, D. (1999). The dark side of cyberspace: Internet content regulation and child
protection. Convergence: The International Journal of Research into New Media
Technologies, 5(4):42–62.
Rasmussen, T. (2007). Techno-politics, Internet governance and some challenges facing the
Internet. Research Report 15, Oxford Internet Institute.
Reidenberg, J. (1998). Lex informatica: The formulation of information policy rules through
technology. Texas Law Review, 76(3):553–584.
Reporters without borders (2012). Internet enemies 2012. Report, Reporters without borders for
Press Freedom.
Resnick, P. and Miller, J. (1996). PICS: Internet access controls without censorship.
Communications of the ACM, 39(10):87–93.
Vanobberghen, W. (2007). The Marvel of our Time: Visions about radio broadcasting in the
Flemish Catholic Press, 1923–1936. Paper presented at the 25th IAMCR conference, Paris,
France.
Wellman, B. (2001). Physical place and cyberplace: The rise of personalized networking.
International Journal of Urban and Regional Research, 25(2):227–252.
Wright, J., de Souza, T., and Brown, I. (2011). Fine-grained censorship mapping information
sources, legality and ethics. In: Proceedings of Freedom of Communications on the Internet
Workshop.
Wu, T. (2010). The Master Switch: The Rise and Fall of Information Empires. Atlantic Books,
London.
York, J. C. (2010). Policing content in the quasi-public sphere. Technical report, Open Net
Initiative Bulletin.
Yu, P. K. (2004). The Escalating Copyright Wars. Hofstra Law Review, 32:907–951.
Zittrain, J. (2003). Internet Points of Control. Boston College Law Review, 43(1).
Zittrain, J. (2008). The Future of the Internet and How to Stop It. Yale University Press,
New Haven, CT.
Zittrain, J. and Edelman, B. (2002). Localized Google search result exclusions: Statement of
issues and call for data. Available at http://cyber.law.harvard.edu/filtering/google/.
Zittrain, J. and Palfrey, J. G. (2008). Internet filtering: The politics and mechanisms of
control. In: Deibert, R. J., Palfrey, J. G., Rohozinski, R., and Zittrain, J., editors,
Access Denied: The Practice and Policy of Global Internet Filtering, Information
Revolution and Global Politics, pp. 29–56. MIT Press, Cambridge, MA.