Presentations Session 1 – Structural Inequalities, Social Inclusion and the Principle of Non-Discrimination
Persons with Disabilities in Cyprus and the lack of accessibility in the virtual world
Ioanna Georgiou, Andreas Christodoulou and K. Stavros Parlalis, Frederick University, Cyprus
Theoretical framework: Technology is an integral part of progress and development in our daily lives. The concept of technology came to prominence in the late 1960s during the third industrial revolution, when computers, automation, machine development and the replacement of manual labour came into the spotlight. The expanding network of services and data, together with the involvement of people in production and in supply to receiver-users, is bound to lead to the fourth “industrial revolution” (Kastanas, 2019). Technology comprises all those tools, skills and processes that enable users to achieve their goals, including retrieving information and improving their quality of life (European Patent Office, 2017). However, what happens when users have reduced opportunities to access information, receive services, and enjoy entertainment and socialization? People with disabilities are a group that often has difficulty accessing such information. This can be due to a variety of factors, as disability can be related to any physical, mental or sensory disorder in which the individual faces difficulties and barriers to equal interaction with the environment and society (“Legislation on Persons with Disabilities 2000 (127(I)/2000)”, 2000). Factors such as the country of residence, resources, equipment and facilities are crucial for inclusive access. The interpretation of universal access is an important element for each product and service concerned, encompassing universal design and the respective adaptations. In the case of people with disabilities, this concept should be subject to the necessary changes in its application, so that it responds as far as possible to their individual needs. The difficulties that arise in individualized treatment are addressed under the term “reasonable adjustments”, meaning the appropriate modifications and arrangements required to ensure this right for persons with disabilities (Basdekis, 2012).
Approach and methodology: The researchers employed desktop research for the purposes of the current study, focusing on relevant legislation, protocols, reports and regulations in Cyprus.
Empirical outcome: The United Nations Convention on the Rights of Persons with Disabilities lists “accessibility” among its basic principles. In Cyprus, this concept in practice focuses mainly on the access of people with disabilities to buildings, and not so much on their access to opportunities for networking, learning, information and socialization through the internet. Our working experience with people with disabilities confirms these weaknesses; we receive daily requests from individuals and/or their families mentioning similar difficulties. Accordingly, the current presentation seeks to describe the factors that affect the access of people with disabilities to the Internet and their active and equal participation in it, while emphasizing the benefits that the ability to participate brings them. It also seeks to make policy recommendations that will contribute to their internet access, including all actions for relevant adjustments.
Rethinking Rights in Social Media Governance: Human Rights, Ideology and Inequality
Rachel Griffin, Sciences Po, France
Social media platforms are powerful intermediaries for online communication, media and cultural production. As such, they are a major concern for policymakers and academics. Much of the legal literature on social media analyses them through the lens of human rights (e.g. Jørgenson, 2019; Frosio and Mendis, 2020; Sander, 2020). Fundamental rights also play a major role in EU social media law, serving both as general guiding values and as the key check on state and corporate power.
This paper questions the dominance of human rights as the primary normative framework for social media regulation and academic critiques thereof. It draws on literature from critical legal studies (e.g. Kennedy, 2002), feminist political theory (e.g. Brown, 2000, 2011), and law and political economy (e.g. Kapczynski, 2019) which critiques the liberal-individualistic orientation of human rights, their inability to address structural conditions and collective interests, and the tendency of individual legal rights to disproportionately benefit the privileged.
These arguments are highly relevant to EU social media regulation, which relies heavily on individual rights and remedies, and downplays structural issues and collective interests. They are further developed in the social media context by examining the mounting evidence that platforms systematically reproduce oppression along axes such as gender, race and sexuality. For example, they disproportionately suppress content from marginalised groups, promote content that reinforces regressive stereotypes, and profile users in ways that reinforce structural inequalities. By reviewing empirical evidence illustrating these issues and analysing legal sources including the Copyright Directive, Terrorist Content Regulation, and recent ECJ cases Glawischnig-Piesczek and Poland v Parliament and Council, the paper shows that such issues cannot adequately be addressed within a human rights framework.
As a legal framework, human rights fail to address collective and societal issues – for example, platforms’ diffuse influence on culture and social norms – and favour individualistic remedies like content removal appeals, which not only overlook collective interests but cannot even offer effective, equal protection to individuals. In political discourse, the seemingly apolitical language and individualistic orientation of human rights can legitimise corporate activities, while downplaying structural questions about the political economy of this privatised, highly-concentrated, advertiser-funded industry – which is fundamental to understanding how platforms reproduce social inequalities.
Finally, the paper discusses alternative views of rights which are arguably better suited to addressing social media’s unequal impacts. More structural or collective framings of human rights (e.g. Cohen, 2017; Sander, 2021) nonetheless have important limitations: they still depoliticise issues of group-based oppression by framing them in universal terms, and overlook social issues that are irreducible to individual rights. However, as critical race theorists (e.g. Crenshaw, 1988) have argued – even while agreeing with many of the aforementioned criticisms – rights remain pragmatically necessary for progressive movements. Broad-based support for human rights and their entrenched role in EU law mean that critics of social media cannot avoid relying on them. At the same time, academics concerned with platforms’ unequal impacts should seek to develop more explicitly political critiques of social media law, based on alternative normative visions.
The right to be excluded from the information society
Georgios Terzis, Vrije Universiteit Brussel, Belgium and Dariusz Kloza, Universiteit Gent, Belgium
In this contribution, we discuss whether there is, or should be, a right to be excluded from the information society.
Our contribution is motivated by the observation that, nowadays, various services have become available predominantly – and sometimes only – via the internet (e.g. banking services, passenger locator forms or applications for social assistance). Overall, these days, life without internet access has become unduly burdensome and – at times – impossible. As a result, these developments have further deepened the ‘divide’ between those who benefit therefrom and those who do not.
While we do not deny that the so-called ‘digital divide’ is a negative phenomenon, we argue that both the debates on, and the practice of, overcoming such a ‘divide’ tend to neglect a complementary point: in leaning towards a de facto obligatory inclusion in the information society, they neglect the choice of individuals not to partake therein.
In the first part of this contribution, from the perspective of global ethics, we critically revisit the old arguments for inclusion (cf. e.g. the 1980 MacBride report) as well as more recent ones (e.g. the 2003–2005 World Summit on the Information Society), while also surveying new arguments, in particular those recently inspired by a public health crisis (cf. e.g. the case for access to education or e-justice). Most importantly, we posit that ‘inclusion’ has not been sufficiently defined and elaborated, as it is presumed that everyone needs and wants to be included.
Therefore, in the second part, we confront these arguments for inclusion with those for exclusion, both old and new, as – in our view – the use of ‘the digital’ should be an option. We argue that the financial, literacy, ascetic or pleonastic factors, which have been argued to be the main ones leading to exclusion, also support strong arguments for the right to be excluded. Furthermore, any obligation to use technologies is often unfair, especially towards people who do not know how to use them or cannot afford to do so. (The right not to use the internet, if there were one, constitutes a case in point here.)
We conclude by arguing that human rights could be successfully invoked to protect individuals against a de facto obligatory inclusion in the information society. This is nothing new, as there already exist (or have been interpreted) many human rights that safeguard some degree of exclusion, e.g. the right to privacy or the so-called negative aspects of the freedom of expression (e.g. a right not to express oneself) or the freedom of assembly (e.g. a right not to join a group). Such a right to be excluded would further safeguard the choice of individuals not to partake in the information society.
Presentations Session 2 – Freedom of Expression beyond the EU Digital Services Act: Regulating Private Regulators
European views on the privatization of the public space: addressing human rights restrictions in platforms’ user terms
Berdien van der Donk, Copenhagen University, Denmark
Can a large-scale social media platform (SMP) decide to block legal content? Should a platform with more than three billion users be able to restrict access to protected speech, to decide that criticism of COVID-19 policy is misinformation, or that euthanasia equals suicide? This research project addresses the dilemma of restrictive user terms through a combination of contract and human rights law. Through the enforcement of their user terms, social media platforms have slowly become the guardians of online speech. The freedom to conduct a business in article 16 of the European Charter, and the freedom of contract enclosed therein, allow platforms to freely draft their user terms. Seemingly, they are free to exclude whichever content they want, including content that falls within the scope of protected speech. The project will answer where exactly this freedom to draft user terms stops.
The project is twofold: first, it assesses the pluriform legal orders applicable to user terms on social media platforms, addressing the perspectives of both the platform and the platform’s users; secondly, it undertakes a double comparative study (internal and external) of the national legal systems of four European Member States: Denmark, Germany, Italy, and the Netherlands.
The internal study compares case-law on content restrictions in the user terms of social media platforms with historical case-law on access and content restrictions related to non-platform actors. It will show that European national courts have recently begun to define the role of social media platforms more clearly. Unfortunately, this development is turning into a fragmented European system. Whereas some Member States have qualified social media platforms as ‘privatized public spaces’, others have chosen to treat these platforms as strictly private property. This fragmentation can be traced back to the emergence of house rights (based on the right to property) and the increasing influence of human rights on agreements between private parties. In most Member States, human rights are incorporated into national private law through ‘open norms’. Consequently, national courts apply human rights horizontally in disputes between a platform and its user, thereby slowly dissolving the public-private distinction in law.
The external part of the study compares the national approaches of the four Member States with one another to assess whether a discrepancy exists within the European Union. The analysis exposes an incoherent and inconsistent system, which leads to legal uncertainty for both platforms and their users in the European Union. The differences arguably emerged from the existing fragmented system of offline access restrictions and should be addressed consistently. In its current form, the upcoming Digital Services Act will not solve these problems. Therefore, a final decision on the role of online platforms is urgently needed.
Regulate the Journey, not the Destination: The Digital Services Act and Freedom of Expression Online
Torben Klausa, Bielefeld University, Germany
From a regulator’s perspective, the challenges of digital content moderation have for a long time circled around the – increasingly frustrated – outcry “What is illegal offline must also be illegal online!” Recent years, however, have revealed a danger that goes beyond said worry about illegal material, namely lawful but awful content: questionable posts containing disinformation, racism, or homophobia that might be technically legal but can still endanger public discourse and democracy. In this regard, states find themselves between a rock and a hard place. They depend on social media companies as private entities in two respects: to obey and enforce existing state law – but also to introduce their own private terms of service (ToS) and community standards, which need to be stricter than the law to keep lawful but awful content at bay.
This situation has led to the rise of a new regulatory approach that focuses less on platforms’ substantive rules and ToS and more on the way said rules are enforced. Instead of states imposing certain rules for online content and forcing private platforms to carry them through – a practice coined as new-school speech regulation by Balkin (2014) – a trend towards procedural regulation focuses less on what rules platforms enforce and more on how they do it. This way, procedural regulation appears to be an option to leave the details of content regulation out of the state’s hands, while still guaranteeing a certain level of users’ rights by imposing transparency, accountability, and rights to appeal – in other words: due process guarantees – on private platforms.
This approach of protecting users’ rights online by putting social media companies into procedural chains formerly known only to the constitutional state itself has been discussed by academics in the field (e.g. Haggart and Keller 2021; Douek 2020; Van Loo 2020; Bloch-Wehba 2019; Bunting 2018; Suzor 2018). The concept’s relevance, however, increases with its recognition by law-makers. This is reflected not only in national regulatory reforms like the German Network Enforcement Act, or NetzDG (Klausa 2022). With the European Union’s Digital Services Act (DSA) about to be passed into law, one of the global players in digital policy-making is on the verge of making procedural regulation the new standard for protecting freedom of expression online. How does the DSA embody the concept of procedural regulation? And what are the leverage points for safeguarding users’ rights online via the non-substantive regulation of online platforms? Based on the existing theoretical literature on the concept, the envisioned paper traces the dawn of procedural regulation in the EU’s coming DSA and analyzes whether the concept in practice can live up to its promises in theory. It shall be shown that the dichotomy is less about enforcing rules offline and online – and more about a shift of procedural boundaries from states to platforms.
Tracing the contestability of content moderation: Where are the users?
Naomi Appelman, University of Amsterdam, Netherlands
Over the past two decades, social media platforms’ content moderation systems have developed into the main structures governing online speech. These governance structures are under pressure, as scholarly and regulatory debate focuses on how they are broken, endanger the freedom of expression, and facilitate a slew of problematic speech. Central to these issues is the dependence of users on these content moderation systems for participation in online communication and the adjudication of online conflicts. This is often contrasted with an image of a participatory early internet where the systems governing online speech were open to contestation. This paper seeks to explore this purported contrast and uncover how the contestability of online speech governance has changed from the 1990s to the early 2020s. Specifically, it will focus on how the influence and agency of users with respect to content moderation practices has evolved throughout the development of content moderation systems and their regulation. Retracing this development will make visible which (legal) structures contribute to people’s current position vis-à-vis content moderation systems and, crucially, illuminate how EU online speech regulation can start to increase the contestability of speech governance.
Building on extensive internet histories, historical legal work on online speech governance, and a growing critical online communication scholarship, the paper traces the development of content moderation practices in relation to the people whose content is moderated as a specific aspect of these broader histories along four dimensions: i) The organisational and governance aspects of content moderation systems themselves, including a specific focus on the political and economic interests of the corporate social media platforms that drive the development of these systems, ii) the technical functioning and affordances of these systems, iii) important legal and regulatory developments, as well as the complex relation between the large tech platforms and regulatory authorities, and iv) the sociotechnical imaginaries, as described by Jasanoff, driving both the development of content moderation systems and their regulation.
This layered analysis will show that, with the increased importance of content moderation systems, there is a clear trend towards diminishing popular influence on these systems. While the big communication platforms make online communication and expression extremely accessible, people’s ability to influence or modify these systems is, overall, low. Crucially, the debates and developments have as their main object specific expressions in the online context. The paper contends that the people behind these expressions seem to disappear behind the content and the rights or responsibilities connected to it. This takes place in a context where content moderation practices, and online speech governance more broadly, have increasingly become a site of political contestation. The result is that communication platforms and governments are competing for control over online speech governance, which makes for a rapidly developing landscape. Building on these conclusions, lessons are drawn for how the contestability of online speech governance can be increased and how the people whose content is moderated can emerge from under the shadow of constraining content moderation systems.
The right to communicate online as an expression of freedom of speech and secrecy of correspondence. A comparative analysis on the example of European Union member states
Stanislaw Edelweiss, Leibniz Universitaet Hannover, Germany
People’s communication has always been part of culture and identity. With the great and unprecedented development of information technology, the tools that facilitate communication now enable real-time sending and receiving of complex data.
Depending on how this service is rendered, as well as who contributes to its content, the European legal order distinguishes several types of provider, including but not limited to intermediary, access, host, or content providers.
Nowadays, sending text messages online is the minimum minimorum expected from service providers. The longer the digital revolution lasts, the greater the options for exchanging content between users become. Exchange is no longer limited to a simple chat function but extends to sending extensive audio or video material.
This exchange serves the purpose of expressing will, emotions, and ideas. For most users, almost unlimited access to online communication methods creates opportunities for borderless dialogue with people from distant countries, speeds up business operations, and finally realizes the idea of the global village. Even if the advantages are manifold, these platforms are also used by those who try to exploit them for illegal activity. Therefore, many countries have introduced, or are introducing, regulations that allow interference with and moderation of user-generated content. This naturally means that some data, if not all, placed in an online messaging service is subject to internal screening or further scrutiny by the service provider to limit this type of activity. Oftentimes, such proactive measures are tied to transparency reporting obligations, i.e., making publicly available information about the amount or type of content that has been disabled or removed. Additionally, some countries expect service providers to report on criminal activity that arises through these channels. It has also become a gold standard among EU countries to secure collaboration with these platforms through a contact point that can enable lawful interception, which is nothing else but the review of user-generated content by law enforcement. Since not everything can be monitored by a service provider or state officials, some EU member states impose on platforms an obligation to build software that allows users to report illegal content, making them active collaborators in flagging content that should be moderated or reviewed. As much as it is needed and valuable that illegal activity is limited, this also poses a threat of potential infringement of users’ freedom and right to privacy.
In this paper, the author presents in a comparative way the latest legal developments in this area, including telecommunication laws and the transposition of the European Electronic Communications Code (Directive (EU) 2018/1972) into the national legal orders of several EU member states, and describes the potential outcome of the EU’s latest Regulation 2021/784 on addressing the dissemination of terrorist content online. Moreover, it tries to answer the question of what constitutes illegal content and whether a universal definition of it exists. Finally, it asks whether this legislative effort limits innovation and whether it is technically too hard for service providers to implement.
Order versus Chaos – Democracy and Information Online
Mavili Moura, Coimbra University, Portugal
It is usual to assume as a premise the interface between information and democracy. This premise grounds the notion that access to quality information is critical for a well-functioning democracy. This near-mythic cultural status assumes that when information flows freely, truth and good information eventually reveal themselves. In an ideal world, where equal access to information is present and citizens’ literacy is uniform, it is perhaps possible to imagine that quality information would prevail. The current information environment provokes turbulence and chaos in the field of democracy. It carries the potential either to break apart the existing form of democracy or to reorganize it in a new configuration. In this scenario, with the new information systems and their ability to carry many different messages simultaneously from and to many locations, the content of information is also characterized by the way speech is construed legally. The impact of the use of information technologies on democracy depends not just on the technologies of information and communication themselves but also on the rules governing their use and on the entities that make these rules.
These factors present a challenge today. The European Commission proposed a legislative initiative to upgrade the rules governing digital services in the European Union. The Digital Services Act is the result of the need to give people control over the kind of online content they wish to read, watch, and share in a world where life is fundamentally intertwined and entangled with algorithmic media of all sorts. The clash between freedom of expression and restrictions on illegal online content or misinformation is a challenging reality in the attention economy. This clash involves not only the large online content platforms but also the smaller ones. The free marketplace of ideas serves as a shield for those who profit from illegal content and misinformation. The elemental role of information in democracy and democratic debate, especially in a digitalised world, should be repositioned. Considering the above, and based on bibliographic and documentary research, the study aims to assess how the Digital Services Act can contribute to the protection of the right to information vis-à-vis the universal aspiration of democracy in the online environment.
Presentations Session 3 – Mobilizing Human Rights in the Digital Age
A digital constitutionalism framework for AI: human rights as socio-technical standards
Nicola Palladino, Trinity College Dublin, Ireland
AI is increasingly crucial in everyday life and social relations, which raises both expectations about AI’s capacity to foster human well-being and concerns about the risks to human autonomy and integrity. In the last few years, the awareness that the full potential of this technology is attainable only by building a trustworthy and human-centric framework – avoiding both misuses of AI applications capable of endangering people and underuse resulting from a lack of public acceptance – has spread notably across all stakeholders. As a result, we have witnessed a flourishing of initiatives setting out ethical codes and good governance principles for AI development, which nonetheless seem unable to close the “principle-to-practice” gap, raising doubts about whether they are mere “ethical washing” initiatives.
This paper argues that a human rights or digital constitutionalism framework constitutes a more suitable approach to developing a trustworthy and human-centric framework for AI governance, providing an already widely recognized set of standards supported by national and international institutions. Moreover, human rights rely on a system of reflexive evaluations capable of resolving tensions and balancing competing concerns.
However, a traditional human rights approach may prove ineffective, because most violations and harms to people’s integrity and autonomy occur at an opaque technological layer of governance, outside public awareness and scrutiny. Furthermore, human rights norms are typically too general and state-oriented to be directly applied in operative contexts.
This study aims to shed light on how human rights standards could be operationalized in the AI development and deployment context.
Based on Gunther Teubner’s Societal Constitutionalism, this paper conceives of fundamental human rights as counter-institutions given in the ‘code’ and embedded into the socio-technical design of digital systems, which counter the expansionist and harmful tendencies of digitalization.
The paper first focuses on the interplay between human rights and artificial intelligence by reviewing related human rights instruments and highlighting how artificial intelligence applications could impact human rights and what human rights standards could apply to automated decision systems.
Then, the results from a database of artificial intelligence development and management tools – mapping available instruments that “put into practice” specific human rights standards – will be employed to discuss the potential and shortcomings of the current landscape, paying particular attention to instruments such as human rights impact assessments, human rights due diligence, fairness metrics, disparate impact mitigation tools, and explainability methods.
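To make one of these instruments concrete, the following minimal sketch (illustrative only, not drawn from the paper or its database of tools) shows how a disparate impact ratio – one of the fairness metrics mentioned above – can be computed over a toy set of automated decisions. The group labels, data and decision semantics are assumptions made for the example.

```python
# Illustrative sketch of one fairness metric: the disparate impact ratio,
# i.e. the favourable-outcome rate of an unprivileged group divided by that
# of a privileged group. Data and group labels below are hypothetical.

def disparate_impact_ratio(outcomes, groups, privileged):
    """Return {group: rate(group) / rate(privileged)} for non-privileged groups."""
    favourable = {g: 0 for g in set(groups)}
    total = {g: 0 for g in set(groups)}
    for y, g in zip(outcomes, groups):
        total[g] += 1
        favourable[g] += y  # y == 1 means a favourable decision (e.g. loan granted)
    rates = {g: favourable[g] / total[g] for g in total}
    return {g: rates[g] / rates[privileged] for g in rates if g != privileged}

# Toy example: ten automated decisions across two demographic groups.
outcomes = [1, 0, 1, 1, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

for g, r in disparate_impact_ratio(outcomes, groups, privileged="A").items():
    # A ratio below 0.8 is often read (by analogy with the US "four-fifths
    # rule") as a signal of potentially discriminatory impact.
    print(f"group {g}: disparate impact ratio = {r:.2f}")
```

Disparate impact mitigation tools then adjust data, models or decision thresholds until such ratios move back towards parity.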
Finally, recommendations will be developed on how to advance the operationalization of human rights standards in the design and management of AI systems combining technical tools with organizational arrangements and regulatory frameworks.
When unconsent is no option: Assessing the impact of mandatory digital identity systems on human rights
Rosanna Fanni, Centre for European Policy Studies, Belgium
The general trend of anti-competitive behaviour by a few dominant companies has triggered substantial security, privacy and sovereignty questions for governments. While the past decade was marked by a relative erosion of national sovereignty in favour of global connectivity and export-oriented innovation, national governments increasingly seek to seize control of essential digital infrastructures for citizens, as well as the regulatory frameworks for those infrastructures. Debates about public and private platform regulation worldwide seem to indicate a global paradigm shift placing governments increasingly at the centre of essential digital infrastructures while also self-attributing greater power to determine the rules.
Along with states’ digital ambitions comes an increasing demand for proof of identity online. Identifying and authenticating citizens’ identity has become a de facto standard for many citizens to access the most basic online services and platforms to communicate or work. These identity systems are run by governments, sometimes by private companies, or by a combination of both – however, the extent to which and the conditions under which this is the case have never been studied in a comparative manner. This paper asks what similarities and differences exist in the design of mandatory digital identity (ID) systems. More broadly, this paper raises the critical question of states’ ambiguous role in controlling essential digital infrastructure while also being responsible for protecting human rights.
The recent international trend – or rather, push – to implement digital ID projects is seemingly rooted in the digitalisation of public administration or digital development. Digital ID systems are said to provide legal identity – a right enshrined in Article 6 of the Universal Declaration of Human Rights (the right to recognition as a person before the law) and Sustainable Development Goal 16.9 (the right to legal identity). This is why governments around the world are increasingly mandating citizens to register in national digital ID systems. However, numerous documented disproportionate impacts on vulnerable and marginalised populations make it imperative to better understand the design and features of specific digital ID systems within regimes. Digital rights organisations document how digital ID systems exclude or discriminate against citizens, exacerbate data exploitation and abuse, and create a chilling effect through surveillance. In spite of these documented human rights violations, questionable digital ID projects remain under-researched by the academic community as well as policy professionals, owing to the lack of systematic interdisciplinary research on this matter. As the European Union currently revises the eIDAS regulatory framework, a better understanding of the human rights pitfalls is imperative, as the current proposal leaves questions around mandatory registration and the reuse of data by third parties largely unaddressed. To date, no comprehensive risk impact assessment of mandatory digital ID systems against universal human rights exists. This paper aims to fill the research gap by comparatively assessing implemented digital ID systems and their reported impacts on human rights through a systematic literature review. In so doing, the paper makes it possible to compare the dual practices of states as providers of essential digital infrastructure and rule-makers thereof.
Old Norms in a New Context: The Formulation, Implementation and Enforcement of Human Rights in the Digital Age
Evelyne Tauchnitz, University of Lucerne, Switzerland
The digital transformation is affecting every aspect of our lives, from the individual to the global level. The extent of, and the way in which, people are affected varies, depending on the intention and purpose for which digital technologies are used. In a peaceful context, digital technologies can help us to tackle human challenges related to poverty, exclusion and inequalities. In a violent context, however, digital technologies are prone to create new threats and risks for humanity, most notably through the development of new cyber weapons, lethal autonomous weapon systems (LAWS) and new forms of espionage and mass surveillance.
Societies can respond to these new challenges with different governance strategies, ranging from education and awareness-raising to a complete ban on certain harmful technologies. In my presentation, I aim to discuss two internet governance options: 1) the search for a new global and legally binding agreement – e.g. a new UN convention – on the values and purposes that we want technology to serve, and 2) possibilities for improving the implementation and enforcement of already existing legally binding human rights treaty law – namely the two international covenants of 1966 – with regard to the internet and digital technologies.
The first option, trying to reach a comprehensive international agreement on the purposes that the development and use of digital technologies should serve, is undoubtedly a challenging undertaking; yet, if successful, it would hold the advantage of providing a normative framework that could guide and cover new and emerging uses of technology not only now but also in the future. One mechanism for drafting internationally binding agreements is the UN convention. Several UN conventions already exist that address global challenges such as the protection of biological diversity, children’s rights or the prohibition of certain weapons that cause unnecessary human suffering. In my presentation I will argue that, given the global nature of the digital transformation, we also need to look for global solutions to legislate on the development and use of digital technologies. Reaching a comprehensive agreement on the values and purposes that we want technology to serve would allow us to draw the (legal) boundaries within which technology needs to operate. Exploring the political and legal challenges to the drafting of a new UN convention on digital technologies is a further key step. Whether these challenges could be overcome remains an open question, yet one worth discussing.
Alternatively, however, it might also be ‘enough’ to make sure that already existing international human rights law – namely the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR), both adopted in 1966 – is implemented more successfully when it comes to human rights online. Politically, this would require global enforcement mechanisms and effective access to remedy where harm occurs. Until now, however, private tech corporations and authoritarian regimes have quite successfully lobbied against the possibility of legally enforcing human rights with concrete sanctions at the global level.
Presentations Session 4 – Normative Models, Normative Powers: Digital Constitutionalism, Digital Sovereignism, Extraterritorialism and Laissez-Faire
Between Digital Constitutionalism and Sovereignty: The Emergence of a European Model of Internet Regulation
Mauro Santaniello, Università degli Studi di Salerno, Italy, Francesco Amoretti, Università degli Studi di Salerno, Italy and Fortunato Musella, Università degli Studi di Napoli Federico II, Italy
Over the past five years, European strategies and policies relating to the global governance of the Internet have marked a sudden turnaround. The conservative approach that the European Union had adopted since the World Summit on the Information Society (WSIS) in 2003 and 2005 – essentially aligned with the US position in defense of the private self-regulation regime – has given way to requests for reform formulated at the highest institutional levels. This political turn has been substantiated in declarations and programmatic speeches by leading politicians, in the initiatives of regulatory authorities, in a number of policy documents, and above all in new legislative processes which, taken together, seem to represent a paradigmatic shift in EU digital policies. This transformation happens at the intersection of two normative models that have recently emerged in the debate on Internet governance. The first is digital constitutionalism, which places the protection of human rights and liberal-democratic principles at the core of institutional action in the cyber domain. The second model underlying the new European initiatives is so-called digital sovereignty, which, in various forms and modalities, tends to rearticulate the distribution of digital power between state, business and civil society actors.
The aim of this paper is to better understand how the European Union conceptualizes and practices digital constitutionalism and digital sovereignty, how these models are combined with each other in the EU’s digital strategies, and what the main consequences of this new approach are, both in terms of human rights protection and in terms of policy change at the global level.
The research adopts a constructivist theoretical perspective, focusing on discursive interactions between the several actors engaged in the production, implementation and evaluation of digital policies in a multilayered governance environment, and addressing the ways in which these interactions shape policy and institutions. In more detail, this study draws on the theories of “hybrid constitutionalism”, which looks at the interplay between social forces and legalization practices within processes of formal and informal constitutionalization, and of “fragmented sovereignty”, which conceptualizes sovereignty as a complex semantic sediment resulting from an accumulation of arguments that are embedded in specific historical settings and produce varying institutional arrangements.
Moving from this theoretical framework, the authors conduct a narrative policy analysis of more than fifty EU policy documents, identifying storylines ascribable to each of the two normative models, mapping their connections and overlaps, and stressing the political significance of the contradictions and tensions arising in EU digital policies. The findings are expected to clarify the traits and directions of the emerging European approach to Internet governance, and to support theoretical thinking about new trajectories and cleavages between different regulatory models at the global level.
The normative dimension of the EU cybersecurity securitization
Domenico Fracchiolla, Università degli Studi di Salerno, Italy
From the protection of the Single Market to the prevention of cybercrime, from the protection of infrastructures and critical information systems to cyber-defence, cybersecurity has become a strategic policy area of the EU and one of its main security priorities. The evolution of European Union (EU) cybersecurity policy has been investigated in relation to the European integration effort, in terms of the development of the policy (Carrapico and Farrand 2020) and securitization theory (Dunn Cavelty, 2008; Hansen & Nissenbaum, 2009; Lawson, 2013), with the contribution of the EU Commission to the EU securitization process (Brandao and Camisao 2021). However, a comprehensive approach to cybersecurity governance within the EU focused on the relevance of human rights and democratic values is still missing, and recent literature has strongly criticized the securitization process, proposing a de-securitization move aiming at a positive cyberpeace (Burton and Christou 2021). This paper argues that the principle of resilience – the main ideational goal and fundamental value (Cavelty 2013) promoted in all the main EU documents of the last fifteen years – far from expressing a de-securitization move, can be considered the harbinger of an EU securitization process based on democratic values, referring to the normative dimension of securitization (Floyd 2018; Roe 2012). Bearing this background in mind, the paper explores the nexus between the securitization process and the EU liberal cybersecurity culture, which epitomizes the role of the EU in shaping the conditions of human rights in cyberspace. In order to pinpoint the policy’s re-orientation and evolution, qualifying the essential characteristics of EU cybersecurity policy in terms of the democratic principles enclosed in the notion of resilience, Mary Kaldor’s global security culture (2018) is the theoretical framework engaged. The research questions addressed focus on the identification of the EU’s evolving ecosystem of cybersecurity governance, the lasting relevance of the securitization process in EU cybersecurity policy, and the extent to which the EU has been able to develop a global security culture of liberal peace within its cybersecurity policy. The analysis will be conducted considering the main EU documents relevant to understanding a comprehensive approach to EU cybersecurity policy. In particular, the Directive on Security of Network and Information Systems (2016), the Cybersecurity Strategy for the EU (2013; 2019) as well as the Digital Agenda for Europe (2010), the EU’s Internal Security Strategy (2010), the EU External Security Strategy (2010), the Global Strategy for the EU’s Foreign and Security Policy (2016), the European Agenda on Security (2016), the Cyber Diplomacy Toolbox (2017), the Cyber Defence Policy Framework (2018) and the EU toolbox for 5G security (2019) have provided the fundamental guidance, in terms of significant tools and mechanisms, for addressing the cybersecurity issue. The critical constructivism analytical framework applied to the Copenhagen School is employed to fully understand the development of EU cybersecurity policy, its discursive framing, and the shaping of this policy’s design and trajectory, integrating the model of collective securitization outlined by Sperling and Webber (2018) for the EU.
GDPR Codes of Conduct and their Territorial Features: European ‘Data Imperialism’ through Soft Law?
Carl Vander Maelen, Ghent University, Belgium
Scholars have intensely debated the territorial application of the EU’s General Data Protection Regulation laid down in article 3 GDPR (see, among others: Svantesson, 2020). Some have accused it of promoting ‘data imperialism’ (Mannion, 2020), whereas others argue that data regulation requires a rethinking of territoriality (Streinz, 2021, p. 925).
Although soft law in data protection has been explored thoroughly (see, among others: Marsden, 2011), the territorial aspects of GDPR soft law are under-researched. However, the EU’s multi-level and multi-actor harmonization-oriented regulatory strategy (Wessel & Wouters, 2007) makes soft law an important aspect of its regulatory activities.
Codes of conduct are such a GDPR soft law tool. Article 40.2 GDPR encourages their development to specify the Regulation’s other provisions. Article 40.3 states that actors subject to the Regulation can adhere to GDPR codes, meaning article 3 GDPR determines the scope. However, article 40.3 GDPR also states that “codes of conduct approved […] and having general validity […] may also be adhered to by controllers or processors that are not subject […] pursuant to Article 3”.
This raises several research questions regarding the interplay between the (extra)territorial features of the GDPR and codes of conduct. First, what categories of potential adherents to GDPR codes can be identified? This paper will identify three categories: adherents to codes by way of article 3.1 GDPR, adherents via article 3.2 GDPR, and adherents via article 40.3 GDPR. Second, how do these categories relate to the three different types of GDPR codes that exist, namely national codes, transnational codes, and codes having general validity (article 40 paragraphs 5-9 GDPR)? Finally, which scenarios exist for the types of adherents with regards to their expected commitment to (the types of) codes of conduct and what are the potential effects thereof on their activities intra-EU and extra-EU?
This paper employs a doctrinal research methodology: it identifies the GDPR as a positive statement of law and examines applying article 3 GDPR to codes, following a deductive and explanatory legal reasoning (Farrar, 2010, p. 91). This coincides with an internal legal methodology: the paper uses legal principles, doctrines and concepts to build critical reasoning around an authoritative text (McCrudden, 2006, p. 633).
Bibliography:
Farrar, J.H. (2010). Legal Reasoning. Thomson Reuters.
Mannion, C. (2020). Data Imperialism: The GDPR’s Disastrous Impact on Africa’s E-Commerce Markets. Vanderbilt Journal of Transnational Law, 53, 685.
Marsden, C.T. (2011). Internet Co-Regulation: European Law, Regulatory Governance and Legitimacy in Cyberspace. Cambridge University Press.
McCrudden, C. (2006). Legal Research and the Social Sciences. Law Quarterly Review, 122, 632–650.
Streinz, T. (2021). The Evolution of European Data Law. In P. Craig & G. de Búrca (Eds.), The Evolution of EU Law (3rd ed., pp. 902–936). Oxford University Press.
Svantesson, D.J.B. (2020). Article 3: Territorial scope. In C. Kuner, L.A. Bygrave, C. Docksey, & L. Drechsler (Eds.), The EU General Data Protection Regulation (GDPR): A Commentary. Oxford University Press.
Wessel, R.A., & Wouters, J. (2007). The Phenomenon of Multilevel Regulation: Interactions between Global, EU, and National Regulatory Spheres. International Organizations Law Review, 2(2), 257–289.
“Ethical, Human-Centric AI” going global: European Union leadership and actorness in AI
George Christou, University of Warwick, United Kingdom, Trisha Meyer, Vrije Universiteit Brussel, Belgium and Rosanna Fanni, Centre for European Policy Studies, Belgium
The European Union (EU) approach to Artificial Intelligence (AI) is underpinned by a risk-based strategy that seeks to harness the opportunities and maximise the benefits of AI and at the same time, ensure that the challenges are addressed to avoid adverse and undesirable outcomes. The rapid technological evolution of AI has raised societal and policy concern that technology will autonomously evolve in a direction that disregards ethical values and human rights.
The EU’s response has thus been to develop an approach to AI that embeds trust and excellence through clear ethical and legal guidelines and rules for the development of AI technology. In April 2018 the Commission launched the European AI Strategy and a Coordinated Action Plan, with a two-pronged policy of making the EU a world-class hub for AI while ensuring that AI is ethical and human-centric. The Commission’s White Paper on AI, published in February 2020, set out a vision for AI in Europe as an ecosystem of excellence and trust. Finally, in April 2021, the Commission developed a proposal for a regulatory framework (the AI Act) and a revised Coordinated Action Plan to “promote the development of AI and address the potential high risks it poses to safety and fundamental rights” (European Commission 2021: 1). The EU’s ambition, then, is to create rules on AI that foster trust and adoption by EU citizens in the evolving AI-enabled ecosystem, while protecting EU fundamental rights in the digital age. While the objectives and impact of its AI policy for European citizens and businesses are clear, less is known about the EU’s external or global ambitions for AI.
This paper interrogates the EU’s leadership claims and asks how we can understand the sort of leadership the EU can exert in AI through its ethical, human-centric approach. To do this, we suggest that a more conceptually informed account is required, one that allows us to identify the type of leadership the EU can exert as an actor in AI within the existing global opportunity structure. Here we posit that the EU will require both exemplary and diplomatic leadership skills to exert influence in what is a relatively new domain of global governance. We therefore suggest a novel approach that fuses actorness and leadership frameworks in order to shed light on: a) the internal dimensions of actorness, to assess the EU’s potential for exemplary leadership; and b) the external dimensions of actorness, to assess its potential for diplomatic leadership in ensuring that the challenges and opportunities such a technology represents are navigated in a safe, secure and ethical manner.
Using evidence gathered from analysis of primary and secondary documents and semi-structured interviews, we (1) explore the EU’s ambitions in the emerging international AI policy ecosystem, and (2) assess how exercising exemplary and diplomatic EU leadership can contribute to global governance standards for ethical, human-rights centred AI development and deployment worldwide.
Does the implementation of the GDPR safeguard adequate privacy protection, & general human rights law, for the average EU internet-user? Or, is it just a vague political scheme?
Antonia Frangou, European Law and Governance School – EPLO Institute, Cyprus
Background & Methodology:
The introduction of the GDPR seems to be creating legal confusion about its efficiency, its implementation and, lastly, its success. In addition, it raises the further question of whether it is concerned largely with political causes rather than legal obligations.
This proposal aims to undertake a legal theory analysis to address the aforementioned issues, by examining how human rights as a spectrum co-exist with this political scheme and by identifying the legal obligations that evolve from this novel Regulation. The legal theory analysis will also include a detailed examination of the scope of the GDPR itself and of how private data are handled by “controllers and processors”, i.e. the affected businesses.
Moreover, the research will undertake empirical work to collect new results in order to assess the efficiency of the GDPR in relation to a) businesses and b) EU internet users. The quantitative analysis does not necessarily need to collect case-by-case evidence; a general market-based analysis would be helpful for this empirical research.
The research aims to be detailed, discussing all of the above questions in relation to European Union trade and competition rules and their correlation with the results. Specifically, the free market will be thoroughly examined with regard to how it has responded to the enforcement and practical application of the GDPR.
Building on this, the paper will then examine the final question. By legally analyzing the components of the EU market and how it operates, the final question will be examined in the following steps:
Step 1: Assessing the evidence of the research for the two previous questions; in detail, the legal theory in relation to human rights, the case study in relation to businesses (quantitative approach) and, lastly, the qualitative material collected on EU internet-users.
Step 2: Performing a legal analysis of the EU common market and its laissez-faire legal elements.
Step 3: Comparing the data collected during Step 1 with the data collected in Step 2, making Step 2 a platform for legal analysis, as the common market is the pool for all the aforementioned legal questions.
Step 4: Answering the question of whether the GDPR is “just a vague political scheme”. This will be established if the implementation of the GDPR proves to create more obstacles to the operation of the Common Market rather than making it more efficient. The paper will then examine the correlation this has with the legal protection an average EU internet-user enjoys, to conclude whether the justification for this protection is indeed legal (a legal theory analysis of human rights in the EU spectrum) or just the result of a political agenda that did not examine the potential legal complications.
Presentations Session 5 – Democratic Values and the (Lost?) Promises of Multistakeholderism, Peer Production and Decentralization
Institutional Sources of Legitimacy in Multistakeholder Global Governance at ICANN
Hortense Jongen, Vrije Universiteit Amsterdam & University of Gothenburg, Netherlands and Jan Aart Scholte, Leiden University & University of Duisburg-Essen, Netherlands
Multistakeholder global governance has risen in recent decades as a major alternative to old-style multilateralism, particularly in the fields of environment, food, health, corporate accountability, and the Internet. In contrast to multilateral organizations, which develop global cooperation among nation-states, multistakeholder regimes assemble representatives of various sectors that ‘have a stake’ in a particular problem (e.g., academe, business, civil society, governments, and technical experts).
Multistakeholder initiatives often present themselves as more effective, democratic and fair than multilateralism. Yet how far have these multistakeholder initiatives been able to attract legitimacy: on what grounds and to whom? In particular, how far do legitimacy beliefs toward multistakeholder regimes share the same institutional sources that other research has found to count for multilateralism?
This paper examines these questions in relation to one of the leading global multistakeholder bodies, the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is an interesting case for studying these questions, given this multistakeholder scheme’s size and careful attention to institutional design. Moreover, ICANN is important: overseeing several key technical functions of the global Internet infrastructure, it plays a crucial role in making a single global Internet possible. This raises the questions of what drives legitimacy beliefs toward such a technical organization, and how far human rights considerations matter in these contexts.
Drawing on mixed-methods survey interviews conducted with some 500 individuals across the ICANN regime, this paper explores the relationship between their assessments of institutional qualities on the one hand and their legitimacy beliefs toward ICANN on the other. First, we are interested in finding out what institutional qualities participants find important for ICANN. For example, how important do they find it that ICANN is accountable to all stakeholders or that it promotes human rights in its operations and the DNS? Second, we seek to determine how far participants see these institutional qualities realized in practice. Third, we aim to identify how far legitimacy in ICANN is rooted in perceptions of its purpose, procedure, and performance.
We present three key findings:
1. In terms of principles, participants in ICANN generally attach more importance to technocratic performance and democratic and fair procedure than to democratic and fair performance. Participants give particularly low scores to ICANN promoting democratic values and human rights. In contrast, respondents find it between ‘quite’ and ‘very’ important that ICANN delivers on technocratic procedure.
2. In terms of practice, participants in ICANN are generally most satisfied with the institution’s purpose, technocratic performance, and democratic procedures. Participants give comparatively lower scores to ICANN’s delivery on technocratic and fair procedures as well as on democratic and fair outcomes (i.e., that ICANN promotes democratic values and human rights).
3. Regarding links between institutional qualities and legitimacy beliefs, participants in ICANN who are more satisfied with several aspects of the organization’s purpose, its adherence to fair, democratic, and technocratic procedures, and its ability to deliver technocratic outcomes are generally more likely to have high confidence in ICANN.
The Limits to Peer Production in Security Infrastructures: Technological and Regulatory Challenges to the PGP Web of Trust
Ashwin Mathew, King’s College London, United Kingdom
As one of the earliest publicly available encryption programmes, Pretty Good Privacy (PGP) was intended to usher in an era of secure online communication, acting as a bulwark against the perceived overreach of government surveillance. The PGP Web of Trust (WoT) functioned as an essential adjunct to PGP, providing a decentralised infrastructure to validate and connect the identities of PGP users to their encryption keys, through a cryptographically secured social network. Ever since their inception in the 1990s, PGP and the WoT together promised secure online communication independent of any centralised authority, whether government or corporation. PGP and the WoT offer an early example of a successful system based on what we now term commons-based peer production, as PGP users coordinated directly with their immediate acquaintances – through non-hierarchical non-market action – to construct the WoT as what could be termed an “information security commons” providing the basis of a decentralised system for secure online communication. PGP and the WoT have been used extensively around the world since their creation, and remain in active use by information security and open source software communities, with over 6 million PGP keys currently observable in the WoT.
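To picture the mechanism just described – this is a sketch of my own for illustration, not code from the paper – the WoT can be modelled as a directed graph of key certifications, in which validating an unknown key amounts to finding a signature path from one’s own key. The key names are invented; real PGP implementations such as GnuPG additionally verify the signatures cryptographically and weigh trust levels and expiry.

```python
# Minimal model of the Web of Trust as a directed graph: an edge A -> B
# means "A has signed (certified) B's key". A key can be validated if a
# certification path connects one's own key to it. Names are hypothetical.

from collections import deque

signatures = {
    "alice": ["bob", "carol"],  # Alice has certified Bob's and Carol's keys
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": [],
    "erin": [],
}

def trust_path(start, target):
    """Breadth-first search for a certification path from start to target."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in signatures.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the key cannot be validated via the WoT

print(trust_path("alice", "erin"))  # ['alice', 'carol', 'erin']
```

The keyservers discussed next exist precisely to publish and replicate these certifications so that such paths can be discovered.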
In spite of these successes, the last few years have seen significant technological and regulatory challenges to the infrastructure of the WoT, to the point where the very existence and utility of the WoT today face serious problems. This is most visible in the rapid decline in the global population of PGP keyservers (from a peak of over 120 to fewer than 40 as of this writing), which constitute the decentralised database of publicly visible PGP keys and cryptographic material that makes up the WoT, replicated across keyservers independently operated by volunteers around the world. A robust keyserver infrastructure is essential for the distribution and discovery of relationships across the WoT. As this infrastructure declines, so does the WoT.
In this paper, I explore the technological and regulatory challenges behind this decline. I focus on two cases which have seen significant discussion within the PGP keyserver operator community: the “poison key” attacks via the WoT, which effectively denied users access to PGP, and GDPR requests, which have caused many keyserver operators to take their keyservers offline. The result is a more fragmented keyserver infrastructure, with a new generation of keyservers adopting centralised approaches and abandoning support for the WoT. I employ ethnographic methods to examine these cases, drawing on my experience of operating a PGP keyserver and participating in PGP operational communities for four years. I argue that my findings offer broader lessons for the design, operation, and governance of decentralised systems, illustrating limits to peer production that arise internally within systems from technological choices, and externally from regulatory environments.
Techno-legal Challenges in Digital Identity Infrastructures: the Example of Self-Sovereign Identities
Alexandra Giannopoulou, University of Amsterdam, Netherlands and Ioannis Krontiris, Homo Digitalis, Greece
In the current context of informational capitalism, datafication and the use of data-driven technologies by private and public actors across many areas of life have significantly altered the ways in which societies operate and perceive individuals. In this sociotechnical ecosystem, blockchain-based systems emerged with the promise of a panacea for the dominant centralization observed (Bodo, Brekke & Hopman 2021; Bodo & Giannopoulou 2019). In the field of personal data protection, blockchain is highlighted as a technology enabling data sovereignty and compliance with founding personal data protection principles such as data minimization and data protection by design (Giannopoulou 2021).
The main regulatory tool for the protection of personal data is the General Data Protection Regulation (GDPR). Blockchains and the GDPR both seek to grant users greater control (Finck 2018). While the latter pursues this objective by imposing duties of vigilance on centralized controllers and processors and by reinforcing the rights of data subjects, blockchains claim to go further by trying to eliminate any intermediary actor and the need to trust them. Through these efforts to reorganize the techno-social architectures of personal data governance, proposals have emerged to reorganize the architectures of digital identity. In Europe, these proposals are aligned with an effort to modernize both technological and legal tools. In particular, the European Commission recently proposed the modification of the legal framework applicable to digital identity, namely the eIDAS Regulation.
The creation of a new ecosystem of unique, user-controlled digital identity through a secure technological application is an aspiration that has gradually gained importance in a dispersed way. Various identity management solutions are emerging in different jurisdictions, with the aim of creating a unified privacy-preserving identity that bridges digital identity with non-digital representations.
Recognizing the need for innovation in identification technologies, the supply of digital identification products has grown exponentially under the name of self-sovereign identity (SSI). The term describes a technological identity management system created to operate independently of public or private entities, based on decentralized technologies, and designed to prioritize user security, privacy, personal autonomy and self-governance (Giannopoulou & Wang 2021). The idea of SSI was created as an expression of personal digital sovereignty, the fundamental principles of which have been systematically expressed by Christopher Allen (2016).
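To illustrate the interaction pattern that SSI systems build on – a hedged sketch, not the design of any particular project – the snippet below mimics the issuer/holder/verifier flow of verifiable credentials. All identifiers are hypothetical, and the symmetric HMAC “proof” merely stands in for the asymmetric signatures and decentralized identifiers (DIDs) that real SSI systems use.

```python
# Toy issuer -> holder -> verifier flow, loosely shaped after the W3C
# Verifiable Credentials data model. The HMAC "proof" is a stand-in for
# real public-key signatures; all identifiers below are made up.

import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret-demo-key"  # real SSI uses asymmetric key pairs

def issue_credential(subject_did, claims):
    credential = {
        "issuer": "did:example:university",
        "credentialSubject": {"id": subject_did, **claims},
    }
    payload = json.dumps(credential, sort_keys=True).encode()
    credential["proof"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return credential

def verify_credential(credential):
    presented = {k: v for k, v in credential.items() if k != "proof"}
    payload = json.dumps(presented, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

vc = issue_credential("did:example:alice", {"degree": "LLM"})
print(verify_credential(vc))               # True: claim verifies as issued
vc["credentialSubject"]["degree"] = "PhD"  # tampering with the claim...
print(verify_credential(vc))               # ...makes verification fail: False
```

The point of the sketch is the separation of roles: the holder presents the credential without the issuer being contacted at verification time, which is the alternative SSI offers to centralized identity providers.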
Self-sovereign identity projects have increasingly embraced blockchain, due to the coincidence of goals and technology needs. These existing proposals seem to deviate somewhat from their initial objective, namely to detach identity from state sovereignty in the issuance and recognition of technological means of identification. For this reason, this market of digital identity and identification technologies has now, in practice, been renamed with the more general term digital identity.
The question this article will try to answer is: are self-sovereign digital identity solutions suitable for ensuring data sovereignty and individual empowerment? To answer this question, we will address challenges emerging from both the technological architectures and the applicable institutional and normative frameworks.