Did you miss any of our events, or do you want the highlights from them? Now you will not miss anything important.

CHECK OUT OUR PREVIOUS EVENTS

30. 11. 2020


The “Right to be Human in the Age of AI” was the third event in a wider series falling under the project “Human Rights in the 21st Century”, held in partnership with Institute Novum, the Institute of Politics and Society and the European Liberal Forum, as well as other political foundations across Europe.

The focus of the event was on artificial intelligence (AI) and human rights. With the Universal Declaration of Human Rights (UDHR) adopted in 1948, the European Union (EU) continues to be committed to supporting democracy and human rights in its external relations, in accordance with its founding principles of liberty, democracy and respect for human rights, fundamental freedoms and the rule of law.

In the world of AI, keywords such as algorithms, machine learning, automation and big data are becoming increasingly commonplace. Powering some of these AI systems and algorithms are immense amounts of data generated by and collected from citizens. As AI systems and AI-driven technologies continue to improve rapidly, their impacts on society and on citizens’ rights alike are brought to the forefront.

The speakers provided arguments on how contemporary political cleavages and technological change are affecting, or even running counter to, important values we live by, such as freedom of speech and expression and democracy.

Speakers / Main Points

 

Dr. Anne Bowser was the first of two speakers joining the event from the United States. Dr. Bowser is the Deputy Director of the Science and Technology Innovation Program (STIP) and the Director of Innovation at the Wilson Center. Her work investigates the intersections between science, technology and democracy.

Speaking from a US perspective on AI and ethics, she focused on bias in machine learning, a form of advanced AI, with particular attention to facial recognition systems.

She presented bias as a concept rooted in cognitive and evolutionary psychology, where it is advantageous for people to make decisions quickly with limited information, with the human brain serving as the model. The two most common forms of bias are 1) data bias, which includes selection bias, incomplete or incorrect data and reporting bias, and 2) algorithmic bias, which includes system design and causal inferences.

As AI systems become progressively more sophisticated, these two sources of bias merge and interact, with strong social and ethical ramifications. Case studies, i.e. Gender Shades (2018) and the NIST study on Demographic Effects (2019), found significant biases across the board. The Gender Shades study found that Amazon’s Rekognition facial analysis mistakenly identified women as men (19% of the time) and darker-skinned women as men (31% of the time).
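To illustrate how audits of this kind quantify bias, the short Python sketch below compares a model’s error rate across demographic subgroups. It is a minimal illustration only: the subgroup labels and records are hypothetical stand-ins, not data from the studies mentioned above.

    from collections import defaultdict

    # Each record: (subgroup, true label, label predicted by the model).
    # Hypothetical illustration data, not figures from Gender Shades or NIST.
    records = [
        ("lighter-skinned men", "male", "male"),
        ("lighter-skinned women", "female", "female"),
        ("lighter-skinned women", "female", "male"),   # misclassification
        ("darker-skinned women", "female", "male"),    # misclassification
        ("darker-skinned women", "female", "female"),
        ("darker-skinned men", "male", "male"),
    ]

    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if truth != predicted:
            errors[group] += 1

    # Large gaps in error rates between subgroups are the signal
    # that such audits report as demographic bias.
    for group, total in totals.items():
        print(f"{group}: {errors[group] / total:.0%} error rate ({total} samples)")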

The wider use of these facial recognition systems requires policy, ethical and technological solutions. How do we deal with bias in AI? 1) In the short term, identification and mitigation; 2) in the intermediate term, planning, algorithmic impact assessments and technical architectures; and 3) in the long term, “big picture” frameworks such as principles, definitions, standards and requirements.

 

Theresa Harris, the second speaker from the United States, is the Project Director in the Scientific Responsibility, Human Rights and Law Program at the American Association for the Advancement of Science (AAAS), where she manages the Program’s projects on science and human rights, including a volunteer referral service that provides technical support for human rights organizations, the Science and Human Rights Coalition network of scientific associations and societies, activities that promote greater understanding of the human right to science, and a new initiative on artificial intelligence that focuses on public health and mass surveillance.

Theresa focused on how the right to enjoy the benefits of scientific progress and its applications should inform our understanding of the human rights concerns and opportunities presented by artificial intelligence.

She addressed how the AAAS is approaching the opportunities and challenges of AI for human rights, conceptually starting with the “right to enjoy the benefits of scientific progress and its applications”, a long-underutilized right enshrined in Article 27 of the Universal Declaration of Human Rights and explained in more detail in Article 15 of the International Covenant on Economic, Social and Cultural Rights.

She highlighted that earlier in 2020, the UN Committee on Economic, Social and Cultural Rights (CESCR) adopted General Comment 25, detailing state obligations to respect and fulfill the right to science. It provides a guideline for states to develop mechanisms ensuring that autonomous intelligence systems are designed in ways that avoid discrimination, enable decisions to be explained and provide accountability for their use, as well as to establish a legal framework imposing a duty of human rights due diligence on non-state actors such as “big tech”.

All of this must balance expanding the potential benefits of new technologies with reducing their risks: an obligation to prevent harm and to use science and technology to expand human rights rather than infringe upon them. Prioritizing public funding for research that addresses basic needs provides more equitable access. She also emphasized the importance of data “quality” as one of the many challenges with AI systems, specifically biometrics.

She concluded that we should look beyond bans and more closely at how we can maximize the use of these technologies to benefit human rights, while at the same time limiting the use of AI where it can cause harm and infringe on human rights or the right to enjoy the benefits of scientific progress and its applications.

 

Dr. Stefan Larsson is a senior lecturer and Associate Professor in Technology and Social Change at Lund University, Sweden, Department of Technology and Society. He is also a lawyer (LLM) and socio-legal researcher holding a PhD in Sociology of Law as well as a PhD in Spatial Planning. His multidisciplinary research focuses on issues of trust and transparency in digital, data-driven markets, and the socio-legal impact of autonomous and AI-driven technologies.

Dr. Larsson focused on the mechanics of implementing value-based governance of AI, given that in the last few years there has been an increase in awareness and a surge in ethics guidelines as tools for AI governance, at the top level of the EU as well as in member states.

Firstly, increased awareness: as the use of autonomous systems becomes more pervasive, adverse effects on society emerge when these systems are implemented. In Sweden, organizations such as the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS) focus on how these technologies are implemented and applied.

Secondly, guidelines as a trend: AI guidelines have been on the rise in the last few years, including publications, strategies and initiatives in both the public and private sectors. Although there is much convergence around i) transparency, ii) justice and fairness, iii) non-maleficence, iv) responsibility and v) privacy, there is substantive divergence in how the terminology is used.

Lastly, the EU and member states: the move towards a human-centric AI in the EU follows the European Union’s approach to “trustworthy AI”. This notion emanates from the appointment of the High-Level Expert Group on AI (AI HLEG) and its key publications, primarily the Ethics Guidelines for Trustworthy AI, specifically the seven key requirements for the realization of trustworthy AI found in the guidelines, as well as other documents containing similar notions from the OECD AI Policy Observatory.[1]

He raised the question: what should be clearly regulated, and what methodology and assessment tools should be developed at the member state level?

 

Dr. Mohd Shahid Siddiqui, joining from India, is a media and development expert in digital media and social development, as well as a human rights campaigner working as a journalist. He highlighted both the positive and negative aspects of AI technologies.

On the positive side, AI technologies are currently paving the way in the fields of medicine, research and technology. AI is already being used in some circumstances in healthcare, policing and criminal justice systems. He stated that within a generation, the use of AI will become much more widespread in many workplaces, in healthcare, in education and across public sectors, with the potential to improve citizens’ lives.

He highlighted that, according to a 2019 survey by the Pew Research Center, minorities in the United States remain less likely to own a computer or have high-speed internet at home: 82 per cent of whites report owning a desktop or laptop computer, compared with lower rates among minority groups. These disadvantages cannot be solved by technology alone but must be addressed at the governmental and policymaker level.

On the negative side, he pointed to the regulatory and governance challenges that emerging digital technologies pose from a human rights perspective, as well as the abuse of some systems, such as those used to stifle free speech or other human rights.

He maintained that there should be a focus on the societal and economic impacts in the future, as more and more AI technologies are utilized in both the private and public sectors.

Domen Savič, the director of the NGO Citizen D (Drzavljan D), focuses on developing long-term projects related to digital rights, communication privacy and digital security, media regulation and active citizen participation in the political sphere.

He focused on the development of the COVID tracing application in Slovenia, which led to two implications: 1) the narrative that merely downloading the application would “solve the COVID crisis”, and 2) the passing of a law making download and use of the application mandatory, which was not enforced, causing confusion within Slovenia.

The COVID contact tracing apps were rolled out as a means of utilizing technology to curb the spread of COVID-19, using Bluetooth to trace and contact those who had been in contact with, or in the vicinity of, someone who later tests positive.

However, both the theory and the practice behind the development of the COVID tracing application in Slovenia led to failures, most prominently in the legal framework and the practical solutions. He emphasized that discussion of the problems of AI should not focus only on technology versus transparency or technology versus openness, but also on responsibility. The technology itself, AI, should not be used as a mask to deflect responsibility on either the public or the private side.

[1] OECD (2020).