Why is ethics in tech now more important than ever?
According to a Stack Overflow survey, 79% of developers believe they have an obligation to consider ethics in the work they perform, the companies they work for, and the products they produce.
As ethics becomes a growing concern within tech, how ethical questions are resolved will have direct consequences for people's lives, not just in the immediate future but for a long time to come.
The ethical consequences of new technologies have been debated since Socrates' attack on writing in Plato's dialogue Phaedrus; the formal field of techno-ethics, however, has existed for only a few decades.
Recent examples highlight this point: algorithms used by tech firms that are biased in their decision-making, image recognition systems that have failed to identify non-white faces, social media sites pushing fake news, and systems that attempt to influence the results of democratic elections.
We highlight three talks from this year's upcoming Code Mesh conference (08-09 Nov) that address these issues and look to possible solutions that can help chart the path ahead.
Federica Pelzel, Director of Data and Analytics Platforms at Mastercard, will talk about ethics and AI, and how to identify and prevent bias in predictive models.
Kate Carruthers, a senior lecturer in Computer Science & Engineering at UNSW Sydney, will talk about the intersection of infosec, AI and ethics, drawing on recent examples such as Cambridge Analytica and the weaponisation of social media and the web.
Andrea Dobson, a counseling psychologist/GZ psychologist, will give a psychological perspective on ethics in tech.
An overview of ethics in tech
Technology itself is incapable of possessing moral or ethical qualities. When we talk about ethics in tech, we are usually referring to the attributes given to a technology by those who made it and those who decided how it should be used: tech as an embodiment of human values.
What are we to make of data and modelling that inadvertently hurt those who need the most protection? Who is to blame: the development team that created the system? Since these effects have a wider social impact, should governments step in with new legislation? Do we need to look at the corporations and businesses that produce these technologies, and should ethics influence how they do business? And given that bias and discrimination can be inadvertently introduced into models, what strategies can prevent this from happening?
The ethics of tech can usually be considered in one of two ways:
1. The ethics of developing new technology: whether it is always, never, or only contextually right to invent and implement a technological innovation. For example, should scientists have developed the atomic bomb?
2. Ethical questions raised by the ways in which technology extends or curtails the power of individuals. The use of complex algorithms and big data to influence, on a large scale, the political decisions a society makes brings into focus the effects of technology on an individual's privacy, freedoms, and values.
Techno-ethical perspectives are constantly evolving as technologies advance into areas unforeseen by their creators, and as users change the ways in which these technologies are used. Humans cannot separate themselves from these technologies: they are an integral part of society and influence our behaviour in profound ways. The short-term and long-term ethical considerations of a technology engage not just its creators and producers; they also make users question their beliefs, and force governments to decide whether to allow, react to, change, or deny the technology.
More about our Code Mesh LDN ethics speakers:
Federica Pelzel: Public sector technologist, Director of Data and Analytics Platforms at Mastercard
Ethics and AI: Identifying and preventing bias in predictive models
As we explore more sophisticated ways to make smarter, more accurate decisions, the use of data and predictive models has been at the forefront of innovation. But what happens when our use of data and modeling inadvertently hurts those who need the most protection? In this session, we'll explore how bias and discrimination are introduced into models, and different strategies to prevent this from happening to you.
Kate Carruthers: Chief Data & Analytics Officer and Senior Lecturer in Computer Science & Engineering
Infosec, AI and ethics – new models for a secure future
Thanks to events like Cambridge Analytica and the weaponisation of social media and the web, we are at an interesting juncture. The intersection of infosec, AI and ethics means that we need to develop new approaches to privacy and security. This talk explores some possible futures and provides practical suggestions for ethical and safe computing.
Andrea Dobson: Counseling psychologist/GZ psychologist
Ethics in tech: a psychological perspective
Why do people behave unethically, and what can we do about it? Andrea will go into detail about social psychology research on behaviour, ethics and company culture. What are the anti-patterns we should all avoid?
Most of the knowledge we have today on conformity and obedience comes from psychological experiments done in the 1950s and 1960s. What do they mean in today's society, and what impact are they having on the choices we make? And is there anything we can do about it?