Tackling the Ethical Challenges of Slippery Technology

BY: Anab | 11.06.2018

Our brief write-up on how organisations can start tackling the complex ethical challenges of slippery technologies like AI.



The release of Google’s AI principles last week is promising. It is hard to imagine how these principles will be baked into the DNA of the company, and how their implementation will play out in company decision-making and strategy as they are balanced against profit margins. But I am sure they are on it.

Rachel Coldicutt, CEO of Doteveryone, raised some important questions about the principles. Her suggestion is that Google must say who their AI applications will benefit, and who they will harm. This is a strong moral position for a company to take, but these are indeed important considerations for technology companies. Not only that, Rachel’s writing provokes further questions. Given the networked nature of the technologies that companies like Google create, and the marketplace of growth and progress they operate within, how can they control who will benefit and who will lose? What might be the implications of powerful companies taking an overt moral or political position? How might they comprehend the complex possibilities for applications of their products?

We imbue technology with the ideals of the people who have created it, rather than those who use it.

Jon Ardern, Superflux

One very real aspect of our technological landscape is that we tend to imbue technology with the ideals of the people who have created it. Implicitly, the technologies reinforce the beliefs and intentions of those who make and sell them. However, designers, engineers and marketeers only ever set up the affordances and suggest a use case. The true impact of a technology is, more often than not, defined by those who use it. Whether that’s knitting groups or fascist regimes, we have seen technology become an amplifier and accelerator of the social, cultural and political values of the groups who use it, not those who made it. And it will continue to be used in ways that you can never imagine.

The starting point for creating products and services around technology is usually ‘need-centred’. Designers are generally expected to respond to a particular ‘need’. But what are sold and framed as urban lifestyle products have different uses depending on the context and needs of those using them. In Myanmar, SIM cards are cheap and easy to find, so nearly everyone has a SIM card and phone number, but devices are shared between people; privacy isn’t a concern like it is in the West. Many people in rural Myanmar don’t have mirrors, so they use front-facing cameras to take selfies to see how they look. Before 2014 there was no internet in Myanmar, and even now connectivity is sporadic; many rely on Chinese apps preloaded on their phones, in a language they don’t speak. They are the unimagined users, the users on the margins, as Eliza Oreglia puts it in a recent lecture. Those on the margins are not involved in the feedback loop of design improvement. They were not even imagined in the design process.

I remember the surprise in the western media when a news story went out a while back about Syrian refugees using smartphones. It was surprising because that particular context of use was so far removed from how smartphones are advertised. They are sold as a lifestyle product, and that frames our expectations of how the technology is going to be used. But technology will always be adapted to the needs of those who have access to it, regardless of the maker’s intention. Simultaneously, the very same technology was being ingeniously exploited by oppressive forces. Soldiers at government checkpoints, as well as at ISIS checkpoints, were demanding Facebook passwords. They would look at Facebook profiles to determine travellers’ allegiance in the war.


Still from the film ‘Everything is Connected to Everything’, about the vast, invisible ecologies of technology networks. Produced for the V&A ©Superflux 2018

I suspect the companies who create tech products know this. They work with the marketeers to create the perfect use case; the seductive, magical scenario you buy into, because that helps ship the product. If they started thinking of unintended consequences, of who their product could potentially harm, that could become very tricky. It would mean asking thorny questions:

How many unintended consequences can we think of? And what happens when we do release something potentially problematic out into the world? How much funding can be put into fixing it? And what if it can’t be fixed? Do we still ship it? What happens to our business if we don’t? All of this would mean slowing down the trajectory of growth; it would mean deferring decision-making, and that does not rank high in performance metrics. It might even require moving away from The Particular Future in which the organisation is currently finding success.

With the desire to move from Narrow towards General AI, things will only get more complicated. An audience member recently asked me after a talk: “We are developing a voice AI, but we are working to make it autonomous in its responses. And we want to be transparent with our users about the AI. So what should we say to them? We have an AI in this device, and we know it’s intelligent, but we don’t know exactly how it will respond?”

The layers of interacting networks within a deep neural network involve algorithms training themselves on the data sets available to them. (Or, given the trajectory of AlphaGo Zero, even that might no longer be needed.) Although we understand the maths behind them, we don’t understand why they make the decisions they do. Not to mention the inherent biases programmed into AI, which mirror our own human biases. This creates a transparency imbalance: AI needs transparency with personal data in order to do its work, but its own rationale and decision-making are opaque to us. This lack of understanding amounts to a lack of control. By automating decision-making without understanding the intentions or logic behind certain decisions, we relinquish control over those decisions.

All of this can leave organisations paralysed or confused, and in some cases even complacent, in a state of can’t-be-bothered-because-I-can’t-do-anything-about-it. But I think, today more than ever, creating, implementing and practicing a broad set of ethical principles is crucial, because recent news has shown what happens if you don’t (e.g. Google Duplex, Facebook, Cambridge Analytica, etc.).

It might require that you:

  1. Invest in considering the unintended consequences of what you ship. Rachel wrote about this, and we practice this in our work with clients all the time. (I am reminded of Bruno Latour’s essay where he writes about design’s ‘humble’ efforts to move away from the heroic, Promethean, hubristic dream of action: “Go forward, break radically with the past and the consequences will take care of themselves!” I reckon we should send Bruno to Silicon Valley, where design is so intertwined with disruption.)
  2. Map the power of your organisation and products. The influence you wield through your technology and the networks it reaches is important to study, not just for monetising ‘views’, but to better understand the effects it can have.
  3. Decouple performance metrics from financial success. I think this is probably the most difficult, but also very important.
  4. Develop multiple, alternate futures. By considering unintended consequences more thoroughly, you are probably already on your way towards developing alternatives that might be more worthwhile. You might come to blows with your financial controllers and shareholders, but you will find a way through it. After all, we are seeing the results of too much growth.
  5. Focus less on bringing science fiction to life and instead spend more time with anthropologists. (I can’t recommend David A. Banks’ Baffler piece ‘Engineered for Dystopia’ enough. David implores engineers to consider their power, and talks about the need to create more stories about engineers coming to terms with the consequences of their creations.)

This is a very quick post, but I wanted to share and record some thoughts, as we are working with a few organisations on some of these contentious issues.

Many thanks to Danielle Knight for her help on the piece and with proofreading.

