Tackling the Ethical Challenges of Slippery Technology

BY: Anab · 11.06.2018

Our brief write-up on how organisations can start tackling the complex ethical challenges of slippery technologies like AI.



The release of Google’s AI principles last week is promising. It is hard to imagine how these principles will be baked into the DNA of the company, and how their implementation will play out in company decision-making and strategy as they are balanced against profit margins. But I am sure they are on it.

Rachel Coldicutt, CEO of Doteveryone, raised some important questions about the principles. Her suggestion is that Google must say who their AI applications will benefit, and who they will harm. This is a strong moral position for a company to take, but these are indeed important considerations for technology companies. Not only that, Rachel’s writing provokes further questions. Given the networked nature of the technologies that companies like Google create, and the marketplace of growth and progress they operate within, how can they control who will benefit and who will lose? What might be the implications of powerful companies taking an overt moral or political position? How might they comprehend the complex possibilities for applications of their products?

We imbue technology with the ideals of the people who have created it, rather than those who use it.

Jon Ardern, Superflux

One very real aspect of our technological landscape is that we tend to imbue technology with the ideals of the people who have created it. Implicitly, the technologies reinforce the beliefs and intentions of those who make and sell them. However, designers, engineers and marketeers only ever set up the affordances and suggest a use case. The true impact of a technology is, more often than not, defined by those who use it. Whether that’s knitting groups or fascist regimes, we have seen technology become an amplifier and accelerator of the social, cultural and political values of the groups who use it, not those who made it. And it will continue to be used in ways its makers could never imagine.

The starting point for creating products and services around technology is usually ‘need-centered’: designers are generally expected to respond to a particular ‘need’. But what is sold and framed as an urban lifestyle product can have very different uses depending on the context and needs of those using it. In Myanmar, SIM cards are cheap and easy to find, so nearly everyone has a SIM card and phone number, but devices are shared between people; privacy isn’t a concern like it is in the West. Many people in rural Myanmar don’t have mirrors, so they use front-facing cameras to take selfies to see how they look. Before 2014 there was no internet in Myanmar, and even now connectivity is sporadic; many rely on Chinese apps preloaded on their phones, in a language they don’t speak. They are the unimagined users, the users on the margins, as Eliza Oreglia puts it in a recent lecture. Those on the margins are not involved in the feedback loop of design improvement; they were not even imagined in the design process.

I remember the surprise in the western media when a news story went out a while back about Syrian refugees using smartphones. It was surprising because that particular context of use was so far removed from how smartphones are advertised. They are sold as a lifestyle product, and that frames our expectations of how the technology is going to be used. But technology will always be adapted to the needs of those who have access to it, regardless of the maker’s intention. Simultaneously, the very same technology was being ingeniously exploited by oppressive forces: soldiers at government checkpoints, as well as at ISIS checkpoints, were demanding Facebook passwords. They would look at Facebook profiles to determine travellers’ allegiance in the war.


Still from the film ‘Everything is Connected to Everything’, about the vast, invisible ecologies of technology networks. Produced for the V&A ©Superflux 2018

I suspect the companies who create tech products know this. They work with marketeers to create the perfect use case: the seductive, magical scenario you buy into, because that helps ship the product. If they started thinking of unintended consequences, of who their product could potentially harm, things could become very tricky. It would mean asking thorny questions:

How many unintended consequences can we think of? And what happens when we do release something potentially problematic out into the world? How much funding can be put into fixing it? And what if it can’t be fixed? Do we still ship it? What happens to our business if we don’t? All of this would mean slowing down the trajectory of growth; it would mean deferring decision-making, and that does not rank high in performance metrics. It might even require moving away from The Particular Future in which the organisation is currently finding success.

With the desire to move from Narrow towards General AI, things will only get more complicated. An audience member recently asked me after a talk: “We are developing a voice AI, but we are working to make it autonomous in its responses. And we want to be transparent with our users about the AI. So what should we say to them? We have an AI in this device, and we know it’s intelligent, but we don’t know exactly how it will respond?”

The layers of interacting networks within a deep neural network involve algorithms training themselves on the data sets available to them (or, given the trajectory of AlphaGo Zero, perhaps even that will no longer be needed). Although we understand the maths behind them, we don’t understand why they make the decisions they do. Not to mention the inherent biases programmed into AI, which mirror our own human biases. This creates a transparency imbalance: AI needs transparency with our personal data in order to do its work, but its own rationale and decision-making are opaque to us. This lack of understanding amounts to a lack of control. By automating decision-making without understanding the intentions or logic behind particular decisions, we relinquish control over those decisions.
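To make that opacity concrete, here is a minimal sketch (the network, data and numbers are purely illustrative assumptions, not drawn from any system mentioned above): a tiny two-layer network learns XOR, and although every one of its learned weights can be printed and inspected, none of them reads as a reason for any particular answer.

```python
import numpy as np

# Toy illustration: a two-layer neural network trained on XOR with plain
# NumPy. Everything the network "knows" ends up stored as floating-point
# weights we can print, but which explain nothing by themselves.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def mse():
    p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(((p - y) ** 2).mean())

initial_loss = mse()
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # network output
    dp = (p - y) * p * (1 - p)          # gradient at the output layer
    dh = dp @ W2.T * h * (1 - h)        # gradient at the hidden layer
    W2 -= 0.5 * h.T @ dp; b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(axis=0)
final_loss = mse()

# Fully inspectable, yet uninterpretable: a matrix of numbers, not reasons.
print(W1.round(2))
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Scaling this from a handful of weights to millions is what turns “why did it decide that?” into such a hard question for the systems discussed here.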

All of this can leave organisations paralysed or confused, and in some cases even complacent: in the state of can’t-be-bothered-because-I-can’t-do-anything-about-it. But I think, today more than ever, creating, implementing and practicing a broad set of ethical principles is crucial, because recent news has shown what happens if you don’t (e.g. Google Duplex, Facebook, Cambridge Analytica).

It might require that you:

  1. Invest in considering the unintended consequences of what you ship. Rachel wrote about this, and we practice this in our work with clients all the time. (I am reminded of Bruno Latour’s essay where he writes about design’s ‘humble’ efforts to move away from the heroic, Promethean, hubristic dream of action: “Go forward, break radically with the past and the consequences will take care of themselves!” I reckon we should send Bruno to Silicon Valley, where design is so intertwined with disruption.)
  2. Map the power of your organisation and products. The influence you leverage through your technology and the networks it reaches is important to study, not just for monetising ‘views’, but to better understand the effects it can have.
  3. Decouple performance metrics from financial success. I think this is probably the most difficult, but also very important.
  4. Develop multiple, alternate futures. By considering unintended consequences more thoroughly, you are probably already on your way towards developing alternatives that might be more worthwhile. You might come to blows with your financial controllers and shareholders, but you will find a way through it. After all, we are seeing the results of too much growth.
  5. Focus less on bringing science fiction to life and instead, spend more time with anthropologists. (I can’t recommend David A. Banks’ Baffler piece ‘Engineered for Dystopia’ enough. David implores engineers to consider their power, and talks about the need to create more stories about engineers coming to terms with the consequences of their creations.)

This is a very quick post, but I wanted to share and record some thoughts, as we are working with a few organisations on some of these contentious issues.

Many thanks to Danielle Knight for her help on the piece and with proofreading.
