Tackling the Ethical Challenges of Slippery Technology
Our brief write-up on how organisations can start tackling the complex ethical challenges of slippery technologies like AI.
Google’s AI principles, released last week, are promising. It is hard to imagine how these principles will be baked into the DNA of the company, and how their implementation will play out in company decision making and strategy as they are balanced against profit margins. But I am sure they are on it.
Rachel Coldicutt, CEO of Doteveryone, raised some important questions about the principles. Her suggestion is that Google must say who their AI applications will benefit, and who they will harm. This is a strong moral position for a company to take, but these are indeed important considerations for technology companies. Rachel’s writing also provokes further questions. Given the networked nature of the technologies that companies like Google create, and the marketplace of growth and progress they operate within, how can they control who will benefit and who will lose? What might be the implications of powerful companies taking an overt moral or political position? How might they comprehend the complex possibilities for applications of their products?
“We imbue technology with the ideals of the people who have created it, rather than those who use it.”
One very real aspect of our technological landscape is that we tend to imbue technology with the ideals of the people who have created it. Implicitly, technologies reinforce the beliefs and intentions of those who make and sell them. However, designers, engineers and marketeers only ever set up the affordances and suggest a use case. The true impact of a technology is, more often than not, defined by those who use it. Whether that’s knitting groups or fascist regimes, we have seen technology become an amplifier and accelerator of the social, cultural and political values of the groups who use it, not of those who made it. And it will continue to be used in ways its makers can never imagine.
The starting point for creating products and services around technology is usually ‘need-centred’: designers are generally expected to respond to a particular ‘need’. But what is sold and framed as an urban lifestyle product has different uses depending on the context and needs of those using it. In Myanmar, SIM cards are cheap and easy to find, so nearly everyone has a SIM card and phone number, but devices are shared between people; privacy isn’t a concern in the way it is in the West. Many people in rural Myanmar don’t have mirrors, so they use front-facing cameras to take selfies to see how they look. Before 2014 there was no internet in Myanmar, and even now connectivity is sporadic; many rely on Chinese apps preloaded on their phones, in a language they don’t speak. They are the unimagined users, the users on the margins, as Eliza Oreglia puts it in a recent lecture. Those on the margins are not involved in the feedback loop of design improvement. They were not even imagined in the design process.
I remember the surprise in the Western media when a news story went out a while back about Syrian refugees using smartphones. It was surprising because that particular context of use was so far removed from how smartphones are advertised. They are sold as a lifestyle product, and that frames our expectations of how the technology is going to be used. But technology will always be adapted to the needs of those who have access to it, regardless of the maker’s intention. Simultaneously, the very same technology was being ingeniously exploited by oppressive forces. Soldiers at government checkpoints, as well as at ISIS checkpoints, were demanding Facebook passwords. They would look at Facebook profiles to determine travellers’ allegiance in the war.
Still from the film ‘Everything is Connected to Everything’, about the vast, invisible ecologies of technology networks. Produced for the V&A ©Superflux 2018
I suspect the companies who create tech products know this. They work with marketeers to create the perfect use case: the seductive, magical scenario you buy into, because that helps ship the product. If they started thinking about unintended consequences, about who their product could potentially harm, things could become very tricky. It would mean asking thorny questions:
How many unintended consequences can we think of? What happens when we do release something potentially problematic out into the world? How much funding can be put into fixing it? And what if it can’t be fixed? Do we still ship it? What happens to our business if we don’t? All of this would mean slowing down the trajectory of growth and deferring decision-making, and that does not rank high in performance metrics. It might even require moving away from The Particular Future in which the organisation is currently finding success.
With the desire to move from Narrow towards General AI, things will only get more complicated. An audience member recently asked me after a talk: “We are developing a voice AI, and we are working to make it autonomous in its responses. We also want to be transparent with our users about the AI. So what should we say to them? We have an AI in this device, and we know it’s intelligent, but we don’t know exactly how it will respond?”
The layers of interacting networks within a deep neural network involve algorithms training themselves on the data sets available to them (or, given the trajectory of AlphaGo Zero, perhaps even that will no longer be needed). Although we understand the maths behind them, we don’t understand why they make the decisions they do. Not to mention the inherent biases programmed into AI, which mirror our own human biases. This creates a transparency imbalance: AI needs transparency with our personal data in order to do its work, but its own rationale and decision-making are opaque to us. This lack of understanding amounts to a lack of control. By automating decision-making without understanding the intentions or logic behind particular decisions, we relinquish control over those decisions.
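To make the opacity point concrete, here is a minimal, hypothetical sketch (not from the original post; the data, model size and library choice are illustrative assumptions): we can train a small neural network and inspect every learned parameter, yet the numbers offer no human-readable reason for any individual decision.

```python
# A minimal sketch of the transparency imbalance, using scikit-learn.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # hypothetical feature data
y = (X[:, 0] + X[:, 2] ** 2 > 1).astype(int)   # the hidden rule the model must learn

# Train a small deep network on the data set available to it.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# We can observe the decision...
print(model.predict(X[:1]))

# ...and we can inspect every learned weight matrix,
# but nothing in these numbers explains *why* it decided as it did.
for layer_weights in model.coefs_:
    print(layer_weights.shape)
```

The maths of every layer is fully visible here, which is exactly the point: full access to the parameters still does not yield an intention or a logic we can interrogate.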
All of this can leave organisations paralysed or confused, and in some cases even complacent, in a state of can’t-be-bothered-because-I-can’t-do-anything-about-it. But I think, today more than ever, creating, implementing and practising a broad set of ethical principles is crucial, because recent news has shown what happens if you don’t (e.g. Google Duplex, Facebook and Cambridge Analytica).
It might require that you:
- Invest in considering the unintended consequences of what you ship. Rachel wrote about this, and we practice this in our work with clients all the time. (I am reminded of Bruno Latour’s essay in which he writes about design’s ‘humble’ efforts to move away from the heroic, Promethean, hubristic dream of action: “Go forward, break radically with the past and the consequences will take care of themselves!” I reckon we should send Bruno to Silicon Valley, where design is so intertwined with disruption.)
- Map the power of your organisation and products. The influence you wield through your technology and the networks it reaches is important to study, not just for monetising ‘views’ but to better understand the effects it can have.
- Decouple performance metrics from financial success. I think this is probably the most difficult, but also very important.
- Develop multiple, alternative futures. By considering unintended consequences more thoroughly, you are probably already on your way towards developing alternatives that might be more worthwhile. You might come to blows with your financial controllers and shareholders, but you will find a way through it. After all, we are seeing the results of too much growth.
- Focus less on bringing science fiction to life and instead, spend more time with anthropologists. (I can’t recommend David A. Banks’ Baffler piece ‘Engineered for Dystopia’ enough. David implores engineers to consider their power, and talks about the need to create more stories about engineers coming to terms with the consequences of their creations.)
This is a very quick post, but I wanted to share and record some thoughts, as we are working with a few organisations on some of these contentious issues.
Many thanks to Danielle Knight for her help on the piece and with proofreading.