IN THE LOOP: DESIGNING CONVERSATION WITH ALGORITHMS

BY: Alexis

Intro by Anab Jain:

Last year we were lucky to have some fantastic guest posts from Paul Graham Raven, Scott Smith and Christina Agapakis. Continuing the tradition into our second year, I am thrilled to welcome Alexis Lloyd, Creative Director of R&D at The New York Times, to our blog with a great essay. When I met Alexis last year, it was clear that there were crossovers in our work, and we are grateful that she agreed to write for us, brilliantly exploring a space that we are currently preoccupied with in the studio. Over to Alexis.

 

IN THE LOOP: DESIGNING CONVERSATIONS WITH ALGORITHMS 

Earlier this year, I saw a video from the Consumer Electronics Show in which Whirlpool gave a demonstration of their new line of connected appliances: appliances which would purportedly engage in tightly choreographed routines in order to respond easily and seamlessly to the consumer’s every need. As I watched, it struck me how similar the notions were to the “kitchen of the future” touted by Walter Cronkite in this 1967 video. I began to wonder: was that future vision from nearly fifty years ago particularly prescient? Or, perhaps, are we continuing to model technological innovation on a set of values that hasn’t changed in decades?

When we look closely at the implicit values embedded in the vast majority of new consumer technologies, they speak to a particular kind of relationship we are expected to have with computational systems, a relationship that harkens back to mid-20th century visions of robot servants. These relationships are defined by efficiency, optimization, and apparent magic. Products and systems are designed to relieve users of a variety of everyday “burdens” — problems that are often prioritized according to what technology can solve rather than their significance or impact. And those systems are then assumed to “just work”, in the famous words of Apple. They are black boxes in which the consumer should never feel the need to look under the hood, to see or examine a system’s process, because it should be smart enough to always anticipate your needs.

So what’s wrong with this vision? Why wouldn’t I want things doing work for me? Why would I care to understand more about a system’s process when it just makes the right decisions for me?

The problem is that these systems are making decisions on my behalf and those decisions are not always optimal: they can be based on wrong assumptions, incomplete understanding, or erroneous input. And as those systems become more pervasive, getting it wrong becomes increasingly problematic. We are starting to realize that black boxes are insufficient, because these systems are never smart enough to do what I expect all the time, or I want them to do something that wasn’t explicitly designed into the system, or one “smart” thing disagrees with another “smart” thing. And the decisions they make are not trivial. Algorithmic systems record and influence an ever-increasing number of facets of our lives: the media we consume, through recommendation algorithms and personalized search; what my health insurance knows about my physical status; the kinds of places I’m exposed to (or not exposed to) as I navigate through the world; whether I’m approved for loans or hired for jobs; and whom I may date or marry.

As algorithmic systems become more prevalent, I’ve begun to notice a variety of emergent behaviors evolving to work around these constraints, to deal with the insufficiency of these black box systems. These behaviors point to a growing dissatisfaction with the predominant design principles, and imply a new posture towards our relationships with machines.

Adaptation

Image: Google voice search mobile app, Leicester Square (credit: Adspiration)

 

The first behavior is adaptation. These are situations where I bend to the system’s will. For example, adaptations to the shortcomings of voice UI systems — mispronouncing a friend’s name to get my phone to call them; overenunciating; or speaking in a different accent because of the cultural assumptions built into voice recognition. We see people contort their behavior to perform for the system so that it responds optimally. This is compliance, an acknowledgement that we understand how a system listens, even when it’s not doing what we expect. We know that it isn’t flexible or responsive enough, so we shape ourselves to it. If this is the way we move forward, do half of us end up with Google accents and the other half with Apple accents? How much of our culture ends up being an adaptation to systems we can’t communicate well with?

 

Negotiation

 

The second type of behavior we’re seeing is negotiation: strategies for engaging with a system to operate within it in more nuanced ways. One example of this is Ghostery, a browser extension that allows one to see what data is being tracked from one’s web browsing and to limit or shape it according to one’s desires. This represents a middle ground: a system that is intended to be opaque is being probed in order to see what it does and to work with it better. In these negotiations, users force a system to be more visible and flexible so that they can better converse with it.
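As a rough sketch of the kind of visibility these negotiations depend on (not how Ghostery itself works, just the general move of making third-party traffic visible), one could export a HAR file from a browser’s network panel after loading a page and tally which outside hosts were contacted. The file path and first-party domain below are placeholders.

    import json
    from collections import Counter
    from urllib.parse import urlparse

    FIRST_PARTY = "example.com"      # the site you intended to visit (placeholder)
    HAR_PATH = "page_visit.har"      # exported from the browser's network panel (placeholder)

    with open(HAR_PATH) as f:
        har = json.load(f)

    # Count requests that went to hosts outside the first-party domain.
    third_party = Counter()
    for entry in har["log"]["entries"]:
        host = urlparse(entry["request"]["url"]).hostname or ""
        if host != FIRST_PARTY and not host.endswith("." + FIRST_PARTY):
            third_party[host] += 1

    for host, count in third_party.most_common():
        print(f"{count:4d} requests -> {host}")

The script itself is trivial; the point is the posture it represents: pulling an exchange that was designed to stay invisible into view so that it can be shaped.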

We also see this kind of probing of algorithms becoming a new and critical role in journalism, as newsrooms take it upon themselves to independently investigate systems through impulse response modeling and reverse engineering, whether it’s looking at the words that search engines censor from their autocomplete suggestions, how online retailers dynamically target different prices to different users, or how political campaigns generate fundraising emails.
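A minimal sketch of that sort of probe, assuming a hypothetical retailer endpoint that returns a price as JSON: hold everything constant, vary one signal at a time (device and location hints here), and compare what comes back. The URL and the X-Shopper-Zip header are illustrative, not a real service or API.

    import json
    import urllib.request

    ENDPOINT = "https://retailer.example/api/price?sku=12345"   # hypothetical endpoint

    def fetch_price(user_agent, zip_code):
        """Request the same product's price while presenting as a particular kind of client."""
        req = urllib.request.Request(
            ENDPOINT,
            headers={"User-Agent": user_agent, "X-Shopper-Zip": zip_code},  # header name is illustrative
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["price"]

    # Vary one input at a time and record what the system sends back.
    probes = {
        "desktop, zip 10001": ("Mozilla/5.0 (Windows NT 10.0)", "10001"),
        "mobile, zip 10001": ("Mozilla/5.0 (iPhone; CPU iPhone OS 17_0)", "10001"),
        "desktop, zip 94103": ("Mozilla/5.0 (Windows NT 10.0)", "94103"),
    }

    for label, (ua, zip_code) in probes.items():
        print(label, fetch_price(ua, zip_code))

Systematically divergent prices for an identical product are evidence that the system is conditioning its output on who, or where, it thinks you are; that kind of input-and-output reasoning is what the reporting described above relies on.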

 

Antagonism

 

Image: CV Dazzle by Adam Harvey

 

Third, rather than bending to the system or trying to better converse with it, some take an antagonistic stance: they break the system to assert their will. Adam Harvey’s CV Dazzle is one example of this approach, where people hack their hair and makeup in order to foil computer vision and opt out of participating in facial recognition systems. What’s interesting is that, while the attitude is antagonistic, it is also an extreme acknowledgement of a system’s power: an understanding that one must alter one’s identity and appearance simply to exert free will in an interaction.

Rather than simply seeing these behaviors as a series of exploits or hacks, I see them as signals of a changing posture towards computational systems. Culturally, we are now familiar enough with computational logic that we can conceive of the computer as a subject, an actor with a controlled set of perceptions and decision processes. And so we are beginning to create relationships where we form mental models of the system’s subjective experience and respond to that in various ways. Rather than seeing those systems as tools, or servants, or invisible masters, we have begun to understand them as empowered actors in a flat ontology of people, devices, software, and data, where our voice is one signal in a complex network of operations. And we are not at the center of this network. Sensing and computational algorithms are continuously running in the background of our lives. We tap into them as needed, but they are not there purely in service of the end user; they also serve corporate goals, group needs, civic order, black markets, advertising, and more. People are becoming human nodes on a heterogeneous, ubiquitous and distributed network. This fundamentally changes our relationship with technology and information.

However, interactions and user interfaces are still designed so that users see themselves at the center of the network, with the underlying complexity abstracted away. In this process of simplification, we are abstracting ourselves out of many important conversations and, in doing so, disenfranchising ourselves.

Julian Oliver states this problem well, saying: “Our inability to describe and understand [technological infrastructure] reduces our critical reach, leaving us both disempowered and, quite often, vulnerable. Infrastructure must not be a ghost. Nor should we have only mythic imagination at our disposal in attempts to describe it. ‘The Cloud’ is a good example of a dangerous simplification at work, akin to a children’s book.”

So, what I advocate is designing interactions that acknowledge the peer-like status these systems now have in our lives. Interactions where we don’t shield ourselves from complexity but actively engage with it. And in order to engage with it, the conduits for those negotiations need to be accessible not only to experts and hackers but to the average user as well. We need to give our users more respect and provide them with more information so that they can start to have empowered dialogues with the pervasive systems around them.

This is obviously not a simple proposition, so we start with: what are the counterpart values? What’s the alternative to the black box, what’s the alternative to “it just works”? What design principles should we be building into new interactions?

 

Transparency

The first is transparency. In order to be able to engage in a fruitful interaction with a system, I need to be able to understand something about its decision-making process. And I want to be clear that transparency doesn’t mean complete visibility; it doesn’t mean showing me every data packet sent or every decision tree. I say that because, in many discussions about algorithmic transparency, people have a tendency to throw their hands up, claiming that algorithmic systems have become so complex that we don’t even fully understand what they’re doing, so of course we can’t explain them to the user. I find this argument reductive and think it misunderstands what transparency entails in the context of interaction design.

As an analogy, when I have a conversation with a friend, I don’t know his whole psychological history or every factor that goes into his responses, let alone what’s happening at a neurological or chemical level, but I understand something about who he is and how he operates. I have enough signals to participate and give feedback — and more importantly, I trust that he will share information that is necessary and relevant to our conversation. Between us, we have the tools to delve into the places where our communication breaks down, identify those problems and recalibrate our interaction. Transparency is necessary to facilitate this kind of conversational relationship with algorithms. It serves to establish trust that a system is showing me what I need to know, that it is not doing anything I don’t want it to do with my participation or data, and that it is giving me the knowledge and input to correct the system when it’s wrong.

We’re starting to see some very nascent examples of this, like the functionality that both Amazon and Netflix have, where I can see the assumptions being made by a recommendation system and am offered a way to give negative feedback: to tell Amazon when it’s wrong and why. It definitely still feels clunky — it’s not a very complex or nuanced conversation yet, but it’s a step in the right direction.

 


 

More broadly, the challenge we’re facing has a lot to do with the shift from mechanical systems to digital ones. Mechanical systems have a degree of transparency in that their form necessarily reveals their function and gives us signals about what they’re doing. Digital systems don’t implicitly reveal their processes, so designers now bear a relatively new burden: making those processes visible and available to interrogate.

 

Agency

The second principle here is agency, meaning that a system’s design should not only empower users to accomplish tasks, but should also convey a sense that they are in control of their participation with the system at any moment. And I want to be clear that agency is different from absolute and granular control.

An interface packed with precise controls, for example, offers an enormous amount of control but, for anyone but an expert, probably not much sense of agency.

A car, on the other hand, is a good illustration of agency. There’s plenty of “smart” stuff that the car is doing for me, that I can’t directly adjust — I can’t control how electricity is routed or which piston fires when, but I can intervene at any time to control my experience. I have clear inputs to steer, stop, speed up, or slow down and I generally feel that the car is working at my behest.

 

Virtuosity

The last principle, virtuosity, is something that usually comes as a result of systems that support agency and transparency well. And when I say virtuosity, what I mean is the ability to use a technology expressively.

A technology allows for virtuosity when it contains affordances for all kinds of skilled techniques that can become deeply embedded into processes and cultures. It’s not just about being able to adapt something to one’s needs, but to “play” a system with skill and expressiveness. This is what I think we should aspire to. While it’s wonderful if technology makes our lives easier or more efficient, at its best it is far more than that. It gives us new superpowers, new channels for expression and communication that can be far more than utilitarian — they can allow for true eloquence. We need to design interactions that allow us to converse across complex networks, where we can understand and engage in informed and thoughtful ways, and the systems around us can respond with equal nuance.

These values deeply inform the work we do in The New York Times R&D Lab, whether we are exploring new kinds of environmental computing interfaces that respond across multiple systems, creating wearables that punctuate offline conversations with one’s online interests, or developing best practices for how we manage and apply our readers’ data. By doing research to understand the technological and behavioral signals of change around us, we can then build and imagine futures that best serve our users, our company, and our industry.

 

About the Author: Alexis Lloyd is the Creative Director of the Research and Development Lab at The New York Times, where she investigates technology trends and prototypes future concepts for content delivery. Follow her on Twitter at @alexislloyd.
