NEWS

27.01.2015

MACHINE NETWORKS

An editorial about machine networks, written alongside many other brilliant folks.

11.05.2014

A QUARTERLY UPDATE FROM THE STUDIO

The year has flown by! What should be a monthly update has now become a quarterly (perhaps even semi-annual) update, our attempt to share studio highlights, and a fleeting moment to reflect on what has happened and what we have learnt.

 

PROJECTS
On the Consultancy front, we have been lucky to have the opportunity to work with some great clients this year. A couple of quick project highlights that we can share publicly:

 

Future Cities Catapult / Family Day Out Programme
One of the most exciting projects we have been working on this year is with the Future Cities Catapult, called ‘A Family Day Out Programme’. The project works with partially sighted and blind people to identify the characteristics of future cities that would enrich their experience, and to develop potential cityscapes that would inspire them to make journeys into and around cities. We have been through an extensive design research, horizon scanning and futurescaping process, and are currently visualising some of the outcomes.

 


 

Museum of Future Government Services / PMO, UAE
We were lead creative consultants for the concept and scenario development of the Museum of Future Government Services, commissioned by the Prime Minister’s Office of the UAE, working with the incredible Tellart, Fabrica, Near Future Laboratory and Institute for the Future, spearheaded by Noah Raford. The project launched at the Government Summit, a global platform dedicated to the improvement and enhancement of government services and related opportunities. The six exhibits shown at the Museum are visually compelling, provocative and ambitious visions of how services ranging from border control to health care to education could be delivered in the future, in an attempt to stimulate thought and action from leaders and civic officials in the UAE. Our colleagues at Tellart and Fabrica, working with the PMO, have done a remarkable job in translating concepts, developing elements, and ultimately executing the exhibits.

 


 

On the Lab front, two projects are currently keeping us on our toes.

 

Things that Fly and Watch Over You: Quadcopters, multirotors, positioning systems and other such kit have kept us hugely occupied in the Lab. Project Impossible is a beast that is simultaneously exciting and terrifying. One of the most fun parts of the project is the opportunity to work with a host of amazingly talented people, all to be announced in an upcoming press conference.


 

IoTA: Internet of Things Academy: A full update on this project requires a separate blogpost, but suffice it to say, we have made good progress. We are grateful to have a team of great people working with us: Gyorgyi Galik, Philipp Ronenberg, Martin Charlier and Daniel Pomlett. We have moved in a different direction from our initial proposal, but feel we now have a much clearer, far more exciting direction. Our focus is on people, on social and environmental concerns, and on ways in which IoT can ultimately shape and influence legislation and policy. We are grateful for the incredible support of our partners Hugh Knowles and Louise Armstrong from Forum for the Future and our funders Nominet Trust and Founders Forum for Good, as well as the brilliant folks at Suncorp who have been supporting our work. For regular updates follow @IoTAcademy on Twitter or have a peek into our process on our Tumblr.

Also on the Lab front, we were in India earlier this year and have revisited Lilorann, with a renewed interest in Tactical Design and Tools for Critical Jugaad. We are in talks with several collaborators in the hope of realising a small thing this winter. Stay tuned.

Our Associate Tobias Revell has recently completed a commission, ‘Monopoly of Legitimate Use’, which premiered at Lighthouse in Brighton and which we highly recommend making a trip for. Also, Yosuke Ushigome is currently developing a fascinating project “exploring high-speed and speculative trading of our bodily-harvested energy/data/knowledge/assets”, to be exhibited in October in Tokyo.

 

TALKS & EXHIBITIONS

 

Keynote, FutureEverything: I delivered a keynote at the FutureEverything Festival in Manchester at the end of March. Titled ‘Valley of the Meatpuppets’, the talk explores the ethereal space where people, agents, thingbots, action heroes and big dogs coexist, and how influence is designed within this space. I think the conference videos should go online soon. It was also great to exhibit the 5th Dimensional Camera and Open Informant at the Festival.

 

Design and Violence, MoMA New York: We were invited by Paola Antonelli to contribute to their online show Design and Violence with a critical response to the work of Phil Ross. We wrote a short fiction piece exploring a future world where Mycotecture becomes a favoured material and what its implications might be.

V&A Friday Late: Candyce and I presented Dynamic Genetics vs Mann, followed by a series of sessions with the Synbio Tarot Cards at the V&A Friday Late for Synthetic Aesthetics. We had never run this sort of session before but, judging by the evening’s success, we are considering new avenues for such toolkits.

We will be showing Dynamic Genetics vs Mann at the DEAF Biennale in Rotterdam later this month as part of the ‘Blueprints for the Unknown’ exhibition, and hope there will be a way for the project to be shown in the UK soon, where it will perhaps resonate the most. I will also be giving a talk at the DIY ‘Altopia’ Seminar at the Biennale, and I’ll be joining Tobias Revell at Lighthouse to discuss his new work and explore themes of migration, borders, and networks. That might be it for talks this year, apart from Chicago much later in the year. Due to time constraints I have recently had to turn down a few very exciting conference invitations for this year, but I am looking forward to taking them up next year.

TEACHING

We enjoy teaching, and our favourite format is the intense workshop, which gives us an opportunity to set a brief and spend concentrated time with students developing responses. We just wrapped up a workshop at HEAD, Geneva, with the Media Design MA students, working with them on a highly challenging brief titled ‘Failed States: Tactical Design for Uncertain Futures’. Developed in collaboration with Justin Pickard, the brief invited students to design thoughtful responses to emerging political tensions at the intersection of migration, housing, climate change, robotics, surveillance, currency and finance, energy, public protest, and the hollowing out of the contemporary nation-state, for a near-future Switzerland. Needless to say, it was a highly energetic, inspiring week, and we’ll be writing a bit more about it soon.


This was meant to be brief, so I’ll stop. Just a quick final note to say that we are also considering new projects, collaborations and partnerships for 2015, so if you have something in mind, do drop us a line.

Adios, be well!

 

04.04.2014

IN THE LOOP: DESIGNING CONVERSATIONS WITH ALGORITHMS

Intro by Anab Jain:

Last year we were lucky to have some fantastic guest posts from Paul Graham Raven, Scott Smith and Christina Agapakis. Continuing the tradition into our second year, I am thrilled to welcome Alexis Lloyd, Creative Director of R&D at The New York Times, to our blog with a great essay. When I met Alexis last year, it was clear that there were crossovers in our work, and we are grateful that she agreed to write for us, brilliantly exploring a space that we are currently preoccupied with in the studio. Over to Alexis.

 

IN THE LOOP: DESIGNING CONVERSATIONS WITH ALGORITHMS 

Earlier this year, I saw a video from the Consumer Electronics Show in which Whirlpool gave a demonstration of their new line of connected appliances: appliances which would purportedly engage in tightly choreographed routines in order to respond easily and seamlessly to the consumer’s every need. As I watched, it struck me how similar the notions were to the “kitchen of the future” touted by Walter Cronkite in this 1967 video. I began to wonder: was that future vision from nearly fifty years ago particularly prescient? Or, perhaps, are we continuing to model technological innovation on a set of values that hasn’t changed in decades?

When we look closely at the implicit values embedded in the vast majority of new consumer technologies, they speak to a particular kind of relationship we are expected to have with computational systems, a relationship that harkens back to mid-20th century visions of robot servants. These relationships are defined by efficiency, optimization, and apparent magic. Products and systems are designed to relieve users of a variety of everyday “burdens” — problems that are often prioritized according to what technology can solve rather than their significance or impact. And those systems are then assumed to “just work”, in the famous words of Apple. They are black boxes in which the consumer should never feel the need to look under the hood, to see or examine a system’s process, because it should be smart enough to always anticipate your needs.

So what’s wrong with this vision? Why wouldn’t I want things doing work for me? Why would I care to understand more about a system’s process when it just makes the right decisions for me?

The problem is that these systems are making decisions on my behalf and those decisions are not always optimal: they can be based on wrong assumptions, incomplete understanding, or erroneous input. And as those systems become more pervasive, getting it wrong becomes increasingly problematic. We are starting to realize that black boxes are insufficient, because these systems are never smart enough to do what I expect all the time, or I want them to do something that wasn’t explicitly designed into the system, or one “smart” thing disagrees with another “smart” thing. And the decisions they make are not trivial. Algorithmic systems record and influence an ever-increasing number of facets of our lives: the media we consume, through recommendation algorithms and personalized search; what my health insurance knows about my physical status; the kinds of places I’m exposed to (or not exposed to) as I navigate through the world; whether I’m approved for loans or hired for jobs; and whom I may date or marry.

As algorithmic systems become more prevalent, I’ve begun to notice a variety of emergent behaviors evolving to work around these constraints, to deal with the insufficiency of these black box systems. These behaviors point to a growing dissatisfaction with the predominant design principles, and imply a new posture towards our relationships with machines.

[Image: Google voice search on a mobile phone, Leicester Square. Credit: Adspiration]

 

Adaptation

The first behavior is adaptation. These are situations where I bend to the system’s will. For example, adaptations to the shortcomings of voice UI systems — mispronouncing a friend’s name to get my phone to call them; overenunciating; or speaking in a different accent because of the cultural assumptions built into voice recognition. We see people contort their behavior to perform for the system so that it responds optimally. This is compliance, an acknowledgement that we understand how a system listens, even when it’s not doing what we expect. We know that it isn’t flexible or responsive enough, so we shape ourselves to it. If this is the way we move forward, do half of us end up with Google accents and the other half with Apple accents? How much of our culture ends up being an adaptation to systems we can’t communicate well with?

 

Negotiation

 

The second type of behavior we’re seeing is negotiation — strategies for engaging with a system to operate within it in more nuanced ways. One example of this is Ghostery, a browser extension that allows one to see what data is being tracked from one’s web browsing and to limit or shape it according to one’s desires. This represents a middle ground: a system that is intended to be opaque is being probed in order to see what it does, and to work with it better. In these negotiations, users force a system to be more visible and flexible so that they can better converse with it.
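To make that idea concrete, here is a minimal sketch of the kind of visibility such a tool provides: listing the third-party domains a page loads scripts from. This is an illustration only, built on the simplistic assumption that any off-site script is worth surfacing; Ghostery itself works as a browser extension backed by a curated tracker database, not like this standalone script.

from html.parser import HTMLParser
from urllib.parse import urlparse
import urllib.request


class ScriptCollector(HTMLParser):
    """Collect the src attribute of every <script> tag on a page."""

    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)


def third_party_script_domains(page_url):
    """Return domains, other than the page's own, that serve it scripts."""
    page_domain = urlparse(page_url).netloc
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    collector = ScriptCollector()
    collector.feed(html)
    domains = {urlparse(src).netloc for src in collector.sources}
    return {d for d in domains if d and d != page_domain}


# Usage: print everyone, besides the site itself, whose code this page runs.
for domain in sorted(third_party_script_domains("https://example.com/")):
    print(domain)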

We also see this kind of probing of algorithms becoming a new and critical role in journalism, as newsrooms take it upon themselves to independently investigate systems through impulse response modeling and reverse engineering, whether it’s looking at the words that search engines censor from their autocomplete suggestions, how online retailers dynamically target different prices to different users, or how political campaigns generate fundraising emails.
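As a hedged sketch of what this probing looks like in practice, the snippet below holds every input constant except one (the device a user appears to be on) and compares the system's responses. The endpoint, the JSON shape, and the probe profiles are all hypothetical, invented for illustration; real newsroom investigations of, say, dynamic pricing follow the same vary-one-input-and-diff pattern with far more rigour about sampling and confounding factors.

import json
import urllib.request

# Two 'users' who differ in exactly one respect: the device they appear on.
PROBE_PROFILES = {
    "desktop": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
    "mobile": {"User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X)"},
}


def fetch_price(product_id, headers):
    """Hypothetical: ask a retailer's (invented) price endpoint for a quote."""
    req = urllib.request.Request(
        "https://retailer.example/api/price/" + product_id, headers=headers
    )
    with urllib.request.urlopen(req) as response:
        return json.load(response)["price"]


def probe(product_id):
    """Vary one input at a time and record what the black box says back."""
    return {name: fetch_price(product_id, hdrs)
            for name, hdrs in PROBE_PROFILES.items()}


prices = probe("sku-1234")
print(prices)
if len(set(prices.values())) > 1:
    print("Different profiles saw different prices: evidence of targeting.")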

 

Antagonism

 


 

Third, rather than bending to the system or trying to better converse with it, some take an antagonistic stance: they break the system to assert their will. Adam Harvey’s CV Dazzle is one example of this approach, where people hack their hair and makeup in order to foil computer vision and opt out of participating in facial recognition systems. What’s interesting is that, while the attitude is antagonistic, it is also an extreme acknowledgement of a system’s power: an understanding that one must alter one’s identity and appearance simply to exert free will in an interaction.

Rather than simply seeing these behaviors as a series of exploits or hacks, I see them as signals of a changing posture towards computational systems. Culturally, we are now familiar enough with computational logic that we can conceive of the computer as a subject, an actor with a controlled set of perceptions and decision processes. And so we are beginning to create relationships where we form mental models of the system’s subjective experience and we respond to that in various ways. Rather than seeing those systems as tools, or servants, or invisible masters, we have begun to understand them as empowered actors in a flat ontology of people, devices, software, and data, where our voice is one signal in a complex network of operations. And we are not at the center of this network. Sensing and computational algorithms are continuously running in the background of our lives. We tap into them as needed, but they are not there purely in service of the end user, but also in service of corporate goals, group needs, civic order, black markets, advertising, and more. People are becoming human nodes on a heterogeneous, ubiquitous and distributed network. This fundamentally changes our relationship with technology and information.

However, interactions and user interfaces are still designed so that users see themselves at the center of the network and the underlying complexity is abstracted away. In this process of simplification, we are abstracting ourselves out of many important conversations and in doing so, are disenfranchising ourselves.

Julian Oliver states this problem well, saying: “Our inability to describe and understand [technological infrastructure] reduces our critical reach, leaving us both disempowered and, quite often, vulnerable. Infrastructure must not be a ghost. Nor should we have only mythic imagination at our disposal in attempts to describe it. ‘The Cloud’ is a good example of a dangerous simplification at work, akin to a children’s book.”

So, what I advocate is designing interactions that acknowledge the peer-like status these systems now have in our lives. Interactions where we don’t shield ourselves from complexity but actively engage with it. And in order to engage with it, the conduits for those negotiations need to be accessible not only to experts and hackers but to the average user as well. We need to give our users more respect and provide them with more information so that they can start to have empowered dialogues with the pervasive systems around them.

This is obviously not a simple proposition, so we start with: what are the counterpart values? What’s the alternative to the black box, what’s the alternative to “it just works”? What design principles should we be building into new interactions?

 

Transparency

The first is transparency. In order to be able to engage in a fruitful interaction with a system, I need to be able to understand something about its decision-making process. And I want to be clear that transparency doesn’t mean complete visibility; it doesn’t mean showing me every data packet sent or every decision tree. I say that because, in many discussions about algorithmic transparency, people have a tendency to throw their hands up, claiming that algorithmic systems have become so complex that we don’t even fully understand what they’re doing, so of course we can’t explain them to the user. I find this argument reductive: it misunderstands what transparency entails in the context of interaction design.

As an analogy, when I have a conversation with a friend, I don’t know his whole psychological history or every factor that goes into his responses, let alone what’s happening at a neurological or chemical level, but I understand something about who he is and how he operates. I have enough signals to participate and give feedback — and more importantly, I trust that he will share information that is necessary and relevant to our conversation. Between us, we have the tools to delve into the places where our communication breaks down, identify those problems and recalibrate our interaction. Transparency is necessary to facilitate this kind of conversational relationship with algorithms. It serves to establish trust that a system is showing me what I need to know and is not doing anything I don’t want it to with my participation or data; and that it is giving me the necessary knowledge and input to correct a system when it’s wrong.

We’re starting to see some very nascent examples of this, like the functionality that both Amazon and Netflix have, where I can see the assumptions that are being made by a recommendation system and I am offered a way to give negative feedback; to tell Amazon when it’s wrong and why. It definitely still feels clunky — it’s not a very complex or nuanced conversation yet, but it’s a step in the right direction.
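As a toy sketch of where that conversation could go, the recommender below attaches to every suggestion the assumption that produced it, and lets the user reject the assumption itself ("stop recommending things because I watched that") rather than just the item. All names and data structures here are invented for illustration; this is not Amazon's or Netflix's actual machinery.

from dataclasses import dataclass, field


@dataclass
class Recommendation:
    item: str
    because_of: str  # the assumption exposed to the user


@dataclass
class TransparentRecommender:
    similar: dict          # item the user liked -> items judged similar to it
    liked: list = field(default_factory=list)
    rejected_reasons: set = field(default_factory=set)

    def recommend(self):
        """Suggest items, always saying which liked item drove each pick."""
        return [
            Recommendation(item=candidate, because_of=source)
            for source in self.liked
            if source not in self.rejected_reasons  # honour negative feedback
            for candidate in self.similar.get(source, [])
        ]

    def reject_reason(self, source_item):
        """User feedback aimed at the assumption, not the recommended item."""
        self.rejected_reasons.add(source_item)


# Usage: the system is wrong about why I watched something, and I can say so.
rec = TransparentRecommender(
    similar={"Blade Runner": ["Ghost in the Shell"], "Frozen": ["Moana"]},
    liked=["Blade Runner", "Frozen"],  # Frozen was for a visiting niece
)
print(rec.recommend())       # suggests Ghost in the Shell and Moana, with reasons
rec.reject_reason("Frozen")
print(rec.recommend())       # Moana is gone; the assumption was corrected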

 


 

More broadly, the challenge we’re facing has a lot to do with the shift from mechanical systems to digital ones. Mechanical systems have a degree of transparency in that their form necessarily reveals their function and gives us signals about what they’re doing. Digital systems don’t implicitly reveal their processes, and so it is a relatively new state that designers now bear the burden of making those processes visible and available to interrogate.

 

Agency

The second principle here is agency: a system’s design should not only empower users to accomplish tasks, but should also convey a sense that they are in control of their participation with a system at any moment. And I want to be clear that agency is different from absolute and granular control.

A dense control interface, for example, gives us an enormous amount of precise control, but for anyone but an expert, probably not much sense of agency.

A car, on the other hand, is a good illustration of agency. There’s plenty of “smart” stuff that the car is doing for me, that I can’t directly adjust — I can’t control how electricity is routed or which piston fires when, but I can intervene at any time to control my experience. I have clear inputs to steer, stop, speed up, or slow down and I generally feel that the car is working at my behest.

 

Virtuosity

The last principle, virtuosity, is something that usually comes as a result of systems that support agency and transparency well. And when I say virtuosity, what I mean is the ability to use a technology expressively.

A technology allows for virtuosity when it contains affordances for all kinds of skilled techniques that can become deeply embedded into processes and cultures. It’s not just about being able to adapt something to one’s needs, but to “play” a system with skill and expressiveness. This is what I think we should aspire to. While it’s wonderful if technology makes our lives easier or more efficient, at its best it is far more than that. It gives us new superpowers, new channels for expression and communication that can be far more than utilitarian — they can allow for true eloquence. We need to design interactions that allow us to converse across complex networks, where we can understand and engage in informed and thoughtful ways, and the systems around us can respond with equal nuance.

These values deeply inform the work we do in The New York Times R&D Lab, whether we are exploring new kinds of environmental computing interfaces that respond across multiple systems, creating wearables that punctuate offline conversations with one’s online interests, or developing best practices for how we manage and apply our readers’ data. By doing research to understand the technological and behavioral signals of change around us, we can then build and imagine futures that best serve our users, our company, and our industry.

 

About the Author: Alexis Lloyd is the Creative Director of the Research and Development Lab at The New York Times, where she investigates technology trends and prototypes future concepts for content delivery. Follow her on Twitter: @alexislloyd.

25.01.2014

PROJECT TALOS

A commission by Samsung to develop concepts and experience prototypes for a new range of smart products set for a 2016 release.

17.01.2014

VALLEY OF THE MEATPUPPETS

The Valley of the Meatpuppets is an ethereal space where people, agents, thingbots, action heroes and big dogs coexist.

25.01.2013

SUPERFLUX TALK @FABRICA

An overview of our studio’s research practice, process and ethos presented at Fabrica, Italy.

25.01.2013

STAYING WITH THE TROUBLE

A talk exploring cultural turbulence, technological acceleration and increasing complexity, in the context of our ongoing work.

17.01.2013

DESIGN FOR THE NEW NORMAL

A rapid fire talk highlighting disruptive trends that facilitate technological empowerment, and what that means for the design profession.

27.01.2010

NEW COMPANIONS

This article draws on advancements in Artificial Intelligence (AI) and robotics to question the orthodoxy of artificial companions research.
