OUR FRIENDS ELECTRIC
The film explores our developing relationship with voice-activated AI assistants, and the future potential of these relationships, through three fictional devices.
Here, we unpack some of the core themes we explored in this work:
— We wanted to challenge assumptions around Voice AI, specifically the current drive for this technology to focus on one particular kind of ‘command and control’ interaction, where you ask a device something and it simply spits out an answer. We wanted to explore some different dimensions, interactions and personalities.
— We also wanted to challenge some of the popular myths around “AI”, and open up the complexities of our relationships with increasingly “intelligent” devices. As these devices move from straightforward command and control towards more granular behaviour, how do we form assumptions about what they can and can’t do? And what assumptions are made about us by the data these devices collect and parse?
— We wanted to explore what it might mean to make the ‘training’ or ‘learning’ aspects of the device part of the experience of owning it. How might making such a learning process more transparent and personal affect the relationship between device and owner? For instance, the first device, ‘Eddi’, continually asks “why” it is given commands, almost like a child learning about the world, and then puts this newfound knowledge to use – with mixed results.
— One of the popular concerns around voice assistants is that they are constantly listening to us and collecting private data, with potentially ominous implications. So we wanted to explore the possibility of something that might be considered entirely private and trustworthy.
— What might constitute such a device? Transparency, and trust that your data is secure and used responsibly in ways you understand and agree to. But also the more subtle and often overlooked need to trust that actions taken on your behalf will be carried out with competence and tact. With Karma, we wanted to touch upon the legislation and certification that might be needed to address the first point, and play with the affordances that might enable the second.
— Another recurring theme is personalisation. When it comes to such technologies, how much control do we want to have over our interactions with these devices? In the last scene, Juliet hacks her “device” Sig to become a trusted confidante, training it to seemingly hold and expound her worldview, until the device’s true modus operandi comes into play and the illusion breaks.
Like one of our earlier design fiction works, ‘Uninvited Guests’, the “devices” in the film are not actual products, or meant to be products, but ‘diegetic prototypes’ – in this instance, symbols or archetypes for the properties (or potential properties) of such devices. They become a means to explore our developing relationships with these technologies in a hopefully more honest, yet playful, way.
The project’s research started with an invitation to attend Mozilla’s Open IoT Studio workshop at the Rockefeller Foundation’s charming Bellagio Centre, with a group of very inspiring people including Ame Elliott, Babitha George, David Li, Davide Gomba, Gillian Crampton-Smith, Jon Rogers, Kate Crawford, Max von Grafenstein, Michael Henretty, Michelle Thorne, Pete Thomas, Ronaldo Lemos, Shannon Dosemagen, Shashank Sriram, Solana Larsen and Vladan Joler.
We would like to thank Michelle and Jon from Mozilla for inviting us to work with them on this experiment. We would also like to thank Loraine Clarke and Martin Skelly from Mozilla’s Open IoT Studio and the University of Dundee for making such brilliant prototypes.