Trust plays an important role in the adoption of any new technology. Here we investigate the introduction of voice interfaces and, in particular, how users actually deal with them in their daily lives.
In this article, I investigate the concept of ‘trust’ in relation to the digital technologies that surround us, with a focus on artificial intelligence. I analyze the behavior of people who use voice interfaces such as Google Home and Amazon Alexa in their daily lives, and assess their response to the risks of artificial intelligence and technology from a broader perspective. I show how existing networks are of fundamental importance for the way in which people trust technology, and how we try to find our way in an unregulated digital world through self-imposed guidelines and limits.
It is hard to find someone in the Western world whose life has not changed in the last twenty years due to digital technology. Since the birth of the internet (as we know it) in 1990, we have witnessed the rise of e-mail, laptops, smartphones, apps, tablets, VR, and self-driving cars. And who knows what next year will bring, or the year after? The digital landscape is constantly changing.
More and more aspects of our lives are being digitized: interactions with companies, relationships with friends, the way in which politics and activism are organized, and the way we look for love. But this relatively new state of affairs is not accompanied by written rules that tell us how to behave. My research – focusing on everyday interactions with artificial intelligence – exposes how people navigate their digital lives, set rules and limits to justify the presence of technology in their homes, and weigh the potential risks against one another.
Of all recent technological developments, one has caught our collective attention like no other: artificial intelligence (AI). Whether it is the frightening imagery of apocalyptic Hollywood films with robots running wild, or the potential of AI to play God, AI creates a buzz. Silicon Valley speaks of ‘the next big thing’. The media is happy to make us believe that AI will kill us all or take over our jobs. As a result, the development of AI is positioned as something entangled with risks that endanger both our economic stability and the survival of humanity. It is not unusual for new developments to be accompanied by critics, detractors, and scaremongers.
Such people paint an image of AI as fundamentally risky, but risk, as will become apparent from this study, can take various forms. Risk does not only apply to the application of AI and digital technology. Modern society has become so complex, and we are so caught up in this complexity, that potential danger lurks everywhere. The risks of large-scale power outages, computer hacks, and nuclear fallout are beyond our imagination.
We only know that they are catastrophic. This state of affairs led the German sociologist Ulrich Beck to characterize the modern world as a ‘risk society’. According to Beck, modern risks are ‘hidden’ and fall outside of our imagination. Because of this, we are preoccupied with ‘what if’ questions and constantly run through possible risks in our minds. Recognizing the widespread risks of modern life, British sociologist Anthony Giddens argues that trust is essential for the proper functioning of a society.
In our complex society, we have to start from an abstract concept of trust in order to feel safe. We depend on experts we don’t know and will never meet. The way in which risk and trust are interrelated forms the theoretical basis for the approach to my research.
The degree to which we trust or do not trust AI will determine the role that AI will play in our daily lives in the coming decades. Nobody knows exactly what the future will bring, or in which direction AI will develop. But what we can say with certainty is that technology will continue to evolve and that machine-learning algorithms are already finding their way into our daily lives. How we respond to new technologies today will determine their future. The key to a future in which both the user and the designer can use AI in the right way is primarily to fully understand how AI is used in our daily lives and how people will gradually start to trust these technologies, despite all the sensational headlines about the danger of AI.
Now that AI is being used more and more, we can investigate some of the issues raised in the media. In particular, one technology – the voice interface – offers an interesting perspective to study what position AI occupies in people’s daily lives.
The use of conversational interface devices is becoming more common. People place voice assistants in the most personal areas of their homes to help them with daily tasks. These AI devices let users perform a variety of functions simply by talking to them. Google and Amazon currently dominate this market: Amazon has sold more than twenty million Echo devices in 3.5 years, while Google’s device – Google Home – has sold almost five million units. Amazon itself states that conversational AI ‘can communicate in a way that feels natural, solves problems and gets smarter’.
But the rise of these devices is not without struggle. There is a lot of fuss about them in the news, and new stories keep appearing. Such stories revolve around the idea that the devices are constantly eavesdropping and storing data for future use, allowing Google and Amazon to assemble complete consumer profiles: a kind of ‘digital corporate spy’ that we ‘voluntarily’ invite into our homes. The nature and popularity of these devices make them an interesting case for studying how people start using AI and how trust is formed in interactions with new, potentially high-risk machines.
For this research, I wanted to talk to people who live with a Google Home or Amazon Alexa. By studying the interaction between people and these devices, I was able to understand the motivations for owning a voice interface, see how the devices are being used, learn whether the users associate risks with AI, and observe how young children are raised in an environment with an ‘intelligent’ speaker that has a name. I conducted three in-depth interviews.
Petrushka, Lennart, and Saloua all live in Dutch cities and can be described as well-educated, tech-savvy people with an international, cosmopolitan background. In two of these cases, I saw firsthand how the devices were used at home. Two participants have a Google Home and the third uses Alexa from Amazon. By seeing the devices in use, I was able to determine how they worked and what positions they take in people’s daily lives.
Detailed descriptions, based on these interviews, form the basis for my understanding of how people live with AI and determine their own rules for navigating the digital world. By talking to people who have voice assistants, it became clear that, long before the device is unboxed, there is already a certain amount of trust in it.
Before Lennart and Saloua decided to buy a Google Home, they spent quite some time deliberating. After her engineering studies, Saloua worked for a large bank in the field of smart chip technology. She is interested in technology that can make her life easier. After reading an article comparing two popular voice interfaces, she decided that the Google Home suited her best.
Lennart had tried his friend’s Amazon Alexa several times before buying a Google Home himself. A podcast about the devices supported his decision: as a regular listener, he trusts the opinions of the host. Petrushka did not buy the Amazon Alexa in her living room herself; her husband did. We can reasonably assume that she trusts her husband not to bring harmful objects into their home and into contact with their children. In all three cases, it is clear that the trust that motivated them to purchase these products comes from multiple sources that cannot be seen in isolation. It is embedded in complex social relationships and existing networks of trust.
For two of the participants, the voice assistant can be described as a digital helper that makes the daily morning rush a bit smoother. Saloua, in her early thirties, comes down the stairs every morning, walks into her living room, and greets her Google Home:
‘Hey Google, good morning’.
‘Hey Saloua, the time is 7:56 am. Utrecht is currently 14 degrees and cloudy. Today will be sunny with a…’
An unidentified British voice completes the weather forecast and continues with the news. In the past, Saloua would look up the weather forecast and browse through the latest headlines – that is, as long as her laptop had enough battery and was close by. Now she has condensed these everyday tasks into three words.
Lennart’s mornings follow a similar pattern. His Google Home reminds him of the day’s appointments and the weather forecast. In contrast to Saloua, who mainly uses her device in the morning, Lennart uses his throughout the day. An ‘early adopter’ who likes ‘cool things’, as he puts it himself, he uses his Google Home to look up train times, and it also helps him in the kitchen. He often adds items to shopping lists, looks up recipes, and sets cooking timers.
The Amazon Alexa stands on top of a cupboard in Petrushka’s open living room and also comes in handy when she is cooking. Alexa lets her know when a certain amount of time has elapsed so that she turns off the oven in time. She also uses the timer when one of her three children has misbehaved.
They must silently count down their penalty time in the corner; Alexa gives them a signal when the five minutes of punishment have expired. Setting timers removes the guesswork from cooking and from these small disciplinary moments. With Alexa’s help, Petrushka tries to simplify her daily routine in the house. But Alexa does not only act as a timekeeper when her children are punished. It also acts as a digital playmate: it reads them interactive stories and tells them jokes when they ask.
Petrushka has two Alexa devices in her house: one downstairs, which also contains a camera, and one upstairs in her twin daughters’ room. Her husband, whom she jokingly calls a ‘child’ because of his love for everything with buttons and wires, bought the devices and installed them in the house. Petrushka explains how she initially had trouble seeing the usefulness of the voice assistant. When Alexa was connected to a pair of smart light bulbs, she gradually began to see the value. What started as a fun way to turn on the lights now simplifies her daily routine a little more every day.
Although the use of a voice assistant may vary slightly, the devices have one thing in common across these three households: they are all used to save time or simplify tasks. When she hears the weather forecast in the morning, Saloua knows whether to pack her son’s raincoat. Lennart is less likely to miss an early appointment that was planned months ago. And preparing dinner goes much more smoothly when Petrushka sets a timer. These small, time-saving functions have great value in their busy routines.
Saloua expressed this feeling perfectly: ‘It just saves me time, the most valuable asset that we have’. New products regularly come on the market that promise to save us time or to help organize and simplify our hectic lives. Historically, technology has always been praised as the ultimate example of this. Consider how microwave ovens, washing machines, vacuum cleaners, and dishwashers have changed housework. Yet there is also a feeling that we are more pressed for time than ever before, despite technological progress and the abundance of ‘life hacks’ that exist.
The abundance of digital technologies around us and our endless fixation on screens were among the reasons why Lennart bought a Google Home. By using his voice to perform tasks, he avoids spending too much time on his phone or laptop. It may sound strange, but in this case the solution to reducing the use of certain technology is to use more technology.
Saloua and Petrushka also show how technologies can be used to gain more control over our digital lives – in their case, the digital lives of their children. Petrushka likes that Alexa can read a story to her children, which means they spend less time watching TV. This stimulates their imagination far more than pre-cooked TV images. Together with her husband, Saloua created a separate Netflix profile for her four-year-old son, to prevent him from encountering unsuitable content in the standard profile.
Such examples show how one technology can be used to protect against the disadvantages of another technology.
The fear of the potentially harmful effects of new technologies stems from the ‘unknown’: nobody knows exactly how serious the risk is. Privacy was a specific concern for Lennart. He recently stopped using the Google services Gmail, Chrome, and Search to prevent the company from building a clear picture of who he is and targeting advertisements at him. Paradoxically, he then allowed a Google product with built-in microphones into his living room. He used the cancellation of the other Google services as justification for owning a Google Home. In addition, he has a good understanding of how the voice interface works, including its automatic switch-off functionality, and he feels reassured by this.
Petrushka is not bothered by the fact that Amazon might be listening to her. She even assumes, based on an article she read somewhere online, that microphones are eavesdropping on us everywhere. ‘If you can’t say it out loud, don’t say it at all,’ she says. Still, she deliberately placed the device with the camera downstairs, and not in her daughters’ room. She was afraid that someone could watch her children.
Both Lennart and Petrushka saw a number of risks associated with the voice assistants. They knew the devices could invade their privacy. Although these feelings were not based on concrete facts, Lennart and Petrushka still do everything they can to limit the potential risk of their devices. They created boundaries and rules that justify the presence of technology in their homes, making personal interventions to compensate for the perceived hazards of their new devices and to make themselves feel safe.
In the same way, Saloua set personal limits for navigating the digital world. She tells me, for example, that she refuses to post photos of her son on social media. Because they are his intellectual property, it is not for her to distribute them. If she wants to share photos with family and friends, she uses private platforms to do so.
She says she only got angry once when technology intruded on her life. Here too, it concerned her young child. Her brother-in-law had given her son a beautifully illustrated storybook with a built-in speaker that could read the story aloud: ‘I hate that thing,’ she tells me. Reading a story to her child was something she saw as a human activity, and the idea that a machine could replace it does not sit well with Saloua. Everything that has a ‘real’ human quality, such as time with her loved ones, is forbidden territory for automation. We see again that a clear set of self-designed rules guides Saloua when it comes to technology. These rules help her to justify the presence of technology in her life.
Both Petrushka and Lennart have set limits on the extent to which they want to allow AI and algorithms into their lives. If intelligent devices could take over the everyday, repetitive tasks in her daily routine, that would make Petrushka very happy; it would enable her to focus on being a person. Lennart was much more specific: health and finance are areas he would rather not entrust to AI. In line with this, Lennart indicated that he would not want an AI device with a camera. All participants set individual limits on the way they use technology in the present. Moreover, they all had ideas about the extent to which they will allow AI into their future lives.
The digital world is developing at lightning speed, and there are certain risks involved – risks that are only reinforced when we give free rein to our imagination about the potential of AI. Most people in the West are aware of this to some extent. This risky and changing landscape comes without rules and without a standard or ‘right’ way of doing things. The people I interviewed all made their own rules. Whether it is leaving Google, not posting personal things on social media, or not allowing cameras in certain parts of the house, we consciously take action to justify the presence of technologies. These actions show that people themselves determine the use of technology in their lives. Technology is not an autonomous force, independent of society; the way technology is applied determines its meaning.
You can extend this idea to your own use of digital technology. Do you cover the webcam of your computer? Do you know people who do? Why do they do it? Technology allows us to navigate it in different and specific ways, so that, within certain limits, we can always exercise our individual freedom of choice. When faced with uncertainty about potential risks, we can take action to give ourselves more confidence.
In the end, we may not know everything about how the technology works or about the policies of the big companies that make it (which may not be completely transparent), but we trust ourselves and many of those around us. Research into how people use new technologies, and how they feel about them, shows that by trusting ourselves and others, we learn to make sense of the technologies in our lives.