A family gathers around their kitchen island to unbox the digital assistant they just purchased. They will be more likely to trust this new voice-user interface, which might be a smart speaker like Amazon’s Alexa or a social robot like Jibo, if it exhibits some humanlike social behaviors, according to a new study by researchers in MIT’s Media Lab.

The researchers found that family members tend to think a device is more competent and emotionally engaging if it can exhibit social cues, like moving to orient its gaze at a speaking person. In addition, their study revealed that branding — specifically, whether the manufacturer’s name is associated with the device — has a significant effect on how members of a family perceive and interact with different voice-user interfaces.

When a device has a higher level of social embodiment, such as the ability to give verbal and nonverbal social cues through motion or expression, family members also interacted with one another more frequently while engaging with the device as a group, the researchers found.

Their results could help designers create voice-user interfaces that are more engaging and more likely to be used by members of a family in the home, while also improving the transparency of these devices. The researchers also outline ethical concerns that could arise from certain personality and embodiment designs.

“These devices are new technology coming into the home and they are still very under-explored,” says Anastasia Ostrowski, a research assistant in the Personal Robotics Group in the Media Lab, and lead author of the paper. “Families are in the home, so we were very interested in looking at this from a generational approach, including children and grandparents. It was super interesting for us to understand how people are perceiving these, and how families interact with these devices together.”


Coauthors include Vasiliki Zygouras, a recent Wellesley College graduate working in the Personal Robotics Group at the time of this research; Research Scientist Hae Won Park; Cornell University graduate student Jenny Fu; and senior author Cynthia Breazeal, professor of media arts and sciences, director of MIT RAISE, and director of the Personal Robotics Group, as well as a developer of the Jibo robot. The paper is published today in Frontiers in Robotics and AI.

Investigating interactions

This work grew out of an earlier study in which the researchers explored how people use voice-user interfaces at home. At the start of that study, users familiarized themselves with three devices before taking one home for a month. The researchers noticed that people spent more time interacting with a Jibo social robot than they did with the smart speakers, Amazon Alexa and Google Home. They wondered why people engaged more with the social robot.

To get to the bottom of this, they designed three experiments in which family members interacted as a group with different voice-user interfaces. Thirty-four families, comprising 92 people between the ages of 4 and 69, participated in the studies.

The experiments were designed to mimic a family’s first encounter with a voice-user interface. Families were video recorded as they interacted with three devices, working through a list of 24 actions (like “ask about the weather” or “try to learn the agent’s opinions”). Then they answered questions about their perception of the devices and categorized the voice-user interfaces’ personalities.

In the first experiment, participants interacted with a Jibo robot, Amazon Echo, and Google Home, with no modifications. Most found the Jibo to be far more outgoing, dependable, and sympathetic. Because the users perceived that Jibo had a more humanlike personality, they were more likely to interact with it, Ostrowski explains.


An unexpected result

In the second experiment, the researchers set out to understand how branding affected participants’ perspectives. They changed the “wake word” (the word the user says aloud to engage the device) of the Amazon Echo to “Hey, Amazon!” instead of “Hey, Alexa!” but kept the wake words for the Google Home (“Hey, Google!”) and the Jibo robot (“Hey, Jibo!”) the same. They also provided participants with information about each manufacturer. When branding was taken into account, users viewed Google as more trustworthy than Amazon, even though the devices were very similar in design and functionality.

“It also drastically changed how much people thought the Amazon device was competent or like a companion,” Ostrowski says. “I was not expecting it to have that big of a difference between the first and second study. We didn’t change any of the abilities, how they function, or how they respond. Just the fact that they were aware the device is made by Amazon made a huge difference in their perceptions.”

Changing the “wake word” of a device can have ethical implications. A personified name, which can make a device seem more social, could mislead users by masking the connection between the device and the company that made it, which is also the company that now has access to the user’s data, she says.

In the third experiment, the team wanted to see how interpersonal movement affected the interactions. For instance, the Jibo robot turns its gaze toward the individual who is speaking. For this study, the researchers used the Jibo along with an Amazon Echo Show (a rectangular screen) with the modified wake word “Hey, Computer,” and an Amazon Echo Spot (a sphere with a circular screen) that had a rotating flag on top that spun faster when someone said its wake word, “Hey, Alexa!”


Users found the modified Amazon Echo Spot to be no more engaging than the Amazon Echo Show, suggesting that repetitive movement without social embodiment may not be an effective way to increase user engagement, Ostrowski says.

Fostering deeper relationships

Deeper analysis of the third study also revealed that users interacted more among themselves, for example by glancing at each other, laughing together, or having side conversations, when the device they were engaging with had more social abilities.

“In the home, we have been wondering how these systems promote engagement between users. That is always a big concern for people: How are these devices going to shape people’s relationships? We want to design systems that can promote a more flourishing relationship between people,” Ostrowski says.

The researchers used their insights to lay out several voice-user interface design considerations, including the importance of developing warm, outgoing, and thoughtful personalities; understanding how the wake word influences user acceptance; and conveying nonverbal social cues through movement.

With these results in hand, the researchers want to continue exploring how families engage with voice-user interfaces that have varying levels of functionality. For instance, they might conduct a study with three different social robots. They would also like to replicate these studies in a real-world environment and explore which design features are best suited for specific interactions.

This research was funded by the Media Lab Consortia.
