Artificial Intelligence Assistants (AIAs) embedded in smart speakers (e.g., Amazon’s Alexa) have become a popular addition to households all over the world. People use AIAs at home to carry out many tasks, such as entertainment, smart device management, personal organization, and even making purchases. AIAs are special in that they can interact with their users in a natural, human-like way. It is the first time such a technology has been so centrally implanted in households, leading consumers to interact with it daily while at the same time raising concerns about their privacy. In our new journal paper, “Trojan Horse or Useful Helper? A Relationship Perspective on Artificial Intelligence Assistants with Humanlike Features” (authors: Ertugrul Uysal, Sascha Alavi, and Valéry Bezençon), published in the Journal of the Academy of Marketing Science, we study how users may form relationships with their AIAs over extended periods of use and the consequences of such human-AI relationships.

We show that long-term users tend to attribute a mind to these assistants, which leads them to form relationships with their AIAs. We find that such mind perception is beneficial for users, as it increases the trust they feel towards the AIA. A device with a mind that helps and serves its user is certainly desirable. However, we also find that perceiving a mind is harmful: it leads users to feel a threat to their human identity. This threat originates from the perception that AIAs have a capable, rivaling mind powered by AI. As a result of this identity threat, users have greater data privacy concerns and experience lower general well-being. Overall, users’ relationships with AIAs are marked by both relationship benefits and costs.

Building on this finding, we wanted to know whether this negative effect of identity threat can be reduced. In a field experiment, we tested three strategies to empower users in relation to their personal data. We found that the negative effect of identity threat was reduced in long-term users when they were (1) informed about the privacy practices of the AIAs, (2) informed about ways to change their privacy preferences, and (3) asked to take action in relation to their privacy.

We have two practical recommendations for marketing managers to optimize outcomes from AIAs, such that both firms and consumers profit from the beneficial effects and avoid the harmful consequences. First, we recommend that managers furnish AIAs with human-like features only if consumers are well informed about the privacy of their data and confident about taking action to protect their privacy in the relationship with the AIA. Otherwise, the human-like features have a negative effect on users. Second, managers should try to empower consumers regarding the protection of their personal data, to minimize the harmful effects of perceiving a mind in the AIA. We show that well-informed consumers are empowered in their relationships with AIAs.

Such consumer-friendly practices might increase purchases through AIAs; at present, only 10% of AIA users are reported to have made a purchase through their AIA. Moreover, managing the privacy of consumers’ data transparently will positively impact a firm’s public reputation. Our study suggests that consumer-centric data relationship management may be ideal for building mutually beneficial relationships with well-informed consumers.

For a deeper look at our research, please check out the full paper:

Suggested citation: Uysal, Ertugrul, Artificial Intelligence Assistants: Trojan Horses or Useful Helpers?, published on 4 April 2022 by the LexTech Institute, …r-useful-helpers/?lang=en.

Author(s) of this blog post


Ertugrul is a fourth-year PhD student at the University of Neuchâtel working with Prof. Valéry Bezençon. At the Institute of Management, his work focuses on consumer behavior, especially in relation to technology, marketing through digital channels, and the impact of this digital transformation in marketing on consumers. More broadly, he is interested in the impact of technology, artificial intelligence, and social media on people.


Valéry Bezençon is Dean of the Faculty of Economics and Full Professor of Marketing at the University of Neuchâtel. His interests include social marketing and behavior change, behavioral approaches and nudging, as well as consumer behavior in relation to technology and sustainability.