Artificial Intelligence Assistants (AIAs) embedded in smart speakers (e.g., Amazon’s Alexa) have become a popular addition to households all over the world. People use AIAs at home to carry out many tasks, such as entertainment, smart device management, personal organization, and even purchases. AIAs are special in the sense that they can interact with their users naturally, in a human-like way. It is the first time that such a technology has been so centrally implanted in households, leading consumers to interact with it daily, but at the same time raising concerns about their privacy. In our new journal paper “Trojan Horse or Useful Helper? A Relationship Perspective on Artificial Intelligence Assistants with Humanlike Features” (authors: Ertugrul Uysal, Sascha Alavi, and Valéry Bezençon), published in the Journal of the Academy of Marketing Science, we study how users may form relationships with their AIAs over extended periods of use and the consequences of such human-AI relationships.

We show that long-term users tend to attribute a mind to these assistants, which leads them to form relationships with their AIAs. We find that such mind perception is beneficial for users, as it increases the trust they feel towards the AIA. A device with a mind that helps and serves its user is certainly desirable. However, we also find that mind perception is harmful, as it leads users to feel a threat to their human identity. This threat originates from the perception that AIAs have a capable, rivaling mind powered by AI. As a result of this identity threat, users have greater data privacy concerns and experience lower general well-being. Overall, users’ relationships with AIAs are marked by both relationship benefits and costs.

Building on this finding, we wanted to know whether we could reduce this negative effect of identity threat. We tested three strategies in a field experiment to empower users with respect to their personal data. We found that the negative effect of identity threat was reduced in long-term users when they were (1) informed about the privacy practices of the AIAs, (2) informed about ways to change their privacy preferences, and (3) asked to take action regarding their privacy.

We have two practical recommendations for marketing managers to optimize outcomes from AIAs, such that both firms and consumers profit from the beneficial effects and avoid the harmful consequences. First, we recommend that managers furnish AIAs with human-like features only if consumers are well informed about the privacy of their data and feel confident taking action to protect their privacy in the relationship with the AIA. Otherwise, the human-like features have a negative effect on users. Second, managers should try to empower consumers regarding the protection of their personal data, to minimize the harmful effects emerging from perceiving a mind in the AIA. We show that well-informed consumers are empowered in their relationships with AIAs.

Such consumer-friendly practices by firms might increase purchases through AIAs; right now, only 10% of AIA users are reported to have made a purchase through their AIA. Moreover, managing the privacy of consumers’ data transparently will positively impact the firm’s public reputation. Our study suggests that consumer-centric data relationship management may be ideal for building mutually beneficial relationships with well-informed consumers.

To take a deeper look at our research, please check out the full paper at: https://link.springer.com/article/10.1007/s11747-022-00856-9

Recommended citation: Uysal Ertugrul, Artificial Intelligence Assistants: Trojan Horses or Useful Helpers?, published on 5 April 2022 by the LexTech Institute, https://www.lextechinstitute.ch/artificial-intel…r-useful-helpers/.

Author(s) of this contribution:


Ertugrul is a fourth-year PhD student at the Université de Neuchâtel, working with Prof. Valéry Bezençon. At the Institute of Management, his work focuses on consumer behavior, particularly in relation to technology, marketing through digital channels, and the impact of this transformation of marketing on consumers. More broadly, he is interested in the impact of technology, artificial intelligence, and social media on people.


Valéry Bezençon is Dean and Full Professor of Marketing in the Faculty of Economic Sciences at the Université de Neuchâtel. His research interests include social marketing and behavior change, behavioral approaches and nudging, as well as consumer behavior related to technology and sustainability.