RMIT University

‘I think I misspoke earlier. My bad!’: Exploring how generative artificial intelligence tools exploit society’s feeling rules

journal contribution
posted on 2025-10-06, 20:42 authored by Lisa Given, Sarah Polkinghorne, Alexandra Ridgway
Generative artificial intelligence (GenAI) tools that appear to perform with care and empathy can quickly gain users’ trust. For this reason, GenAI tools that attempt to replicate human responses have heightened potential to misinform and deceive people. This article examines how three GenAI tools, in divergent contexts, mimic credible emotional responsiveness: OpenAI’s ChatGPT, the National Eating Disorders Association’s Tessa and Luka’s Replika. The analysis uses Hochschild’s concept of feeling rules to explore how these tools exploit, reinforce or violate people’s internalised social guidelines around appropriate and credible emotional expression. We also examine how GenAI developers’ own beliefs and intentions can create potential social harms and conflict with users. Results show that while GenAI tools enact compliance with basic feeling rules – for example, apologising when an error is noticed – this ability alone may not sustain user interest, particularly once the tools’ inability to generate meaningful, accurate information becomes intolerable.

History

Related Materials

  1. DOI - Is published in DOI: 10.1177/14614448251338276
  2. ISSN - Is published in 1461-4448 (New Media & Society)
  3. EISSN - Is published in 1461-7315 (New Media & Society)

Journal

New Media & Society

Volume

27

Issue

10

Start page

5525

End page

5545

Publisher

SAGE Publications

Language

en

Copyright

© The Author(s) 2025