Generative artificial intelligence (GenAI) tools that appear to perform with care and empathy can quickly gain users’ trust. For this reason, GenAI tools that attempt to replicate human responses have heightened potential to misinform and deceive people. This article examines how three GenAI tools, within divergent contexts, mimic credible emotional responsiveness: OpenAI’s ChatGPT, the National Eating Disorder Association’s Tessa and Luka’s Replika. The analysis uses Hochschild’s concept of ‘feeling rules’ to explore how these tools exploit, reinforce or violate people’s internalised social guidelines around appropriate and credible emotional expression. We also examine how GenAI developers’ own beliefs and intentions can create potential social harms and conflict with users. Results show that while GenAI tools enact compliance with basic feeling rules – for example, apologising when an error is noticed – this ability alone may not sustain user interest, particularly once the tools’ inability to generate meaningful, accurate information becomes intolerable.