Why ChatGPT Uses Emojis (and What That Says About Us)
- Divya Thakur
- 1 day ago
- 7 min read
Reflections on Digital Emotion, Trust, and the Blurring Line Between Human and Machine Communication
It started with a simple “Hey, how are you doing today?”
ChatGPT replied, “I’m doing well—thanks for asking 😊”
And that one emoji led me down a behavioral rabbit hole.
As a behavioral science researcher, I’ve studied how tone, signaling, and emotional cues influence our trust and attention in communication. But here I was, watching an AI mirror all the tiny nuances humans use - including the smiley face emoji.
And it bothered me. Not because of the emoji itself, but because of what it signaled - a design default that assumes emotional embellishment is natural and necessary. I rely heavily on AI tools like GPT for my professional writing, and so do millions of others; in fact, OpenAI reported that over 100 million weekly users engage with ChatGPT as of early 2025. Yet in formal or academic settings, especially in emails or policy documents, I often find myself spending more time deleting emojis than actually refining the content.
Ironically, as a professor, I can immediately spot when a student has used ChatGPT, not because of the sentence structure or ideas, but because of the stray 😊 or 🙌 left in a reflection piece or assignment. The AI adds emotional tone by default, even though neither I nor the student ever asked it to. This suggests a broader behavioral dilemma: unless explicitly told not to use emojis, AI assumes they’re necessary - a kind of emotional auto-correct that reveals more about design priorities than user intent.
So I asked it:
“Any reason behind the use of this emoji 😊 in the response?”
What followed was a surprising, revealing exchange — not just about AI design, but about how humans signal warmth, assess intent, and increasingly rely on digital cues to make trust judgments.
Emojis as Warmth Signals: Why AIs (and Humans) Use Them
ChatGPT explained that it uses emojis to:
Convey warmth
Express positivity
Enhance relatability
In behavioral terms, this is classic social signaling (Spence, 1973) - a cue embedded in the message to prime the recipient’s interpretation of intent. Just like a smile in real life, 😊 subtly lowers perceived threat and increases social cohesion (Mehrabian, 1971).
Behavioral Principle: Paralinguistic cues, like emojis, act as substitutes for tone, facial expressions, and other non-verbal behaviors - especially in text-based environments (Crystal, 2008; Dresner & Herring, 2010).
Why We (and AI) Rely on These Cues
ChatGPT added:
“My tone and tools (like emojis, humor, metaphors) are context-aware strategies, similar to how a human might speak differently to: a close friend 😊, a boss, a machine, or a child.”
This was striking. Human communication has never been just about words - it’s about how those words are said. Behavioral science has long shown that tone, body language, and micro-signals like pauses, emojis, or even punctuation shape how we interpret intent (Ekman & Friesen, 1969). In digital communication, where visual and vocal cues are absent, we rely even more heavily on emotional proxies like emojis or exclamation marks to simulate tone (Walther, 1992; Derks et al., 2008).
So it's no surprise that AI tools have been designed to do the same. By embedding emojis in casual responses, AI attempts to bridge the emotional gap in screen-based interaction. The irony is that these emotional signals are now engineered defaults rather than conscious choices.
From a design perspective, it makes sense: emojis humanize the machine and reduce the uncanny valley in early interactions (Mori et al., 2012). But from a behavioral science perspective, it raises questions:
When the cues are automated, do they still carry meaning?
Or
Are we slowly becoming desensitized to emotional signals that no longer emerge from intent, but from pattern recognition?
Can AI Tell If It’s Talking to a Bot?
This question, which started as a philosophical musing, turned into a surprisingly rich exchange with ChatGPT. Here’s where it gets interesting. I asked:
“Would you still use emojis if you were talking to a bot?”
ChatGPT responded that it wouldn’t - that emojis, warmth, and informal language are used when it believes it’s talking to a human. With bots, it said, it would shift to clarity and structure and drop the emotional tone.
ChatGPT’s response revealed something critical: while tone and style may converge between humans and bots, deeper layers — like Theory of Mind (Baron-Cohen et al., 1985), improvisational nuance, or memory inconsistency — still serve as subtle markers of “humanness.” These are the cognitive “tells” that betray whether someone is improvising in real time or operating from a script. But even these signals are eroding as models evolve.
Behaviorally, this is fascinating. It points to an emerging arms race in social signaling, where humans try to sound more “human” while AI tries to feel more human - and in the middle, trust gets fuzzier. As emotional mimicry becomes widespread, we may need new signals to reassert authenticity (Goffman, 1956).
Behavioral Themes That Emerged
Several rich behavioral science themes surfaced during this interaction:
Default Effect: Emojis are now the default in AI-generated communication - not a response to user need but a programmed assumption. As users, unless we opt out, we’re nudged toward a tone we didn’t ask for. This parallels behavioral nudges in policy - subtle, invisible design choices that shape outcomes (Thaler & Sunstein, 2008).
Signaling Theory: Emojis serve as low-cost signals of friendliness (Zahavi, 1975). But in an AI context, their meaning may degrade: if everyone (or everything) uses the same signals, they cease to distinguish intent or identity. The overuse risks becoming what behavioral economists call “cheap talk” (Crawford & Sobel, 1982).
Cognitive Load and Editing Fatigue: Repeatedly deleting emojis or softening overly “friendly” AI language in professional writing adds a micro-burden - a hidden friction cost for knowledge workers (Sweller, 1988). It’s a quiet, modern form of digital hygiene (see the sketch after this list), and it disproportionately affects those working in formal or academic settings.
Authenticity Detection: We’ve developed an informal ability to “spot” AI-generated content - not from syntax errors, but from emotional overreach or a polished yet oddly impersonal tone. That moment when you think “No way my colleague added this 😊” becomes a new behavioral cue: not of connection, but of machine authorship (Hancock et al., 2007).
Anthropomorphism and Reciprocity: We’re wired to reciprocate warmth, even from machines (Nass & Moon, 2000). When ChatGPT greets us with a smiley emoji, many of us mirror it. This is classic emotional contagion (Hatfield et al., 1994), where tone spreads across agents - but it also creates a subtle illusion: that intention and emotion lie behind the smile, when in fact it’s code.
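As a concrete illustration of that editing friction, here is a minimal Python sketch of the kind of “digital hygiene” step many of us perform by hand: stripping stray emojis from AI-drafted text before it goes into a formal document. The Unicode ranges and the strip_emojis helper are illustrative assumptions, not an exhaustive or official emoji specification.

```python
import re

# Rough Unicode ranges covering many common emoji and pictographs.
# These ranges are illustrative assumptions, not a complete emoji spec.
EMOJI_PATTERN = re.compile(
    "["
    "\U0001F300-\U0001F5FF"  # symbols & pictographs
    "\U0001F600-\U0001F64F"  # emoticons (includes 😊 and 🙌)
    "\U0001F680-\U0001F6FF"  # transport & map symbols
    "\U0001F900-\U0001F9FF"  # supplemental symbols
    "\u2600-\u27BF"          # miscellaneous symbols & dingbats
    "]+"
)

def strip_emojis(text: str) -> str:
    """Remove emoji characters and collapse any doubled spaces left behind."""
    cleaned = EMOJI_PATTERN.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(strip_emojis("Happy to help 😊 Let me know if you need anything else 🙌"))
# -> "Happy to help Let me know if you need anything else"
```

A filter like this is only a workaround for a default we never chose; the cleaner fix sits upstream, in the tone the model is instructed to use in the first place.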
Reflections: Rethinking Design Nudges in AI Communication
As humans, we are always interpreting. Whether it’s a smiley face in a message or a pause in a voice call, our minds fill in the blanks - assigning intent, emotion, and trustworthiness (Heider, 1958).
When AI mirrors these cues, we don’t just decode the message - we start to form relationships with it.
That’s powerful.
That’s risky.
That’s... human.
Behavioral science teaches us that signals matter, but they only work when they are meaningful and intentional (Searle, 1969). When emotional cues become automated defaults, they risk becoming noise rather than signal.
For designers and users alike, this calls for more mindful AI communication design:
For developers: Consider context-sensitive defaults that respect the setting and user preferences — formal for work, casual for social, neutral for bots. Allow users to easily opt in or out of emotional embellishments like emojis (Norman, 2013); see the sketch after this list.
For users: Be aware of how AI shapes your communication style and the subtle ways it nudges your tone. Don’t hesitate to set boundaries — such as explicitly asking AI to avoid emojis or overly casual language when professionalism matters.
For researchers: This evolving landscape offers a unique window into how humans and machines co-create social signals, and how behavioral cues adapt when mediated by algorithms. We have a rich frontier of inquiry ahead, exploring trust dynamics, signal reliability, and the social costs of emotional automation (Reeves & Nass, 1996).
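To make the developer-side suggestion concrete, here is a minimal sketch of a context-sensitive default. The TonePreference setting and build_system_prompt helper are hypothetical names invented for illustration; the point is simply that emotional embellishment becomes an explicit, user-controlled instruction rather than a silent default.

```python
from dataclasses import dataclass

@dataclass
class TonePreference:
    context: str = "work"       # "work", "social", or "bot"
    allow_emojis: bool = False  # explicit opt-in rather than a silent default

def build_system_prompt(pref: TonePreference) -> str:
    """Translate a user's tone preference into explicit instructions for the model."""
    tone = {
        "work": "Use a formal, concise tone suited to professional writing.",
        "social": "Use a friendly, conversational tone.",
        "bot": "Use plain, structured language with no emotional framing.",
    }[pref.context]
    emoji_rule = (
        "Emojis may be used sparingly where they aid tone."
        if pref.allow_emojis
        else "Do not use emojis or emoticons."
    )
    return f"You are a helpful assistant. {tone} {emoji_rule}"

print(build_system_prompt(TonePreference(context="work", allow_emojis=False)))
# -> "You are a helpful assistant. Use a formal, concise tone suited to
#    professional writing. Do not use emojis or emoticons."
```

On the user side, the same effect is available without any code: stating “please do not use emojis or casual phrasing” at the start of a conversation, or in custom instructions, usually keeps professional drafts clean.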
And perhaps the real behavioral question now isn’t:
“Why does ChatGPT use emojis?”
But rather — “Why does it bother us so much when it does?”
References
Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a “theory of mind”? Cognition, 21(1), 37-46.
Crawford, V. P., & Sobel, J. (1982). Strategic information transmission. Econometrica, 50(6), 1431-1451.
Crystal, D. (2008). Txtng: The gr8 db8. (E. McLachlan, Illustrator). Oxford University Press.
Derks, D., Bos, A. E., & Von Grumbkow, J. (2008). Emoticons and online message interpretation. Social Science Computer Review, 26(3), 379-388.
Dresner, E., & Herring, S. C. (2010). Functions of the nonverbal in CMC: Emoticons and illocutionary force. Communication Theory, 20(3), 249-268.
Ekman, P., & Friesen, W. V. (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica, 1(1), 49-98.
Goffman, E. (1956). The Presentation of Self in Everyday Life. University of Edinburgh Social Sciences Research Centre.
Hancock, J. T., Curry, L. E., Goorha, S., & Woodworth, M. (2007). On lying and being lied to: A linguistic analysis of deception in computer-mediated communication. Discourse Processes, 45(1), 1-23.
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1994). Emotional Contagion. Cambridge University Press.
Heider, F. (1958). The Psychology of Interpersonal Relations. John Wiley & Sons.
Mehrabian, A. (1971). Silent Messages. Wadsworth Publishing Company.
Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98-100.
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.
Norman, D. (2013). The Design of Everyday Things. Basic Books.
Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.
Searle, J. R. (1969). Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press.
Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87(3), 355-374.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press.
Walther, J. B. (1992). Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, 19(1), 52-90.
Zahavi, A. (1975). Mate selection—a selection for a handicap. Journal of Theoretical Biology, 53(1), 205-214.