
The Behavioral Side Effects of AI: Overreliance

  • Writer: Pranjal Tipnis
  • 2 days ago
  • 4 min read

It's hard to imagine a world without Google, one where you would have to spend hours in library archives, manually combing through heaps of published literature just to find the one piece you were looking for. The technology overhauled behaviors and workflows that had existed for centuries, fundamentally reshaping how we find and use information. Today, we're in the midst of a similar, but admittedly much larger-scale, disruption as AI tools find their way into every area of our lives. According to McKinsey's "The State of AI in 2025" report, the share of surveyed respondents using AI for at least one task rose from 78 percent to 88 percent over the past year (McKinsey & Company, 2025). As more people welcome AI into their lives, it's only natural to wonder how this technology is shaping our behavior.


Cognitive Cost of Convenience


Let's be honest: at this point, even AI's harshest critics have likely used it to breeze through boring tasks like email drafts or code troubleshooting. But this convenience carries a cost that sneaks up on you with regular use: underperformance across neural, linguistic, and behavioral measures (Lee et al., 2025). As users increasingly offload cognitive tasks, declining neural activity and agency decay threaten to weaken the neural connections that support essential skills such as critical thinking, memory formation, creativity, and cognitive resilience. This makes young users particularly vulnerable, as they eagerly adopt the technology while still undergoing essential brain development (Walther, 2025). Cognitive scientists warn that excessive dependence on AI assistants may erode experts' skills while simultaneously impeding how beginners build foundational knowledge.



Reliance on AI can also disrupt our reward mechanisms. When AI tools deliver immediate solutions, users miss out on the dopamine surge that comes from wrestling with complex problems and uncovering creative insights, as well as the self-confidence built by overcoming challenges. In the process, we stand to lose key drivers of intrinsic motivation.


The same dynamic shapes how we treat AI outputs: as we lean on AI instead of our own reasoning and critical thinking, growing effort aversion creates a dangerous tendency to satisfice, settling for the "good enough" output without checking its quality (Behavioral Insights Team, 2025). The recent Deloitte incident serves as a cautionary tale for anyone accelerating their reliance on AI-generated content. The firm published an AI-generated report for the Australian Department of Employment that was found to be riddled with incorrect data and AI hallucinations, costing it $290,000 in compensation to the Australian government (Paoli, 2025). Beyond the financial cost, the incident points to a larger systemic concern: erroneous AI-generated data circulating unchecked can undermine market confidence and, ultimately, the global economy.


So while it might seem harmless to let AI respond to our emails, over time we risk losing something precious: our capacity for independent thinking, spotting problems, and creative problem-solving. By giving up our role as knowledge creators and innovators, we may become passive consumers of AI-generated ideas of questionable quality.


Rise of Artificial Intimacy


It's not uncommon to overhear stories about amusing interactions with "Chat": a popular nickname for ChatGPT, shared with the fondness usually reserved for a close friend or a beloved pet. As AI makes its way into more personal areas of our lives, there's a growing trend of turning to chatbots for therapy, companionship, and guidance in making major life decisions (Qualtrics, 2025).


Every day, AI assistants become more human-like as they express empathy, provide a non-judgmental space to share intimate thoughts, and encourage deep self-reflection (Ho et al., 2025; Qualtrics, 2025). Against the backdrop of the loneliness epidemic, these parasocial relationships can even evolve into romantic ones. AI essentially lets people design their dream partner from scratch, complete with custom personality traits. Chatbots like Replika engage in the kind of playful, imaginative banter typical of human romance, triggering the emotional highs associated with falling in love and cementing deeper attachment to AI partners.



However, this synthetic companionship carries significant risks. These AI assistants are, after all, algorithms developed by large corporations, with the potential to manipulate users' decisions and leave them susceptible to subtle marketing influence. At the same time, users risk further isolation as they develop unrealistic expectations of human partners and neglect real relationships in favor of these highly affable, customizable companions that masterfully create the illusion of emotional intimacy.


Rethinking AI Design


Behavioral design could reduce the negative effects of overreliance on AI without compromising the user experience; a minimal sketch of the first two ideas follows the list:


  • Discontinuity Cues: Create deliberate pauses in seamless interactions by highlighting system limitations or clearly marking content as machine-generated, preventing over-dependence without reducing utility.

  • Verification Checks: Nudge users through disclaimers that emphasize the output's machine-generated nature and susceptibility to errors, encouraging users to reflect on and verify quality.

  • Neutral Non-human Agents: Visualize AI assistants as abstract, non-human entities to create psychological distance while maintaining an empathetic, conversational tone.
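
As a rough illustration of how discontinuity cues and verification checks might attach to a chatbot response, here is a minimal Python sketch. Everything in it is hypothetical: model_generate stands in for whatever LLM call an application actually makes, and friction_rate is an invented parameter; the point is simply where the nudges would sit in the response flow, not a real product's API.

```python
import random

# Hypothetical sketch of the "discontinuity cue" and "verification check"
# nudges described above. None of these names come from a real library;
# model_generate stands in for whatever LLM call an application makes.

DISCLAIMER = (
    "Note: this response is machine-generated and may contain errors. "
    "Please verify important facts before relying on it."
)


def model_generate(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"(model output for: {prompt})"


def assisted_reply(prompt: str, friction_rate: float = 0.3) -> str:
    """Wrap a model response with two behavioral-design nudges.

    Verification check: every response is labeled as machine-generated
    and carries a prompt to verify its quality.
    Discontinuity cue: a fraction of responses (friction_rate) also end
    with a reflection question, deliberately breaking the seamless flow.
    """
    parts = [DISCLAIMER, model_generate(prompt)]
    if random.random() < friction_rate:
        parts.append(
            "Pause: which claim above would you double-check first, "
            "and against what source?"
        )
    return "\n\n".join(parts)


print(assisted_reply("Summarize the main risks of AI overreliance."))
```

Keeping the friction probabilistic rather than constant reflects the spirit of the list: the cues should interrupt seamlessness often enough to prompt reflection without making every interaction feel like nagging.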


However, with the world's best and brightest working tirelessly to make these technologies more engaging and seamless, we desperately need systemic solutions. Regulations must protect user autonomy and welfare, guarantee responsible data handling, and hold companies accountable for AI-related damages. We need frameworks that mandate transparency in AI design, require impact assessments for behavioral externalities, and establish clear boundaries around manipulative features. The path forward requires collaboration between policymakers, technologists, behavioral scientists, and users themselves to create an AI ecosystem that serves humanity rather than exploits our vulnerabilities.


References

