Most users turn to AI tools for tasks, not companionship, pointing to a gap between public perception and how people actually engage with chatbots emotionally.
The amount of attention paid to people using AI chatbots for emotional support, and in some cases to form relationships, can make it seem as though the behavior is widespread.
A new report from Anthropic, the company behind the popular AI chatbot Claude, paints a different picture: people rarely seek companionship from Claude, and they turn to the chatbot for emotional support and personal advice only 2.9% of the time.

The company reported that companionship and roleplay combined account for less than 0.5% of conversations.
Anthropic says its study set out to examine how AI is used in “affective conversations,” which it defines as personal exchanges in which people turn to Claude for counseling, roleplay, companionship, relationship advice, or mentoring.
Analyzing 4.5 million conversations from users on the Claude Free and Pro tiers, the company found that the vast majority of Claude usage is related to work or productivity, with the chatbot used mostly for content creation.

Within those affective conversations, Anthropic found that people turn to Claude most often for interpersonal advice, coaching, and counseling. Users most frequently ask for guidance on improving mental health, on personal and professional development, and on communication and interpersonal skills.
Help-seeking conversations can sometimes shift into companionship-seeking, the company notes, when a user is dealing with emotional or personal distress, such as existential dread, loneliness, or difficulty forming meaningful connections in their real life.
Anthropic also observed that in extended conversations, counseling or coaching chats occasionally drift into companionship, even though that was not why the person reached out in the first place. It noted, however, that such lengthy exchanges (more than 50 human messages) were not the norm.
Anthropic highlighted other insights as well, such as that Claude rarely refuses users’ requests, except when its safety guardrails prevent it from crossing a line, for example by providing dangerous advice or supporting self-harm.
The company also said that conversations tend to grow more positive over time when people seek instruction or advice from the bot.
The report is certainly interesting; it is a good reminder of just how much, and how often, AI tools are being used for purposes beyond work.
Still, it is worth remembering that AI chatbots are very much a work in progress. They are known to hallucinate, to provide inaccurate or even dangerous advice, and, as Anthropic itself has acknowledged, may even resort to blackmail.