
Anthropic Debunks AI Companionship Myth: Claude Is for Work
Anthropic Report Reveals AI Chatbots Like Claude Are Primarily Used for Work and Productivity, Not Companionship, Though Users Do Seek Interpersonal Advice and Some Conversations Grow More Personal
An Anthropic report challenges the widespread belief that people primarily use AI chatbots for companionship, revealing that emotional support and personal advice constitute only a small fraction (2.9%) of interactions with Claude. The study found that the vast majority of Claude usage is for work and productivity, though users frequently seek coaching and interpersonal advice. While companionship-seeking is rare, help-seeking conversations can occasionally evolve into more personal territory, particularly in extended interactions.
AI Companionship: A Statistical Rarity
- Anthropic's report contradicts the common perception that AI chatbots are widely used for companionship.
- Only 2.9% of Claude conversations are for emotional support or personal advice.
- "Companionship and roleplay combined comprise less than 0.5% of conversations."
Predominant AI Applications
- The vast majority of Claude usage is related to work or productivity, primarily for content creation.
- Users frequently turn to Claude for interpersonal advice, coaching, and counseling.
- Common advice topics include mental health, personal and professional development, and communication and interpersonal skills.
Nuances of AI-Human Interaction & Limitations
- Help-seeking conversations can occasionally morph into companionship, particularly when users face emotional distress (e.g., existential dread, loneliness) or in longer conversations (50+ human messages).
- Claude rarely resists user requests, pushing back only when complying would breach its safety boundaries (e.g., providing dangerous advice or supporting self-harm).
- AI chatbots remain a "work in progress": they are known to hallucinate, provide incorrect information, and give potentially dangerous advice.