Why AI Naming Tricks Might Be Misleading and Harmful
Many AI companies name their features after human mental processes, like “dreaming” or “memory.” These names make AI systems seem more humanlike than they are, and critics say this can lead to misunderstandings about what the tools can do and how much trust we should place in them.
The Problem with Human-Like Names for AI Features
Anthropic recently introduced a feature called “dreaming” for its AI agents, which analyzes past activity logs to find patterns and improve performance. The name instantly evokes human dreams and suggests the AI is doing something akin to subconscious processing, but that is not the case.
Other companies follow the same trend. OpenAI, for example, released a “reasoning” model that needs “thinking” time before responding, and some startups describe their AI’s “memories,” which store personal details about users. These labels blur the line between human cognition and machine processes, making the tools seem more sentient than they are and inviting confusion.
The Risks of Anthropomorphizing AI
Using human terms to describe AI can lead to overtrust and false assumptions. When people hear words like “virtue” or “wisdom” applied to their bots, they may believe these tools have moral judgment or deep understanding. In reality, AI systems operate on data and algorithms, not human qualities.
Research shows that human-like descriptions can distort moral judgments about AI: people may attribute responsibility or trustworthiness that the systems don’t actually possess. That, in turn, can lead to overreliance on these tools, which causes problems when users forget their actual limitations.
Some experts argue that calling AI processes “dreams” or “memories” risks creating a false sense of consciousness, leading us to think these tools are capable of human-like experiences when they are not. Recognizing the true nature of AI helps us use it responsibly and avoid misplaced trust.
Many industry leaders seem unaware of these limits, or unwilling to acknowledge them. They continue to adopt human-like names, possibly to make their products more appealing or relatable, but the choice can backfire if users start expecting more from AI than it can deliver. Clear, accurate language is key to understanding what these systems really are.
Inspired by
- https://www.wired.com/story/i-am-begging-ai-companies-to-stop-naming-features-after-human-processes/
What do you think?
We’d like to hear your opinion. Leave a comment.