Experts push back on framing AI as a co-worker, warning it misrepresents the role of machines in the workplace and fuels misleading narratives
Generative AI comes in many flavors, yet it is increasingly marketed the same way: with human names and personas that make it feel less like code and more like a colleague.
A growing number of companies are humanizing AI to fast-track trust and soften its threat to human jobs. The trend is gaining momentum, and it is dehumanizing.
I understand why this framing has caught on. In today’s upside-down economy, where every hire feels like a risk, enterprise startups, many of them fresh out of the famed accelerator Y Combinator, pitch AI as staff rather than software. What they are selling are replacements.
AI-powered assistants. AI engineers. AI employees. The language is deliberately designed to appeal to overwhelmed hiring managers.
Some do not even bother with subtlety. Atlog, for instance, recently introduced an “AI employee for furniture stores” that handles everything from marketing to payments.
The pitch is that one good manager can now run 20 stores at once. The implication: there is no need to hire more people; just configure the system and let it scale on its own. (What happens to the 19 managers it replaces is left unsaid.)
Consumer-facing companies are playing the same game. Anthropic named its platform “Claude” because it sounds like a warm, trustworthy companion rather than what it is: a faceless, disembodied neural network.
It is a strategy borrowed from the fintech playbook, where apps like Dave, Albert, and Charlie hid their transactional motives behind approachable names. It is easier to trust a friend with your money.
The same logic has crept into AI. Which would you rather do: share sensitive data with a machine learning model, or with your buddy Claude, who greets you warmly, remembers you, and rarely threatens you? (To its credit, OpenAI still reminds you that you are talking to a “generative pre-trained transformer.”)
But we are approaching a tipping point. I am genuinely excited about the potential of generative AI. Yet each new “AI employee” feels more dehumanizing than the last. Each new “Devin” makes me wonder when the real Devins of the world will push back against being reduced to job-displacing algorithms.
Generative AI is no longer merely an intriguing idea. Its reach is expanding even as its repercussions remain unclear. In mid-May, the number of Americans receiving continued unemployment benefits reached 1.9 million, its highest level since 2021, and many of those affected were laid-off tech workers. The signals are getting louder.
Some of us still remember the film 2001: A Space Odyssey. HAL, the ship’s computer, starts out as a calm, helpful assistant and ends up turning homicidal, cutting off the crew’s life support. It was science fiction, but it struck a nerve for a reason.
Last week, Dario Amodei, the CEO of Anthropic, predicted that AI could wipe out half of entry-level white-collar jobs within the next one to five years, potentially pushing unemployment as high as 20%. “Most of these employees are unaware that this is imminent,” he told Axios. “The concept is simply unbelievable, and it appears to be absurd.”

Cutting someone off from their income is not the same as cutting off their oxygen, but the analogy is not as far-fetched as it sounds. As layoffs mount, branding AI as a “colleague” will look less clever and more callous, and the cost of automating more people out of their livelihoods will only grow.
However it is packaged, the shift to generative AI is underway. But companies still get to choose how they characterize these tools. IBM never called its mainframes “digital co-workers.” PCs were not “software assistants”; they were workstations and productivity tools.
Language still matters. Tools should be built to empower people. Yet a growing number of companies are selling something else entirely, and that is a mistake.
We do not need more AI “employees.” We need software that makes humans more competitive, more creative, and more productive.
So enough talk of fake employees. Give us tools that help great managers run complex businesses and help individuals make a bigger impact. That is all anyone is really asking for.