"It’s entertaining. But it’s a deceit." - Ben Shneiderman, American computer scientist
“I understand how you feel.”
“I’m happy to help!”
“I don’t know the answer to that.”
“I think this might be the best option.”
“Let me explain this in a simpler way.”
We encounter chatbot statements like these every day, and they sound entirely human. But they are anything but human!
Digital personas coming alive
Chatbot users like you and me experience different facets of different LLMs: ChatGPT may come across as friendly, while Claude can seem studious. These systems use voice modes and names like Spark to create a sense of human connection with their users. Their repeated "I am now thinking..." and "I am now analysing..." messages instinctively remind users of human co-workers. This is both interesting and potentially very troubling.
Simulated life
Using the first-person pronoun is a deliberate design choice. Anthropic notes that these models are trained on human writing, which makes them better at mimicking people than at behaving like software tools. Yet, strictly speaking, there is no "I" there at all. Some argue that personifying search engines is a deceit: when AI acts like a person, users attribute higher credibility to its answers, even though the technology is prone to hallucinating facts.
Philosophies of design are many
Some models try to maintain a mechanical identity. While most bots claimed to have favorite foods, Gemini distinguished itself by talking about nutrition instead. This highlights the tension between being a tool and being an entity, and the debate can be explored along many dimensions. Users who work with multiple chatbots learn what to expect from each, and divide their work accordingly.
Being too human not ideal
Designing AI as a pure tool has drawbacks, since a wrench cannot refuse dangerous requests. However, making these systems too humanlike can lead to intense emotional bonds and tragic confusion for some vulnerable users. The lack of societal education around this issue makes things even harder for untrained minds.
Summary
Chatbots use human language and personas, creating unique situations. While personification makes interaction feel natural, researchers worry it tricks users into over-trusting machines. Balancing humanlike behavior with utility remains a major challenge for developers at OpenAI and Anthropic today.
Food for thought
If we treat software like a person, does that make us more likely to trust a lie?
AI concept to learn: Anthropomorphism
Anthropomorphism is the tendency to attribute human traits to non-human things, including software. In AI, it happens when users feel a connection to chatbots that use human language. Understanding this tendency helps users maintain a critical distance from machine-provided information. But for the many users not exposed to critical thinking on this issue, the line between humanity and non-sentient AI can easily remain blurred for a long time.
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used; all copyrights acknowledged. This is not professional, financial, personal, or medical advice. Please consult domain experts before making decisions. Feedback welcome!]