“The danger is not that machines will become smarter than us, but that we will rely on them without understanding their limits.” - Geoffrey Hinton, deep learning pioneer
The fundamental nature of AI
Pioneers and veterans of the Artificial Intelligence field accept that it was never built to tell the bare truth, but to ingest content or data and make sense of it in whatever way the human user wanted. The field's founders in the 1950s hoped AI might eventually 'mimic' human reasoning. Today we realise that these impressive tools are excellent at processing human languages without truly experiencing their colours or understanding them deeply. Anyone expecting 'truthfulness' is therefore often dismayed. That was never the purpose!
The predictability game of AI
Modern generative AI systems - most built on machine learning - produce output by predicting the next most plausible word, not necessarily the correct one. The results can be very impressive, and our anthropomorphic bias makes them seem even more so. But are they reliably truthful? That is not certain at all. Are they consistent, giving the same output every time? Not at all. These are key limitations of today's AI systems.
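A minimal sketch of this idea in Python, with invented numbers: imagine a language model scoring a few candidate next words for the prompt "The capital of Australia is" and greedily picking the most plausible one. The scores and words are purely illustrative assumptions, not output from any real model; the point is only that "most plausible" and "correct" are different things.

```python
import math

# Hypothetical scores a model might assign to candidate next words
# for the prompt "The capital of Australia is" (numbers are invented).
logits = {"Sydney": 3.1, "Canberra": 2.7, "Melbourne": 1.4}

# Convert raw scores into probabilities with a softmax.
total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

# Greedy decoding: pick the single most probable next word.
prediction = max(probs, key=probs.get)

for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {p:.2f}")
print("Predicted next word:", prediction)  # "Sydney" - plausible, but wrong
```

In this toy setup the model confidently prints "Sydney" because it scored highest, which illustrates why plausibility alone is no guarantee of truth.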
The human angle of AI
Anthropomorphism leads to odd outcomes. Machines that sound confident can pass easily with human examiners, whether or not they are right. As Generative AI and other tools spread through society, there is a real risk that many people will simply stop thinking for themselves and depend entirely on AI for their learning and work. That is not a desirable outcome. With AI becoming integral to daily life, societies now have to decide what responsibilities, if any, machines should be given. Should AI act on our behalf, as agents are meant to? The very freedom of humanity could be at stake.
Summary
Thinkers stress that AI is built to make sense of data, not to be truthful, and that humans must set its limits through their own thoughtfulness and careful handling. We cannot give AI a free run over core human decisions and then sit back and watch it all unfold. Or unravel. Concerns over the rise of AGI (artificial general intelligence) and ASI (artificial superintelligence) stem from exactly this thought.
Food for thought
If a machine becomes more persuasive than accurate, how do we protect human judgment?
AI concept to learn: General Intelligence
General intelligence refers to AI systems that can handle a wide range of tasks rather than being limited to one domain. Such systems learn patterns from massive amounts of data and generate responses that appear intelligent. Beginners should understand that this intelligence is statistical, not conscious or human-like.
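To make "statistical, not conscious" concrete, here is a tiny, self-contained sketch of pattern learning: a bigram model that counts which word follows which in a toy corpus and then generates text by sampling from those counts. The corpus and every detail here are illustrative assumptions; real systems learn far richer patterns from far more data, but the principle is the same counting-and-predicting, with no understanding involved.

```python
import random
from collections import defaultdict, Counter

# A toy corpus (purely illustrative).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output can look fluent, yet the program knows nothing about cats, dogs, or mats; it only reproduces the statistics of what it has seen.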
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal, or medical advice. Please consult domain experts before making decisions. Feedback welcome!]
