“The key to artificial intelligence has always been the representation.” – Jeff Hawkins, AI researcher and neuroscientist
AI deployment needs stronger guardrails
As enterprises rush to adopt artificial intelligence, the debate has shifted from whether to use AI to how to deploy it safely. Experts warn that without proper checks, AI can distort judgment, amplify biases, and expose vulnerabilities. With banks, corporations, and public systems increasingly dependent on algorithms, oversight is crucial to prevent systemic harm.
The dangers of blind trust in algorithms
AI models learn from vast datasets, but as Anushree Verma of Gartner cautions, if the data is biased or poorly engineered, the result is “garbage in, garbage out.” Relying on AI outputs without human judgment can produce distorted outcomes, particularly when systems are treated as infallible. Transparency and regular audits are essential to detect such flaws and ensure accountability.
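To make the audit idea concrete, the short Python sketch below shows one simple check auditors often start with: whether a model approves cases at very different rates across groups (a demographic-parity test). The data, group labels, and the 0.8 rule of thumb are illustrative assumptions, not details from the article.

from collections import defaultdict

def approval_rates(groups, predictions):
    """Return the share of positive (1) predictions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += p
    return {g: positives[g] / totals[g] for g in totals}

# Toy data: a protected-attribute group per applicant and the model's decision.
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
predictions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

rates = approval_rates(groups, predictions)
ratio = min(rates.values()) / max(rates.values())
print(rates)                                    # {'A': 0.75, 'B': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")   # 0.33, far below the common 0.8 rule of thumb

A gap this large would not prove discrimination on its own, but it is exactly the kind of flag a routine audit surfaces for human review.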
Cybersecurity and governance challenges
Huzaefa Motiwala from Palo Alto Networks points out that AI-driven automation is being exploited by cybercriminals through model manipulation and adversarial noise attacks, in which carefully crafted inputs push a system toward wrong decisions. Weak regulatory frameworks further increase these risks. A robust AI governance model must include clear policies, defined oversight, and regular risk assessments to align AI operations with ethical and legal standards.
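To illustrate what a noise attack can look like, the sketch below perturbs the input to a toy linear classifier in the spirit of the well-known fast gradient sign method. The model, its weights, and the perturbation size are hypothetical; real attacks target far more complex systems.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0])   # weights of an assumed trained model
b = 0.0
x = np.array([0.3, 0.1])    # a legitimate input, classified as positive

p = sigmoid(w @ x + b)
print(f"clean input:     p={p:.2f} -> class {int(p > 0.5)}")      # p=0.62 -> class 1

# The gradient of the log-loss (true label y=1) with respect to the input
# is (p - y) * w; stepping in the sign of that gradient raises the loss.
grad = (p - 1.0) * w
x_adv = x + 0.4 * np.sign(grad)   # small, bounded perturbation

p_adv = sigmoid(w @ x_adv + b)
print(f"perturbed input: p={p_adv:.2f} -> class {int(p_adv > 0.5)}")  # p=0.33 -> class 0

A change of at most 0.4 in each feature flips the decision, which is why governance frameworks call for adversarial testing and input monitoring alongside policy controls.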
Building trustworthy AI systems
Experts emphasize the concept of AI TRiSM (trust, risk, and security management) as the foundation for reliable AI governance. It demands transparency in data use, fairness in algorithmic design, and accountability in decision-making. Without these, public confidence and digital safety remain at stake.
India’s regulatory gap
India currently relies on the Information Technology (Amendment) Act, 2008, and has no AI-specific law. Policymakers are pushing for a framework that mandates transparency, audits, and oversight for AI platforms. Until such measures are implemented, the risks of algorithmic bias, inequality, and misuse remain high.
