"We need to think about how to steer AI. How do we transform today’s buggy and hackable AI systems into systems we can really trust and make sure they really do what we intend them to do?" - Yoshua Bengio, Turing Award winner and AI pioneer.
Old rules not working
Modern governance is struggling to keep pace with the swift evolution of technology, especially in the realm of AI. Relying on prescriptive, technology-specific laws is no longer effective because new systems quickly render them obsolete. Instead, legislation must shift toward a framework based on core principles that define the required outcomes, ensuring that safety, fairness, and accountability are built into the design, regardless of the technology used.
Data deluge and artificial scarcity
The assumption that data is scarce, upon which many older legal models were founded, is now outdated. We live in an age of data abundance, or a 'deluge,' with AI systems designed to thrive on massive datasets. However, new regulations, such as the EU's General Data Protection Regulation (GDPR) and India's Digital Personal Data Protection (DPDP) Act, place necessary restrictions on data use, creating, in effect, a form of 'artificial scarcity.'
Focusing law on outcomes, not process
The time is right to shift from process-driven law to principle-based law. If an AI system's performance hinges on the data it consumes, its governance must ensure that beneficial outcomes are delivered while societal risks are mitigated. This requires applying fundamental principles such as data minimization and retention limits, not as bureaucratic compliance steps, but as essential design features that prevent harm, discrimination, and unauthorized use.
A balancing act
India's privacy law serves as an example, attempting to strike a crucial balance. It acknowledges the enormous benefits AI can derive from data abundance while placing explicit responsibilities on those who process it. This design prevents a wholesale ban on beneficial data use. By mandating adherence to principles of fairness and ethical handling, the law attempts to ensure that technology is steered toward positive, trustworthy societal results.
Trust is all
The governance framework must ensure that when an algorithm is deployed, it consistently delivers a desirable and non-harmful outcome. This requires designers and practitioners to incorporate risk-mitigation measures directly into their systems, rather than simply paying lip service to rules. Only by making governance easy to implement and centered on establishing trust and mitigating risk can we fully harness the benefits of AI.
Summary
AI’s dependence on data abundance clashes with necessary privacy laws (such as India's DPDP Act) that create scarcity. To resolve this, governance must abandon rigid, technology-specific rules and instead adopt principle-based legislation focused on ethics, risk mitigation, and mandated outcomes, ensuring public trust without hindering innovation.
Food for thought
If AI performs best with unrestricted data, can we truly achieve a beneficial AI-powered future while upholding absolute individual privacy rights?
AI concept to learn: Principle-based regulation
Principle-based regulation is a legislative approach that focuses on mandating broad, high-level objectives and ethical duties, such as fairness and accountability, rather than providing prescriptive technical rules. This structure allows the law to remain relevant even as technology rapidly evolves, forcing developers to build systems that inherently achieve the desired ethical outcomes.
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal or medical advice. Please consult domain experts before making decisions. Feedback welcome!]