“As our models get more powerful, they also get more expensive to understand.” — Geoffrey Hinton, pioneer of deep learning and Turing Award laureate
The unexpected rise in AI expenses
Artificial intelligence was once expected to get cheaper as it advanced, but the opposite has happened. Developers who use AI for software analysis, document review, or coding are finding their bills higher than ever. Despite cheaper processing power, today’s smarter models consume far more tokens to perform complex reasoning.
Smarter, but hungrier machines
The drop in token cost is offset by models’ growing “thinking” needs. AI agents now cross-check data, write subroutines, and even self-verify before answering. These actions mean each task—whether summarizing a document or generating code—can use tens of thousands of tokens.
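To see why the math works against users, here is a rough back-of-the-envelope sketch in Python. The per-token prices, call counts, and token figures are illustrative assumptions for the example, not any vendor's actual rates.

```python
# Hypothetical cost estimate for one agentic task.
# Prices and token counts below are placeholders, not real vendor pricing.

def task_cost(prompt_tokens: int, completion_tokens: int,
              price_in_per_million: float, price_out_per_million: float) -> float:
    """Estimate the dollar cost of a single model call."""
    return (prompt_tokens / 1_000_000) * price_in_per_million \
         + (completion_tokens / 1_000_000) * price_out_per_million

# An agent that cross-checks data and self-verifies may make several calls
# per task; assume 5 calls of ~10,000 tokens in and ~2,000 tokens out each.
calls_per_task = 5
cost = sum(task_cost(10_000, 2_000, 1.0, 4.0) for _ in range(calls_per_task))
print(f"Estimated cost per task: ${cost:.3f}")
```

Even at fractions of a cent per call, multi-step agents that re-read context on every step multiply that figure quickly across thousands of daily tasks.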
Business models under pressure
Companies like Notion, Cursor, and Replit are adjusting pricing as AI usage surges. Their customers, often developers relying on code-generating tools, now face bills that burn through monthly credits in days. For AI firms, balancing user growth with profit margins has become an economic tightrope.
The giants and the cost spiral
Even major players such as OpenAI, Google, and Anthropic are spending billions annually to train and run advanced models like GPT-5. Although smaller, cheaper versions like GPT-5 Nano exist, customers still favor full-featured systems for better performance—driving up infrastructure demands.
Searching for balance
Experts suggest “dumber” AI could be part of the answer—simpler models for lighter tasks and powerful ones for deeper reasoning. Until that balance is achieved, the cost of intelligence will keep climbing.
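To make that idea concrete, here is a minimal model-routing sketch in Python. The model names, the difficulty heuristic, and the threshold are all hypothetical assumptions, not any company's actual approach.

```python
# Toy "model routing": send lightweight requests to a cheap model and
# reserve the expensive model for harder ones. Names and heuristics are
# illustrative only.

def classify_difficulty(prompt: str) -> float:
    """Toy heuristic: long prompts or ones hinting at heavy work score higher."""
    score = min(len(prompt) / 2000, 1.0)
    if any(k in prompt.lower() for k in ("prove", "refactor", "debug", "analyze")):
        score += 0.5
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.5) -> str:
    """Pick a hypothetical small or large model based on estimated difficulty."""
    if classify_difficulty(prompt) >= threshold:
        return "large-reasoning-model"
    return "small-fast-model"

print(route("Summarize this memo in two sentences."))        # small-fast-model
print(route("Debug and refactor this 500-line module."))     # large-reasoning-model
```

The hard part, in practice, is building a router cheap and accurate enough that misrouted tasks do not wipe out the savings.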
