Nobel Prize-winning economist Daron Acemoglu remains unconvinced of artificial intelligence’s transformative potential, even as new threats emerge from AI-powered cyberattacks. Months before winning the 2024 Nobel Prize in economics, the MIT professor published a paper arguing that AI would deliver only modest productivity gains to the US economy and would not eliminate the need for human labor.
His skeptical stance has drawn criticism from Silicon Valley, where enthusiasm for AI continues to grow despite mixed evidence of the technology’s economic impact. Meanwhile, Google recently detected the first zero-day exploit created entirely by artificial intelligence, marking a dangerous milestone in cybersecurity.

Productivity Predictions Face Reality Check
Two years after Acemoglu’s cautious assessment, the data largely supports his position. While AI technology has advanced significantly, measurable productivity improvements remain limited across most sectors. The economist continues monitoring three specific areas in AI development, though he has not shifted his fundamental thesis about the technology’s economic potential.
His measured approach contrasts sharply with prevailing industry sentiment, where companies continue investing billions in AI infrastructure and capabilities. The disconnect between investment levels and productivity outcomes reflects broader questions about AI’s near-term economic value. Silicon Valley’s reaction to Acemoglu’s work demonstrates the tension between technological optimism and empirical evidence.
MIT Technology Review’s recent interview with Acemoglu explored whether recent AI developments have altered his perspective. The coverage does not spell out the three areas he is monitoring, but his overall skepticism persists despite technological progress. This position challenges widespread assumptions about AI’s immediate economic benefits.

Cybersecurity Enters Dangerous Territory
The discovery of AI-generated zero-day exploits represents a significant escalation in cyber warfare capabilities. Google’s security team identified and prevented what they described as a “mass exploitation event” created entirely through artificial intelligence systems. The attack demonstrated AI’s ability to discover previously unknown software vulnerabilities without human assistance.
This development coincides with OpenAI’s launch of a new cybersecurity tool designed to compete with Anthropic’s Claude Mythos system. OpenAI’s offering, called Mythos Daybreak, aims to patch software vulnerabilities before attackers can exploit them; the company promises “continuous software security” through AI-powered monitoring and response.

Industry Tensions and Legal Battles
Corporate drama continues to surround OpenAI’s leadership: co-founder Ilya Sutskever testified in the ongoing Musk versus Altman trial that he spent a full year documenting what he characterized as Sam Altman’s “pattern of lying.” His testimony adds complexity to the legal dispute while simultaneously supporting some aspects of OpenAI’s defense strategy.
Microsoft CEO Satya Nadella dismissed earlier attempts to remove Altman as “amateur city,” highlighting the corporate politics surrounding OpenAI’s leadership structure. The trial has exposed internal tensions within the AI company while raising questions about governance and transparency in the rapidly evolving sector. These revelations come as OpenAI opens access to its cybersecurity models more broadly than rival Anthropic does.
Meanwhile, President Trump’s upcoming trip to China includes tech industry leaders Elon Musk and Tim Cook, signaling continued government interest in AI development and regulation. The visit aims to promote American technology while potentially adopting elements of Beijing’s more restrictive regulatory approach. Investors have expressed preferences for reduced government interference in AI development from both American and Chinese authorities.
Stewart Brand’s new book “Maintenance: Of Everything, Part One” argues for greater recognition of maintenance work across technological systems. The counterculture icon and tech industry veteran frames maintenance as a “civilizational” act requiring more attention and resources. Yet critics note his vision focuses primarily on individual fulfillment rather than collective technological stewardship.

The contrast between AI’s promised benefits and current security vulnerabilities raises fundamental questions about deployment priorities. While economists like Acemoglu question productivity gains, cybersecurity experts confront AI systems capable of discovering and exploiting unknown software flaws. Can an industry struggling to measure economic value simultaneously defend against threats it helped create?