O’Reilly Media – Why Multi-Agent Systems Need Memory Engineering

Most multi-agent AI systems fail quietly before they fail expensively. The pattern is familiar to anyone who’s debugged one: Agent A completes a subtask and moves on. Agent B, with no visibility into A’s work, reexecutes the same operation with slightly different parameters. Agent C receives inconsistent results from both and confabulates a reconciliation. The…

O’Reilly Media – Semantic Layers in the Wild: Lessons from Early Adopters

My first post made the case for what a semantic layer can bring to the modern enterprise: a single source of truth accessible to everyone who needs it—BI teams in Tableau and Power BI, Excel-loving analysts, application integrations via API, and the AI agents now proliferating across organizations—all pulling from the same governed, performant metric…

O’Reilly Media – How to Bet Against the Bitter Lesson

I’ve been telling myself and anyone who will listen that Agent Skills point toward a new kind of future AI + human knowledge economy. It’s not just Skills, of course. It’s also things like Jesse Vincent’s Superpowers and Anthropic’s recently introduced Plugins for Claude Cowork. If you haven’t encountered these yet, keep reading. It should…

O’Reilly Media – Why Capacity Planning Is Back

In a previous article, we outlined why GPUs have become the architectural control point for enterprise AI. When accelerator capacity becomes the governing constraint, the cloud’s most comforting assumption—that you can scale on demand without thinking too far ahead—stops being true. That shift has an immediate operational consequence: Capacity planning is back. Not the old…

Latest from MIT Tech Review – OpenAI’s “compromise” with the Pentagon is what Anthropic feared

On February 28, OpenAI announced it had reached a deal that will allow the US military to use its technologies in classified settings. CEO Sam Altman said the negotiations, which the company began pursuing only after the Pentagon’s public reprimand of Anthropic, were “definitely rushed.” In its announcements, OpenAI took great pains to say that…

Latest from MIT Tech Review – I checked out one of the biggest anti-AI protests ever

“Pull the plug! Pull the plug! Stop the slop! Stop the slop!” For a few hours this Saturday, February 28, I watched as a couple hundred anti-AI protesters marched through London’s King’s Cross tech hub, home to the UK headquarters of OpenAI, Meta, and Google DeepMind, chanting slogans and waving signs. The march was organized…

Latest from MIT: Featured video: Coding for underwater robotics

During a summer internship at MIT Lincoln Laboratory, Ivy Mahncke, an undergraduate studying robotics engineering at Olin College of Engineering, took a hands-on approach to testing algorithms for underwater navigation. She first discovered her love for working with underwater robotics as an intern at the Woods Hole Oceanographic Institution in 2024. Drawn by the…

Latest from MIT Tech Review – AI is rewiring how the world’s best Go players think

Burrowed in the alleys of Hongik-dong, a hushed residential neighborhood in eastern Seoul, is a faded stone-tiled building stamped “Korea Baduk Association,” the governing body for professional Go. The game is an ancient one, with sacred stature in South Korea. But inside the building, rooms once filled with the soft clatter of hands dipping into…

Latest from MIT Tech Review – Finding value with AI and Industry 5.0 transformation

For years, Industry 4.0 transformation has centered on the convergence of intelligent technologies like AI, cloud, the internet of things, robotics, and digital twins. Industry 5.0 marks a pivotal shift from integrating emerging technologies to orchestrating them at scale. With Industry 5.0, the purpose of this interconnected web of technologies is more nuanced: to augment…

Latest from MIT: New method could increase LLM training efficiency

Reasoning large language models (LLMs) are designed to solve complex problems by breaking them down into a series of smaller steps. These powerful models are particularly good at challenging tasks like advanced programming and multistep planning. But developing reasoning models demands an enormous amount of computation and energy due to inefficiencies in the training process….