TL;DR

Scientists have developed a foundational architecture for optical computing—using light rather than electricity—that processes AI tensor calculations in a single laser burst. The breakthrough could enable AI systems to scale beyond current GPU limitations whilst dramatically reducing energy consumption.

The Bottleneck Problem

At the heart of modern AI lies the tensor: a multi-dimensional array of numbers that holds a model's weights and activations. When AI models are trained, they learn by repeatedly reading and updating these tensors, and the dominant operation is matrix multiplication. The speed at which hardware can process these tensor operations represents a fundamental performance bottleneck that limits how large and capable models can become.
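To see why this is a bottleneck, consider a single neural-network layer expressed in NumPy. The sizes below are hypothetical, chosen purely for illustration; real models chain thousands of such layers, each one a matrix-matrix product:

```python
import numpy as np

# A transformer-style layer is dominated by one matrix-matrix product:
# activations (batch x d_in) times a weight matrix (d_in x d_out).
# Sizes here are illustrative, not taken from any real model.
batch, d_in, d_out = 32, 512, 512
activations = np.random.rand(batch, d_in)
weights = np.random.rand(d_in, d_out)

# One layer's forward pass costs batch * d_in * d_out multiply-adds.
output = activations @ weights

print(output.shape)          # (32, 512)
print(batch * d_in * d_out)  # 8388608 multiply-adds for this one layer
```

Even at these modest sizes, a single layer needs millions of multiply-adds, which is why accelerating matrix multiplication itself is so valuable.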

Currently, the most powerful models from OpenAI, Anthropic, Google and xAI require thousands of GPUs running in tandem. Light-based computing offers speed and efficiency advantages, but most optical systems can’t run in parallel like GPUs—severely limiting their practical application.

POMMM: A New Approach

The new architecture, called Parallel Optical Matrix-Matrix Multiplication (POMMM), addresses this limitation by conducting multiple tensor operations simultaneously using a single laser burst. Published in Nature Photonics, the research details how scientists encoded digital data into light wave properties, allowing mathematical operations to occur passively as light propagates.
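The core idea can be sketched numerically. The simulation below is an illustrative analogy, not the paper's actual encoding scheme: entries of one matrix modulate a light field's amplitude, entries of the other act as a transmission mask, and a detector integrates the products, so every element of the matrix product emerges from one simulated "pass":

```python
import numpy as np

# Illustrative sketch only (not the published POMMM encoding): one
# matrix modulates the field, the other acts as a mask, and a detector
# sums the products, yielding the full product C = A @ B in one pass.
rng = np.random.default_rng(0)
A = rng.random((4, 3))
B = rng.random((3, 5))

# Broadcasting stands in for the parallel optical channels: every
# (row, column) pairing of amplitudes is formed simultaneously.
field = A[:, :, None] * B[None, :, :]   # shape (4, 3, 5)

# The detector integrates along the shared dimension, analogous to
# interference summing each optical mode's contribution.
C = field.sum(axis=1)

print(np.allclose(C, A @ B))  # True
```

The point of the analogy is that no sequential loop over matrix elements is needed: all the products exist in parallel, and the summation happens in a single integrating step, which is what lets a physical optical system compute the result as light propagates.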

“This approach can be implemented on almost any optical platform,” said Zhipei Sun, leader of Aalto University’s Photonics Group. “In the future, we plan to integrate this computational framework directly onto photonic chips, enabling light-based processors to perform complex AI tasks with extremely low power consumption.”

Energy Efficiency Gains

Crucially, these optical operations don’t require additional power during processing—they occur naturally as light moves through the system. This eliminates the power-hungry control and switching operations that make current AI infrastructure so energy-intensive.

The researchers estimate integration into major AI platforms could occur within three to five years.

AGI Implications

While the paper focuses on general-purpose computing, representatives described the work as a step towards next-generation artificial general intelligence. By removing one of the field’s largest scaling bottlenecks, developers could theoretically build models far beyond current paradigm limits.

Looking Forward

The research arrives as AI companies face mounting infrastructure constraints. If POMMM delivers on its promise, it could fundamentally reshape how AI systems are built—replacing banks of power-hungry GPUs with elegant photonic chips processing calculations at the speed of light.


Source: Live Science
