
(Image credit: fotograzia via Getty Images)
Researchers have developed a basic framework for the coming era of optical computing (using light instead of electrical current to power microchips) that could radically change how artificial intelligence (AI) systems are trained and run.
At the core of large language models (LLMs) and other deep-learning systems is a weighted data structure called a "tensor," which works something like a filing cabinet with sticky notes marking its most frequently used drawers.
When an AI system is trained to perform a task, such as recognizing an image or predicting a string of text, it organizes its data into these tensors. In today's AI systems, the speed at which hardware can process tensor data, or riffle through those filing cabinets, is a crucial performance bottleneck, placing a hard ceiling on how large a system can grow.
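To make that bottleneck concrete, here is a minimal NumPy sketch (the layer sizes are invented for illustration and are not taken from the paper) showing that a single neural-network layer boils down to one large tensor multiplication, and how quickly the arithmetic piles up:

```python
import numpy as np

# Toy illustration: one transformer-style projection is just a tensor
# (matrix-matrix) product. The dimensions are arbitrary, chosen only
# to show where the work goes.
batch_tokens = 512      # tokens processed at once
d_model = 1024          # model width
d_out = 4096            # projection width

activations = np.random.randn(batch_tokens, d_model)    # input tensor
weights = np.random.randn(d_model, d_out)               # learned tensor

output = activations @ weights   # the operation the hardware must accelerate

# Rough count of multiply-accumulate operations in this single layer:
macs = batch_tokens * d_model * d_out
print(f"{macs:,} multiply-accumulates for one projection")  # ~2.1 billion
```

Every layer of a large model repeats work on roughly this scale, so the hardware's tensor throughput, not the cleverness of the algorithm, tends to set the practical limit.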
Typically, light-based computers process tensors using multiple passes of laser arrays. These work a little like a scanner reading a product's barcode to learn what's inside, except that each package stands for a math problem. The computing power needed to crunch those numbers grows in step with the model's capabilities.
While light-based computing offers greater speed and lower energy consumption at smaller scales, most optical systems cannot run operations in parallel. Unlike graphics processing units (GPUs), which can be chained together to dramatically expand the available processing power, light-powered systems typically work sequentially. Because of that limitation, most developers have passed over optical computing in favor of the parallel-processing advantages of throwing more power at larger deployments.
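The difference is easy to see in a toy emulation. The sketch below is pure NumPy and does not model any actual optics; it simply counts how many laser passes a pass-per-column optical scheme would need for one matrix product versus a single-shot scheme:

```python
import numpy as np

# Conceptual comparison only (not the paper's optics): earlier optical
# schemes typically compute one matrix-vector product per laser pass, so
# a full matrix-matrix product A @ B needs as many passes as B has
# columns. A single-shot scheme aims to need only one pass.
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256))
B = rng.standard_normal((256, 256))

# Sequential, pass-per-column emulation of a matrix-vector optical core:
C_sequential = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
passes_sequential = B.shape[1]        # 256 separate laser passes

# Single-shot matrix-matrix emulation:
C_single_shot = A @ B
passes_single_shot = 1

assert np.allclose(C_sequential, C_single_shot)
print(passes_sequential, "passes vs", passes_single_shot, "pass")
```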
That scaling problem is why flagship systems from companies such as OpenAI, Anthropic, Google and xAI require vast numbers of GPUs working together for both training and inference.
The new architecture, called Parallel Optical Matrix-Matrix Multiplication (POMMM), could remove the roadblock holding optical computing back. Unlike previous optical approaches, it performs multiple tensor operations at once using a single laser pulse.
The result is a foundational hardware design for AI that could push any given system's tensor-processing speed past that of cutting-edge electronic hardware while lowering its energy consumption.
Next-generation optical computing and AI hardware
The paper, published November 14 in the journal Nature Photonics, details results from a proof-of-concept optical computing model alongside a series of comparisons with established optical and GPU processing methods.
The researchers used a specific arrangement of widely available optical hardware components, together with a novel encoding and processing scheme, to capture and interpret tensor data in a single pass of laser light.
They were able to encode digital data in the amplitude and phase of light waves, turning the data into physical properties of the optical field; those light waves then combine to carry out mathematical operations such as tensor or matrix multiplication.
Within this architecture, those optical operations require no extra energy to process, because they happen naturally as the light propagates. That removes the need for control or switching operations during execution, along with the power those operations would consume.
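A rough way to picture this is to treat each beam as a complex number whose amplitude and phase carry a value: modulators multiply those values, and coherent superposition adds them. The sketch below illustrates that arithmetic only and does not model the actual POMMM optics:

```python
import numpy as np

# Conceptual sketch: real values are carried as complex field amplitudes
# (a negative number becomes a pi phase shift). Passing through a
# modulator multiplies the values; coherent superposition (e.g. a lens
# focusing the beams onto one detector) adds the products.
a = np.array([0.8, -0.5, 0.3])       # one row of the input matrix
b = np.array([0.2, 0.9, -0.4])       # one column of the weight matrix

# Encode row 'a' in the complex amplitude of three beams.
field = np.abs(a) * np.exp(1j * np.pi * (a < 0))

# The second modulator multiplies each beam by the corresponding weight.
field *= np.abs(b) * np.exp(1j * np.pi * (b < 0))

# Coherent superposition of the beams sums the products: a dot product,
# obtained without any clocked control logic.
dot_optical = field.sum().real
print(dot_optical, np.dot(a, b))     # both ~= -0.41
```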
“This methodology has the potential to be implemented on virtually any optical setup,” said Zhipei Sun, who led the research and heads the photonics group at Aalto University, in a statement. “Looking forward, our intention is to directly incorporate this calculation system onto photonic chips, allowing light-driven processors to tackle intricate AI undertakings while consuming extremely little energy.”
Zhang, another of the researchers, expects the approach could be integrated into major AI platforms within three to five years.
An artificial general intelligence accelerator
The researchers have framed the work as a step toward artificial general intelligence (AGI), a hypothesized form of AI that would surpass human intelligence and master new fields on its own, without task-specific training data.
“This is poised to generate a novel wave of optical processing architecture, substantially quickening intricate AI assignments across varied sectors,” Zhang added in the statement.
While the paper does not explicitly mention AGI, it refers to general-purpose computation several times.
The idea that scaling up today's AI training methods is a viable path to AGI is such a widespread belief in some corners of the computing world that you can buy T-shirts declaring that "scaling is all you need."
Other researchers, including Meta's outgoing chief AI scientist Yann LeCun, disagree, arguing that LLMs, the current state-of-the-art AI design, will never reach AGI no matter how large they grow or how aggressively they are scaled.
With POMMM, the researchers believe they may hold a key piece of the hardware puzzle needed to clear one of the field's biggest hurdles, letting developers push far beyond the basic limits of today's standard systems.

Tristan Greene
Tristan is a U.S.-based journalist covering science and technology. His reporting focuses on artificial intelligence, theoretical physics and emerging technologies.
His work has appeared in outlets including Mother Jones, The Stack, The Next Web and Undark Magazine.
Before turning to journalism, Tristan spent a decade in the U.S. Navy as a programmer and engineer. Outside of writing, he enjoys gaming with his wife and reading military history.