
The system recorded spike trains from the neurons across a 26,400-electrode array with a 17.5-micrometer pitch, filtered them into continuous signals, and decoded an output through a linear readout layer. That output was then fed back to the neurons as electrical stimulation, completing a feedback loop that cycled roughly every 333 milliseconds. The readout weights were optimized in real time using an algorithm called FORCE (First-Order Reduced and Controlled Error) learning, which continuously adjusted the decoder to minimize the error between the network's output and a target waveform.
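At its core, FORCE learning is recursive least squares (RLS) applied online to the linear readout weights. The sketch below runs that loop against surrogate "activity" made of sinusoidal features plus noise; the feature construction, dimensions, and target choice are illustrative stand-ins, not the study's actual recordings, but the weight and inverse-correlation updates are the standard RLS/FORCE form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 128 readout channels, one decoded output.
n_units, n_steps, dt = 128, 3000, 0.333   # dt matches the ~333 ms loop period
t = np.arange(n_steps) * dt

# Target: a sine wave with a 10-second period, one of the study's waveforms.
target = np.sin(2 * np.pi * t / 10.0)

# Surrogate "neural activity": sinusoidal features plus noise, standing in
# for the filtered spike signals the real system records (an assumption here).
freqs = np.linspace(0.05, 0.5, n_units // 2)
r = np.concatenate([np.sin(2 * np.pi * freqs * t[:, None]),
                    np.cos(2 * np.pi * freqs * t[:, None])], axis=1)
r += 0.1 * rng.standard_normal(r.shape)

# FORCE learning: recursive least squares on the readout weights.
w = np.zeros(n_units)
P = np.eye(n_units)            # running estimate of the inverse input correlation
sq_err = np.empty(n_steps)
for k in range(n_steps):
    x = r[k]
    e = w @ x - target[k]      # error measured before this step's update
    Px = P @ x
    gain = Px / (1.0 + x @ Px)
    P -= np.outer(gain, Px)    # rank-1 RLS update of the inverse correlation
    w -= e * gain              # nudge weights against the current error
    sq_err[k] = e * e
```

Because the update is applied every cycle, the decoded output tracks the target while the weights are still settling, which is the defining property of FORCE: the error stays small from early in training rather than only after convergence.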
The enabling technology, per the researchers, was the use of PDMS microfluidic films to constrain how the neurons connected. Without physical constraints, cultured neurons form dense, highly synchronized networks that fire in lockstep, and these homogeneous networks failed to learn any of the target signals.
Instead, the researchers confined neuronal cell bodies to 128 square wells, each roughly 100×100 micrometers, with each well holding an average of 14.6 neurons. The wells were linked by microchannels in two configurations: a lattice design with uniform nearest-neighbor connections, and a hierarchical design with sparser, multi-scale connections.
Both patterned configurations dramatically reduced pairwise neural correlations compared to unpatterned cultures (mean correlations of 0.11 for lattice and 0.12 for hierarchical networks, versus 0.45 for unpatterned ones), increasing the dimensionality of the network's dynamics. Lattice networks consistently outperformed hierarchical ones across all target waveforms, likely because their denser intermodular connections produced higher firing rates, giving the linear decoder more signal to work with.
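The link between lower pairwise correlation and higher dimensionality can be made concrete with synthetic data. The sketch below contrasts a "synchronized" population dominated by one shared mode (like the unpatterned cultures firing in lockstep) with a more decoupled one, scoring each by mean pairwise correlation and by the participation ratio, one common measure of effective dimensionality; all numbers here are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_pairwise_corr(rates):
    """Mean off-diagonal entry of the channel-by-channel correlation matrix."""
    c = np.corrcoef(rates)
    return c[~np.eye(c.shape[0], dtype=bool)].mean()

def participation_ratio(rates):
    """Effective dimensionality: (sum of eigenvalues)^2 / sum of squared eigenvalues."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(rates)), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

n_ch, n_t = 64, 2000
shared = rng.standard_normal(n_t)              # one global mode all channels share
noise = rng.standard_normal((n_ch, n_t))       # independent per-channel activity

synchronized = shared + 0.5 * noise            # lockstep firing, like unpatterned cultures
decoupled = 0.3 * shared + noise               # weak coupling, like patterned cultures

corr_sync = mean_pairwise_corr(synchronized)
corr_dec = mean_pairwise_corr(decoupled)
pr_sync = participation_ratio(synchronized)
pr_dec = participation_ratio(decoupled)
```

The decoupled population ends up with both a lower mean correlation and a much higher participation ratio: when channels carry independent information instead of echoing one global mode, the linear readout has more usable dimensions to draw on.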
Using the lattice and hierarchical networks, the system learned to generate sine waves with periods of 4, 10, and 30 seconds, as well as triangle and square waves, and the same culture preparation could be retrained to oscillate at different frequencies. The researchers also demonstrated that the system could approximate a Lorenz attractor, a three-dimensional chaotic trajectory, with pairwise correlations above 0.8 between predicted and target signals across all dimensions during the learning phase.
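The Lorenz attractor target is itself easy to reproduce: it is the trajectory of three coupled differential equations with the classic chaotic parameters. The sketch below integrates them with a simple Euler step and then scores a noisy mock "prediction" against the trajectory using per-dimension Pearson correlation, the same kind of metric the study reports; the step size, noise level, and initial condition are illustrative choices:

```python
import numpy as np

def lorenz(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system; returns an (n_steps, 3) trajectory."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for k in range(n_steps):
        dx = sigma * (y - x)          # classic Lorenz equations
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[k] = x, y, z
    return xyz

traj = lorenz(5000)

# Mock "prediction": the true trajectory plus noise, scored per dimension
# with Pearson correlation, mirroring the study's >0.8 correlation criterion.
rng = np.random.default_rng(2)
pred = traj + 2.0 * rng.standard_normal(traj.shape)
corrs = [np.corrcoef(traj[:, i], pred[:, i])[0, 1] for i in range(3)]
```

Because the attractor's excursions are large relative to this noise level, all three correlations land well above the 0.8 threshold; tracking a chaotic trajectory to that fidelity is a much harder test of the system than reproducing a periodic waveform.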
"This work shows that living neuronal networks are not only biologically meaningful systems but may also serve as novel computational resources," said Hideaki Yamamoto, a professor at Tohoku University's Research Institute of Electrical Communication, in a press release published on the institution’s website.