
The system recorded spike trains from the neurons across a 26,400-electrode array with a 17.5-micrometer pitch, filtered them into continuous signals, and decoded an output through a linear readout layer. That output was then fed back to the neurons as electrical stimulation, completing a feedback loop that cycled roughly every 333 milliseconds. The readout weights were optimized in real time using an algorithm called FORCE (First-Order Reduced and Controlled Error) learning, which continuously adjusted the decoder to minimize the error between the network's output and a target waveform.
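The FORCE update at the heart of this loop is, in essence, recursive least squares (RLS) applied online to the readout weights. A minimal sketch of that idea follows; this is not the paper's code, the "neural activity" is a synthetic stand-in of slow oscillations, and all sizes and constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 50, 2000, 0.333          # channels, steps, ~333 ms loop period (assumed sizes)

t = np.arange(T) * dt
target = np.sin(2 * np.pi * t / 10.0)   # 10-second sine target, as in the article

# Stand-in for filtered neural signals: random slow oscillations (hypothetical).
freqs = rng.uniform(0.02, 0.3, N)
freqs[:2] = 0.1                      # make the target frequency representable (illustrative)
phases = rng.uniform(0.0, 2 * np.pi, N)
r = np.sin(2 * np.pi * freqs * t[:, None] + phases)

w = np.zeros(N)                      # linear readout weights
P = np.eye(N)                        # RLS inverse-correlation estimate
errors = []
for k in range(T):
    rk = r[k]
    z = w @ rk                       # decoded output (fed back as stimulation in the real system)
    e = z - target[k]                # error before the weight update
    Pr = P @ rk
    c = 1.0 / (1.0 + rk @ Pr)
    P -= c * np.outer(Pr, Pr)        # rank-1 RLS update of P
    w -= c * e * Pr                  # FORCE weight update
    errors.append(abs(e))
```

After enough loop iterations the decoded output tracks the target, which is why the error can be driven down even while the underlying network keeps producing its own dynamics.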
The enabling technology, per the researchers, was the use of PDMS microfluidic films to constrain how the neurons connected. Without physical constraints, cultured neurons form dense, highly synchronized networks that fire in lockstep, and these homogeneous networks failed to learn any of the target signals.
Instead, the researchers confined neuronal cell bodies to 128 square wells, each roughly 100×100 micrometers, with each well holding an average of 14.6 neurons. The wells were linked by microchannels in two configurations: a lattice design with uniform nearest-neighbor connections, and a hierarchical design with sparser, multi-scale connections.
Both patterned configurations dramatically reduced pairwise neural correlations relative to unpatterned cultures (0.11 for lattice and 0.12 for hierarchical, versus 0.45 unpatterned), increasing the dimensionality of the network's dynamics. Lattice networks consistently outperformed hierarchical ones across all target waveforms, likely because their denser intermodular connections produced higher firing rates that gave the linear decoder more signal to work with.
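To make the correlation-versus-dimensionality tradeoff concrete, mean pairwise correlation and an effective dimensionality (here the participation ratio, one common choice) can be computed from binned activity. A toy sketch on synthetic data, not the paper's recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

def summary(X):
    """X: (time_bins, units). Returns (mean pairwise correlation, participation ratio)."""
    C = np.corrcoef(X.T)
    iu = np.triu_indices_from(C, k=1)
    mean_corr = C[iu].mean()
    evals = np.linalg.eigvalsh(np.cov(X.T))
    pr = evals.sum() ** 2 / (evals ** 2).sum()   # participation-ratio dimensionality
    return mean_corr, pr

# "Unpatterned" toy data: one shared driver dominates, so units fire in lockstep.
shared = rng.standard_normal((5000, 1))
sync = shared + 0.5 * rng.standard_normal((5000, 40))

# "Patterned" toy data: mostly independent units with a weak shared component.
indep = 0.2 * shared + rng.standard_normal((5000, 40))

c_sync, d_sync = summary(sync)     # high correlation, low dimensionality
c_indep, d_indep = summary(indep)  # lower correlation, higher dimensionality
```

Decorrelating the units spreads variance across many principal components, which is exactly what gives a linear decoder more independent signals to combine.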
Using the lattice and hierarchical networks, the system learned to generate sine waves with periods of 4, 10, and 30 seconds, as well as triangle and square waves, and the same culture preparation could be retrained to oscillate at different frequencies. The researchers also demonstrated that the system could approximate a Lorenz attractor, a three-dimensional chaotic trajectory, with pairwise correlations above 0.8 between predicted and target signals across all dimensions during the learning phase.
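The Lorenz attractor used as a target above is a standard chaotic benchmark: three coupled differential equations whose trajectory never repeats but stays on a bounded butterfly-shaped set. A minimal Euler-integration sketch with the textbook parameters (the paper's exact settings are not stated in the article):

```python
import numpy as np

def lorenz(steps=10000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler; returns a (steps, 3) trajectory."""
    xyz = np.empty((steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for k in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x += dx * dt
        y += dy * dt
        z += dz * dt
        xyz[k] = (x, y, z)
    return xyz

traj = lorenz()   # chaotic but bounded 3D trajectory
```

Each of the three coordinates would serve as one target dimension for the readout, which is why the reported correlations are given per dimension.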
"This work shows that living neuronal networks are not only biologically meaningful systems but may also serve as novel computational resources," said Hideaki Yamamoto, a professor at Tohoku University's Research Institute of Electrical Communication, in a press release published on the institution’s website.