Faster than an H100 for solving 128×128 matrix equations. But it’s not clear to me how they tested this; the code is only available on request.
> We have described a high-precision and scalable analogue matrix equation solver. The solver involves low-precision matrix operations, which are suited well to RRAM-based computing. The matrix operations were implemented with a foundry-developed 40-nm 1T1R RRAM array with 3-bit resolution. Bit-slicing was used to guarantee the high precision. Scalability was addressed through the BlockAMC algorithm, which was experimentally demonstrated. A 16 × 16 matrix inversion problem was solved with the BlockAMC algorithm with 24-bit fixed-point precision. The analogue solver was also applied to the detection process in massive MIMO systems and showed identical BER performance within only three iterative cycles compared with digital counterparts for 128 × 8 systems with 256-QAM modulation.
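The precision trick in that abstract is bit-slicing: decompose a high-precision matrix into several low-precision slices that a 3-bit RRAM array can hold, run each slice through the analog hardware, and recombine the partial results digitally with power-of-two weights. A minimal NumPy sketch of the decomposition side (my own illustration, not the authors' code; 8 slices of 3 bits matches their 24-bit fixed-point figure):

```python
import numpy as np

def bit_slice(A_int, num_slices=8, bits_per_slice=3):
    """Split a non-negative integer matrix into low-precision slices.

    A_int is recovered as sum(slice_k * 2**(k * bits_per_slice)).
    """
    base = 1 << bits_per_slice          # 8 conductance levels for 3-bit cells
    slices = []
    residual = A_int.copy()
    for _ in range(num_slices):
        slices.append(residual % base)  # low 3 bits -> one RRAM slice
        residual //= base               # shift right by 3 bits
    return slices

def recombine(slices, bits_per_slice=3):
    """Digitally shift-and-add the per-slice results."""
    total = np.zeros_like(slices[0])
    for k, s in enumerate(slices):
        total += s << (k * bits_per_slice)
    return total

# Round-trip check on a random 16x16 matrix with 24-bit entries
A = np.random.randint(0, 1 << 24, size=(16, 16), dtype=np.int64)
assert np.array_equal(recombine(bit_slice(A)), A)
```

The analog array only ever sees 3-bit values; the full 24-bit precision is recovered in the digital shift-and-add step.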
The idea has always been appealing, but the implementation has remained challenging.
For over a decade, "Mythic AI" was making accelerator chips with analog multipliers based on research by Laura Fick and coworkers. They raised $165M and produced actual hardware, but at the end of 2022 they nearly went bankrupt, and very little has been heard from them since.
Much earlier, the legendary chip designers Federico Faggin and Carver Mead founded Synaptics with the idea of making neuromorphic chips that would be fast and power-efficient by harnessing analog computation. Carver Mead published a book on the subject in 1989, "Analog VLSI and Neural Systems", but making working chips turned out to be too hard, and Synaptics successfully pivoted to touchpads and later many other types of hardware.
Of course, the concept can be traced back even further, to Frank Rosenblatt's still more legendary "Perceptron" -- the original machine learning system from the 1950s. It implemented the weights of the neural network as variable resistors that were adjusted by little motors during training. Multiplication was simply input voltage times the conductance of the resistor, producing a current -- which is what all the newer systems are also trying to exploit.
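That trick is just Ohm's law plus Kirchhoff's current law: each cell contributes a current I = G·V, and the shared column wire sums the contributions for free, so a whole matrix-vector product happens in one physical step. A toy model of an ideal crossbar (all names and the noise term are mine, purely for illustration):

```python
import numpy as np

def crossbar_matvec(G, v, noise_std=0.0, rng=None):
    """Ideal analog crossbar: column current j = sum_i G[i, j] * v[i].

    G : conductance (weight) matrix, shape (n_rows, n_cols)
    v : input voltages applied to the rows, shape (n_rows,)
    Ohm's law gives the per-cell current G[i, j] * v[i]; Kirchhoff's
    current law sums each column on the shared wire.
    """
    currents = G.T @ v                   # the physics does this in one step
    if noise_std > 0.0:                  # crude stand-in for analog error
        if rng is None:
            rng = np.random.default_rng()
        currents += rng.normal(0.0, noise_std, size=currents.shape)
    return currents

# A 4x3 "network": output currents are weighted sums of the inputs
G = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.4, 1.0],
              [0.7, 0.0, 0.2]])
v = np.array([0.1, 0.2, 0.3, 0.4])       # input voltages
print(crossbar_matvec(G, v))             # noiseless case: exactly G.T @ v
```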
This looks like one of many ideas for more efficient compute chips for machine learning. I'm waiting for the day some such chip gets mass-produced and runs a large model at scale with sufficient reliability; until then, I don't think there's anything particularly newsworthy here. I do think it will happen eventually, maybe within a decade -- surely some alternative computing paradigm to the GPU will succeed at some point. The analog chip in the article seems to be only a research prototype for now.
If the signal path is all analog, why not use analog multiplier cells (operational amplifiers)?
Huge if true; room-temperature semiconductor if false.
Semi- or superconductor?
Now put it in a guitar pedal!
Seems a bit too good to be true.
What’s this good for?
Fear