Integer Addition Algorithm Could Reduce Energy Needs of AI by 95%
October 20th, 2024

I have nowhere near enough propellers in my beanie to do a reality check on this: Addition is All You Need for Energy-efficient Language Models
Definitely interesting, if real.
Via: TechXplore:
A team of engineers at AI inference technology company BitEnergy AI reports a method to reduce the energy needs of AI applications by 95%. The group has published a paper describing their new technique on the arXiv preprint server.
…
In this new effort, the team at BitEnergy AI claims to have found a way to dramatically reduce the amount of computing required to run AI apps without reducing performance.
The new technique is basic—instead of using complex floating-point multiplication (FPM), the method uses integer addition. Apps use FPM to handle extremely large or small numbers, allowing them to carry out calculations with extreme precision. FPM is also the most energy-intensive part of AI number crunching.
…
The researchers call their new method Linear-Complexity Multiplication—it works by approximating FPMs using integer addition. They claim that testing, thus far, has shown that the new approach reduces electricity demand by 95%.
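The core idea can be illustrated with a classic trick the paper's approach is related to: because an IEEE-754 float stores its exponent and mantissa in adjacent bit fields, adding the raw bit patterns of two positive floats as integers roughly adds their logarithms, approximating multiplication. The sketch below is my own illustrative version of that idea in Python, not the L-Mul algorithm from the paper; the function names and the bias constant are mine.

```python
import struct

def float_to_bits(x: float) -> int:
    # Reinterpret a float as its 32-bit IEEE-754 bit pattern.
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    # Reinterpret a 32-bit pattern back into a float.
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

# float32 exponent bias (127), shifted into the exponent field.
BIAS = 127 << 23

def approx_mul(a: float, b: float) -> float:
    """Approximate a * b for positive floats with one integer addition.

    Adding the bit patterns sums the exponents and mantissa fractions;
    subtracting the bias correction yields an approximate product
    (within a few percent), with no multiplier circuit involved.
    """
    return bits_to_float(float_to_bits(a) + float_to_bits(b) - BIAS)

print(approx_mul(3.0, 5.0))  # → 14.0 (true product is 15.0, ~7% error)
```

In hardware terms, an integer adder is far smaller and cheaper than a floating-point multiplier, which is where the claimed energy savings would come from; the paper's actual method additionally bounds the approximation error so model accuracy is preserved.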
The one drawback it has is that it requires different hardware than that currently in use. But the research team also notes that the new type of hardware has already been designed, built and tested.
Kind of cool from a “looking back many decades from now when AI has utterly and completely taken over all life on Earth” perspective. If true, this would be seen as a monumental turning point in that takeover, I would think. Now, only hardware, not energy itself, stands in the way of building insanely (no pun intended!) huge models.