Energy use by artificial intelligence is rising rapidly. The massive data centers that perform the intensive computation needed to train large models consume enormous amounts of electricity: last year, data centers accounted for more than 4% of U.S. electricity use, putting real strain on electric grids, and that demand is projected to more than double by 2040.
Researchers at Cornell University, along with collaborators at IBM and Rensselaer Polytechnic Institute (RPI), are exploring ways to reduce AI's energy consumption. They are focusing on a technology called analog in-memory computing (AIMC).
AIMC departs from traditional architectures, which constantly shuttle data back and forth between memory and processors. Instead, it stores and processes data in the same place: calculations happen directly where the data sits, with no data movement, potentially cutting power consumption by a factor of as much as 1,000.
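To make that contrast concrete, here is a minimal Python sketch of the physics an AIMC crossbar exploits. The array sizes, voltages, and conductance values are hypothetical; the point is that once weights are stored as device conductances, applying input voltages yields output currents that are the matrix-vector product, with no separate fetch-compute-store cycle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crossbar: weights stored as device conductances (siemens).
# Driving the rows with input voltages produces, on each column, a current
# that sums the voltage-conductance products -- Ohm's law and Kirchhoff's
# current law compute i = G^T v in a single physical step.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # 4 rows x 3 columns of devices
v = rng.uniform(0.0, 0.2, size=4)          # input voltages applied to rows

i_out = G.T @ v   # column currents: the multiply-accumulate is the physics

print(i_out)
```

In a digital processor, the same product would require fetching every weight from memory before multiplying; in the crossbar, the data never moves, which is where the projected energy savings come from.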
While this could be a game changer, teaching analog chips to perform AI tasks has been a major challenge: analog hardware behaves imperfectly, introducing distortion and errors into computations. The team has developed an analog version of backpropagation, the standard AI training algorithm, that systematically corrects for these imperfections.
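The team's exact algorithm isn't reproduced here, but the core idea of training through imperfect hardware can be illustrated with a toy sketch: run the forward pass through a noisy analog device model and apply the resulting backpropagation updates, so that training itself absorbs the hardware's errors. The additive read-noise model, learning rate, and dimensions below are assumptions for illustration only, not the paper's device model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: learn W so that W @ x matches a target linear map, where every
# forward pass goes through a noisy "analog" matrix-vector product.
W_true = rng.normal(size=(2, 3))
X = rng.normal(size=(3, 256))
Y = W_true @ X

W = np.zeros((2, 3))
lr, noise_std = 0.05, 0.02

def analog_matmul(W, x):
    """Matrix-vector product as an imperfect analog device might compute it
    (additive zero-mean read noise on the stored weights -- an assumption)."""
    return (W + rng.normal(scale=noise_std, size=W.shape)) @ x

for _ in range(500):
    k = rng.integers(X.shape[1])
    x, y = X[:, k], Y[:, k]
    y_hat = analog_matmul(W, x)
    err = y_hat - y                 # backprop through the noisy linear map:
    W -= lr * np.outer(err, x)      # dL/dW = err x^T for L = 0.5 * ||err||^2

print(np.abs(W - W_true).max())    # residual error stays small despite noise
```

Because the injected noise here is zero-mean, plain stochastic gradient descent still converges on average; a real analog training algorithm must also correct systematic nonidealities, such as asymmetric or nonlinear device updates, which is where the harder work lies.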
There is still a long way to go before the approach can be applied to real AI systems, but it could prove transformative, making it possible to train and fine-tune large AI models with far less energy and at far lower cost. It could also open up applications that are currently out of reach because of the power required to run them.