Smarter Training of Memristive Neural Networks

Neural networks consume massive amounts of power, and memristive implementations may offer a solution. But although memristors are much less power-hungry, they are also stochastic and—like most analog devices—less precise. How do we deal with that?

We have been collaborating with Imperial to make memristive neural networks

  • adapt to nonidealities
  • consume even less energy
  • be robust to uncertainty

As illustrated below, we did this by

  • redefining neural network node functions so that they take potential nonlinearity and stochasticity into account (first sketch below)
  • rethinking how neural network weights are implemented using memristor conductances, so that regularization could act as a way of further tuning power consumption (second sketch below)
  • computing validation error multiple times at checkpoints to take the stochastic nature of memristors into account (third sketch below)

Figure: Overview of our new techniques (taking nonlinearity and stochasticity into account, rethinking how weights are mapped onto conductances, and improving validation).
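
To make the first idea concrete, here is a minimal sketch (not the paper's exact model) of a layer whose weights are realized as memristor conductances with multiplicative lognormal programming noise, resampled on every forward pass so that training sees the device stochasticity. The differential-pair mapping, the conductance range, and the noise magnitude are all illustrative assumptions, and I-V nonlinearity is left out for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, W, g_min=1e-6, g_max=1e-4, noise_std=0.25):
    """Forward pass through a stochastic memristive layer (illustrative).

    Weights are mapped onto a differential conductance pair
    (G_plus, G_minus) within [g_min, g_max]; each conductance is then
    perturbed with multiplicative lognormal noise, resampled per call.
    """
    scale = (g_max - g_min) / np.max(np.abs(W))
    G_plus = g_min + scale * np.maximum(W, 0.0)   # encodes positive weights
    G_minus = g_min - scale * np.minimum(W, 0.0)  # encodes negative weights
    # Device stochasticity: conductances differ every time we "read" them.
    G_plus = G_plus * rng.lognormal(0.0, noise_std, G_plus.shape)
    G_minus = G_minus * rng.lognormal(0.0, noise_std, G_minus.shape)
    # Output of the differential pair, rescaled back to weight units.
    return x @ (G_plus - G_minus) / scale

x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 3))
print(noisy_forward(x, W))  # different on every call
```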
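
Second, a sketch of how regularization can double as a power knob: static power dissipated in a crossbar grows with total conductance, so penalizing the summed conductances that the weights map onto pushes training toward low-power configurations. The mapping function and the penalty weight `lam` are hypothetical, for illustration only.

```python
import numpy as np

def map_to_conductances(W, g_min=1e-6, g_max=1e-4):
    """Map weights onto a differential conductance pair (illustrative)."""
    scale = (g_max - g_min) / np.max(np.abs(W))
    G_plus = g_min + scale * np.maximum(W, 0.0)
    G_minus = g_min - scale * np.minimum(W, 0.0)
    return G_plus, G_minus

def power_penalty(W, lam=1e3):
    """Penalty proportional to total conductance, a rough proxy for power."""
    G_plus, G_minus = map_to_conductances(W)
    return lam * (G_plus.sum() + G_minus.sum())

W = np.random.default_rng(0).standard_normal((8, 3))
task_loss = 0.42  # stand-in for the usual task loss term
# Raising `lam` trades accuracy for lower power, and vice versa.
print(task_loss + power_penalty(W))
```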
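
Third, a sketch of stochasticity-aware validation: because each inference through a memristive network gives a slightly different answer, a single validation pass at a checkpoint is a noisy signal for model selection, so it helps to evaluate several times and aggregate. The median aggregation and the placeholder `stochastic_error` routine are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_error():
    """Stand-in for one noisy validation pass through the network."""
    return 0.05 + 0.02 * rng.standard_normal()

def checkpoint_error(n_repeats=20):
    """Aggregate several stochastic evaluations into one robust estimate."""
    errors = [stochastic_error() for _ in range(n_repeats)]
    return float(np.median(errors))

# A single pass could mislead checkpoint selection; the aggregate is stable.
print(checkpoint_error())
```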

The resulting paper just came out in Advanced Science, and the code can be found here.

I want to thank my coauthors for all their contributions: