Power-Efficient Deep Neural Networks with Noisy Memristor Implementation

Abstract: This paper considers Deep Neural Network (DNN) linear-nonlinear computations implemented on memristor crossbar substrates. To address the case where true memristor conductance values may differ from their target values, it introduces a theoretical framework that characterizes the effect of conductance value variations on the final inference computation. Under only second-order moment assumptions, theoretical results are given for tracking the mean, variance, and covariance of the layer-by-layer noisy computations. By allowing the possibility of amplifying certain signals within the DNN, power consumption is characterized and then optimized via KKT conditions. Simulation results verify the accuracy of the proposed analysis and demonstrate the significant power efficiency gains that are possible via optimization for a target mean squared error.
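The layer-wise moment tracking described in the abstract can be illustrated with a minimal sketch. The noise model below (each programmed conductance deviating from its target by independent zero-mean noise with relative standard deviation sigma) is a hypothetical assumption for illustration, not necessarily the paper's exact model; it compares a second-order analytical prediction of the output mean and variance of a noisy crossbar multiply against a Monte Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noise model: a crossbar computes y = W x, but each
# conductance W_ij is perturbed as W_ij * (1 + sigma * n_ij), with
# n_ij i.i.d. zero-mean, unit-variance noise.
sigma = 0.05
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)

# Second-order predictions, using only the noise moments:
#   E[y_i]   = sum_j W_ij x_j
#   Var[y_i] = sigma^2 * sum_j (W_ij x_j)^2   (independent deviations)
mean_pred = W @ x
var_pred = sigma**2 * (W**2) @ (x**2)

# Monte Carlo check over many noisy crossbar instances.
trials = 50_000
noise = 1.0 + sigma * rng.standard_normal((trials, *W.shape))
noisy_y = (noise * W) @ x  # shape (trials, 4)

print("mean matches:", np.allclose(noisy_y.mean(axis=0), mean_pred, atol=1e-2))
print("var matches: ", np.allclose(noisy_y.var(axis=0), var_pred, rtol=0.05))
```

In a multi-layer network, the same mean/variance bookkeeping would be propagated through each linear stage and its nonlinearity, which is the layer-by-layer tracking the paper formalizes.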
Document type: Conference papers
Contributor: Elsa Dupraz
Submitted on: Tuesday, September 7, 2021 - 3:59:11 PM
Last modification on: Friday, August 5, 2022 - 2:54:52 PM
Long-term archiving on: Wednesday, December 8, 2021 - 7:59:27 PM
Elsa Dupraz, Lav R. Varshney, François Leduc-Primeau. Power-Efficient Deep Neural Networks with Noisy Memristor Implementation. ITW 2021: IEEE Information Theory Workshop, Oct 2021, Kanazawa, Japan. ⟨10.1109/ITW48936.2021.9611431⟩. ⟨hal-03337122⟩