Low Power Hardware Accelerators for Deep Learning

There is growing interest in the design of special-purpose hardware accelerators for deep learning, the Google TPU being one example. In this research, we seek to enable a 2x or greater reduction in the power consumption of the TPU and TPU-like accelerators using an old hardware trick: voltage underscaling. We propose to run the chip at a reduced voltage, but at nominal frequency, to dramatically cut its power consumption. In return for lower power, voltage underscaling results in occasional errors in computation (referred to as timing errors). Motivated by techniques such as Dropout that are commonly used in training deep nets, we show that the TPU can simply detect and “drop” these erroneous computations with minimal loss in classification accuracy. We are thus able to cut power by more than 60% with only a 1% loss in accuracy for benchmark deep neural nets. Similar techniques can be used to enhance the resilience of TPU-like accelerators to permanent faults.
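To make the “detect and drop” idea concrete, the sketch below simulates a single layer's multiply-accumulate operations in which any partial product flagged with a (simulated) timing error contributes zero to the output sum, in the spirit of Dropout. This is only an illustrative model, not the published hardware design: the function name `mac_with_error_drop`, the `error_prob` parameter, and the random mask standing in for the chip's timing-error detector are all assumptions made for the example.

```python
# Minimal sketch (assumed names and parameters, not the authors' implementation)
# of dropping timing-erroneous MAC results instead of letting them corrupt the output.
import numpy as np

def mac_with_error_drop(weights, activations, error_prob=0.01, rng=None):
    """Matrix-vector product where each multiply-accumulate may be flagged
    as a timing error and dropped (i.e., contributes zero), akin to Dropout."""
    rng = np.random.default_rng() if rng is None else rng
    partial = weights * activations               # all partial products
    # Hypothetical error detector: a random mask stands in for the hardware's
    # timing-error detection circuitry under voltage underscaling.
    error_mask = rng.random(partial.shape) < error_prob
    partial = np.where(error_mask, 0.0, partial)  # drop erroneous MACs
    return partial.sum(axis=1)

# Usage with illustrative sizes: one fully connected layer, 256 inputs, 10 outputs.
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 256))
x = rng.standard_normal(256)
exact = W @ x
approx = mac_with_error_drop(W, x, error_prob=0.01, rng=rng)
print(np.max(np.abs(exact - approx)))  # small deviation when few MACs are dropped
```

Because each output sums hundreds of partial products, zeroing a small fraction of them perturbs the result only slightly, which is why classification accuracy degrades gracefully rather than catastrophically.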

Publications

CitationResearch AreasDate

Jeff Jun Zhang, Siddharth Garg, “FATE: fast and accurate timing error prediction framework for low power DNN accelerator design,” ICCAD 2018: 24

Lower Power ADCsNovember 5, 2018

Jeff Zhang, Kartheek Rangineni, Zahra Ghodsi, Siddharth Garg, “Thundervolt: enabling aggressive voltage underscaling and timing error resilience for energy efficient deep learning accelerators,” DAC 2018: 19:1-19:6

Lower Power ADCsJune 18, 2018

Jeff Jun Zhang, Tianyu Gu, Kanad Basu, Siddharth Garg, “Analyzing and mitigating the impact of permanent faults on a systolic array based neural network accelerator,” VTS 2018: 1-6

Lower Power ADCsApril 22, 2018