

January 2, 2024 by Sarah Miller

MATIC

As a result of the increasing demand for deep neural network (DNN)-based services, efforts to develop dedicated hardware accelerators for DNNs are growing rapidly. However, while accelerators with high performance and efficiency on convolutional DNNs (Conv-DNNs) have been developed, less progress has been made with regard to fully-connected DNNs (FC-DNNs), which are inherently memory-bound.

In this work, we propose MATIC (Memory Adaptive Training with In-situ Canaries), a methodology that enables aggressive voltage scaling of accelerator weight memories to improve the energy efficiency of DNN accelerators. To enable accurate operation under voltage overscaling, MATIC combines the characteristics of destructive SRAM reads with the error resilience of neural networks in a memory-adaptive training process. Furthermore, voltage margins for process, voltage, and temperature (PVT) variation are eliminated by using bit-cells from the synaptic weights as in-situ canaries that track runtime environmental variation. Demonstrated on a low-power DNN accelerator fabricated in 65 nm CMOS, MATIC enables up to 60-80 mV of voltage overscaling (a 3.3× total energy reduction versus the nominal voltage), or an 18.6× reduction in application error.
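For readers who want a feel for the memory-adaptive training step, below is a minimal NumPy sketch. It assumes 8-bit weight words, a uniform per-bit error rate, and a toy fully-connected layer; none of these details come from the paper, and the actual methodology injects faults reflecting the measured behavior of the fabricated SRAM rather than this simplified random model.

```python
import numpy as np

BITS = 8          # assumed weight word width (the chip's exact format may differ)
BER = 1e-3        # assumed per-bit failure probability at the overscaled voltage
rng = np.random.default_rng(0)

def quantize(w, scale):
    """Map float weights to signed 8-bit integers, as stored in weight SRAM."""
    return np.clip(np.round(w / scale), -128, 127).astype(np.int64)

def inject_bit_errors(q, ber):
    """Flip each stored bit independently with probability `ber`, a stand-in
    for bit-cells that fail under voltage overscaling."""
    u = q & 0xFF                                          # two's-complement view
    flips = (rng.random(q.shape + (BITS,)) < ber).astype(np.int64)
    masks = (flips << np.arange(BITS)).sum(axis=-1)       # per-word XOR masks
    u = u ^ masks
    return np.where(u >= 128, u - 256, u)                 # back to signed

# Toy FC layer trained with faults in the loop: the float master weights are
# quantized, corrupted, and used for the forward pass, so training adapts the
# network to the error distribution the accelerator will actually see.
X = rng.normal(size=(256, 16))
y = X @ rng.normal(size=(16, 1))
W = rng.normal(scale=0.1, size=(16, 1))
for step in range(500):
    scale = max(np.abs(W).max(), 1e-8) / 127.0
    W_faulty = inject_bit_errors(quantize(W, scale), BER) * scale
    err = X @ W_faulty - y
    W -= 0.05 * (X.T @ err) / len(X)   # update the float master copy
```

Because gradients are computed against the corrupted weights while updates go to the float master copy, training settles into regions of weight space that stay accurate even when individual stored bits fail.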

Further details about this work can be found in the DATE paper and in the expanded TCAS-I paper.

Filed Under: Machine Learning, Research
