Contributed 7 in Workshop: 2nd On-Device Intelligence Workshop

QAPPA: Quantization-Aware Power, Performance, and Area Modeling of DNN Accelerators (Ahmet Inci, CMU)


Abstract:

As the machine learning and systems community strives to achieve higher energy-efficiency through custom DNN
accelerators and model compression techniques, there is a need for a design space exploration framework that
incorporates quantization-aware processing elements into the accelerator design space while having accurate and
fast power, performance, and area models. In this work, we present QAPPA, a highly parameterized quantization-aware
power, performance, and area modeling framework for DNN accelerators. Our framework can facilitate
future research on design space exploration of DNN accelerators across design choices such as bit
precision, processing element type, scratchpad sizes of processing elements, global buffer size, device bandwidth,
total number of processing elements in the design, and DNN workloads. Our results show that different bit
precisions and processing element types lead to significant differences in terms of performance per area and
energy. Specifically, our proposed lightweight processing elements achieve up to 4:9 more performance per area
and energy improvement when compared to INT16 based implementation.
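The kind of design space exploration the abstract describes can be illustrated with a toy sweep over the listed knobs (bit precision, number of processing elements, scratchpad and buffer sizes), ranking points by performance per area. This is a minimal sketch with made-up analytical cost models, not the QAPPA framework's actual API or models; all names and scaling constants below are assumptions for illustration only.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class DesignPoint:
    # Hypothetical accelerator design knobs, mirroring those in the abstract.
    bit_precision: int   # e.g. 4, 8, or 16 bits
    num_pes: int         # total number of processing elements
    scratchpad_kb: int   # per-PE scratchpad size
    buffer_kb: int       # global buffer size

def estimate_area_mm2(dp: DesignPoint) -> float:
    # Toy model: multiplier area grows roughly quadratically with bit width;
    # SRAM area grows linearly with total capacity. Constants are arbitrary.
    pe_area = 0.001 * (dp.bit_precision / 8) ** 2
    sram_area = 0.0005 * (dp.num_pes * dp.scratchpad_kb + dp.buffer_kb)
    return dp.num_pes * pe_area + sram_area

def estimate_throughput(dp: DesignPoint) -> float:
    # Toy model: one MAC per PE per cycle at an assumed 1 GHz clock.
    return dp.num_pes * 1e9  # MACs/second

def sweep(space):
    # Enumerate the Cartesian product of knob values and rank design
    # points by performance per area (MACs/s per mm^2).
    results = []
    for bits, pes, spad, buf in product(*space):
        dp = DesignPoint(bits, pes, spad, buf)
        ppa = estimate_throughput(dp) / estimate_area_mm2(dp)
        results.append((ppa, dp))
    return sorted(results, key=lambda t: t[0], reverse=True)

space = ([4, 8, 16], [256, 1024], [1, 4], [128, 512])
best_ppa, best_dp = sweep(space)[0]
```

Under these toy cost models, the sweep favors low-precision processing elements, qualitatively matching the abstract's finding that lightweight, lower-precision PEs improve performance per area; a real framework would replace the analytical stubs with calibrated power, performance, and area models.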