Conference paper
Tunable Floating-Point for Artificial Neural Networks
Approximate computing has emerged as a promising approach to the energy-efficient design of digital systems in domains such as digital signal processing, robotics, and machine learning. Numerous studies report that employing different data formats in Deep Neural Networks (DNNs), the dominant machine learning approach, can yield substantial improvements in power efficiency while keeping the quality of results acceptable.
In this work, the application of Tunable Floating-Point (TFP) precision to DNNs is presented. In TFP, different precisions can be set for different operations by selecting a specific number of bits for the significand and the exponent of the floating-point representation. The flexibility to tune the precision of individual layers of the neural network may result in more power-efficient computation.
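To make the idea concrete, the following is a minimal Python sketch of how a value could be rounded to a TFP-style format with a chosen exponent width and significand width. The function `tfp_quantize`, its clamping behavior, and its rounding rules are illustrative assumptions only; they are not the encoding or hardware implementation described in the paper.

```python
import math

def tfp_quantize(x: float, exp_bits: int, sig_bits: int) -> float:
    """Round x to a hypothetical TFP-like format (sign bit implied).

    exp_bits  - number of exponent bits (used here only to clamp the exponent)
    sig_bits  - number of fractional significand bits kept after rounding

    This is an illustrative sketch: no subnormals, no overflow-to-infinity,
    and frexp's [0.5, 1) significand convention is used instead of IEEE's.
    """
    if x == 0.0 or math.isnan(x) or math.isinf(x):
        return x
    # Decompose x = mant * 2**exp with mant in [0.5, 1).
    mant, exp = math.frexp(x)
    # Crude clamp of the exponent to the range a biased exp_bits field could hold.
    bias = 2 ** (exp_bits - 1) - 1
    exp = max(min(exp, bias + 1), -bias)
    # Keep sig_bits fractional bits of the significand (round to nearest).
    scale = 2 ** sig_bits
    mant = round(mant * scale) / scale
    return math.ldexp(mant, exp)

# Example: the same value at two different significand widths.
v = 0.1234567
print(tfp_quantize(v, exp_bits=5, sig_bits=10))  # finer precision, ~0.12341
print(tfp_quantize(v, exp_bits=5, sig_bits=4))   # coarser precision, 0.125
```

In this spirit, a layer that tolerates coarser arithmetic would be assigned a narrower significand (and possibly a narrower exponent), while more sensitive layers keep wider fields.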
| Language | English |
| --- | --- |
| Publisher | IEEE |
| Year | 2018 |
| Pages | 289-292 |
| Proceedings | 2018 IEEE 25th International Conference on Electronics, Circuits and Systems |
| ISBN | 1538691167, 1538695626, 9781538691168, 9781538695623 |
| Types | Conference paper |
| DOI | 10.1109/ICECS.2018.8617900 |
| ORCIDs | Nannarelli, Alberto |
Keywords: Approximate computing, Artificial neural networks, DNNs, Decoding, Machine learning, Standards, TFP, Training, approximate computing, artificial neural networks, data formats, deep neural networks, digital systems, energy-efficient design, floating point arithmetic, floating-point representation, learning (artificial intelligence), machine learning approach, neural nets, neural networks, power aware computing, power efficiency, tunable floating-point precision