SMU

A TensorFlow implementation of "SMU: Smooth Activation Function for Deep Networks Using Smoothing Maximum Technique".

arXiv

https://arxiv.org/abs/2111.04682
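
For quick reference, the paper builds SMU by smoothing the maximum function with the erf function; applied to Leaky ReLU, i.e. max(x, αx), this gives the activation

$$f(x; \alpha, \mu) = \frac{(1+\alpha)x + (1-\alpha)x\,\operatorname{erf}\big(\mu(1-\alpha)x\big)}{2},$$

where α is the leakage parameter and μ is a (trainable) smoothing parameter.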

PyTorch implementation

Please check https://github.com/iFe1er/SMU_pytorch for the PyTorch implementation.

Requirements

Tested with TensorFlow 2.x. For TensorFlow 1, simply replacing tf.compat.v1.get_variable with tf.get_variable does the trick.
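
As a minimal sketch of how the activation can be written with tf.compat.v1.get_variable (in graph mode, or after tf.compat.v1.disable_eager_execution()): the function name smu and the default values for alpha and mu_init below are illustrative, not necessarily the repo's exact code.

```python
import tensorflow as tf

def smu(x, alpha=0.25, mu_init=1.0, name="smu"):
    """SMU activation: a smooth approximation of max(x, alpha * x).

    f(x) = ((1 + alpha) * x
            + (1 - alpha) * x * erf(mu * (1 - alpha) * x)) / 2

    mu is a trainable smoothing parameter; the defaults for alpha and
    mu_init here are illustrative.
    """
    with tf.compat.v1.variable_scope(name):
        # For TensorFlow 1, replace tf.compat.v1.get_variable
        # with tf.get_variable.
        mu = tf.compat.v1.get_variable(
            "mu",
            shape=(),
            dtype=x.dtype,
            initializer=tf.constant_initializer(mu_init),
        )
    return ((1 + alpha) * x
            + (1 - alpha) * x * tf.math.erf(mu * (1 - alpha) * x)) / 2
```

In eager TF2 code the trainable μ could instead be created with tf.Variable inside a Keras layer; the get_variable form above just mirrors the TF1-compatibility note.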

Reference:

@ARTICLE{2021arXiv211104682B,
       author = {{Biswas}, Koushik and {Kumar}, Sandeep and {Banerjee}, Shilpak and {Pandey}, Ashish Kumar},
        title = "{SMU: smooth activation function for deep networks using smoothing maximum technique}",
      journal = {arXiv e-prints},
     keywords = {Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Neural and Evolutionary Computing},
         year = 2021,
        month = nov,
          eid = {arXiv:2111.04682},
        pages = {arXiv:2111.04682},
archivePrefix = {arXiv},
       eprint = {2111.04682},
 primaryClass = {cs.LG},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv211104682B},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
