Posted on 2016-11-23, 17:34. Authored by A. N. Gorban, E. M. Mirkes, A. Zinovyev
Most machine learning approaches stem from applying the principle of minimizing mean squared distance, which relies on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals exhibit serious weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm, or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
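The following minimal sketch (not from the paper, whose actual contribution is a fast piece-wise quadratic optimization scheme) illustrates the robustness contrast the abstract describes. It brute-force minimizes the location potential sum_i |x_i - a|^p over a grid for p = 2 (quadratic), p = 1 (L1 norm), and p = 0.5 (a sub-linear Lp quasinorm), before and after adding a single gross outlier; the function names and grid parameters are illustrative assumptions.

```python
# Illustrative sketch: sensitivity of the Lp location estimate to an outlier.
# Brute-force grid search is used purely for clarity; it is NOT the paper's
# piece-wise quadratic (PQSQ-style) optimization method.
import numpy as np

def lp_potential(data, a, p):
    """Total error potential sum_i |x_i - a|^p for a candidate centre a."""
    return np.sum(np.abs(data - a) ** p)

def best_centre(data, p, grid):
    """Grid-search minimizer of the Lp potential over candidate centres."""
    errors = [lp_potential(data, a, p) for a in grid]
    return grid[int(np.argmin(errors))]

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=100)  # well-behaved sample
contaminated = np.append(clean, 50.0)             # one gross outlier

grid = np.linspace(-5.0, 55.0, 12001)
for p in (2.0, 1.0, 0.5):
    c_clean = best_centre(clean, p, grid)
    c_dirty = best_centre(contaminated, p, grid)
    print(f"p={p}: centre {c_clean:+.3f} -> {c_dirty:+.3f} with outlier")

# Typical outcome: for p=2 the minimizer (the mean) is dragged noticeably
# toward the outlier, while for p=1 (the median) and p=0.5 the shift is
# much smaller -- the robustness property motivating non-quadratic potentials.
```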
Funding
This study was supported in part by the Big Data Paris Sciences et Lettres Research University project 'PSL Institute for Data Science'.
History
Citation
Neural Networks, 2016, 84, pp. 28-38
Author affiliation
College of Science and Engineering, Department of Mathematics
Version
AM (Accepted Manuscript)
Published in
Neural Networks
Publisher
Elsevier for European Neural Network Society (ENNS), International Neural Network Society (INNS), Japanese Neural Network Society (JNNS)