
Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning

journal contribution
posted on 2016-11-23, 17:34 authored by A. N. Gorban, E. M. Mirkes, A. Zinovyev
Most machine learning approaches stem from the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals exhibit many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent machine learning applications exploit the properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to the quasinorms Lp (0 < p < 1).
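The abstract does not spell out the construction, but the idea of a piece-wise quadratic approximation of a non-quadratic error potential can be sketched in a few lines. The snippet below is an illustrative sketch only, not the authors' implementation: it assumes a majorant potential u (for example u(x) = |x| for the L1 case), a set of increasing thresholds r_0 = 0 < r_1 < ... < r_K, and quadratic pieces b_k + a_k x^2 chosen to agree with u at the thresholds, with a constant plateau beyond the last threshold. The function name pqsq_potential and the threshold values are hypothetical choices made for the example.

import numpy as np

def pqsq_potential(x, u, thresholds):
    # Illustrative piece-wise quadratic approximation of a potential u(|x|).
    # On each interval [r_k, r_{k+1}) the value is q_k(x) = b_k + a_k * x**2,
    # with a_k, b_k fixed so that q_k matches u at both interval ends;
    # beyond the last threshold the potential is a constant plateau.
    r = np.asarray(thresholds, dtype=float)   # assumed 0 = r_0 < r_1 < ... < r_K
    u_r = u(r)
    a = np.empty(len(r))
    b = np.empty(len(r))
    a[:-1] = (u_r[:-1] - u_r[1:]) / (r[:-1] ** 2 - r[1:] ** 2)
    b[:-1] = u_r[:-1] - a[:-1] * r[:-1] ** 2
    a[-1] = 0.0            # flat piece for |x| >= r_K
    b[-1] = u_r[-1]
    ax = np.abs(x)
    k = np.clip(np.searchsorted(r, ax, side='right') - 1, 0, len(r) - 1)
    return b[k] + a[k] * ax ** 2

# Example with hypothetical thresholds: approximate the L1 potential u(x) = |x|.
x = np.linspace(-3.0, 3.0, 7)
print(pqsq_potential(x, np.abs, thresholds=[0.0, 0.5, 1.0, 2.0]))

Because every piece is quadratic, minimizing such a potential can reuse the efficient quadratic optimization steps mentioned in the abstract while retaining the robustness of the sub-quadratic target function.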

Funding

This study was supported in part by Big Data Paris Science et Lettre Research University project ‘PSL Institute for Data Science’.

History

Citation

Neural Networks, 2016, 84, pp. 28-38

Author affiliation

Department of Mathematics, College of Science and Engineering

Version

  • AM (Accepted Manuscript)

Published in

Neural Networks

Publisher

Elsevier for European Neural Network Society (ENNS), International Neural Network Society (INNS), Japanese Neural Network Society (JNNS)

ISSN

0893-6080

eISSN

1879-2782

Acceptance date

2016-08-19

Available date

2017-08-30

Publisher version

http://www.sciencedirect.com/science/article/pii/S0893608016301113

Language

en
