Publications: Training neural networks by stochastic optimisation
{{PublicationSetupTemplate|Author=Antanas Verikas, Adas Gelzinis
|PID=286845
|Name=Verikas, Antanas (av) (0000-0003-2185-8973) (Högskolan i Halmstad (2804), Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE) (3905), Halmstad Embedded and Intelligent Systems Research (EIS) (3938));Gelzinis, Adas (Department of Applied Electronics, Kaunas University of Technology, Lithuania)
|Title=Training neural networks by stochastic optimisation
|PublicationType=Journal Paper
Property "Publisher" has a restricted application area and cannot be used as annotation property by a user. Property "Author" has a restricted application area and cannot be used as annotation property by a user. Property "Author" has a restricted application area and cannot be used as annotation property by a user.
| Title | Training neural networks by stochastic optimisation |
|---|---|
| Author | Antanas Verikas, Adas Gelzinis |
| Year | 2000 |
| PublicationType | Journal Paper |
| Journal | Neurocomputing |
| HostPublication | |
| Conference | |
| DOI | http://dx.doi.org/10.1016/S0925-2312(99)00123-X |
| Diva url | http://hh.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:286845 |
| Abstract | We present a stochastic learning algorithm for neural networks. The algorithm does not make any assumptions about the transfer functions of individual neurons and does not depend on a functional form of the performance measure. The algorithm adapts weights using a random step of varying size, with the average step size decreasing during learning. The large steps enable the algorithm to jump over local maxima/minima, while the small ones ensure convergence in a local area. We investigate the convergence properties of the proposed algorithm and test it on four supervised and unsupervised learning problems. We found the algorithm superior to several known algorithms when testing them on generated as well as real data. |
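
The abstract describes a derivative-free, random-step weight search with a shrinking average step size. The sketch below illustrates that general idea under stated assumptions: an accept-if-better update rule and a geometric decay of the step scale. The names (`random_step_train`, `step_scale`, `decay`, `n_iters`) and the toy loss are illustrative, not taken from the paper, whose exact update rule and step-size schedule may differ.

```python
# Minimal sketch of random-step weight search with a decaying average
# step size. Assumptions (not from the paper): accept-if-better rule,
# geometric decay of the step scale, Gaussian perturbations.
import numpy as np

def random_step_train(loss_fn, w0, n_iters=5000, step_scale=1.0,
                      decay=0.999, rng=None):
    """Minimise loss_fn(w) by perturbing weights with random steps
    whose average size shrinks over the course of learning."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w0, dtype=float).copy()
    best_loss = loss_fn(w)
    for _ in range(n_iters):
        # Random step of varying size: early large steps can jump
        # over local minima; later small steps refine locally.
        candidate = w + step_scale * rng.standard_normal(w.shape)
        cand_loss = loss_fn(candidate)
        if cand_loss < best_loss:      # keep only improving moves
            w, best_loss = candidate, cand_loss
        step_scale *= decay            # average step size decreases
    return w, best_loss

# Usage: fit a single sigmoid neuron (weights + bias) to the OR function.
if __name__ == "__main__":
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([0.0, 1.0, 1.0, 1.0])
    def loss(w):
        pred = 1.0 / (1.0 + np.exp(-(X @ w[:2] + w[2])))
        return float(np.mean((pred - y) ** 2))
    w, final_loss = random_step_train(loss, np.zeros(3))
    print(w, final_loss)
```

Note that the loop never evaluates a gradient, which is consistent with the abstract's claim that the method makes no assumptions about the transfer functions or the functional form of the performance measure.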