https://arxiv.org/pdf/1406.2572.pdf
Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
Abstract
A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance. This work extends the results of Pascanu et al. (2014).
Getting stuck in a poor local minimum would require every weight of the deep network to sit at a local minimum simultaneously, so this is unlikely to occur in high dimensions;
most critical points in high-dimensional space are instead saddle points.
Moreover, the minima found in practice empirically have error values at, or close to, the global minimum.
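Below is a minimal NumPy sketch of the saddle-free Newton (SFN) idea from the abstract: rescale the gradient by |H|⁻¹, where |H| is the Hessian with its eigenvalues replaced by their absolute values, so that negative-curvature directions are followed away from the saddle rather than toward it. The toy quadratic, step counts, and damping value are my own assumptions for illustration; the paper itself applies the idea approximately (in a low-dimensional subspace) to make it tractable for large networks.

```python
# Sketch of a saddle-free Newton step vs. gradient descent on a toy saddle.
# Assumed setup for illustration only; not the paper's full algorithm.
import numpy as np

def sfn_step(grad, hess, damping=1e-3):
    """Saddle-free Newton direction -|H|^{-1} g, computed in H's eigenbasis."""
    eigvals, eigvecs = np.linalg.eigh(hess)
    abs_eigvals = np.abs(eigvals) + damping      # |lambda_i|, damped for stability
    return -eigvecs @ (eigvecs.T @ grad / abs_eigvals)

# Toy surface: f(x, y) = 0.5*x^2 - 0.5*y^2 has a saddle point at the origin.
H = np.array([[1.0, 0.0],
              [0.0, -1.0]])
grad = lambda th: H @ th

theta_gd = np.array([1.0, 1e-4])   # start very close to the saddle plateau
theta_sf = theta_gd.copy()
for _ in range(25):
    theta_gd = theta_gd - 0.1 * grad(theta_gd)         # gradient descent crawls
    theta_sf = theta_sf + sfn_step(grad(theta_sf), H)  # SFN escapes quickly

print("gradient descent:  ", theta_gd)  # still hovering near the saddle
print("saddle-free Newton:", theta_sf)  # far away along the negative-curvature direction
```

On this toy surface a plain Newton step, -H⁻¹g, would jump straight to the saddle point itself, which is exactly the failure mode the |H| rescaling is meant to avoid.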