
Browsing by Author "A. Smorodin"

Now showing 1 - 3 of 3
  • Document
    THE USE OF CONTROL THEORY METHODS IN TRAINING NEURAL NETWORKS ON THE EXAMPLE OF TEETH RECOGNITION ON PANORAMIC X-RAY IMAGES
    (2021) A. Smorodin
    The article investigates a modification of stochastic gradient descent (SGD) based on the previously developed theory of stabilization of cycles in discrete dynamical systems. The relation between cycle stabilization in discrete dynamical systems and the search for extremum points allowed us to apply new control methods to accelerate gradient descent as it approaches local minima. Gradient descent is often used in training deep neural networks, alongside other iterative methods. Two gradient methods, SGD and Adam, were compared in a series of experiments. All experiments were conducted while solving a practical problem: teeth recognition on 2-D panoramic X-ray images. Network training showed that the new method outperforms SGD and, for the chosen parameters, approaches the capabilities of Adam, a state-of-the-art method. This demonstrates the practical utility of control theory in training deep neural networks and the possibility of extending its applicability to the creation of new algorithms in this important field.
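
The abstract states the idea (a feedback control term stabilizes the cycles that gradient descent can fall into near a minimum) but does not give the paper's actual update rule. The sketch below is a hypothetical, minimal illustration in NumPy: on an ill-conditioned quadratic, plain gradient descent at this learning rate locks into an exact 2-cycle in the stiff coordinate, while adding a simple velocity-feedback term u_t = K * (x_t - x_{t-1}) damps the cycle and converges. The gain K, the toy problem, and the control law itself are assumptions made for illustration (for this linear choice the term coincides with Polyak's heavy-ball momentum), not the authors' method.

import numpy as np

# Toy ill-conditioned quadratic f(x) = 0.5 * x^T A x; minimum at the origin.
A = np.diag([1.0, 25.0])

def grad(x):
    return A @ x

def gd(x0, lr=0.08, steps=200):
    # Plain gradient descent. At lr = 0.08 the stiff coordinate has
    # multiplier 1 - lr * 25 = -1, so the iterates lock into a 2-cycle.
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def controlled_gd(x0, lr=0.08, K=0.3, steps=200):
    # Hypothetical stand-in for the paper's control method: add a
    # velocity-feedback term K * (x_t - x_{t-1}) that damps the cycle.
    x_prev = x0.copy()
    x = x0 - lr * grad(x0)            # one plain step to start the recursion
    for _ in range(steps - 1):
        u = K * (x - x_prev)          # stabilizing feedback on the last move
        x_prev, x = x, x - lr * grad(x) + u
    return x

x0 = np.array([3.0, 2.0])
print("plain GD:     ", gd(x0))            # stiff coordinate keeps cycling at +/- 2
print("controlled GD:", controlled_gd(x0)) # both coordinates converge to ~0

Running this, the plain run ends with the stiff coordinate still oscillating at magnitude 2, while the controlled run drives both coordinates to numerical zero: the behaviour the abstract attributes to cycle stabilization near a minimum.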
