Research Papers

Neural Dynamics and Newton–Raphson Iteration for Nonlinear Optimization

Author and Article Information
Yunong Zhang

e-mail: zhynong@mail.sysu.edu.cn
School of Information Science
and Technology,
Sun Yat-sen University,
Guangzhou 510006, China

1Corresponding author.

Contributed by Design Engineering Division of ASME for publication in the JOURNAL OF COMPUTATIONAL AND NONLINEAR DYNAMICS. Manuscript received September 7, 2012; final manuscript received October 16, 2013; published online January 9, 2014. Assoc. Editor: Dan Negrut.

J. Comput. Nonlinear Dynam. 9(2), 021016 (Jan 09, 2014) (10 pages). Paper No: CND-12-1141; doi: 10.1115/1.4025748

In this paper, a special type of neural dynamics (ND) is generalized and investigated for time-varying and static scalar-valued nonlinear optimization. In addition, for comparative purposes, the gradient-based neural dynamics (also termed gradient dynamics (GD)) is studied for nonlinear optimization. Moreover, for possible digital hardware realization, discrete-time ND (DTND) models are developed. With a linear activation function and a step size of 1, the DTND model reduces to the Newton–Raphson iteration (NRI) for solving static nonlinear optimization problems; that is, the well-known NRI method can be viewed as a special case of the DTND model. Furthermore, a geometric representation of the ND models is given for time-varying nonlinear optimization. Numerical results demonstrate the efficacy and advantages of the proposed ND models for time-varying and static nonlinear optimization.
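To illustrate the special case noted above — the DTND model with a linear activation function and step size 1 reducing to Newton–Raphson iteration for static optimization — the following minimal Python sketch applies the NRI update x_{k+1} = x_k − f′(x_k)/f″(x_k). The test function f and all variable names here are illustrative assumptions, not taken from the paper.

```python
# Newton-Raphson iteration (NRI) for static scalar nonlinear optimization.
# As described in the abstract, this coincides with the DTND update when a
# linear activation function and a step size of 1 are used.
# The function f below is an illustrative choice, not one from the paper.

def newton_raphson_minimize(df, d2f, x0, tol=1e-10, max_iter=100):
    """Iterate x_{k+1} = x_k - f'(x_k)/f''(x_k) until |f'(x_k)| < tol."""
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:
            break
        x -= g / d2f(x)  # DTND update with linear activation, step size 1
    return x

# Example: f(x) = x**4 - 3*x**3 has a local minimum at x = 9/4 = 2.25.
df = lambda x: 4 * x**3 - 9 * x**2    # f'(x)
d2f = lambda x: 12 * x**2 - 18 * x    # f''(x)
x_star = newton_raphson_minimize(df, d2f, x0=3.0)
print(x_star)  # converges to 2.25
```

Starting sufficiently close to the minimizer, the iteration converges quadratically; the paper's DTND models generalize this update to time-varying problems and nonlinear activation functions.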

Copyright © 2014 by ASME
Topics: Optimization

Figures

Fig. 1

Trajectories of x*(t) and x(t) of CTND in Eq. (4) with different activation functions and γ-value to solve Eq. (19). (a) CTND with linear activation function. (b) CTND with power-sigmoid activation function.

Fig. 2

Residual errors |e(t)| of CTND in Eq. (4) with different activation functions and γ-value to solve Eq. (19). (a) CTND with linear activation function. (b) CTND with power-sigmoid activation function.

Fig. 3

Simulative results of GD in Eq. (6) by using different values of γ for online solution of Eq. (19). (a) Trajectories of x*(t) and x(t) of GD in Eq. (6). (b) Residual errors |e(t)| of GD in Eq. (6).

Fig. 4

Trajectories of x*(t) and x(t) and residual error |e(t)| of Eq. (20) for online solution of Eq. (19)

Fig. 5

Convergence performance of DTNDK in Eq. (8) with γ = 20 and τ = 0.01 for online solution of Eq. (19)

Fig. 6

Convergence performance of DTNDU in Eq. (11) with γ = 20 and τ = 0.01 for online solution of Eq. (19)

Fig. 7

MSSRE of DTNDK in Eq. (8) and DTNDU in Eq. (11) by using different values of γ and τ for online solution of Eq. (19). (a) With τ = 0.01 fixed. (b) With γτ = 0.2 fixed.

Fig. 8

Convergence performance of S-CTND in Eq. (15) with power-sigmoid activation function and different values of γ to solve Eq. (21)

Fig. 9

Convergence performances of S-DTND in Eq. (16) with power-sigmoid activation function and γτ = 0.2 and NRI in Eq. (18) for static optimization

Fig. 10

Geometric representation of ND models for time-varying nonlinear optimization (and time-varying nonlinear/linear equation solving). (a) The proposed method approaches the optimal solution x*(t). (b) Solutions with and without the rotating trend considered. (c) The change of the gradient g_x(x(t̂), t̂). (d) The gradient along the modified search direction.

Fig. 11

Analysis of the effect of different activation functions on the convergence performances of the presented ND models. (a) Activation functions ϕ(·). (b) Gradient along search direction.

Fig. 12

Convergence performance of the presented ND model in Eq. (13) by using γ = 1 and different activation functions
