In this paper, a special type of neural dynamics (ND) is generalized and investigated for time-varying and static scalar-valued nonlinear optimization. In addition, for comparison purposes, the gradient-based neural dynamics (also termed gradient dynamics (GD)) is studied for nonlinear optimization. Moreover, toward possible digital hardware realization, discrete-time ND (DTND) models are developed. When the linear activation function is used and the step size is set to 1, the DTND model reduces to the Newton–Raphson iteration (NRI) for solving static nonlinear optimization problems; that is, the well-known NRI method can be viewed as a special case of the DTND model. Furthermore, a geometric representation of the ND models is given for time-varying nonlinear optimization. Numerical results demonstrate the efficacy and advantages of the proposed ND models for time-varying and static nonlinear optimization.
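As background for the reduction claimed above, the classical NRI for static scalar nonlinear optimization finds a stationary point of an objective f by solving f'(x) = 0 via the update x_{k+1} = x_k - f'(x_k)/f''(x_k). The sketch below illustrates only this standard iteration, not the paper's DTND model itself; the function names and the example objective are illustrative assumptions.

```python
def newton_raphson_minimize(df, d2f, x0, tol=1e-10, max_iter=100):
    """Classical Newton-Raphson iteration for static scalar
    nonlinear optimization: seek a stationary point of f by
    iterating x_{k+1} = x_k - f'(x_k) / f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)   # Newton step on the first-order condition f'(x) = 0
        x -= step
        if abs(step) < tol:     # stop once the update is negligible
            break
    return x

# Illustrative static objective (an assumption, not from the paper):
# f(x) = x^4 - 3x^2 + x, with derivatives below.
df = lambda x: 4 * x**3 - 6 * x + 1
d2f = lambda x: 12 * x**2 - 6
x_star = newton_raphson_minimize(df, d2f, x0=1.0)
```

In the paper's framing, this iteration is recovered from the DTND model by choosing the linear activation function and a unit step size; other activation functions and step sizes yield the more general discrete-time models.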
Neural Dynamics and Newton–Raphson Iteration for Nonlinear Optimization
Contributed by Design Engineering Division of ASME for publication in the JOURNAL OF COMPUTATIONAL AND NONLINEAR DYNAMICS. Manuscript received September 7, 2012; final manuscript received October 16, 2013; published online January 9, 2014. Assoc. Editor: Dan Negrut.
Guo, D., and Zhang, Y. (January 9, 2014). "Neural Dynamics and Newton–Raphson Iteration for Nonlinear Optimization." ASME. J. Comput. Nonlinear Dynam. April 2014; 9(2): 021016. https://doi.org/10.1115/1.4025748