Solving variational inequality problems with linear constraints based on a novel recurrent neural network

Abstract

Variational inequalities with linear inequality constraints are widely used in constrained optimization and engineering problems. By extending a new recurrent neural network [14], this paper presents a recurrent neural network for solving variational inequalities with general linear constraints in real time. The proposed neural network has a one-layer projection structure and is amenable to parallel implementation. As special cases, the proposed neural network includes two existing recurrent neural networks for solving convex optimization problems and monotone variational inequality problems with box constraints, respectively. The proposed neural network is stable in the sense of Lyapunov and globally convergent to the solution under a monotonicity condition on the nonlinear mapping, without requiring the Lipschitz condition. Illustrative examples show that the proposed neural network is effective for solving this class of variational inequality problems. © Springer-Verlag Berlin Heidelberg 2007.
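The paper's network handles general linear constraints; as a rough illustration of the projection-dynamics idea it builds on, the sketch below simulates the classical one-layer projection dynamics dx/dt = P(x − F(x)) − x for the box-constrained special case, where P is the (closed-form) projection onto a box. All function names and parameters here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def project_box(x, lo, hi):
    # Closed-form projection onto the box [lo, hi] (the box-constrained
    # special case; general linear constraints need a different projection).
    return np.clip(x, lo, hi)

def solve_vi_projection(F, lo, hi, x0, step=0.01, iters=20000, tol=1e-8):
    """Euler-discretized projection dynamics dx/dt = P(x - F(x)) - x.

    A sketch only: the paper establishes continuous-time Lyapunov stability
    and global convergence under a monotonicity condition on F.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        dx = project_box(x - F(x), lo, hi) - x
        x = x + step * dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: with F(x) = x - b, the VI solution over the box is the
# projection of b onto the box.
b = np.array([2.0, -3.0, 0.5])
lo, hi = np.zeros(3), np.ones(3)
x_star = solve_vi_projection(lambda x: x - b, lo, hi, np.zeros(3))
# x_star is approximately [1.0, 0.0, 0.5]
```

For this monotone affine mapping the discretized trajectory contracts toward the equilibrium, which coincides with the VI solution; the paper's contribution is proving such convergence for general linear constraints without a Lipschitz assumption.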

CITATION STYLE

APA

Xia, Y., & Wang, J. (2007). Solving variational inequality problems with linear constraints based on a novel recurrent neural network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4493 LNCS, pp. 95–104). Springer Verlag. https://doi.org/10.1007/978-3-540-72395-0_13
