Quarterly Publication

Document Type: Original Article


1 Department of Mathematics and Computer Sciences, Faculty of Mathematical Sciences, Lahijan Branch, Islamic Azad University, Lahijan, Iran.

2 Department of Applied Mathematics, Ayandegan Institute of Higher Education, Tonekabon, Iran.


In this paper we propose a new neurodynamic model with a recurrent learning process for solving the ill-conditioned generalized eigenvalue problem (GEP) Ax = λBx. Our method is based on recurrent neural networks with a customized energy function for finding the smallest (largest) eigenpair or all eigenpairs. We evaluate the method on structural engineering data from the Harwell-Boeing collection, which features a high-dimensional parameter space and ill-conditioned sparse matrices. The experiments demonstrate that our algorithm, using the Adam optimizer, works well in practice compared with other stochastic optimization methods such as gradient descent, improving both the computational complexity and the accuracy of convergence.
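The abstract does not spell out the customized energy function, so the following is only a minimal sketch of the general idea, under the common assumption that the energy for the smallest eigenpair is the generalized Rayleigh quotient R(x) = (xᵀAx)/(xᵀBx), minimized here with hand-rolled Adam updates (the function name and all hyperparameters are illustrative, not the paper's):

```python
import numpy as np

def smallest_gen_eigpair(A, B, steps=8000, lr=0.02):
    """Sketch: approximate the smallest eigenpair of A x = lambda B x
    (A, B symmetric, B positive definite) by minimizing the generalized
    Rayleigh quotient R(x) = x'Ax / x'Bx with Adam-style updates."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n)
    m, v = np.zeros(n), np.zeros(n)          # Adam moment estimates
    b1, b2, eps = 0.9, 0.999, 1e-8           # standard Adam constants
    for t in range(1, steps + 1):
        Ax, Bx = A @ x, B @ x
        xBx = x @ Bx
        lam = (x @ Ax) / xBx                 # current Rayleigh quotient
        g = 2.0 * (Ax - lam * Bx) / xBx      # gradient of the energy
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        mhat = m / (1 - b1 ** t)             # bias-corrected moments
        vhat = v / (1 - b2 ** t)
        x = x - lr * mhat / (np.sqrt(vhat) + eps)
        x /= np.linalg.norm(x)               # R is scale-invariant; renormalize
    lam = (x @ A @ x) / (x @ B @ x)
    return lam, x

# Tiny sanity check: A = diag(1, 2, 3), B = I has smallest eigenvalue 1.
A, B = np.diag([1.0, 2.0, 3.0]), np.eye(3)
lam, x = smallest_gen_eigpair(A, B)          # lam approximates 1.0
```

Maximizing the same quotient (or negating the energy) would instead target the largest eigenpair, which is how "smallest (largest)" variants typically differ only in the sign of the gradient step.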

