Scientific publications of the Ivan Franko National University of Lviv

Visnyk of the Lviv University. Series Applied Mathematics and Computer Science

Additional information
Year 2003
Volume 6
Authors Hodych O., Shcherbyna Yu.
Name of the article Solving the nonlinear least squares problem using artificial neural networks
Abstract Distributed computing is one of the most powerful approaches currently used to perform calculations and to model physical processes. Many researchers are making an effort to port existing numerical methods, which were designed as single-threaded algorithms, to multi-threaded environments. The artificial neural network (ANN) approach is naturally suited to multi-threaded computation, since the reactions of unconnected neurons can be calculated independently. The scalability of an ANN depends on the software implementation: neurosimulators can be built to run on a standalone PC, a cluster of workstations, or a mainframe. In this article the authors describe how to apply ANNs to solve the nonlinear least squares problem. As is well known, numerous problems can be solved using the nonlinear least squares approach, for instance the problem of finding the curve that best fits experimental data or statistical samples. In practice, most best-fit problems are solved using the principle of least squares. This can be represented by formula (2), where f(x, β) is the modeling function and β is the vector of parameters of the best fit. The nonlinear least squares problem can thus be divided into two subproblems: building the modeling function and calculating the parameters β. Unfortunately, classical numerical methods answer only the second subproblem. Moreover, classical methods (such as Newton's method and its modifications) are not suited to multi-threaded environments and therefore suffer from the curse of dimensionality. Using ANNs can potentially resolve both subproblems. The structure of the ANN represents the modeling function, with the weights of the links between neurons playing the role of the parameters. By training the ANN, the weights are adjusted to bring the network as close as possible to the ideal model in the least squares sense. At the same time the calculations are distributed among the artificial neurons, so the dimension of the problem being solved has no impact given an appropriate multi-threaded environment. A feed-forward ANN with three layers is considered. The number of neurons in the input layer equals the dimension of the problem being solved. The hidden layer consists of as many neurons as are needed to represent the modeling function for the particular problem (note that the number of neurons can be modified during training). The output layer consists of a single neuron, which simply sums the reactions of the hidden-layer neurons. Any of the known supervised training algorithms can be used (error backpropagation or its modifications). Tests were performed on two problems [5, 6]; the results are shown in Table 1, and the training process of the ANN is illustrated in Figures 2-5.
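
To make the described architecture concrete, below is a minimal, hypothetical sketch (not the authors' neurosimulator) of a three-layer feed-forward network whose single output neuron simply sums the hidden-layer reactions, trained by error backpropagation to minimize the sum of squared errors. The hidden-layer size, tanh activation, learning rate, toy data, and the names train_ls_ann and predict are assumptions introduced purely for illustration.

import numpy as np

# Illustrative sketch of the approach described in the abstract (assumptions:
# tanh hidden units, fixed unit output weights, plain gradient descent).
def train_ls_ann(X, y, hidden=8, lr=0.02, epochs=8000, seed=0):
    """Fit y ~ f(X) in the least squares sense with a 3-layer feed-forward ANN."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]                                 # dimension of the problem
    W = rng.normal(scale=0.5, size=(hidden, n))    # input -> hidden weights
    b = np.zeros(hidden)                           # hidden biases
    for _ in range(epochs):
        Z = X @ W.T + b                            # hidden pre-activations
        H = np.tanh(Z)                             # hidden reactions
        y_hat = H.sum(axis=1)                      # output neuron: plain sum
        r = y_hat - y                              # residuals
        # Backpropagate the squared-error loss sum(r**2) to the hidden layer.
        dZ = (2.0 * r)[:, None] * (1.0 - H**2)     # dL/dZ
        W -= lr * (dZ.T @ X) / len(y)
        b -= lr * dZ.mean(axis=0)
    return W, b

def predict(W, b, X):
    return np.tanh(X @ W.T + b).sum(axis=1)

if __name__ == "__main__":
    # Toy curve-fitting problem: noisy samples of a 1-D nonlinear function.
    X = np.linspace(-2, 2, 80).reshape(-1, 1)
    y = np.sin(2 * X[:, 0]) + 0.05 * np.random.default_rng(1).normal(size=80)
    W, b = train_ls_ann(X, y)
    sse = np.sum((predict(W, b, X) - y) ** 2)
    print(f"sum of squared errors after training: {sse:.4f}")

In this sketch only the input-to-hidden weights and biases are trained, mirroring the abstract's description of an output neuron that merely sums the hidden reactions; each hidden neuron's reaction and gradient can be computed independently, which is what makes the approach amenable to multi-threaded execution.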
Language Ukrainian
PDF format Hodych O., Shcherbyna Yu. Solving the nonlinear least squares problem using artificial neural networks