Visnyk of the Lviv University. Series Applied Mathematics and Computer Science
Hodych O., Shcherbyna Yu.
ADD Neural Network
In this paper we propose a new neural network topology and a new learning method that exploits its distinct features. The proposed topology is named the ADD neural network, where ADD stands for Average Dimensional Distance, the function that forms the basis of ADD learning; its analytical representation is given in formulae (3) and (4). The core structural element of the ADD neural network is the meta-neuron, a new model of the artificial neuron. Its main virtue is the ability to represent a hypercube region of the input space that defines the boundaries of the data represented by the meta-neuron. This overcomes a limitation of the classical SOM neuron model, which has only a weight vector pointing into a data cluster and cannot filter out input entries that are "close" but do not belong to the represented cluster. Here "close" is meant in the sense of the best matching unit (BMU), which for SOM is defined by formula (1).

The ADD learning algorithm is part of the life cycle of the ADD neural network: learning not only modifies meta-neurons but also dynamically changes the structure of the network itself by creating and removing meta-neurons. The winner-takes-all strategy underlies the modification of the BMU, although not in its pure form. During the adaptation stage, as presented in this paper, the ADD algorithm modifies the winning meta-neuron of the current iteration and checks an additional criterion to decide whether to keep the updated meta-neuron or to spawn an additional one. This criterion is based on the level of information saturation of the updated meta-neuron, as defined by the ADD function.
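The meta-neuron and BMU selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and function names are hypothetical, and only the two properties stated in the text are modelled — a SOM-style weight vector used for BMU selection (cf. formula (1)), plus hypercube bounds that filter out "close" inputs lying outside the represented region.

```python
import numpy as np

class MetaNeuron:
    """Illustrative meta-neuron: a SOM-style weight vector plus
    hypercube bounds delimiting the data region it represents."""

    def __init__(self, weights, lower, upper):
        self.w = np.asarray(weights, dtype=float)    # cluster centre (weight vector)
        self.lower = np.asarray(lower, dtype=float)  # hypercube lower corner
        self.upper = np.asarray(upper, dtype=float)  # hypercube upper corner

    def contains(self, x):
        """True if x falls inside the hypercube; this is what lets a
        meta-neuron reject inputs that are close to w but outside its region."""
        x = np.asarray(x, dtype=float)
        return bool(np.all((self.lower <= x) & (x <= self.upper)))

def best_matching_unit(x, neurons):
    """SOM-style BMU selection: the neuron whose weight vector is
    nearest to x in Euclidean distance (cf. formula (1))."""
    x = np.asarray(x, dtype=float)
    return min(neurons, key=lambda n: np.linalg.norm(x - n.w))
```

In the classical SOM only `best_matching_unit` exists; the `contains` check is the extra capability the meta-neuron adds.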
If the level increases, the updated meta-neuron is considered successfully modified; otherwise it is split, which spawns a new meta-neuron. This yields a modified winner-takes-all strategy. The ADD learning algorithm has two phases. The first is the adaptation phase, during which the ADD neural network learns topology and clustering information from the input space; it is characterized by growth in the number of meta-neurons. The second is the consolidation phase, designed specifically to reduce the number of meta-neurons spawned during adaptation. This reduction merges meta-neurons whenever the merged meta-neuron has a higher level of information saturation than the separate ones prior to the merge.

To demonstrate the clustering abilities of the proposed topology, four data sets of different complexity are presented and the application of the ADD neural network to them is examined. As can be seen from figures (5)-(12), the trained ADD neural network clusters the data successfully. It is important to note, however, that a more formal measure is required to assess the true performance of the ADD neural network. Future research will proceed in three main directions: development of measurement techniques that would allow the performance of the ADD neural network to be quantified and compared to that of SOM and SOM-based algorithms; development of a mechanism to clearly identify meta-neurons that represent the same data cluster; and additional tests on more complex data sets to evaluate the clustering and topology-representation features of the ADD neural network.
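The keep-or-split decision of the adaptation phase can be sketched as follows. Since formulae (3)-(4) defining the ADD function are not reproduced in this summary, `add_level` below is a hypothetical stand-in (an inverse of the mean per-dimension distance, loosely echoing "average dimensional distance"), and the learning rate and update rule are illustrative rather than the paper's.

```python
import numpy as np

def add_level(w, points):
    """Hypothetical stand-in for the information-saturation level defined
    via the ADD function: higher when the assigned points lie closer to w,
    averaged over dimensions. NOT the paper's formulae (3)-(4)."""
    pts = np.asarray(points, dtype=float)
    mean_dim_dist = np.mean(np.abs(pts - np.asarray(w, dtype=float)))
    return 1.0 / (1.0 + mean_dim_dist)

def adapt_step(neurons, x, lr=0.2):
    """One modified winner-takes-all step: move the winning meta-neuron
    toward x, keep the update only if its saturation level increases,
    otherwise split off a new meta-neuron at x."""
    x = np.asarray(x, dtype=float)
    # Winner selection, as in SOM (each neuron: weight vector + assigned points).
    bmu = min(neurons, key=lambda n: np.linalg.norm(x - n['w']))
    candidate = bmu['points'] + [x]
    before = add_level(bmu['w'], candidate)
    updated = bmu['w'] + lr * (x - bmu['w'])   # illustrative SOM-style update
    after = add_level(updated, candidate)
    if after > before:
        # Saturation increased: the modification is accepted.
        bmu['w'] = updated
        bmu['points'].append(x)
    else:
        # Saturation did not increase: split, spawning a new meta-neuron.
        neurons.append({'w': x.copy(), 'points': [x]})
    return neurons
```

The consolidation phase applies the symmetric test in reverse: two meta-neurons are merged only if the merged one scores a higher saturation level than the separate ones did before the merge.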