


ICNN'97 FINAL ABSTRACTS


AR: ARCHITECTURES


ICNN97 Architectures Session: AR1A Paper Number: 31 Oral

On the properties of periodic perceptrons

David B. McCaughan

Keywords: periodic perceptrons periodic activation function pattern classification

Abstract:

This paper presents a summary of recent research on a modified perceptron model in which processing elements use a periodic activation function. Empirical results for a number of benchmark tests give some indication of the "in practice" power of networks containing periodic processors for pattern classification tasks. In addition, new results are reported in which the internal structure of the network is shown to be interpretable, and indeed provides a basis for rule extraction. Together, these results indicate the behaviour that can be expected when applying networks containing periodic perceptrons to pattern classification tasks, and provide dimensions along which such processing elements may be distinguished from standard sigmoid devices.
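
As a rough illustration of the distinction, the sketch below contrasts a periodic unit with a standard sigmoid unit; the sine nonlinearity is an assumed choice of periodic activation, not necessarily the paper's exact function:

    import numpy as np

    def periodic_perceptron(x, w, b, omega=1.0):
        """Perceptron whose activation is periodic (here a sine) rather than
        a monotonic sigmoid; omega sets the period of the response."""
        return np.sin(omega * (np.dot(w, x) + b))

    def sigmoid_perceptron(x, w, b):
        """Standard sigmoid unit, for comparison."""
        return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

    x = np.array([0.5, -1.2])
    w = np.array([2.0, 1.0])
    print(periodic_perceptron(x, w, 0.1), sigmoid_perceptron(x, w, 0.1))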

_____

ICNN97 Architectures Session: AR1B Paper Number: 161 Oral

Identification of a class of nonlinear systems using dynamic neural network structures

A. Yazdizadeh and K. Khorasani

Keywords: identification nonlinear systems dynamic neural network structures

Abstract:

In this paper, two dynamic neural network structures for the identification of a class of nonlinear systems are introduced. In the first structure, an all-pole filter is used at the output of each static neuron to modify its structure, whereas in the second structure, which is based on Time Delay Neural Networks (TDNNs), the static neuron is modified by introducing certain adaptive delays associated with each weight. It is shown that the proposed structures are capable of representing the class of nonlinear systems considered. Moreover, selection criteria are proposed for specifying the fixed parameters of the networks.
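
A minimal sketch of the first structure, assuming a sigmoid static neuron followed by a small all-pole (autoregressive) filter; the filter coefficients `a` are hypothetical placeholders:

    import numpy as np

    def dynamic_neuron(u_seq, w, a):
        """Static sigmoid neuron followed by an all-pole filter, so the
        unit's output depends on its own past output values."""
        y_hist = np.zeros(len(a))            # filter memory
        outputs = []
        for u in u_seq:
            s = 1.0 / (1.0 + np.exp(-np.dot(w, u)))   # static neuron output
            y = s - np.dot(a, y_hist)                 # all-pole filtering
            y_hist = np.roll(y_hist, 1)
            y_hist[0] = y
            outputs.append(y)
        return np.array(outputs)

    u_seq = [np.array([1.0]), np.array([0.0]), np.array([1.0])]
    print(dynamic_neuron(u_seq, w=np.array([0.8]), a=np.array([0.5, -0.2])))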

_____

ICNN97 Architectures Session: AR1C Paper Number: 271 Oral

Function approximation in the framework of evidence theory: a connectionist approach

Thierry Denoeux

Keywords: Function approximation evidence theory radial basis function

Abstract:

We propose a novel approach to functional regression based on the Transferable Belief Model, a variant of the Dempster-Shafer theory of evidence. This method uses reference vectors to compute a belief structure that quantifies the uncertainty attached to the prediction of the target data, given the input data. The procedure may be implemented in a neural network with a specific architecture and adaptive weights. It allows the computation of an imprecise assessment of the target data in the form of lower and upper expectations. The width of this interval reflects the partial indeterminacy of the prediction resulting from the relative scarcity of training data.
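
A minimal sketch of the idea, with an assumed distance-based evidence function and a fixed ignorance mass m0; both are illustrative choices, not the paper's exact formulation:

    import numpy as np

    def evidential_predict(x, prototypes, targets, gamma=1.0, m0=0.1):
        """Predict a [lower, upper] expectation interval for the target at x.
        Each reference vector contributes evidence that decays with distance;
        the remaining mass m0 stands for ignorance and is spread over the full
        target range, so the interval widens where training data are scarce."""
        d2 = np.sum((prototypes - x) ** 2, axis=1)
        phi = np.exp(-gamma * d2)             # evidence from each prototype
        m = phi / (phi.sum() + m0)            # normalized belief masses
        m_ign = m0 / (phi.sum() + m0)         # mass committed to "don't know"
        point = np.dot(m, targets)
        return point + m_ign * targets.min(), point + m_ign * targets.max()

    rng = np.random.default_rng(0)
    P = rng.uniform(-1, 1, size=(20, 2))
    t = np.sin(P[:, 0]) + P[:, 1]
    print(evidential_predict(np.array([0.2, 0.3]), P, t))   # narrow interval
    print(evidential_predict(np.array([5.0, 5.0]), P, t))   # wide interval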

_____

ICNN97 Architectures Session: AR1D Paper Number: 273 Oral

Orthogonal functional basis neural network for functional approximation

C.L. Philip Chen, Y. Cao and Steven R. LeClair

Keywords: Orthogonal functional basis neural network functional approximation

Abstract:

_____

ICNN97 Architectures Session: AR1E Paper Number: 351 Oral

CBP networks as a generalized neural model

Sandro Ridella, Stefano Rovetta, and Rodolfo Zunino

Keywords: backpropagation pattern classification Radial basis functions Vector quantization

Abstract:

_____

ICNN97 Architectures Session: AR2A Paper Number: 119 Oral

A modified mixtures of experts architecture for classification with diverse features

Ke Chen and Huisheng Chi

Keywords: modified mixtures of experts architecture classification diverse features

Abstract:

A modular neural architecture, MME, is considered here as an alternative to the standard mixtures-of-experts architecture for classification with diverse features. Unlike the standard architecture, the proposed one introduces a gate-bank consisting of multiple gating networks; the gating networks in the gate-bank receive different input vectors, and the expert networks may likewise receive different input vectors. As a result, a classification task with diverse features can be learned by the modular neural architecture through the use of different features simultaneously. In the proposed architecture, learning is treated as a maximum likelihood problem, and an EM algorithm is presented for adjusting the parameters of the architecture. Comparative simulation results are presented for a real-world problem, text-dependent speaker identification.
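
A toy forward pass under stated assumptions: linear-softmax experts and gates, each fed its own feature vector, with the gate-bank outputs combined by simple averaging (the paper's actual combination rule may differ):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def mme_forward(expert_inputs, gate_inputs, expert_params, gate_params):
        """Modular mixture with a gate-bank: each gating network sees its own
        feature vector, and their outputs are averaged into one set of mixing
        coefficients applied to the (possibly differently fed) experts."""
        expert_out = np.array([softmax(W @ x)
                               for W, x in zip(expert_params, expert_inputs)])
        gate_out = np.array([softmax(V @ z)
                             for V, z in zip(gate_params, gate_inputs)])
        g = gate_out.mean(axis=0)            # combine the gate-bank (assumption)
        return g @ expert_out                # mixture of expert class posteriors

    rng = np.random.default_rng(0)
    K, C = 2, 3                              # experts, classes
    expert_inputs = [rng.normal(size=4), rng.normal(size=5)]  # diverse features
    gate_inputs = [rng.normal(size=4), rng.normal(size=5)]
    expert_params = [rng.normal(size=(C, 4)), rng.normal(size=(C, 5))]
    gate_params = [rng.normal(size=(K, 4)), rng.normal(size=(K, 5))]
    print(mme_forward(expert_inputs, gate_inputs, expert_params, gate_params))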

_____

ICNN97 Architectures Session: AR2B Paper Number: 358 Oral

Regularization and Error Bars for the mixture of Experts network

Viswanath Ramamurti and Joydeep Ghosh

Keywords: Function approximation Generalization Regularization

Abstract:

The mixture of experts architecture provides a modular approach to function approximation. Since different experts become attuned to different regions of the input space during the course of training, and the data distribution may not be uniform, some experts may be overtrained while others are undertrained. This leads to poorer overall generalization. In this paper, we show how regularization applied to the gating network improves generalization performance during the course of training. Secondly, we address the issue of estimating error bars for network predictions. This is useful for estimating the range of probable network outputs for a given input, especially in performance-critical applications. Equations are derived to estimate the variance of the network output for a given input. Simulation results are presented in support of the proposed methods, which substantially improve the effectiveness of mixture-of-experts networks.
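
For a mixture model, one standard route to error bars is the law of total variance; the sketch below computes the predictive mean and variance from the gating probabilities and per-expert means and variances (a general mixture identity, not necessarily the paper's derivation):

    import numpy as np

    def moe_mean_and_variance(g, mu, sigma2):
        """Predictive mean and variance of a mixture-of-experts output.
        g: gating probabilities, mu: expert means, sigma2: expert variances.
        Uses the law of total variance for a mixture distribution."""
        mean = np.dot(g, mu)
        var = np.dot(g, sigma2 + mu ** 2) - mean ** 2
        return mean, var

    mean, var = moe_mean_and_variance(np.array([0.7, 0.3]),
                                      np.array([1.0, 2.0]),
                                      np.array([0.1, 0.4]))
    print(mean, mean - 2 * np.sqrt(var), mean + 2 * np.sqrt(var))  # ~95% bars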

_____

ICNN97 Architectures Session: AR2C Paper Number: 202 Oral

CMNN: cooperative modular neural network

Gasser Auda and Mohamed Kamel

Keywords: modular neural network classification

Abstract:

The current generation of non-modular neural network classifiers is unable to cope with classification problems which have a wide range of overlap among classes. This is due to the high coupling among the networks' hidden nodes. We propose the Cooperative Modular Neural Network (CMNN) architecture, which deals with different levels of overlap in different modules. The modules share their information and cooperate in taking a global classification decision through voting. Moreover, special modules are dedicated to resolving high overlaps in the input space. The new model outperforms the non-modular alternative when applied to ten well-known benchmark classification problems.
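
A minimal sketch of confidence-weighted voting across modules; the weighting scheme is illustrative, not necessarily the paper's voting rule:

    import numpy as np

    def vote(module_decisions, confidences):
        """Confidence-weighted vote: each module proposes a class label, and
        the label with the greatest total confidence wins."""
        tally = {}
        for label, c in zip(module_decisions, confidences):
            tally[label] = tally.get(label, 0.0) + c
        return max(tally, key=tally.get)

    print(vote(["A", "B", "A"], [0.9, 0.8, 0.6]))   # -> "A"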

_____

ICNN97 Architectures Session: AR2D Paper Number: 519 Oral

Assembling engineering knowledge in a modular multilayer perceptron neural network

W.J. Jansen, M. Diepenhorst, J.A.G. Nijhuis and L. Spaanenburg

Keywords: engineering knowledge modular multilayer perceptron neural network

Abstract:

The popular multi-layer perceptron (MLP) topology with an error back-propagation learning rule does not allow the developer to use the (explicit) engineering knowledge available in real-life problems. Design procedures described in the literature start either with a random initialization or with a 'smart' initialization of the weight values based on statistical properties of the training data. This article presents a design methodology that enables the insertion of pre-trained parts into an MLP network topology and illustrates the advantages of such a modular approach. Furthermore, we discuss the differences between the modular approach and a hybrid approach, where explicit knowledge is captured by mathematical models. In a hybrid design, a mathematical model is embedded in the modular neural network, either as an optimization of one of the pre-trained subnetworks or because the designer wants to obtain a certain degree of transparency of the captured knowledge in the modular design.
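
A minimal sketch of the modular idea, assuming a two-layer network whose first layer is a frozen pre-trained part while only the new layer receives gradient updates:

    import numpy as np

    def train_step(x, t, W_pre, W_new, lr=0.1):
        """Gradient step on the new layer only; W_pre is a pre-trained
        module and is deliberately left untouched (frozen)."""
        h = np.tanh(W_pre @ x)                 # frozen pre-trained part
        y = np.tanh(W_new @ h)                 # trainable new part
        delta = (y - t) * (1 - y ** 2)         # output-layer error signal
        W_new -= lr * np.outer(delta, h)
        return W_new

    rng = np.random.default_rng(0)
    W_pre = rng.normal(size=(4, 3))            # e.g. loaded from a pre-trained module
    W_new = rng.normal(size=(1, 4))
    x, t = rng.normal(size=3), np.array([0.5])
    for _ in range(100):
        W_new = train_step(x, t, W_pre, W_new)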

_____

ICNN97 Architectures Session: AR2E Paper Number: 260 Oral

A hierarchical decision module based on multiple neural networks

Murat Sonmez, Mingui Sun, Ching-Ching Li and Robert J. Sclabassi

Keywords: hierarchical decision module multiple neural networks source localization

Abstract:

A new approach to source localization in the human brain, an important problem in neuroscience, is presented. A hierarchical decision-making system based on multiple neural networks is developed. Each network is assigned to an independent model, and its output is evaluated at the higher level of the decision-making system. Utility functions are designed within the framework of decision making. The network with the highest utility value is selected as the best network, representing the best model in EEG-based inverse mapping. Simulation results show that the multiple-neural-network-based decision-making system successfully produces preferred decisions in source localization.
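
The selection stage might look like the following sketch, where the utility function is a hypothetical stand-in (here, negative residual error of each network's localization):

    import numpy as np

    def select_best_network(outputs, utility):
        """Higher-level decision stage: evaluate each lower-level network's
        output with a utility function and keep the one scoring highest."""
        scores = [utility(y) for y in outputs]
        best = int(np.argmax(scores))
        return best, outputs[best]

    # Hypothetical example: prefer answers with low residual error.
    outputs = [{"residual": 0.3}, {"residual": 0.1}]
    print(select_best_network(outputs, lambda y: -y["residual"]))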

_____

ICNN97 Architectures Session: AR2F Paper Number: 664 Oral

Algorithms for optimal linear combinations of neural networks

Sherif Hashem

Keywords: Algorithms optimal linear combinations of neural network

Abstract:

Recently, several techniques have been developed for combining neural networks. Combining a number of trained neural networks may yield better model accuracy, without requiring extensive efforts in training the individual networks or optimizing their architecture. However, since the corresponding outputs of the combined networks approximate the same physical quantity (or quantities), the linear dependency (collinearity) among these outputs may affect the estimation of the optimal combination-weights for combining the networks, resulting in a combined model which is inferior to the apparent best network.

In this paper, we present two algorithms for selecting the component networks for the combination in order to reduce the ill effects of collinearity, thus improving the generalization ability of the combined model. Experimental results are included.
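
A minimal sketch, assuming the MSE-optimal combination weights are found by least squares; the collinearity screen shown is a crude correlation filter for illustration, not the paper's two selection algorithms:

    import numpy as np

    def combination_weights(Y, t):
        """MSE-optimal combination weights: solve min_w ||Y w - t||^2, where
        column j of Y holds network j's outputs on the training set."""
        w, *_ = np.linalg.lstsq(Y, t, rcond=None)
        return w

    def drop_collinear(Y, threshold=0.99):
        """Greedily keep a network only if its output is not near-perfectly
        correlated with an already-kept one."""
        keep = []
        for j in range(Y.shape[1]):
            r = [abs(np.corrcoef(Y[:, j], Y[:, k])[0, 1]) for k in keep]
            if not r or max(r) < threshold:
                keep.append(j)
        return keep

    rng = np.random.default_rng(0)
    t = rng.normal(size=100)
    y1 = t + 0.1 * rng.normal(size=100)
    Y = np.column_stack([y1,
                         y1 + 1e-4 * rng.normal(size=100),   # nearly collinear
                         t + 0.2 * rng.normal(size=100)])
    kept = drop_collinear(Y)
    print(kept, combination_weights(Y[:, kept], t))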

_____

ICNN97 Architectures Session: AR3A Paper Number: 546 Oral

Improving ANN generalization using a priori knowledge to pre-structure ANNs

George G. Lendaris, Armin Rest and Thomas R. Misley

Keywords: ANN generalization priori knowledge pre-structure ANNs

Abstract:

_____

ICNN97 Architectures Session: AR3B Paper Number: 663 Oral

Analyzing the structure of a neural network using principal component analysis

David W. Opitz

Keywords: neural network structure principal component analysis penalty term

Abstract:

Oftentimes when learning from data, one attaches a penalty term to a standard error term in an attempt to prefer simple models and thus prevent overfitting. Current penalty terms for neural networks, however, often do not take into account weight interaction. This is a critical drawback, since the effective number of parameters in a network often differs dramatically from the total number of parameters. In this paper, we present a penalty term that uses Principal Component Analysis to detect redundancy in a neural network. Results show that our new algorithm gives a much more accurate estimate of network complexity than standard approaches. As a result, our new term should be able to improve techniques that can make use of a penalty term, such as weight decay, weight pruning, feature selection, Bayesian, and prediction-risk techniques.
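
A rough sketch of the underlying intuition: PCA applied to hidden-layer activations can reveal how many directions actually carry information. The 95% variance criterion is an assumption for illustration; the paper's penalty term itself is not reproduced here:

    import numpy as np

    def effective_dimensions(H, var_kept=0.95):
        """Estimate how many directions of a hidden layer carry signal.
        H: (n_samples, n_hidden) matrix of hidden activations.  Redundant
        (collinear) hidden units inflate the unit count but not this estimate."""
        Hc = H - H.mean(axis=0)
        eigvals = np.linalg.eigvalsh(np.cov(Hc, rowvar=False))[::-1]
        ratios = np.cumsum(eigvals) / eigvals.sum()
        return int(np.searchsorted(ratios, var_kept) + 1)

    rng = np.random.default_rng(1)
    h = rng.normal(size=(200, 3))
    H = np.hstack([h, h @ rng.normal(size=(3, 5))])  # 8 units, ~3 true dimensions
    print(effective_dimensions(H))                   # -> about 3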

_____

ICNN97 Architectures Session: AR3C Paper Number: 599 Oral

Approximation capabilities of adaptive spline neural networks

Lorenzo Vecci, Paolo Campolucci, Francesco Piazza and Aurelio Uncini

Keywords: Approximation adaptive spline neural networks adaptive activation functions regularization theory kernels

Abstract:

In this paper, we study the properties of neural networks based on adaptive spline activation functions (ASNNs). Using results from regularization theory, we show how the proposed architecture is able to produce smooth approximations of unknown functions; to reduce hardware complexity, a particular implementation of the kernels expected by the theory is suggested. This solution, although sub-optimal, greatly reduces the number of neurons and connections, as it gives increased expressive power to each neuron, which can produce a smooth activation function by controlling just one parameter of a Catmull-Rom cubic spline. Experimental results demonstrate that there is also an advantage in terms of the number of free parameters, which, together with smoothness, leads to improved generalization capability.
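
A minimal sketch of a Catmull-Rom spline activation whose knot values act as the adaptable parameters; the knot spacing and input range are illustrative assumptions:

    import numpy as np

    def catmull_rom(p, u):
        """Catmull-Rom cubic through control values p[0..3], u in [0, 1]."""
        return 0.5 * ((2 * p[1]) +
                      (-p[0] + p[2]) * u +
                      (2 * p[0] - 5 * p[1] + 4 * p[2] - p[3]) * u ** 2 +
                      (-p[0] + 3 * p[1] - 3 * p[2] + p[3]) * u ** 3)

    def spline_activation(x, knots_y, x_min=-2.0, x_max=2.0):
        """Adaptive-spline activation: the knot values knots_y are the
        trainable parameters that shape the neuron's response."""
        n = len(knots_y)
        t = np.clip((x - x_min) / (x_max - x_min) * (n - 3), 0, n - 3 - 1e-9)
        i = int(t)                          # index of the spline segment
        return catmull_rom(knots_y[i:i + 4], t - i)

    knots = np.tanh(np.linspace(-2, 2, 8))  # start from a sigmoid-like shape
    print(spline_activation(0.3, knots))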

_____

ICNN97 Architectures Session: AR3D Paper Number: 308 Oral

Incorporating functional knowledge into neural networks

Fang Wang and Qi-jun Zhang

Keywords: functional knowledge neural networks generalization

Abstract:

Embedding prior knowledge is an important way to enhance the generalization capability and training efficiency of neural networks. In this paper, a knowledge network is presented that incorporates prior knowledge in the form of continuous multidimensional nonlinear functions, which are typically obtained from empirical engineering experience and can be highly nonlinear. This type of network addresses some of the bottlenecks, namely model reliability and limited training data, in the growing use of neural networks to provide multidimensional continuous nonlinear models for many engineering problems. Practical electrical engineering modeling examples are used to demonstrate the enhanced accuracy of the proposed network as compared with the conventional neural model approach.
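
One simple way to embed a known function, sketched under the assumption that the prior knowledge enters the network as an extra input feature; the empirical formula here is hypothetical, and the paper's exact construction may differ:

    import numpy as np

    def empirical_formula(x):
        """Stand-in for prior engineering knowledge: a coarse empirical
        model of the quantity being learned (hypothetical example)."""
        return np.sin(x[0]) * np.exp(-0.1 * x[1])

    def knowledge_network(x, w, v, b):
        """The known nonlinear function enters as an extra feature, so the
        trainable part only has to learn the residual behaviour."""
        features = np.array([empirical_formula(x), *x])
        h = np.tanh(w @ features + b)
        return v @ h

    rng = np.random.default_rng(0)
    w, v, b = rng.normal(size=(6, 3)), rng.normal(size=6), rng.normal(size=6)
    print(knowledge_network(np.array([0.4, 1.0]), w, v, b))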

_____

ICNN97 Architectures Session: AR4A Paper Number: 269 Oral

Stability and discriminative properties of the AMI model

D.B. Hoang and M. James

Keywords: AMI model stability negative feedback

Abstract:

We consider a basic biologically plausible neural circuit that employs supragranular self-gain and negative feedback via an inhibitory infragranular neuron. Such circuitry has been used as a fundamental building block in the AMI modular neural network [1]. We derive the conditions for stability of an adaptive model of such a circuit with nonlinear self-gain and nonlinear adaptation characteristics. We also present simulation results that demonstrate the discriminative property of the discriminative compartment of an AMI module.
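
A toy Euler simulation in the spirit of the described circuit, not the AMI equations themselves: a unit with self-gain g receives negative feedback from an inhibitory partner, and the settling behaviour depends on the balance of g and the feedback weight w:

    import numpy as np

    def simulate(inp, g=1.2, w=1.5, dt=0.05, steps=400):
        """Euler simulation of a unit x with self-gain g and negative
        feedback w from an inhibitory partner y."""
        f = np.tanh
        x = y = 0.0
        for _ in range(steps):
            x += dt * (-x + g * f(x) - w * y + inp)
            y += dt * (-y + f(x))
        return x, y

    print(simulate(0.5))   # settles when feedback dominates self-gain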

_____

ICNN97 Architectures Session: AR4B Paper Number: 267 Oral

Formalizing neural networks using graph transformations

Michael R. Berthold and Ingrid Fischer

Keywords: neural networks graph transformations training

Abstract:

In this paper, a unifying framework for the formalization of different types of neural networks and the corresponding algorithms for computation and training is presented. The graph transformation system used offers a formalism for verifying properties of the networks and their algorithms. In addition, the presented methodology can be used as a tool to visualize and design different types of networks along with all required algorithms. An algorithm that adapts network parameters using standard gradient descent, as well as parts of a constructive, topology-changing algorithm for Probabilistic Neural Networks, are used to demonstrate the proposed formalism.
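
A minimal sketch of one constructive rewrite step on a network represented as a graph; this is a plain-Python stand-in for a graph transformation rule, not the paper's formalism:

    def add_hidden_unit(net, new_id):
        """One constructive rewrite step: insert a hidden node and wire it
        to every input and output node.
        net = {"nodes": {name: kind}, "edges": set of (src, dst)}."""
        net["nodes"][new_id] = "hidden"
        for n, kind in net["nodes"].items():
            if kind == "input":
                net["edges"].add((n, new_id))
            elif kind == "output":
                net["edges"].add((new_id, n))
        return net

    net = {"nodes": {"i1": "input", "o1": "output"}, "edges": set()}
    print(add_hidden_unit(net, "h1")["edges"])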

_____

ICNN97 Architectures Session: AR4C Paper Number: 360 Oral

Fuzzy combination of Kohonen's and ART Neural network models to detect statistical regularities in a random sequence of Multi-valued input patterns

Andrea Baraldi and Flavio Parmiggiani

Keywords: Fuzzy ART Pattern classification Statistical regularities

Abstract:

Adaptive Resonance Theory 1 (ART 1), Improved ART 1 (IART 1) and Carpenter-Grossberg-Rosen's (CGR) Fuzzy ART neural network systems are affected by pattern mismatching that is sensitive to the order of presentation of the input sequence. The Simplified ART network (SART), proposed recently as an ART-based model performing multi-valued pattern recognition, overcomes the structural drawbacks affecting ART 1, IART 1 and CGR Fuzzy ART. A Fuzzy SART implementation is now proposed that combines the SART architecture with a Kohonen-based soft learning strategy employing a fuzzy membership function. Fuzzy SART consists of an attentional and an orienting subsystem. The Fuzzy SART attentional subsystem is a self-organizing, feed-forward, flat, homogeneous network that learns by example. During the processing of a given data set, the Fuzzy SART orienting subsystem: i) adds a new neuron to the attentional subsystem whenever the system fails to recognize an input pattern; and ii) removes a previously allocated neuron from the attentional subsystem if the neuron is no longer able to categorize any input pattern. The performance of Fuzzy SART is compared with that of the CGR Fuzzy ART model on a two-dimensional data set and the four-dimensional IRIS data set. Unlike the CGR Fuzzy ART system, Fuzzy SART: i) requires no input data preprocessing (e.g., normalization or complement coding); ii) is stable to small changes in input parameters and in the order of the input sequence; and iii) is competitive when compared to other neural network models found in the literature.
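
A minimal grow-and-prune sketch in the spirit of the described orienting subsystem; the vigilance test and soft update rule are illustrative assumptions, not the paper's exact equations:

    import numpy as np

    def sart_step(x, protos, counts, vigilance=0.5, lr=0.2):
        """Add a neuron when no prototype matches the input well enough,
        otherwise softly move the winner toward the input (Kohonen-style)."""
        if protos:
            d = [np.linalg.norm(x - p) for p in protos]
            win = int(np.argmin(d))
            if d[win] < vigilance:               # resonance: update winner
                protos[win] += lr * (x - protos[win])
                counts[win] += 1
                return protos, counts
        protos.append(x.copy())                  # mismatch: recruit a neuron
        counts.append(1)
        return protos, counts

    def prune(protos, counts, min_count=1):
        """Remove neurons that never won a pattern after a pass over the data."""
        kept = [(p, c) for p, c in zip(protos, counts) if c >= min_count]
        return [p for p, _ in kept], [c for _, c in kept]

    protos, counts = [], []
    for x in np.random.default_rng(0).uniform(0, 1, size=(50, 2)):
        protos, counts = sart_step(x, protos, counts)
    print(len(protos))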

_____

ICNN97 Architectures Session: AR4D Paper Number: 311 Oral

Entropy-driven structural adaptation in sample-space self-organizing feature maps for pattern classification

O. Yanez-Suarez and M.R. Azimi-Sadjadi

Keywords: Self-organizing map, density estimation, classification

Abstract:

The relationship of the self-organizing map to the general problem of non-parametric density estimation has brought about diverse applications of this network to vector quantization and pattern recognition problems. Unfortunately, the requirement of deciding a priori the number of processing units to use limits the ability of the network to deliver satisfactory solutions. In this paper, we consider a new structural adaptation approach based on the measurement and monitoring of the relative entropy during the learning phase of self-organizing feature maps with sample-space neighborhoods, trained in batch mode. Results on the classification accuracy of networks built with the proposed scheme are presented.
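
A sketch of the monitored quantity, assuming the relative entropy is taken over the units' win counts and normalized so that 1 means the map's units are used evenly; the growth criterion driven by this quantity is only hinted at here:

    import numpy as np

    def relative_entropy(hit_counts):
        """Entropy of the units' win distribution relative to its maximum
        (log of the unit count); values near 1 indicate even utilization."""
        p = np.asarray(hit_counts, dtype=float)
        p = p[p > 0] / p.sum()
        return -(p * np.log(p)).sum() / np.log(len(hit_counts))

    # Hypothetical growth criterion: keep adding units while each addition
    # still raises the relative entropy appreciably.
    print(relative_entropy([10, 9, 11, 10]))   # ~1.0, evenly used map
    print(relative_entropy([38, 1, 1, 0]))     # far below 1, poorly sized map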

_____

ICNN97 Architectures Session: ARP1 Paper Number: 275 Poster

PASS: a program for automatic structure search

Zhanbo Chen, Jing Xiao and Jie Cheng

Keywords: automatic structure search feedforward neural networks evolutionary computation

Abstract:

We utilized the heuristic knowledge gained from a vast number of experiments on using feedforward neural networks (FNNs) to approximate highly nonlinear real-valued functions, and developed a program for the automatic search of FNNs based on evolutionary computation techniques, called PASS (Program for Automatic Structure Search). PASS has been successfully tested in an industrial application -- finding FNNs to approximate the mappings between automobile engine control variables and performance parameters. It has shown promise as a general and efficient tool for the automatic determination of FNNs for real-valued function approximation.
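
A minimal evolutionary loop over hidden-layer layouts, with a placeholder fitness function standing in for the train-and-validate step; all details here are illustrative and not PASS itself:

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(hidden_sizes):
        """Placeholder: in PASS-like use this would train an FNN with the
        given hidden layout and return its validation score (hypothetical)."""
        return -abs(sum(hidden_sizes) - 20) - len(hidden_sizes)

    def evolve(pop_size=10, generations=30):
        """Mutate hidden-layer sizes, keep the fitter half each generation."""
        pop = [[int(rng.integers(1, 30))] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            for parent in survivors:
                child = [max(1, h + int(rng.integers(-3, 4))) for h in parent]
                if rng.random() < 0.2:           # occasionally add a layer
                    child.append(int(rng.integers(1, 10)))
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    print(evolve())   # -> a small topology whose sizes sum to about 20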

_____

ICNN97 Architectures Session: ARP1 Paper Number: 473 Poster

A comparative study of fully and partially recurrent networks

J. Ludik, W. Prins, K. Meert and T. Catfolis

Keywords: fully recurrent networks partially recurrent networks training algorithms network architecture performance

Abstract:

A number of fully and partially recurrent networks have been proposed to deal with temporally extended tasks. However, it is not yet clear which algorithms and network architectures are best suited to which kinds of problems. In this paper, we report on quantitative experimental investigations that address this need by comparing fully recurrent networks, using learning algorithms such as Backpropagation-Through-Time (BPTT), Batch BPTT, Quickprop-Through-Time, and Real-Time Recurrent Learning, with Elman and Jordan partially recurrent networks on four benchmark problems: detection of three consecutive zeros, nonlinear plant identification, Turing machine emulation, and real-world distillation column modelling.
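
For reference, one step of an Elman-style partially recurrent network, in which the hidden state is copied into context units and fed back at the next step; the weights below are random placeholders:

    import numpy as np

    def elman_step(x, context, W_in, W_ctx, W_out):
        """One step of an Elman network: the hidden state h becomes the
        context input for the next step."""
        h = np.tanh(W_in @ x + W_ctx @ context)
        return W_out @ h, h

    rng = np.random.default_rng(0)
    W_in = rng.normal(size=(4, 2))
    W_ctx = rng.normal(size=(4, 4))
    W_out = rng.normal(size=(1, 4))
    ctx = np.zeros(4)
    for x in ([1.0, 0.0], [0.0, 0.0], [0.0, 0.0]):  # e.g. scanning for zeros
        y, ctx = elman_step(np.array(x), ctx, W_in, W_ctx, W_out)
    print(y)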

_____

ICNN97 Architectures Session: ARP1 Paper Number: 83 Poster

Fault immunization technique for artificial neural networks

Chidchanok Lursinsap and Thitipong Tanprasert

Keywords: Fault immunization artificial neural networks cell

Abstract:

Injecting certain chemical substances into a biological cell can enhance the cell's ability to fight intruders. This immunization concept from biological cells has been applied to enhance the fault-tolerance capability of a perceptron-like neuron. In this paper, we consider only the case where each neuron separates its input vectors into two classes. We mathematically model cell immunization in terms of weight-vector relocation and propose a polynomial-time weight-relocating algorithm. The algorithm can be generalized to the case where each neuron separates the input vectors into more than two classes.
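
A crude sketch of weight relocation for fault tolerance; this greedy margin-pushing heuristic is illustrative, not the paper's polynomial-time algorithm. The intuition: the farther every pattern lies from the separating hyperplane, the larger the weight perturbation the neuron can absorb without misclassifying:

    import numpy as np

    def margin(w, b, X, y):
        """Smallest signed distance from the hyperplane to any training
        vector; larger margins tolerate larger weight faults."""
        return np.min(y * (X @ w + b) / np.linalg.norm(w))

    def immunize(w, b, X, y, lr=0.1, steps=200):
        """Nudge the hyperplane away from its closest pattern while keeping
        all classifications intact."""
        for _ in range(steps):
            i = np.argmin(y * (X @ w + b))
            w2, b2 = w + lr * y[i] * X[i], b + lr * y[i]
            if np.all(y * (X @ w2 + b2) > 0):   # only accept safe moves
                w, b = w2, b2
        return w, b

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(1.5, 0.3, size=(20, 2)),
                   rng.normal(-1.5, 0.3, size=(20, 2))])
    y = np.array([1] * 20 + [-1] * 20)
    w, b = np.array([1.0, 0.0]), 0.0
    print(margin(w, b, X, y))
    w, b = immunize(w, b, X, y)
    print(margin(w, b, X, y))   # larger margin -> more fault tolerance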

_____

ICNN97 Architectures Session: ARP1 Paper Number: 391 Poster

Network Complexity and generalization

Sangbong Park and Cheol Hoon Park

Keywords: Network complexity Generalization Function approximation

Abstract:

This paper examines the relationship between the complexity of a neural network with sigmoidal hidden neurons and its generalization capability in function approximation. Network complexity is characterized in terms of the number of degrees of freedom (DFs) and their dynamic range (DR). The performance of a large number of small networks and a small number of large networks is compared with respect to approximation capability on two data-generating networks. Computer simulations show that the dynamic range, as well as the degrees of freedom, affects training and generalization capability.
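
The two complexity indicators can be read directly off a weight set; the sketch below is one illustrative reading of DFs (weight count) and DR (weight spread), which may differ from the paper's exact definitions:

    import numpy as np

    def complexity(weights):
        """Degrees of freedom (total weight count) and dynamic range
        (spread of weight values) for a list of weight matrices."""
        flat = np.concatenate([w.ravel() for w in weights])
        return flat.size, float(flat.max() - flat.min())

    rng = np.random.default_rng(0)
    net = [rng.normal(size=(8, 3)), rng.normal(size=(1, 8))]
    print(complexity(net))   # -> (32, ...) : DFs and DR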

