


ICNN'97 FINAL ABSTRACTS


OA: OPTIMIZATION & ASSOCIATIVE MEMORY


ICNN97 Optimization & Associative Memory Session: OA1A Paper Number: 4 Oral

Associative memory design via Perceptron learning

Derong Liu and Zanjun Lu

Keywords: Associative memories perceptron learning

Abstract:

In the present paper, a new synthesis approach is developed for associative memories based on the perceptron learning algorithm. The design (synthesis) problem of feedback neural networks for associative memories is formulated as a set of linear inequalities such that the use of perceptron learning is evident. The perceptron learning in the synthesis algorithms is guaranteed to converge. To demonstrate the applicability of the present results, a specific example is considered.
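The reduction to linear inequalities can be sketched as follows (an illustrative reading of the approach; the update loop, learning rate, and stopping rule are assumptions, not the authors' exact formulation). For a bipolar prototype p to be a fixed point of x -> sign(Wx), each row W_i must satisfy p_i (W_i . p) > 0, which is precisely a perceptron constraint:

```python
import numpy as np

def synthesize(prototypes, epochs=100, eta=1.0):
    """Train W so each prototype is a fixed point of x -> sign(W x).

    Row i of W must satisfy the linear inequalities p_i * (W_i . p) > 0
    for every prototype p, so the classic perceptron rule applies row by row.
    """
    P = np.asarray(prototypes, dtype=float)   # shape (num_patterns, n)
    n = P.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        updated = False
        for p in P:
            pre = W @ p
            for i in range(n):
                if p[i] * pre[i] <= 0:        # inequality violated
                    W[i] += eta * p[i] * p    # perceptron correction
                    updated = True
        if not updated:                       # all inequalities satisfied
            break
    return W

# Usage: store two bipolar patterns and check they are fixed points.
pats = [[1, -1, 1, -1], [1, 1, -1, -1]]
W = synthesize(pats)
for p in pats:
    assert np.array_equal(np.sign(W @ p), np.asarray(p, dtype=float))
```

Because the constraints are linear in the rows of W, the perceptron convergence theorem guarantees termination whenever a feasible weight matrix exists.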

_____

ICNN97 Optimization & Associative Memory Session: OA1B Paper Number: 217 Oral

Two Simple strategies to improve bidirectional associative memory's performance: Unlearning and delta rule

Aluizio F. R. Araújo and Georves M. Haga

Keywords: Bidirectional associative memory Delta rule learning

Abstract:

Keywords: Bidirectional Associative Memory, unlearning, Widrow-Hoff rule, pseudo-relaxation.

This paper presents two strategies to improve the performance of the Bidirectional Associative Memory (BAM). The Unlearning of Spurious Attractors (USA-BAM) consists of dissociating any stimulus from an incorrect response. The Bidirectional Delta Rule (BDR-BAM) extends the Delta Rule to the bidirectional operation of the BAM. These paradigms are based on cognitive assumptions, do not demand pre-processed inputs, train the network quickly, behave stably, and present high noise tolerance and abstraction ability. The models are compared with the original BAM and the Pseudo-Relaxation Learning Algorithm (PRLAB). A number of experiments suggest that the new methods outperform PRLAB when dealing with noisy input patterns. The methods are combined pairwise, and the resulting model, USA-BDR-BAM, presents the best overall performance.
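A minimal sketch of the bidirectional delta-rule idea (the training loop, learning rate, and pattern sizes below are illustrative assumptions, not the authors' exact formulation): the Widrow-Hoff correction is applied both to the forward recall y = sign(Wx) and to the backward recall x = sign(W^T y).

```python
import numpy as np

def train_bdr_bam(pairs, epochs=100, eta=0.25):
    """Sketch of a bidirectional delta rule for a BAM: apply the
    Widrow-Hoff correction in both recall directions."""
    A = np.asarray([a for a, _ in pairs], dtype=float)
    B = np.asarray([b for _, b in pairs], dtype=float)
    W = np.zeros((B.shape[1], A.shape[1]))     # X-space -> Y-space map
    for _ in range(epochs):
        for a, b in zip(A, B):
            W += eta * np.outer(b - np.sign(W @ a), a)      # forward error
            W += eta * np.outer(a - np.sign(W.T @ b), b).T  # backward error
    return W

# Usage: two bipolar pairs; both recall directions become exact.
pairs = [([1, -1, 1, -1], [1, -1, 1]), ([1, 1, -1, -1], [1, 1, -1])]
W = train_bdr_bam(pairs)
for a, b in pairs:
    assert list(np.sign(W @ a)) == list(map(float, b))
    assert list(np.sign(W.T @ np.asarray(b, float))) == list(map(float, a))
```

Unlike one-shot Hebbian storage, the error-driven updates stop only when every stored pair is recalled correctly in both directions (when such a weight matrix exists).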

_____

ICNN97 Optimization & Associative Memory Session: OA1C Paper Number: 99 Oral

Effect of pruning small weights in correlation associative memories

Ravi Kothari, and Rohit Lotlikar

Keywords: Pruning Associative memory Correlation

Abstract:

In this paper we examine the effect of pruning small weights in correlation associative memories. The effect is investigated by considering a fully interconnected matrix and removing weights satisfying $|w_{ij}| \le \epsilon$. We show analytically that under suitable constraints on $\epsilon$, the capacity of a sparsely connected associative memory is comparable to that of a fully connected one, without compromising the basins of attraction around the prototype fixed points. Simulation results supporting the analysis are also presented.
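The setup can be sketched as follows (the network size, pattern count, and pruning threshold ε here are illustrative choices, not the bound derived in the paper): build the correlation (outer-product) weight matrix, zero out the small weights, and check that the prototypes remain fixed points.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 5                                  # neurons, stored patterns
X = rng.choice([-1.0, 1.0], size=(m, n))

W = (X.T @ X) / n                              # correlation (Hebbian) weights
np.fill_diagonal(W, 0.0)

eps = 1.0 / n                                  # illustrative threshold
W_sparse = np.where(np.abs(W) <= eps, 0.0, W)  # prune weights |w_ij| <= eps

# With m = 5 patterns each off-diagonal weight is an odd multiple of 1/n,
# so this prunes well over half of the connections, yet each prototype
# remains a fixed point of x -> sign(W_sparse x).
for x in X:
    assert np.array_equal(np.sign(W_sparse @ x), x)
```

The point of the paper is that, for a suitably constrained ε, this kind of sparsification leaves both the capacity and the basins of attraction essentially intact.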

_____

ICNN97 Optimization & Associative Memory Session: OA1D Paper Number: 439 Oral

Decoupled-voting Hamming associative memory networks

Paul B. Watta, Mohammed Akkal and Mohamad H. Hassoun

Keywords: Decoupled-voting hamming associative memory networks

Abstract:

_____

ICNN97 Optimization & Associative Memory Session: OA1E Paper Number: 215 Oral

Temporal multidirectional associative memory: adaptable, continuous, and self-connected MAM

Aluizio F. R. Araújo and Marcelo Vieira

Keywords: Multidirectional associative memory Widrow-Hoff rule learning

Abstract:

Keywords: Neural networks, associative memory, multidirectional associative memory, temporal sequences, Widrow-Hoff rule.

This paper introduces a new version of the Multidirectional Associative Memory (MAM) in which the state of activation of the processing units ranges from -1 to +1 and each unit has self-connections. Furthermore, the correlation matrices, generated by Hebbian rules, are trained by the Widrow-Hoff rule. The model aims at reproducing temporal sequences. The results of the experiments suggest that the model has a fast training stage, improves the learning capacity of MAM, reproduces trained temporal sequences, interpolates and extrapolates states in a trained sequence, and improves the accuracy of recall with the inclusion of states.

_____

ICNN97 Optimization & Associative Memory Session: OA1F Paper Number: 67 Oral

Overlapping decompositions in the design of associative memories

Mehmet Akar and M. Erol Sezer

Keywords: Associative memories overlapping decomposition

Abstract:

This paper is concerned with the design of neural networks to be used as associative memories. The idea of overlapping decompositions, which is extensively used in the solution of large-scale problems as a method of reducing the computational work, is applied to discrete-time neural networks with binary neurons. It is shown that if the desired memory matrix accepts a suitable overlapping decomposition, then the problem can be solved by synthesizing a number of smaller networks independently. The concept is illustrated with two examples.

_____

ICNN97 Optimization & Associative Memory Session: OA2A Paper Number: 514 Oral

GBAM: A general bidirectional memory model

Hongchi Shi, Yunxin Zhao, Xinhua Zhuang

Keywords: Bidirectional associative memory learning stability

Abstract:

This paper proposes a general model for bidirectional associative memories that associate patterns between the X-space and the Y-space. The general model does not require the usual assumption that the interconnection weight from a neuron in the X-space to a neuron in the Y-space is the same as the one from the Y-space to the X-space. We start by defining a supporting function to measure how well a state supports another state in a general bidirectional associative memory (GBAM). We then use the supporting function to formulate the associative recalling process as a dynamic system, explore its stability and asymptotic stability conditions, and develop an algorithm for learning the asymptotic stability conditions using the Rosenblatt perceptron rule. The effectiveness of the proposed model is demonstrated by several outstanding experimental results.

_____

ICNN97 Optimization & Associative Memory Session: OA2B Paper Number: 188 Oral

Chaotic Multidirectional Associative memory

Yuko Osana, Motonobu Hattori, Masafumi Hagwara

Keywords: Associative memory chaotic neurons

Abstract:

Most conventional associative memories cannot deal with many-to-many associations because the stored common data cause superimposed patterns. Although a few models enable many-to-many associations, they require manual controls or extra networks to separate the superimposed patterns, and as a result they have very complicated structures. In contrast, the structure of the proposed CMAM is very simple because it merely uses chaotic neuron models instead of the conventional neuron models used in the Multidirectional Associative Memory (MAM). In order to enable many-to-many associations, the CMAM memorizes each training datum together with its own contextual information and employs chaotic neurons. Since the chaotic neurons change their states chaotically, many-to-many associations can be realized in the CMAM. A series of computer simulations shows not only the effectiveness of the proposed CMAM, but also its similarity to a psychological phenomenon (the priming effect).

_____

ICNN97 Optimization & Associative Memory Session: OA2C Paper Number: 557 Oral

Using Hysteresis to improve performance in synchronous networks

Tirunelveli Anand and Ali A. Minai

Keywords: Synchronous network Hysteresis Convergence

Abstract:

Keywords: Associative memory, Hysteresis, Neurodynamics.

In this paper, we present a signal-to-noise analysis of synchronous attractor networks of hysteretic threshold elements. The addition of hysteresis is known to enhance the capacity and convergence rate of attractor networks. The aim of this paper is partly to elaborate on results reported previously by other researchers, and to address the issue of whether there is an optimal value for hysteresis. Based on the simplified analysis reported here, we conclude that, for a given network loading (ratio of patterns stored to network size), there is an optimal value of hysteresis, but it changes as recovery proceeds to convergence. We hypothesize that a time-varying ``hysteresis schedule'' can be used to enhance the performance of attractor networks.
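As a concrete illustration, one common way to model hysteresis in a threshold unit (assumed here for the sketch, rather than taken from the paper) is to shift the effective threshold toward the unit's current state, i.e. x <- sign(Wx + h·x), so a unit flips only when the field opposes it by more than the margin h:

```python
import numpy as np

def recall(W, x0, h=0.5, steps=50):
    """Synchronous recall with hysteresis: a unit keeps its state unless
    the net input opposes it by more than h.  The fixed h = 0.5 is an
    illustrative constant, not the optimal schedule discussed above."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        flip = (W @ x) * x < -h          # field opposes state by more than h
        x_new = np.where(flip, -x, x)
        if np.array_equal(x_new, x):     # reached a fixed point
            break
        x = x_new
    return x

# Hebbian attractor network; a probe with 3 corrupted bits is restored.
rng = np.random.default_rng(1)
n = 100
pats = rng.choice([-1.0, 1.0], size=(3, n))
W = pats.T @ pats / n
np.fill_diagonal(W, 0.0)

probe = pats[0].copy()
probe[:3] *= -1                          # corrupt 3 bits
assert np.array_equal(recall(W, probe), pats[0])
```

The hysteresis term damps the two-cycle oscillations that plain synchronous updating is prone to, which is one reason it improves convergence.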

_____

ICNN97 Optimization & Associative Memory Session: OA2D Paper Number: 524 Oral

Random attractors for initial memory codes

Akira Date

Keywords: Random coding Temporal relations Spatial structure

Abstract:

The problem of how temporal relations among events are converted into spatial structure in neural connections is considered. "Random coding" is one hypothesis for this problem, in which items to be stored are associated with randomly chosen network attractors, independent of the spatial relationships between the items. We demonstrate the idea with a simple network model and discuss the relevant issues.

_____

ICNN97 Optimization & Associative Memory Session: OA3A Paper Number: 138 Oral

A generalized updating rule for the modified Hopfield neural network

Yi Sun

Keywords: Hopfield Network Energy function Local Minimum

Abstract:

In this paper, a generalized updating rule (GUR) for the modified Hopfield neural network (MHNN) is presented. It is proved that with the GUR, for any given sequence of updating modes, the network monotonically converges to a fixed point. An upper bound on the gradient of the energy function at the fixed point is given. All conventional MHNN algorithms are shown to be instances of the GUR. It is shown that the class of wide-sense sequential updating modes guarantees that the network converges to local minimum points of the energy function.

_____

ICNN97 Optimization & Associative Memory Session: OA3B Paper Number: 454 Oral

Bipolar pattern association using a recurrent Winner take all network

John E. McInroy and Bogdan M. Wilamowski

Keywords: Bipolar pattern association recurrent neural network Winner take all network

Abstract:

A neural network for heteroassociative (or autoassociative) pattern recognition of input bipolar binary vectors is proposed. By combining the advantages of feedforward and recurrent techniques for heteroassociation, a simple network with guaranteed error correction is found. The heart of the network is based on a new, recurrent method of performing the Winner Take All function. The analysis of this network leads to design rules which guarantee its performance. The network is tested on a character recognition problem utilizing the entire IBM CGA character set.

_____

ICNN97 Optimization & Associative Memory Session: OA3C Paper Number: 340 Oral

Recognition of multidimensional affine patterns using a constrained GA

G. Calafiore and B. Bona

Keywords: genetic algorithms pattern recognition data processing

Abstract:

_____

ICNN97 Optimization & Associative Memory Session: OA3D Paper Number: 132 Oral

Suspiciousness of loading problems

P. Frasconi, M. Gori, S. Fanelli, and M. Protasi

Keywords: Learning Optimization Computational complexity gradient descent

Abstract:

_____

ICNN97 Optimization & Associative Memory Session: OA4A Paper Number: 426 Oral

Application of neuro-based optimization algorithm to three dimensional cylindric puzzles

Hiroyuki Yamamoto, Takeshi Nakayama, Hiroshi Ninomiya and Hideki Asai

Keywords: neuro-based optimization three dimensional cylindric puzzles 2-dimensional tiling technique

Abstract:

_____

ICNN97 Optimization & Associative Memory Session: OA4B Paper Number: 539 Oral

Primal-Target neural net heuristics for the hypergraph k-coloring problem

Dmitri Kaznachey, Arun Jagota, Sajal Das

Keywords: Hypergraph K-coloring Graph optimization

Abstract:

The hypergraph strong coloring problem (HC) is an NP-hard problem on hypergraphs arising naturally in applications involving conflict-free access of data in parallel memory systems. There has been much work on solving graph optimization problems using neural networks; however, almost none addresses hypergraph problems. Hypergraph problems present interesting challenges to neural networks because they involve higher-order structures than those in graphs.

In this paper we introduce a primal-target approach to solve hard combinatorial problems having inequality constraints. We consider a variant of HC, the maximum induced subhypergraph strong k-coloring problem (HkC). We present two primal-target algorithms that map HkC onto a Hopfield network. The first algorithm uses a larger set of target variables, which leads to a more elaborate search mechanism. Experiments showed that both algorithms, PT1 and PT2, were competitive with a naive optimal algorithm on small instances of random hypergraphs while being significantly faster. Algorithm PT1 scaled significantly better to larger hypergraphs than PT2, while PT2 ran faster than PT1.

_____

ICNN97 Optimization & Associative Memory Session: OA4C Paper Number: 101 Oral

Determination of parameters in relaxation-search neural networks for optimization problems

Gursel Serpen, David L. Livingston, and Azadeh Parvin

Keywords: Hopfield Network Optimization

Abstract:

We propose a method to define constraint weight parameters of the Hopfield network in order to establish the solutions of the optimization problem as stable equilibrium points in the state space. Application of the methodology is demonstrated on a well known benchmark problem, the Traveling Salesman Problem. Simulation results indicate that the proposed bounds on the constraint weight parameters establish the solutions as stable points and consequently, the Hopfield network consistently converges to a solution after each relaxation.

_____

ICNN97 Optimization & Associative Memory Session: OA4D Paper Number: 62 Oral

An evolutionary neural network algorithm for max cut problems

Nobuo Funabiki, Junji Kitamichi, and Seishi Nishikawa

Keywords: Evolutionary neural network Max cut problems Graphs

Abstract:

An "Evolutionary Neural Network (ENN)" is presented for the max cut problem of an undirected graph G(V, E). The goal of this NP-hard problem is to find a partition of V into two disjoint subsets such that the cut size is maximized, where the cut size is the sum of the weights on edges in E whose endpoints belong to different subsets. The ENN combines an evolutionary initialization scheme for the neural state with the energy minimization criteria of the binary neural network. The performance of the ENN is evaluated through simulations on randomly weighted complete graphs and unweighted random graphs with up to 1000 vertices. The results show that the evolutionary initialization scheme drastically improves the solution quality: the ENN consistently finds better solutions than the maximum neural network, mean field annealing, simulated annealing, and the greedy algorithm.
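For reference, the energy-minimization half of such a scheme can be sketched with binary neurons s_i in {-1, +1} encoding the two subsets (a plain greedy flip rule standing in for the network dynamics; the evolutionary initialization, the paper's key contribution, is omitted here):

```python
import numpy as np

def maxcut_binary_net(Wg, steps=100):
    """Greedy binary-neuron descent for max cut: flip neuron i whenever
    doing so increases the cut size (assumes a symmetric weight matrix
    with zero diagonal).  This is a local search, not the full ENN."""
    n = Wg.shape[0]
    s = np.ones(n)
    for _ in range(steps):
        changed = False
        for i in range(n):
            gain = s[i] * (Wg[i] @ s)   # cut-size gain from flipping i
            if gain > 0:
                s[i] = -s[i]
                changed = True
        if not changed:                 # local optimum reached
            break
    return s

def cut_size(Wg, s):
    return sum(Wg[i, j] for i in range(len(s)) for j in range(i + 1, len(s))
               if s[i] != s[j])

# Usage: the complete graph K4 with unit weights has maximum cut 4.
Wg = np.ones((4, 4)) - np.eye(4)
s = maxcut_binary_net(Wg)
assert cut_size(Wg, s) == 4
```

Such greedy descent only reaches a local optimum in general, which is exactly why the paper adds an evolutionary scheme for choosing the initial neural state.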

_____

ICNN97 Optimization & Associative Memory Session: OA4E Paper Number: 63 Oral

A binary neural network approach for one-shot Scheduling problems in multicast packet switching systems

Takayuki Baba, Nobuo Funabiki, Seishi Nishikawa

Keywords: Scheduling optimization binary neural network

Abstract:

A multicast packet switching system can replicate a packet in the window of each input port to send out the copies from different output ports simultaneously. In order to maximize the throughput, a combinatorial optimization problem must be solved in real time: finding a switching configuration that not only satisfies the constraints of the system but also maximizes the number of copies under transmission demands. In this paper, we focus on the one-shot scheduling problem, where all the copies of selected packets must be sent out simultaneously. We propose a neural network composed of (W * N) binary neurons for the problem in the W-window-N-input-port system. The motion equation is newly defined with three heuristic methods. We verify the performance through simulations on up to 3-window-1000-input-port systems, where our binary neural network provides better performance than existing methods and reduces the delay time under practical situations.

_____

ICNN97 Optimization & Associative Memory Session: OA4F Paper Number: 65 Oral

Arbitrary distance function estimation using discrete vector quantization

John Oommen, I. Kuban Altmel, Necati Aras

Keywords: Vector quantization automata learning Distance function

Abstract:

There are currently many vastly different areas of research involving adaptive learning. Two of these concern neural networks and learning automata. This paper develops a method by which the general philosophies of Vector Quantization (VQ) and discretized automata learning can be incorporated for the computation of arbitrary distance functions. The latter is a problem which has important applications in Logistics and Location Analysis. The input to our problem is the set of coordinates of a large number of nodes whose inter-node arbitrary ``distances'' have to be estimated. To render the problem interesting, non-trivial and realistic, we assume that the explicit form of this distance function is both unknown and uncomputable. Unlike traditional Operations Research methods, which use optimized parametric functional estimators, we have utilized discretized VQ principles to first adaptively polarize the nodes into sub-regions. Subsequently, the parameters characterizing the sub-regions are learnt by using a variety of methods (including, for academic purposes, a VQ strategy in the meta-domain). After an initial training phase, the system attempts to yield an estimate of any node-pair distance without actually deriving an explicit form for the unknown function. The algorithms have been rigorously tested on the actual road-travel distances involving cities in Turkiye and the results obtained are conclusive. Indeed, these results are the best currently available from any single or hybrid strategy.

_____

ICNN97 Optimization & Associative Memory Session: OA5A Paper Number: 556 Oral

Multiple-resolution divide and conquer for large-scale combinatorial optimization

Steven Noel and Harold Szu

Keywords: combinatorial optimization divide-and-conquer approach Clustering

Abstract:

We describe a multiple-resolution divide-and-conquer neural network approach for large-scale constrained optimization problems like the TSP. The goal of the approach is to divide a problem into sub-problems, solve them with parallel networks, then combine the resulting sub-solutions. The divide-and-conquer approach is based on a mathematical principle of orthogonal division errors, which provides the criteria for optimal problem division. The resulting division minimizes the cross-correlation among sub-problems, and hence the necessary communication among them. It therefore provides a minimum-communication allocation of sub-problems to parallel processors. Moreover, the divide and conquer can be done recursively, so that it occurs at all resolutions of the data. We show how wavelets can perform the necessary multiple-resolution data clustering. The divide-and-conquer approach is particularly effective for large-scale fractal data, which exhibits clustering over a large number of resolutions.

_____

ICNN97 Optimization & Associative Memory Session: OA5B Paper Number: 200 Oral

Parallel and Synchronous search for combinatory quasi-optimum solutions

Hideki Kakeya and Yoichi Okabe

Keywords: Optimization parallel processors

Abstract:

The present paper presents an algorithm which realizes fast search for solutions of combinatorial optimization problems with digital, parallel processors. By modifying the weight matrix, the proposed model avoids oscillation and achieves energy reduction under synchronous discrete dynamics. As a result, the proposed model obtains quasi-optimum solutions with far fewer iterations than existing models.

_____

ICNN97 Optimization & Associative Memory Session: OA5C Paper Number: 64 Oral

A proposal of neuron mask in neural network algorithm for combinatorial optimization problems

Yoichi Takenaka, Nobuo Funabiki, Seishi Nishikawa

Keywords: Hopfield Neural network optimization

Abstract:

A constraint resolution scheme for the Hopfield neural network named "Neuron Mask" is presented for a class of combinatorial optimization problems. Neuron Mask always satisfies the constraint of selecting one solution candidate from each group, forcing the state of the neural network into the solution space. This paper presents the definition of Neuron Mask and its introduction into the neural network through the N-queens problem. The performance is verified by simulations in three computation modes, where Neuron Mask improves the performance of the neural network.
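The one-candidate-per-group idea can be sketched as follows (an illustrative reading of the scheme, not the paper's exact equations; the grid size and activation values are made up for the example):

```python
import numpy as np

def neuron_mask(u, groups):
    """Constraint resolution in the spirit of a "Neuron Mask": within
    each group of neurons, exactly one (the most excited) is set to 1,
    so every network state lies in the solution space where one
    candidate per group is selected."""
    v = np.zeros_like(u)
    for g in groups:
        v[g[np.argmax(u[g])]] = 1.0   # winner of this group
    return v

# N-queens style example: a 3x3 board flattened row by row; each row is
# a group, so the mask places exactly one queen per row.
u = np.array([0.2, 0.9, 0.1,  0.4, 0.3, 0.8,  0.7, 0.6, 0.5])
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
v = neuron_mask(u, groups)
assert v.reshape(3, 3).sum(axis=1).tolist() == [1.0, 1.0, 1.0]
```

Because the row constraints are satisfied by construction, the network's energy function only has to handle the remaining constraints (for N-queens, columns and diagonals), which shrinks the search space.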

_____

ICNN97 Optimization & Associative Memory Session: OA5D Paper Number: 15 Oral

A dual neural network for shortest-path routing

Jun Wang

Keywords: Shortest path Optimization recurrent neural networks

Abstract:

The shortest path problem is a classical combinatorial optimization problem arising in numerous planning and designing contexts. This paper presents a recurrent neural network for solving the shortest path problem. Based on the dual problem formulation, the recurrent neural network called the dual routing network has much simpler architecture than the one based on the primal problem formulation. The dynamics and architecture of the dual routing network are defined and the operating characteristics are demonstrated via simulation.

_____

ICNN97 Optimization & Associative Memory Session: OA5E Paper Number: 238 Oral

Fault-tolerance in a Boltzmann machine

Camille C. Price, John B. Hanks and Jeffrey N. Stephens

Keywords: Fault-tolerance Boltzmann machine combinatorial optimization

Abstract:

Connectionist computing models represent a promising advance in the field of neural networks. The Boltzmann machine is a model for connectionist computing that has been applied to the solution of combinatorial optimization problems and which can be viewed as a massively parallel simulated annealing procedure. We have designed a Boltzmann machine to solve quadratic assignment problems, and have demonstrated its effectiveness by comparing its results with optimal solutions, and by comparing its performance with that of other heuristic algorithms. In anticipation of a hardware implementation of the Boltzmann machine, it is desirable to develop a quantitative characterization of the inherent fault-tolerant properties of this computational model under inevitable conditions of component failure. We have investigated the fault-tolerance of this connectionist model experimentally by injecting a variety of patterns of component failures, including single node failures, column node failures, and random multiple node failures, with the purpose of observing and measuring the deterioration in the quality of the objective function results that are produced. We observed low-percentage degradations in performance that are acceptable from a practical standpoint, and conclude that the Boltzmann model offers an effective and robust heuristic mechanism for combinatorial optimization.

_____

ICNN97 Optimization & Associative Memory Session: OAP1 Paper Number: 338 Poster

Selecting the toroidal self-organizing feature maps (TSOFM) best organized for object recognition

G. Andreu, A. Crespo and J. Valiente

Keywords: neural networks self-organizing feature maps border effects pattern recognition

Abstract:

Self-organizing feature maps (SOFM) are an important tool to visualize high-dimensional data as a two-dimensional image. One possible application of this network is image recognition. However, this architecture presents some problems, mainly due to border effects. In this paper a new organization of the feature maps, termed toroidal self-organizing feature maps (TSOFM), is presented. Its main advantage is the elimination of border effects and, consequently, an increase in the recognition rate. Another important aspect presented in this paper is the measurement of how well networks are organized during the training phase.

This proposal has been evaluated with a real data set, consisting of the different pieces that result from cutting up chickens, and demonstrates better-quality results.
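The border-effect elimination comes from wrapping the lattice into a torus, so no unit sits on an edge. A sketch of a toroidal neighborhood distance (the grid size and the Chebyshev metric are illustrative choices, not taken from the paper):

```python
def toroidal_grid_distance(i, j, rows, cols):
    """Distance between units i and j on a toroidal SOM lattice: the
    neighborhood wraps around, so row/column offsets are taken modulo
    the grid size and every unit has a full neighborhood."""
    r1, c1 = divmod(i, cols)
    r2, c2 = divmod(j, cols)
    dr = min(abs(r1 - r2), rows - abs(r1 - r2))   # wrap vertically
    dc = min(abs(c1 - c2), cols - abs(c1 - c2))   # wrap horizontally
    return max(dr, dc)   # Chebyshev neighborhood, a common SOM choice

# On a 10x10 torus, opposite corners are neighbors (distance 1),
# whereas on a flat map they would be 9 apart.
assert toroidal_grid_distance(0, 99, 10, 10) == 1
```

During training, this distance replaces the flat grid distance in the neighborhood function, so boundary units receive the same number of neighborhood updates as interior ones.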

_____

ICNN97 Optimization & Associative Memory Session: OAP1 Paper Number: 324 Poster

Pulsed noise-based stochastic optimisation with the Hopfield model

Jacek Mandziuk

Keywords: Pulsed noise stochastic optimisation Hopfield model

Abstract:

In the paper, a new, simple approach to solving combinatorial optimization problems is discussed, and preliminary simulation results for the small-size Travelling Salesman Problem are presented. The Pulsed Noise Model (PNM) introduced in the paper combines the Langevin equation minimization method with the Hopfield model in a very straightforward manner. The main advantage of this approach is its conceptual simplicity without sacrificing efficiency. Unlike in previous related work (stochastic neural networks, the diffusion machine), in the PNM the intensities of the Gaussian noises injected into the system are independent of the neurons' potentials. Moreover, in the PNM, noises are injected into the system only at certain time instants, as opposed to the continuously maintained $\delta$-correlated noises used in previous approaches. Finally, instead of impractically long inverse-logarithmic cooling schedules, linear cooling is applied. With the above strong simplifications, the PNM is clearly not expected to rigorously maintain Thermal Equilibrium (TE). However, an approximate numerical test based on the canonical Gibbs-Boltzmann distribution shows that the differences between the rigorous and estimated values are relatively low (within a few percent). In this sense the PNM is said to operate in Quasi-Thermal Equilibrium.

_____

ICNN97 Optimization & Associative Memory Session: OAP1 Paper Number: 441 Poster

ATM call admission control using sparse distributed memory

Hee-Yong Kwon

Keywords: ATM call admission control sparse distributed memory associative memory

Abstract:

We propose a neural network call admission control (CAC) method based on sparse distributed memory (SDM). CAC is a key technology of ATM network traffic control and should be adaptable to the rapid and varied changes of the ATM network environment. Conventional approaches to ATM CAC require detailed network analysis in all cases, and an optimal implementation is considered very difficult. Neural approaches have therefore recently been employed, but they do not meet the adaptability requirement. We thus propose a method that uses SDM as the neural network controller. Since SDM is a RAM-like associative memory, it adapts well, providing CAC with good adaptability to changes. Experimental results are as good as those of previous neural approaches, without additional analytical data and without relearning from the initial state.

_____

ICNN97 Optimization & Associative Memory Session: OAP1 Paper Number: 201 Poster

Capacity Expansion and Concept formation in sequential associative memory

Hideki Kakeya and Yoichi Okabe

Keywords: Associative memory memory retrieval

Abstract:


_____

ICNN97 Optimization & Associative Memory Session: OAP1 Paper Number: 503 Poster

High order orthogonal tensor networks: Information capacity and reliability

Alexander N. Gorban and Yeugenii M. Mirkes and Donald C. Wunsch II

Keywords: Information capacity estimation Orthogonal tensor

Abstract:

Neural networks based on the construction of orthogonal projectors in the tensor power of the space of signals are described. A sharp estimate of their ultimate information capacity is obtained. The number of stored prototype patterns (prototypes) can exceed the number of neurons many times over. A comparison with error control codes is made.

_____

ICNN97 Optimization & Associative Memory Session: OAP1 Paper Number: 142 Poster

Associative ability of higher order neural networks

Shuji Yatsuki, Hiromi Miyajima

Keywords: Optimization Associative memory Memory capacity

Abstract:

Higher order neural networks (HONNs) have been proposed as new systems. In this paper, we show some theoretical results on the associative ability of HONNs. In particular, the associative ability of a HONN depends on the information quantity of the network, defined as the number of weights each neuron has. Specifically, the capacity of auto-correlation associative memory is C(m,K)/(2 log m), where $m$ is the number of neurons, $K$ is the order of the connections, and C( ) denotes a combination. Further results are shown for heteroassociative memory, error correction, and memory patterns that are not orthogonal to each other.

_____

ICNN97 Optimization & Associative Memory Session: OAP1 Paper Number: 424 Poster

A module neural network and its basic behaviors

Yoshimitsu Shoji and Hiroshi Inaba

Keywords: module neural network equilibrium solution

Abstract:

This paper proposes a neural network having a special structure, called a module neural network, and studies basic behaviors of such a network. A module neural network is constructed so that a prescribed set of vectors is assigned as its asymptotically stable equilibrium solutions, and its behaviors around the equilibrium solutions are investigated in terms of domains of attraction.


Web Site Author: Mary Lou Padgett (m.padgett@ieee.org)
URL: http://www.mindspring.com/~pci-inc/ICNN97/paperoa.htm
(Last Modified: 30-Apr-1997)