

ICNN'97 PLENARY ADDRESSES




SCHEDULE OF PLENARY ADDRESSES


Monday, June 9

8:30 - 10:30 AM


Tuesday, June 10

8:30 - 9:40 AM

1:20 - 2:30 PM


Wednesday, June 11

8:30 - 9:40 AM

1:30 - 2:30 PM

7:00 - 10:00 PM (Banquet)


Thursday, June 12

8:30 - 9:30 AM




ICNN'97 PLENARY ADDRESS DETAILS


RESEARCH AND APPLICATION ASPECTS IN SOFT COMPUTING:
HISTORY AND RECENT TRENDS IN JAPAN

Kaoru Hirota
Tokyo Institute of Technology
Interdisciplinary Graduate School of Science and Engineering
Department of Computational Intelligence and Systems Science
4259 Nagatsuta-cho, Midori-ku, Yokohama 226, Japan

Tel +81-45-924-5686, Fax +81-45-924-5676 e-mail hirota@hrt.dis.titech.ac.jp

ABSTRACT
Research and application aspects of soft computing, mainly in Japan, are surveyed. In the mid-1980s fuzzy technology became a central issue, mainly in process control, and 1990 became known as the "fuzzy home electronics year." These technologies are based mainly on if-then rule-based fuzzy inference combined with instrumentation (i.e., sensor and actuator) engineering. Neural network technology was then merged with fuzzy technology in 1991, and again many consumer products reached the Japanese market. Such neuro-fuzzy technologies can be classified into nine categories. In 1993 chaos technologies also entered the research and development of such high-tech products, and more recently genetic algorithms and artificial life have been investigated by company engineers in Japan as well. These practical, technological aspects in Japan are discussed, and future trends are indicated through many examples.

Keywords: Soft Computing, Fuzzy, Neuro, Chaos, Industrial Applications



NEURAL NETS AND AI: TIME FOR A SYNTHESIS

David Waltz
Vice President, Computer Science Research
NEC Research Institute
4 Independence Way, Princeton, NJ 08540

Tel (609) 951-2700 Fax (609) 951-2483 e-mail waltz@research.nj.nec.com

ABSTRACT
Throughout its history, neural net research has been heavily impacted by AI, nearly always negatively. Neural net research and applications are finally thriving as an enterprise largely divorced from AI, though with the upsurge of interest in learning in AI, there are communities of researchers who feel affinities with both fields. But in a broader perspective, AI and neural nets could learn a great deal from each other: AI is unlikely to succeed in its central goals if researchers ignore learning and insist on hand construction of programs grounded in logical primitives; and neural nets are unlikely to add much to our overall understanding of intelligence, or to break out of their role as useful application tools if researchers ignore representational issues and constrain each system to begin as "tabula rasa." Moreover, while both fields have developed useful insights and applications, both AI and neural net researchers will need to look at larger architectural issues if we are ever to build systems that are intelligent in any sense comparable with human or animal intelligence.



TOWARDS NEURALLY PLAUSIBLE BAYESIAN NETWORKS

Geoffrey Hinton
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada

Tel (416) 978-3707 Fax (416) 978-1455 e-mail hinton@ai.toronto.edu

ABSTRACT

Bayesian networks have been one of the major advances in statistics and artificial intelligence over the last decade. Multilayer logistic Bayes nets which compute posterior distributions over hidden states using Gibbs sampling are considerably more efficient than Boltzmann machines at unsupervised learning (Neal, 1992). However, they are implausible as biological models because to handle "explaining away" effects properly, a unit in one layer needs to know not only the state of a unit in the layer below but also that unit's total top-down input. Seung has recently shown how explaining away can be handled in a biologically plausible way using lateral connections, provided the generative model is linear. We extend Seung's trick to multilayer non-linear generative models and show that these models are very effective in extracting sparse distributed representations with easily interpreted hidden units. This talk describes joint work with Z. Ghahramani.
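To make the inference step concrete, here is a minimal sketch of Gibbs sampling in a toy two-cause logistic belief net; the model and its parameters are invented for illustration and are not from the talk. With both causes rare and either one sufficient to produce the observation, the sampled posterior puts its mass on one cause or the other, not both: the "explaining away" effect that makes layer-local inference hard.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy logistic belief net: two rare binary causes h1, h2; one visible effect v.
    b = np.array([-2.0, -2.0])   # prior log-odds of each cause being active
    w = np.array([5.0, 5.0])     # either cause strongly predicts v = 1
    c = -3.0                     # visible-unit bias

    def log_lik(v, h):
        s = sigmoid(w @ h + c)
        return v * np.log(s) + (1 - v) * np.log(1 - s)

    def gibbs(v=1.0, sweeps=20000, burn=2000, seed=0):
        rng = np.random.default_rng(seed)
        h = np.zeros(2)
        counts = np.zeros((2, 2))
        for t in range(sweeps):
            for i in range(2):
                h1, h0 = h.copy(), h.copy()
                h1[i], h0[i] = 1.0, 0.0
                # conditional log-odds of h[i] = 1 given the other unit and v
                log_odds = b[i] + log_lik(v, h1) - log_lik(v, h0)
                h[i] = float(rng.random() < sigmoid(log_odds))
            if t >= burn:
                counts[int(h[0]), int(h[1])] += 1
        return counts / counts.sum()

    print(gibbs())   # P(h1, h2 | v=1): mass on (1,0) and (0,1), little on (1,1)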



A GEOMETRIC APPROACH TO EDGE DETECTION

Jim Bezdek
Computer Science Department
University of West Florida
Pensacola, FL 32514

Tel (904) 474-2784 Fax (904) 474-3023 e-mail jbezdek@dcsuwf.dcsnod.uwf.edu

ABSTRACT
This paper describes edge detection as a composition of four steps: conditioning, feature extraction, blending and scaling. We examine the role of geometry in determining good features for edge detection and in setting parameters for functions to blend the features. Our main results: (i) statistical features such as the range and standard deviation of window intensities can be as effective as more traditional features such as estimates of digital gradients; (ii) blending functions that are roughly concave near the origin of feature space can provide visually better edge images than traditional choices such as the city-block and Euclidean norms; (iii) geometric considerations can be used to specify the parameters of generalized logistic functions and Takagi-Sugeno input-output systems that yield a rich variety of edge images; and (iv) understanding the geometry of the feature extraction and blending functions is the key to using models based on computational learning algorithms, such as neural networks and fuzzy systems, for edge detection. Edge images derived from a digitized mammogram are given to illustrate various facets of our approach.

Keywords: Edge detection, Fuzzy systems, Logistic functions, Mammography, Model-based training, Takagi-Sugeno model
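As a rough illustration of the four-step decomposition, the sketch below computes an edge image from the window range and standard deviation features of result (i); the particular conditioning filter, blending function, and scaling used here are our own stand-ins, not the paper's.

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

    def edge_image(img, w=3):
        img = img.astype(float)
        # 1. Conditioning: light mean-filter smoothing to suppress noise.
        smooth = uniform_filter(img, size=w)
        # 2. Feature extraction: range and standard deviation of window intensities.
        f_range = maximum_filter(smooth, w) - minimum_filter(smooth, w)
        mean = uniform_filter(smooth, w)
        f_std = np.sqrt(np.maximum(uniform_filter(smooth ** 2, w) - mean ** 2, 0))
        # 3. Blending: a saturating function, roughly concave near the origin
        #    of feature space (a stand-in for the paper's blending functions).
        blended = 1.0 - np.exp(-(f_range + f_std))
        # 4. Scaling: map the blended response to [0, 255] for display.
        lo, hi = blended.min(), blended.max()
        return (255 * (blended - lo) / (hi - lo + 1e-12)).astype(np.uint8)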


ADAPTIVE APPROXIMATION NETWORKS FOR STABLE LEARNING AND CONTROL

Jean-Jacques E. Slotine
Nonlinear Systems Laboratory
MIT 3-449
Cambridge MA 02139 USA

Tel (617) 253-0490 Fax (617) 258-5802 e-mail jjs@mit.edu

ABSTRACT
Real-time estimation and adaptive control using "neural" networks presents specific challenges and opportunities. Intuitively, because the estimated model is used in closed-loop {\it at the same time as it is being built}, the main difficulty is to guarantee and quantify the overall stability and convergence of the three concurrent processes of structural adaptation (basis function selection), coefficient (weight) adaptation, and actual control or estimation. The main opportunity is that learning performance is specified in terms of task convergence rather than global function approximation, so that stable real-time algorithms and representations can be derived that, in a sense, are just complex enough to get the job done. Specifically, we study an algorithm for stable real-time estimation and control using on-line construction of a multiresolution dynamic model. We illustrate the discussion experimentally on robotic catching and throwing tasks.
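The flavor of the coefficient-adaptation process can be conveyed by a standard Lyapunov-based adaptive-control sketch; a fixed radial-basis grid stands in for the talk's on-line multiresolution construction, and the plant, gains, and reference trajectory below are invented for illustration.

    import numpy as np

    # Scalar plant x_dot = f(x) + u, with f unknown to the controller.
    f_true = lambda x: np.sin(3 * x) + 0.5 * x

    # Fixed grid of Gaussian radial basis functions (the talk's algorithm
    # would instead add and remove basis functions on line).
    centers = np.linspace(-2, 2, 25)
    phi = lambda x: np.exp(-((x - centers) ** 2) / (2 * 0.2 ** 2))

    k, gamma, dt = 5.0, 50.0, 1e-3       # feedback gain, adaptation gain, time step
    x, W = 0.0, np.zeros_like(centers)

    for step in range(int(20 / dt)):
        t = step * dt
        xd, xd_dot = np.sin(t), np.cos(t)    # reference trajectory to track
        e = x - xd
        u = xd_dot - W @ phi(x) - k * e      # certainty-equivalent control law
        W = W + dt * gamma * e * phi(x)      # Lyapunov-based weight adaptation
        x = x + dt * (f_true(x) + u)         # Euler simulation of the plant

    print(f"final tracking error: {abs(x - np.sin(20.0)):.4f}")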



FROM NEUROCONTROL TO BRAIN-LIKE INTELLIGENCE

Paul Werbos
Room 675, National Science Foundation
Arlington, VA 22230
Tel (703) 306-1339 Fax (703) 306-0305 e-mail pwerbos@nsf.gov

ABSTRACT
Formally, the ENTIRE brain is a neurocontroller -- a learning-based system of neural nets designed to output actions or decisions, to achieve results over time. But what kind of neurocontroller is it, and how do we replicate its capabilities? In 1981, I published a first-order theory of the brain as a neurocontroller, in a design combining reinforcement learning, expectations and backpropagation. As of 1995, applied neurocontrol has "climbed up the ladder" of designs high enough to implement that theory, and demonstrate its superior capabilities on simulated control problems; a physical demonstration is well underway, and a couple of stability theorems have been proved. This talk will review this progress, and then describe a more complete theory of brain-like intelligence -- "three brains in one" -- which addresses issues such as generalized spatial navigation, planning, discrete choice and the role of the basal ganglia, with a few related simulation results.



REINFORCEMENT LEARNING BY AN ECONOMY OF AGENTS

Eric Baum
NEC Research Institute
4 Independence Way, Princeton, NJ 08540

Tel (609) 951-2712 Fax (609) 951-2482 e-mail eric@research.nj.nec.com

ABSTRACT

To learn to deal with truly complex environments -- for example, to understand how human-like mental capabilities are possible -- we must model how massive computational tasks can be autonomously broken down into smaller components. A critical question is how each subagent can be given the right incentive during learning and performance. A strategy is proposed that uses property rights to correctly motivate each agent. A realization of this strategy called "The Hayek Machine" learns by reinforcement to interact profitably with a complex world. Hayek has been tested on a simulated Blocks World (BW) planning problem and found to solve far more complex BW problems than any previous learning algorithm. Given intermediate reward and simple features, it learns to solve arbitrary BW problems.

Hayek maintains an economy of agents. It is argued that, except for an interesting "cherry-picking" phenomenon, a new agent can profitably enter Hayek's economy if and only if it improves the performance of the system as a whole. Hence Hayek learns by the entry and death of agents, performing a hill-climbing search in the space of agent collections. By keeping exactly those agents that are profitable, Hayek aims to dynamically address the critical "curse of dimensionality." Many agents act in concert to solve problems.

We also discuss ongoing work on expanding the Hayek model to incorporate powerful agents capable of complex computation and of such actions as creating new agents. This applies the Hayek approach recursively to the problem of suggesting new agents for Hayek. In our view, previous metalearning efforts were plagued by the difficulty of correctly specifying the incentives for meta-agents; Hayek's economic model clarifies these issues.
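A toy sketch of such an economy of agents follows. The world, bidding rule, reward, and entry/exit policy are illustrative inventions, not the actual Hayek Machine; in the spirit of the abstract, learning happens only through the entry of new agents and the bankruptcy of unprofitable ones.

    import random

    class Agent:
        def __init__(self):
            self.bid = random.uniform(0, 1)        # price it pays for control
            self.action = random.choice([-1, +1])  # its one-step policy
            self.wealth = 1.0

    def run_economy(steps=5000):
        agents, owner, pos = [Agent() for _ in range(10)], None, 0
        for _ in range(steps):
            bidder = max(agents, key=lambda a: a.bid)   # control goes to the top bid
            if owner is not None:
                owner.wealth += bidder.bid              # seller collects the bid
            bidder.wealth -= bidder.bid
            pos = max(-5, min(5, pos + bidder.action))  # act on a toy 1-D world
            bidder.wealth += 1.0 if pos == 5 else 0.0   # world pays the new owner
            owner = bidder
            # exit and entry: bankrupt agents die; newcomers occasionally enter
            agents = [a for a in agents if a.wealth > 0] or [Agent()]
            if random.random() < 0.05:
                agents.append(Agent())
        return agents

    survivors = run_economy()
    print(len(survivors), max(a.wealth for a in survivors))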



EXPLORATION OF VERY LARGE DATABASES BY SELF-ORGANIZING MAPS

Teuvo Kohonen
Helsinki University of Technology
Neural Networks Research Centre
Rakentajanaukio 2C, FIN-02150 Espoo, Finland

Tel +358-0-451 3268 Fax +358-0-451 3277 e-mail teuvo.kohonen@hut.fi

ABSTRACT
Exploratory data analysis, or "data mining," is a new area in neural-network research. The main problem there is the vast dimensionality of the data. Neurocomputers have high computing speed, but their local memory capacities are still rather limited for such tasks. Because of this restriction, really big problems such as the organization of very large text collections must still be handled on general-purpose computers, and effective computational shortcuts are then badly needed. The talk first discusses data mining from a general point of view. It then concentrates on a case example: an architecture, with several computational solutions, in which two cascaded Self-Organizing Maps of very high dimensionality are used to cluster documents according to their semantic content. This architecture facilitates the retrieval of documents that are semantically most similar or relevant to a given piece of text. Using this system one can also specify a personalized mailbox into which documents belonging to some defined semantic cluster are automatically directed. In the summer of 1996 the document map had 49,152 nodes, and the total number of documents mapped onto these nodes was 306,350. The most semantically similar documents were mapped onto the same node, and when moving to other nodes on the map, the topic area changed gradually.
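For orientation, a minimal self-organizing map in plain NumPy is sketched below; the grid size, neighborhood schedule, and learning rate are illustrative choices and bear no relation to the very-high-dimensionality, cascaded architecture of the talk.

    import numpy as np

    def train_som(data, grid=(10, 10), iters=5000, seed=0):
        rng = np.random.default_rng(seed)
        rows, cols = grid
        W = rng.normal(size=(rows * cols, data.shape[1]))       # node codebooks
        yx = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
        for t in range(iters):
            x = data[rng.integers(len(data))]                   # random sample
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))         # best-matching unit
            sigma = 3.0 * np.exp(-t / (iters / 4))              # shrinking neighborhood
            h = np.exp(-((yx - yx[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            lr = 0.5 * np.exp(-t / (iters / 2))                 # decaying learning rate
            W += lr * h[:, None] * (x - W)                      # pull nodes toward x
        return W

    docs = np.random.default_rng(1).random((200, 50))   # stand-in document vectors
    codebooks = train_som(docs)
    # Documents whose vectors map to the same node are most similar; moving to
    # neighboring nodes on the grid changes the topic gradually, as described above.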


FUNCTIONAL VOLUME MODELS: SYSTEM LEVEL MODELS FOR FUNCTIONAL NEUROIMAGING

Peter Fox
Research Imaging Center
University of Texas Health Sciences Center
7703 Floyd Curl Dr, San Antonio, TX 78284-6240

Tel (210) 567-8100 Fax (210) 567-8152 email fox@uthscsa.edu

ABSTRACT

Human functional brain mapping observations are most often reported as 3-D coordinates representing the centers of mass of activation sites in grand-mean (multi-subject) images. Functional volume modeling (FVM) derives 3-D spatial probability distributions from meta-analysis of reported activation coordinates. FVM models are modular and can be assembled to form predictions of brain activation patterns during any task, based on an analysis of the cognitive subcomponents (elementary operations) recruited by that task.
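As a schematic of the idea only, the sketch below caricatures an FVM as a single 3-D Gaussian fitted to pooled activation coordinates; the coordinates and the single-Gaussian assumption are ours, for illustration.

    import numpy as np

    def functional_volume(coords):
        """Fit a 3-D Gaussian spatial probability distribution to reported
        activation-site coordinates (rows of x, y, z)."""
        coords = np.asarray(coords, float)
        return coords.mean(axis=0), np.cov(coords.T)

    def density(p, mu, cov):
        d = p - mu
        return (np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
                / np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov)))

    # Invented example: pooled coordinates from several studies of one
    # elementary operation, then the model's density at a new location.
    reported = [[-42, -26, 49], [-40, -22, 51], [-44, -28, 47], [-38, -24, 50]]
    mu, cov = functional_volume(reported)
    print(density(np.array([-41, -25, 49]), mu, cov))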



STRUCTURE AND DYNAMICS OF NETWORK MEMORY

Joaquin Fuster
Department of Psychiatry
UCLA Medical Center,
Los Angeles, CA 90024

Tel (310) 825-0247 Fax: (310) 825-6792 e-mail: joaquinf@ucla.edu

ABSTRACT

Memory and knowledge are represented in widely distributed and hierarchically organized networks of interconnected neocortical neurons. These networks transcend cytoarchitecturally defined areas and modules. Perceptual memory is organized in networks of postrolandic cortex; motor (action) memory, in prerolandic cortex. The prefrontal cortex is the highest hierarchical level of motor memory. The retrieval of memory -- or knowledge -- in recall and recognition, as well as its recall in "working memory," consists of the associative activation of preestablished neuronal networks. An essential mechanism of active memory is probably the sustained reentry of neural impulses within a network.



THE DEEP AND SURFACE STRUCTURE OF MEMORY

Karl H. Pribram
Brain Center
Radford University
423 Russell Hall
Radford, VA 24142

Tel (540) 831-6108 Fax (540) 831-6630 e-mail: kpribram@runet.edu

ABSTRACT

Memory loss due to brain injury ordinarily encompasses a category of processing: prosopagnosia (inability to recognize faces); tactile agnosia; aphasia (inability to speak); and so forth. But the category can be narrowly restricted -- for instance, to living versus non-living items, or to unfamiliar perspectives on familiar objects. Furthermore, whenever we wish to recall something or other, we find it useful to employ a very specific trigger that provides entry into the retrieval structure. Still, specific memories (engrams) are rarely "lost" due to brain injury. This has given rise to the view that ultimately, storage of experience in the brain is distributed. What kind of brain process can account for both the specificity of memory and its distribution?

I will conceive of the organization of memory storage as resembling somewhat the organization proposed by Chomsky (1965) for language: memory has a deep and a surface structure. The deep structure of memory is distributed in the connection web of brain tissue; its surface structure is encompassed in specific circuits, which are dispositions toward patterned propagation of signals, preformed genetically and/or on the basis of experience. Retrieval entails a process whereby brain circuitry addresses the distributed store. Smolensky (1986) has captured the formal essence of the process that characterizes retrieval, the surface structure of memory: "The dynamical system [embodied in the function of a circuit] [moves] towards a point attractor [a trigger] whose position in the state space [the distributed store] is the memory. You naturally get dynamics of the system so that its attractors are located where the memories are supposed to be ..." (pp. 194-281). In short, the process of remembering operates on a dis-membered store by initiating a temporary dominant focus of excitation in the dendritic net.

Smolensky's suggestion is made more plausible if the "location" of attractors is content-determined; that is, if the process is essentially content-addressable -- by a similarity-matching procedure -- rather than location-addressable.
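Smolensky's picture can be made concrete with a minimal Hopfield-style sketch (our illustration, not a model from the talk): stored patterns are point attractors of a dynamical system, and a corrupted cue relaxes to the nearest stored memory by content, with no address ever consulted.

    import numpy as np

    def recall(memories, probe, steps=20):
        M = np.array(memories, float)       # rows are stored +/-1 patterns
        W = M.T @ M / len(M)                # Hebbian outer-product weights
        np.fill_diagonal(W, 0)
        s = np.array(probe, float)
        for _ in range(steps):
            s = np.sign(W @ s)              # relax toward a point attractor
            s[s == 0] = 1
        return s

    mems = [[1, 1, 1, -1, -1, -1], [1, -1, 1, -1, 1, -1]]
    cue = [1, 1, 1, -1, -1, 1]              # corrupted fragment of the first memory
    print(recall(mems, cue))                # recovers [ 1.  1.  1. -1. -1. -1.]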



Plenary/Special Sessions Chair
Prof. Jacek M. Zurada
University of Louisville
Dept. of Electrical Engineering
Louisville KY 40292 USA

Phone: (502) 852-6314
Fax: (502) 852-6807
Email: jmzura02@starbase.spd.louisville.edu



Web Site Author: Mary Lou Padgett (m.padgett@ieee.org)
URL: http://www.mindspring.com/~pci-inc/ICNN97/plenary.htm
(Last Modified: 30-Apr-1997)