Hugendubel.info - The B2B Online Bookstore

E-Book · PDF · Adobe DRM
416 pages
English
Elsevier Science & Techn., published 12.05.2014
Connectionist Models contains the proceedings of the 1990 Connectionist Models Summer School held at the University of California, San Diego. The summer school provided a forum for students and faculty to assess the state of the art in connectionist modeling. Topics range from theoretical analyses of networks to empirical investigations of learning algorithms, and extend to speech and image processing, cognitive psychology, computational neuroscience, and VLSI design.
Comprising 40 chapters, the book begins with an introduction to mean field, Boltzmann, and Hopfield networks, focusing on deterministic Boltzmann learning in networks with asymmetric connectivity; contrastive Hebbian learning in the continuous Hopfield model; and energy minimization and the satisfiability of propositional logic. Mean field networks that learn to discriminate temporally distorted strings are also described. Subsequent sections are devoted to reinforcement learning and genetic learning, along with temporal processing and modularity. Cognitive modeling and symbol processing as well as VLSI implementation are also discussed.
This monograph will be of interest to both students and academicians concerned with connectionist modeling.

Product

Details
Additional ISBN/GTIN: 9781483214481
Product type: E-Book
Binding: E-Book
Format: PDF
Format note: Adobe DRM
Year of publication: 2014
Publication date: 12.05.2014
Pages: 416
Language: English
Article no.: 3162074
Categories
Genre: 9200

Contents/Reviews

Table of Contents
1;Front Cover;1
2;Connectionist Models;2
3;Copyright Page;3
4;Table of Contents;4
5;Foreword;8
6;Participants in the 1990 Connectionist Models Summer School;10
7;List Of Accepted Students;11
8;Part I: Mean Field, Boltzmann, and Hopfield Networks;14
8.1;Chapter 1. Deterministic Boltzmann Learning in Networks with Asymmetric Connectivity;16
8.1.1;Abstract;16
8.1.2;1 INTRODUCTION;16
8.1.3;2 DETERMINISTIC BOLTZMANN LEARNING IN SYMMETRIC NETWORKS;16
8.1.4;3 ASYMMETRIC NETWORKS;17
8.1.5;4 SIMULATION RESULTS;19
8.1.6;5 DISCUSSION;21
8.1.7;Acknowledgement;21
8.1.8;References;21
8.1.9;APPENDIX;21
8.2;Chapter 2. Contrastive Hebbian Learning in the Continuous Hopfield Model;23
8.2.1;Abstract;23
8.2.2;1 INTRODUCTION;23
8.2.3;2 STABILITY OF ACTIVATIONS;24
8.2.4;3 CONTRASTIVE LEARNING;24
8.2.5;4 DISCUSSION;25
8.2.6;5 APPENDIX;27
8.2.7;Acknowledgements;30
8.2.8;References;30
8.3;Chapter 3. Mean field networks that learn to discriminate temporally distorted strings;31
8.3.1;Abstract;31
8.3.2;INTRODUCTION;31
8.3.3;PREVIOUS APPROACHES USING NEURAL NETS;32
8.3.4;THE LEARNING PROCEDURE FOR THE MEAN FIELD MODULES;32
8.3.5;THE TASK USED IN THE SIMULATIONS;33
8.3.6;RESULTS AND DISCUSSION;34
8.3.7;Acknowledgements;34
8.3.8;References;35
8.4;Chapter 4. Energy Minimization and the Satisfiability of Propositional Logic;36
8.4.1;Abstract;36
8.4.2;1 Introduction;36
8.4.3;2 Satisfiability and models of propositional formulas;37
8.4.4;3 Equivalence between WFFs;37
8.4.5;4 Conversion of a WFF into Conjunction of Triples Form (CTF);37
8.4.6;5 Energy functions;38
8.4.7;6 The equivalence between high order models and low order models;39
8.4.8;7 Describing WFFs by energy functions;40
8.4.9;8 The penalty function;40
8.4.10;9 Mapping from a satisfiability problem to a minimization problem and vice versa;41
8.4.11;10 Summary, applications and conclusions;42
8.4.12;Acknowledgments;43
8.4.13;References;43
9;Part II: Reinforcement Learning;46
9.1;Chapter 5. On the Computational Economics of Reinforcement Learning;48
9.1.1;Abstract;48
9.1.2;1 INTRODUCTION;48
9.1.3;2 INDIRECT AND DIRECT ADAPTIVE CONTROL;49
9.1.4;3 MARKOV DECISION PROBLEMS;50
9.1.5;4 INDIRECT AND DIRECT LEARNING FOR MARKOV DECISION PROBLEMS;51
9.1.6;5 AN INDIRECT ALGORITHM;51
9.1.7;6 Q-LEARNING;52
9.1.8;7 SIMULATION RESULTS;53
9.1.9;8 DISCUSSION;54
9.1.10;9 CONCLUSION;55
9.1.11;Acknowledgements;55
9.1.12;References;55
9.2;Chapter 6. Reinforcement Comparison;58
9.2.1;Abstract;58
9.2.2;1 INTRODUCTION;58
9.2.3;2 THEORY;58
9.2.4;3 RESULTS;60
9.2.5;4 CONCLUSIONS;61
9.2.6;Acknowledgements;62
9.2.7;References;62
9.3;Chapter 7. Learning Algorithms for Networks with Internal and External Feedback;65
9.3.1;Abstract;65
9.3.2;1 Terminology;65
9.3.3;2 The Neural Bucket Brigade Algorithm;66
9.3.4;3 A Reinforcement Comparison Algorithm for Continually Running Fully Recurrent Probabilistic Networks;67
9.3.5;4 Two Interacting Fully Recurrent Self-Supervised Learning Networks for Reinforcement Learning;68
9.3.6;5 An Example for Learning Dynamic Selective Attention: Adaptive Focus Trajectories for Attentive Vision;71
9.3.7;6 An Adaptive Subgoal Generator for Planning Action Sequences;72
9.3.8;References;73
10;Part III: Genetic Learning;76
10.1;Chapter 8. Exploring Adaptive Agency I: Theory and Methods for Simulating the Evolution of Learning;78
10.1.1;Abstract;78
10.1.2;1 INTRODUCTION;78
10.1.3;2 NATURAL SELECTION AND THE EVOLUTION OF SUBSIDIARY ADAPTIVE PROCESSES;79
10.1.4;3 A BRIEF HISTORY OF LEARNING THEORY IN (COMPARATIVE) PSYCHOLOGY;80
10.1.5;4 HOW ECOLOGICAL LEARNING THEORY CAN INFORM CONNECTIONIST LEARNING THEORY;82
10.1.6;5 TOWARDS A TAXONOMY OF ADAPTIVE FUNCTIONS FOR LEARNING;84
10.1.7;6 A SIMULATION FRAMEWORK FOR EXPLORING ADAPTIVE AGENCY;86
10.1.8;7 A SIMPLE SCENARIO FOR THE EVOLUTION OF UNSUPERVISED LEARNING;88
10.1.9;8 PLANNED EXTENSIONS AND FUTURE RESEARCH;90
10.1.10;Acknowledgements;91
10.1.11;References;91
10.2;Chapter 9. The Evolution of Learning: An Experiment in Genetic Connectionism;94
10.2.1;Abstract;94
10.2.2;1 INTRODUCTION;94
10.2.3;2 EVOLUTION OF LEARNING IN NEURAL NETWORKS;96
10.2.4;3 RESULTS;99
10.2.5;4 DISCUSSION AND FURTHER DIRECTIONS;102
10.2.6;Acknowledgements;103
10.2.7;References;103
10.3;Chapter 10. Evolving Controls for Unstable Systems;104
10.3.1;Abstract;104
10.3.2;1 INTRODUCTION;104
10.3.3;2 NEURAL NETWORKS;107
10.3.4;3 GENETIC ALGORITHM;108
10.3.5;4 POLE BALANCING;109
10.3.6;5 DISCUSSION;112
10.3.7;APPENDIX;113
10.3.8;Acknowledgements;114
10.3.9;References;114
11;Part IV: Temporal Processing;116
11.1;Chapter 11. BACK-PROPAGATION, WEIGHT-ELIMINATION AND TIME SERIES PREDICTION;118
11.1.1;Abstract;118
11.1.2;1 INTRODUCTION;118
11.1.3;2 NETWORKS FOR TIME SERIES PREDICTION;118
11.1.4;3 SUNSPOTS;122
11.1.5;4 SUMMARY;128
11.1.6;Appendix: Parameters of the Network;128
11.1.7;References;129
11.2;Chapter 12. Predicting the Mackey-Glass Timeseries With Cascade-Correlation Learning;130
11.2.1;Abstract;130
11.2.2;1 THE MACKEY-GLASS TIMESERIES;130
11.2.3;2 THE CASCADE-CORRELATION LEARNING ALGORITHM;131
11.2.4;3 BENCHMARK RESULTS;132
11.2.5;4 Conclusions;135
11.2.6;Acknowledgments;135
11.2.7;References;136
11.3;Chapter 13. Learning in Recurrent Finite Difference Networks;137
11.3.1;Abstract;137
11.3.2;1 A FINITE DIFFERENCE ALGORITHM;137
11.3.3;2 SIMULATIONS;138
11.3.4;3 DISTORTED WAVE FORMS WITH THE RTRL ALGORITHM;140
11.3.5;4. DISCUSSION;142
11.3.6;Acknowledgements;143
11.3.7;References;143
11.4;Chapter 14. Temporal Backpropagation: An Efficient Algorithm for Finite Impulse Response Neural Networks;144
11.4.1;Abstract;144
11.4.2;1 INTRODUCTION;144
11.4.3;2 NETWORK STRUCTURE;144
11.4.4;3 TRAINING;146
11.4.5;4 APPLICATIONS;150
11.4.6;5 CONCLUSION;150
11.4.7;References;150
12;Part V: Theory and Analysis;152
12.1;Chapter 15. Optimal Dimensionality Reduction Using Hebbian Learning;154
12.1.1;Abstract;154
12.1.2;1 Introduction;154
12.1.3;2 Statement of The Problem;154
12.1.4;3 Main Results;155
12.1.5;4 Discussion;155
12.1.6;5 Appendix;156
12.1.7;6 References;157
12.2;Chapter 16. Basis-Function Trees for Approximation in High-Dimensional Spaces;158
12.2.1;Abstract;158
12.2.2;1 INTRODUCTION;158
12.2.3;2 NETWORK STRUCTURE;158
12.2.4;3 GROWING THE TREE;160
12.2.5;4 EXAMPLES;160
12.2.6;5 DISCUSSION;162
12.2.7;Acknowledgements;164
12.2.8;References;164
12.3;Chapter 17. Effects of Circuit Parameters on Convergence of Trinary Update Back-Propagation;165
12.3.1;Abstract;165
12.3.2;1. INTRODUCTION;165
12.3.3;2. TRIT ALGORITHM;166
12.3.4;3. EFFECTS OF CIRCUIT LIMITATIONS;167
12.3.5;4. RESULTS;168
12.3.6;5. CONCLUSIONS;169
12.3.7;References;170
12.4;Chapter 18. Equivalence Proofs for Multi-Layer Perceptron Classifiers and the Bayesian Discriminant Function;172
12.4.1;Abstract;172
12.4.2;1 INTRODUCTION;172
12.4.3;2 A GENERAL DESCRIPTION OF THE N-CLASS PROBLEM AND THE BAYESIAN DISCRIMINANT FUNCTION;173
12.4.4;3 REASONABLE ERROR MEASURES: BAYESIAN PERFORMANCE VIA ACCURATE ESTIMATION OF A POSTERIORI PROBABILITIES;173
12.4.5;4 CLASSIFICATION FIGURES OF MERIT: LIMITED BAYESIAN PERFORMANCE WITHOUT EXPLICIT ESTIMATION OF A POSTERIORI PROBABILITIES;181
12.4.6;5 COMMENTS ON THE APPLICABILITY OF THESE PROOFS TO THE STUDY OF GENERALIZATION IN MLP CLASSIFIERS;183
12.4.7;6 SUMMARY;184
12.4.8;Acknowledgments;185
12.4.9;References;185
12.5;Chapter 19. A Local Approach to Optimal Queries;186
12.5.1;Abstract;186
12.5.2;1 Overview;186
12.5.3;2 Concept Learning and Generalization;186
12.5.4;3 Generalization From Queries;187
12.5.5;4 Selective Sampling as Sequential Querying;188
12.5.6;5 Optimal queries;189
12.5.7;6 Conclusion;191
12.5.8;Acknowledgements;192
12.5.9;References;192
13;Part VI: Modularity;194
13.1;Chapter 20. A Modularization Scheme for Feedforward Networks;196
13.1.1;Abstract;196
13.1.2;1 INTRODUCTION;196
13.1.3;2 CONSTRAINING INTERNAL REPRESENTATIONS;197
13.1.4;3 APPLICATIONS;197
13.1.5;4 DISCUSSION;199
13.1.6;A LEARNING PROCEDURE ADJUSTMENTS;199
13.1.7;Acknowledgement;199
13.1.8;References;199
13.2;Chapter 21. A Compositional Connectionist Architecture;201
13.2.1;Abstract;201
13.2.2;1 INTRODUCTION;201
13.2.3;2 CompoNet: A COMPOSITIONAL CONNECTIONIST ARCHITECTURE;201
13.2.4;3 SIMULATION RESULTS;205
13.2.5;4 SUMMARY;210
13.2.6;References;210
14;Part VII: Cognitive Modeling and Symbol Processing;212
14.1;Chapter 22. From Rote Learning to System Building: Acquiring Verb Morphology in Children and Connectionist Nets;214
14.1.1;Abstract;214
14.1.2;1 INTRODUCTION;214
14.1.3;2 METHOD;218
14.1.4;3 RESULTS;221
14.1.5;4 DISCUSSION;226
14.1.6;5 CONCLUSION;230
14.1.7;References;230
14.2;Chapter 23. Parallel Mapping Circuitry in a Phonological Model;233
14.2.1;Abstract;233
14.2.2;1 Introduction;233
14.2.3;2 Sequence Manipulation Via a Change Buffer;233
14.2.4;3 Operation of The Mapping Matrix;235
14.2.5;4 Projections;237
14.2.6;5 Clustering;237
14.2.7;6 M&P: The Big Picture;239
14.2.8;7 Discussion;239
14.2.9;Acknowledgements;240
14.2.10;References;240
14.3;Chapter 24. A Modular Neural Network Model of the Acquisition of Category Names in Children;241
14.3.1;Abstract;241
14.3.2;1 INTRODUCTION;241
14.3.3;2. EXPERIMENT;244
14.3.4;3. RESULTS AND DISCUSSION;245
14.3.5;4. CONCLUSIONS AND FUTURE WORK;247
14.3.6;Acknowledgments;247
14.3.7;References;247
14.4;Chapter 25. A Computational Model of Attentional Requirements in Sequence Learning;249
14.4.1;Abstract;249
14.4.2;1 BEHAVIORAL CHARACTERISTICS OF SEQUENCE LEARNING;249
14.4.3;2 A COMPUTATIONAL MODEL OF SEQUENCE LEARNING;250
14.4.4;3 GENERAL DISCUSSION;253
14.4.5;Acknowledgements;255
14.4.6;References;255
14.5;Chapter 26. Recall of Sequences of Items by a Neural Network;256
14.5.1;Abstract;256
14.5.2;1 INTRODUCTION;256
14.5.3;2 THE SIMULATION;256
14.5.4;3 RESULTS;260
14.5.5;4 DISCUSSION;262
14.5.6;Acknowledgements;265
14.5.7;References;265
14.6;Chapter 27. Binding, Episodic Short-Term Memory, and Selective Attention, Or Why are PDP Models Poor at Symbol Manipulation?;266
14.6.1;Abstract;266
14.6.2;1 INTRODUCTION;266
14.6.3;2 SYMBOL SYSTEMS;267
14.6.4;3 PARALLEL DISTRIBUTED PROCESSING;267
14.6.5;4 HUMAN COGNITION;267
14.6.6;5 THE BINDING PROBLEM OR WHAT GOES WITH WHAT;270
14.6.7;6 PADSYMA, THE NEW MODEL;271
14.6.8;7 PROPERTIES OF THE MODEL;275
14.6.9;8 DISCUSSION;275
14.6.10;Acknowledgements;276
14.6.11;References;276
14.7;Chapter 28. Analogical Retrieval Within a Hybrid Spreading-Activation Network;278
14.7.1;ABSTRACT;278
14.7.2;1. INTRODUCTION;278
14.7.3;2. CONNECTIONIST MODELS;279
14.7.4;3. A HYBRID SPREADING-ACTIVATION MODEL OF DISAMBIGUATION AND RETRIEVAL;280
14.7.5;4. SIMULATION RESULTS;285
14.7.6;5. DISCUSSION;286
14.7.7;ACKNOWLEDGEMENTS;288
14.7.8;REFERENCES;288
14.8;Chapter 29. Appropriate Uses of Hybrid Systems;290
14.8.1;Abstract;290
14.8.2;1 Introduction;290
14.8.3;2 Motivating a Hybrid Solution;291
14.8.4;3 Problems with Hybrids;292
14.8.5;4 The SCALIR System;292
14.8.6;5 Discussion and Conclusions;298
14.8.7;Acknowledgements;298
14.8.8;References;298
14.9;Chapter 30. Cognitive Map Construction and Use: A Parallel Distributed Processing Approach;300
14.9.1;Abstract;300
14.9.2;1 INTRODUCTION;300
14.9.3;2 THE PREDICTIVE MAP;300
14.9.4;3 THE ORIENTING SYSTEM;305
14.9.5;4 THE INVERSE MODEL;307
14.9.6;5 NAVIGATION;309
14.9.7;6 THEORETICAL MOTIVATION;310
14.9.8;7 CONCLUSIONS;312
14.9.9;Acknowledgements;312
14.9.10;References;312
15;Part VIII: Speech and Vision;314
15.1;Chapter 31. UNSUPERVISED DISCOVERY OF SPEECH SEGMENTS USING RECURRENT NETWORKS;316
15.1.1;Abstract;316
15.1.2;References;322
15.2;Chapter 32. Feature Extraction using an Unsupervised Neural Network;323
15.2.1;Abstract;323
15.2.2;1 How to construct optimal unsupervised feature extraction;323
15.2.3;2 Feature Extraction using ANN;324
15.2.4;3 Comparison with other feature extraction methods;326
15.2.5;4 Discussion;329
15.2.6;Acknowledgements;329
15.2.7;References;329
15.2.8;Mathematical Appendix;331
15.3;Chapter 33. Motor Control for Speech Skills: A Connectionist Approach;332
15.3.1;Abstract;332
15.3.2;1 INTRODUCTION;332
15.3.3;2 GENERAL PRINCIPLES;332
15.3.4;3 FROM ACOUSTIC SIGNAL TO ARTICULATORY GESTURES: THE ROLE OF CONSTRAINTS;334
15.3.5;4 GENERATION OF CONTROLLED OSCILLATORS BY SEQUENTIAL NETWORKS;337
15.3.6;5 NON-LINEAR TRANSFORMATION OF A SIMPLE OSCILLATOR TRAJECTORY INTO COMPLEX GESTURES;338
15.3.7;6 COMPLETE MODEL;338
15.3.8;7 CONCLUSION;339
15.3.9;Acknowledgements;339
15.3.10;References;339
15.4;Chapter 34. Extracting features from faces using compression networks: Face, identity, emotion, and gender recognition using holons;341
15.4.1;Abstract;341
15.4.2;1 INTRODUCTION;341
15.4.3;2 COMPRESSION NETWORKS;341
15.4.4;3 FACE RECOGNITION USING COMPRESSION NETWORKS;342
15.4.5;4 GROUNDING MEANING IN PERCEPTION;346
15.4.6;5 CONCLUSIONS;349
15.4.7;References;349
15.5;Chapter 35. The Development of Topography and Ocular Dominance;351
15.5.1;Abstract;351
15.5.2;1 INTRODUCTION;351
15.5.3;2 COMPUTATIONAL MODELS;352
15.5.4;3 THE BINOCULAR NEURAL ACTIVITY MODEL;354
15.5.5;4 THE ELASTIC NET APPROACH;356
15.5.6;5 DISCUSSION;359
15.5.7;6 CONCLUSIONS;359
15.5.8;Acknowledgements;360
15.5.9;References;360
15.6;Chapter 36. On Modeling Some Aspects of Higher Level Vision;363
15.6.1;Abstract;363
15.6.2;1. BACKGROUND VIEWS AND ASSUMPTIONS;363
15.6.3;2. PERCEPTUAL PRIMING;365
15.6.4;3. EXPECTATION AS IMAGINATIVE SELF-PRIMING;366
15.6.5;4. THE IMAGE NORMALIZATION PROBLEM;367
15.6.6;5. IMAGE SEGMENTATION AGAIN;369
15.6.7;Acknowledgements;370
15.6.8;References;370
16;Part IX: Biology;374
16.1;Chapter 37. Modeling cortical area 7a using Stochastic Real-Valued (SRV) units;376
16.1.1;Abstract;376
16.1.2;1 INTRODUCTION;376
16.1.3;2 NETWORK STRUCTURE AND TRAINING;377
16.1.4;3 SIMULATION RESULTS;379
16.1.5;4 DISCUSSION AND CONCLUSIONS;380
16.1.6;Acknowledgements;380
16.1.7;References;381
16.2;Chapter 38. Neuronal signal strength is enhanced by rhythmic firing;382
16.2.1;Abstract;382
16.2.2;1 INTRODUCTION;382
16.2.3;2 SIMULATION OF A CORTICAL PYRAMIDAL NEURON;382
16.2.4;3 COMPETITIVE EFFECTS IN VISUAL ATTENTION;383
16.2.5;4 APPENDIX: SIMULATION;383
16.2.6;Acknowledgements;386
16.2.7;References;387
17;Part X: VLSI Implementation;390
17.1;Chapter 39. An Analog VLSI Neural Network Cocktail Party Processor;392
17.1.1;Abstract;392
17.1.2;1 INTRODUCTION;392
17.1.3;2 BINDING BY PHASE CORRELATION;392
17.1.4;3 THE MALSBURG & SCHNEIDER MODEL;393
17.1.5;4 AN ANALOG VLSI IMPLEMENTATION;395
17.1.6;5 PRESENT STATUS AND FUTURE DIRECTIONS;398
17.1.7;Acknowledgements;398
17.1.8;References;399
17.2;Chapter 40. A VLSI Neural Network with On-Chip Learning;400
17.2.1;Abstract;400
17.2.2;1 THE NETWORK MODEL;400
17.2.3;2 TRAINING;400
17.2.4;3 HARDWARE REALIZATION;402
17.2.5;4 PARALLEL RANDOM NUMBER GENERATION;405
17.2.6;5 CONCLUDING REMARKS;406
17.2.7;Acknowledgements;407
17.2.8;References;407
18;Index;414

Author