Hugendubel.info - The B2B online bookshop


Learning Automata and Their Applications to Intelligent Systems

E-Book, EPUB (Adobe DRM)
272 pages
English
John Wiley & Sons, published 11.01.2024, 1st edition
Comprehensive guide on learning automata, introducing two variants to accelerate convergence and computational update speed
Learning Automata and Their Applications to Intelligent Systems provides a comprehensive guide to learning automata from the perspective of principles, algorithms, improvement directions, and applications. The text introduces two variants that accelerate the convergence speed and the computational update speed, respectively; these two examples demonstrate how to design new learning automata for a specific field, from the standpoint of algorithm design, so as to exploit the full advantage of learning automata.
As noisy optimization problems exist widely in various intelligent systems, this book elaborates on how to employ learning automata to solve noisy optimization problems from the perspective of algorithm design and application.
The existing and most representative applications of learning automata include classification, clustering, games, knapsack problems, networks, optimization, ranking, and scheduling; these are discussed in detail. Future research directions to promote intelligent systems are also suggested.
Written by two highly qualified academics with significant experience in the field, Learning Automata and Their Applications to Intelligent Systems covers topics such as:
- Mathematical analysis of the behavior of learning automata, along with suitable learning algorithms
- Two application-oriented learning automata: one to discover and track spatiotemporal event patterns, and the other to solve stochastic searching on a line
- Demonstrations of two pioneering variants of Optimal Computing Budget Allocation (OCBA) methods and how to combine learning automata with ordinal optimization
- How to achieve significantly faster convergence and higher accuracy than classical pursuit schemes via lower computational complexity of updating the state probability

A timely text in a rapidly developing field, Learning Automata and Their Applications to Intelligent Systems is an essential resource for researchers in machine learning, engineering, and operations and management. The book is also highly suitable for graduate-level courses on machine learning, soft computing, reinforcement learning, and stochastic optimization.


JunQi Zhang, PhD, is a Full Professor at Tongji University in Shanghai. He has published 10+ papers in IEEE Transactions and 30+ papers in conferences. His current research interests include learning automata, swarm intelligence, swarm robots, multi-agent systems, reinforcement learning, and big data. MengChu Zhou, PhD, is a Distinguished Professor at New Jersey Institute of Technology. He has over 1100 publications, including 14 books, 750+ journal papers (600+ in IEEE transactions), 31 patents, and 32 book chapters. He is a Fellow of IEEE, IFAC, AAAS, CAA, and NAI.
Available formats
Book, hardcover: EUR 140.50
E-Book, PDF (Adobe DRM / Adobe Ebook Reader): EUR 111.99
E-Book, EPUB (Adobe DRM): EUR 111.99

Product

Details
Additional ISBN/GTIN: 9781394188529
Product type: E-book
Binding: E-book
Format: EPUB
Format note: Adobe DRM / EPUB
Layout: reflowable (automatic page breaks)
Year of publication: 2024
Publication date: 11.01.2024
Edition: 1st edition
Pages: 272
Language: English
File size: 20248 KB
Item no.: 12814365

Contents/Reviews

Reading sample

1
Introduction

It is an expensive process to rank the alternatives and select the best one from among many complex discrete-event dynamic systems that are computationally intensive to simulate. Therefore, learning the optimal system, action, alternative, candidate, or design is a classical problem with many applications in the areas of intelligent system design, statistics and stochastic simulation, machine learning, and artificial intelligence.

Stochastic ranking and selection of given system designs can be treated as a simulation optimization problem. Its solution requires statistical procedures that can select the design with the highest mean performance from a finite set of systems, alternatives, candidates, or designs. Their mean values are unknown and can be estimated only by statistical sampling, because the reward from the environment is stochastic rather than deterministic owing to unknown noise. It is a classical problem in the areas of statistics and stochastic simulation.

Noise comes mainly from three kinds of uncertainty [1]. (i) Environmental uncertainties: operating temperature, pressure, humidity, changing material properties, drift, and so on are a common kind of uncertainty. (ii) Design parameter uncertainties: the design parameters of a product can be realized only to a certain degree of accuracy, because high-precision machinery is expensive. (iii) Evaluation uncertainties: uncertainty arises in evaluating the system output and the system performance, including measuring errors and all kinds of approximation errors when models are used instead of the real physical objects.
1.1 Ranking and Selection in Noisy Optimization

Real-world optimization problems are often subject to uncertainty, as variables can be affected by imprecise measurements or simply corrupted by other factors such as communication errors. In either case, uncertainty is an inherent characteristic of many such problems and therefore needs to be considered when tailoring metaheuristics to find good solutions. Noise is a class of uncertainty that corrupts the objective values of solutions at each evaluation [2]. Noise has been shown to significantly deteriorate the performance of metaheuristics such as Particle Swarm Optimization (PSO), Genetic Algorithms (GA), Evolution Strategies (ES), and Differential Evolution (DE), among others.

Uncertainty has to be taken into account in many real-world optimization problems. For example, in the engineering field, the signal returned from the real world usually includes a significant amount of noise due to measurement error or various uncertainties. It is usually modeled as sampling noise from a Gaussian distribution [4]. Therefore, the noise can be characterized by its standard deviation and classified into additive and multiplicative types. Its impact on the objective function can be expressed as:

(1.1)  F_add(x) = f(x) + n,        n ~ N(0, sigma^2)

(1.2)  F_mul(x) = f(x) * (1 + n),  n ~ N(0, sigma^2)

where f(x) represents the real fitness value of solution x, and F_add(x) and F_mul(x) are the illusory fitness values of x in additive and multiplicative noisy environments, respectively. It is worth noting that the two models are identical in the environment where f(x) = 1. Multiplicative noise is much more challenging, since it can bring a much larger disturbance to fitness values than additive noise.
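The two noise models can be sketched in Python. This is a minimal illustration, assuming the multiplicative form f(x) * (1 + n) given above; the function names are the sketch's own:

```python
import random

def additive_noise(f, sigma):
    """Wrap a fitness function with additive Gaussian noise: F(x) = f(x) + n."""
    return lambda x: f(x) + random.gauss(0.0, sigma)

def multiplicative_noise(f, sigma):
    """Wrap a fitness function with multiplicative noise: F(x) = f(x) * (1 + n)."""
    return lambda x: f(x) * (1.0 + random.gauss(0.0, sigma))

# Sphere function as the true fitness (as in Figure 1.1).
def sphere(x):
    return sum(xi * xi for xi in x)

random.seed(0)
x = [2.0, -1.0]
print(sphere(x))                             # true fitness: 5.0
print(additive_noise(sphere, 0.5)(x))        # disturbance independent of f(x)
print(multiplicative_noise(sphere, 0.5)(x))  # disturbance scales with f(x)
```

Note that the multiplicative disturbance grows with the magnitude of f(x), which is exactly why it is the harder case.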

Figure 1.1 3-D map of a Sphere function with additive and multiplicative noise. (a) True function. (b) Corrupted by additive noise. (c) Corrupted by multiplicative noise.

Figure 1.2 3-D map of a Different Powers function with additive and multiplicative noise. (a) True function. (b) Corrupted by additive noise. (c) Corrupted by multiplicative noise.

Therefore, if the problem is subject to noise, the quality of the solutions deteriorates significantly. The basic resampling method re-evaluates a candidate solution many times to estimate its fitness. The efficiency of resampling thus determines the accuracy of ranking and selection of the elite solutions, which is critical to intelligent optimization methods as they evolve toward the optimal solutions. Yet resampling is computationally expensive. Given a fixed and limited computational budget of function evaluations or environmental interactions, better resampling methods estimate the objective values of the solutions with fewer evaluations. Intelligently allocating the resampling budget among candidate solutions can save many evaluations and improve the estimation of solution fitness.
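Basic resampling as described above amounts to averaging repeated noisy evaluations. A minimal sketch, in which `noisy_sphere` and the budget values are illustrative assumptions:

```python
import random
import statistics

def resample_fitness(noisy_f, x, budget):
    """Estimate the true fitness of x by averaging `budget` noisy re-evaluations."""
    return statistics.fmean(noisy_f(x) for _ in range(budget))

# Hypothetical noisy objective: true value (Sphere) plus Gaussian noise.
def noisy_sphere(x, sigma=1.0):
    return sum(xi * xi for xi in x) + random.gauss(0.0, sigma)

random.seed(1)
rough = resample_fitness(noisy_sphere, [1.0, 1.0], budget=5)      # high-variance estimate
careful = resample_fitness(noisy_sphere, [1.0, 1.0], budget=500)  # low-variance estimate
print(rough, careful)  # the larger budget typically lands closer to the true value 2.0
```

The standard error of the estimate shrinks as 1/sqrt(budget), which is why naive resampling quickly exhausts a fixed evaluation budget and motivates the intelligent allocation discussed next.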
1.2 Learning Automata and Ordinal Optimization

Ranking and selection procedures were developed in the 1950s for statistical selection problems such as choosing the best treatment for a medical condition [6]. The problem of selecting the best among a finite set of alternatives needs an efficient learning scheme given a noisy environment and limited simulation budget, where the best is defined with respect to the highest mean performance, and where the performance is uncertain but may be estimated via simulation. Approaches presented in the literature include frequentist statistics, Bayesian statistics, heuristics [7], and asymptotic convergence in probability [6]. This book focuses on learning automata and ordinal optimization methods to solve stochastic ranking and selection problems.

Investigation of the Learning Automaton (LA) began in the erstwhile Soviet Union with the work of Tsetlin [8, 9], and the model was popularized as the LA in a 1974 survey paper [10]. These early models were referred to as deterministic and stochastic automata operating in random environments. Systems built with LA have been successfully employed in many difficult learning situations, and reinforcement learning represents a development closely related to the work on LA over the years. This has also led to the concept of the LA being generalized in a number of directions in order to handle various learning problems.

An LA is an important tool in the area of reinforcement learning. It aims to learn, from a set of allowable actions, the optimal action that maximizes the probability of being rewarded, through interaction with a random environment. During a cycle, the automaton chooses an action and then receives from the environment a stochastic response that can be either a reward or a penalty. The action probability vector for choosing the next action is then updated using this response. The ability to learn how to choose the optimal action endows the LA with high adaptability to the environment. Various LAs and their applications have been reviewed in survey papers [10, 11] and books [12-15].
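As one concrete instance of this choose-act-update cycle, here is a minimal sketch of the classical linear reward-inaction (L_RI) scheme. The environment below, a fixed vector of reward probabilities, and all parameter values are assumptions of the sketch, not this book's specific algorithms:

```python
import random

def l_ri(reward_probs, learning_rate=0.01, cycles=20000, seed=0):
    """Linear reward-inaction (L_RI) automaton in a stationary random environment.

    reward_probs[i] is the environment's probability of rewarding action i
    (unknown to the automaton). The action probability vector p is updated
    only on a reward; a penalty leaves it unchanged, hence 'reward-inaction'.
    """
    rng = random.Random(seed)
    r = len(reward_probs)
    p = [1.0 / r] * r  # start with all actions equally likely
    for _ in range(cycles):
        # Choose an action according to the current probability vector.
        a = rng.choices(range(r), weights=p)[0]
        if rng.random() < reward_probs[a]:  # environment rewards the action
            # Move probability mass toward the rewarded action:
            # p_a <- p_a + lr * (1 - p_a);  p_j <- (1 - lr) * p_j for j != a.
            p = [(1 - learning_rate) * pj for pj in p]
            p[a] += learning_rate
    return p

# Hypothetical environment: action 1 is rewarded most often.
p = l_ri([0.2, 0.8, 0.5])
print(p)  # the probability of action 1 typically comes to dominate
```

A smaller learning rate makes convergence slower but makes absorption into a suboptimal action less likely, which mirrors the accuracy/speed trade-off the book's variants are designed to improve.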

Ordinal Optimization (OO) was introduced by Ho et al. [16]. Two basic ideas underlie it: (i) estimating the order among solutions is much easier than estimating the absolute objective value of each solution; (ii) softening the optimization goal and accepting good-enough solutions leads to an exponential reduction in computational burden. Optimal Computing Budget Allocation (OCBA) [17-22] is a well-known approach that uses an average-case analysis rather than a worst-case indifference-zone bound. It attempts to sequentially maximize the probability that the best alternative is correctly identified after the next stage of sampling. Its procedure tends to require much less sampling effort to achieve the same or better empirical performance for correct selection than statistically more conservative procedures.
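The classical OCBA allocation rule can be sketched as follows. The function name and the example means and standard deviations are assumptions of the sketch, ties in sample means are not handled, and a real procedure would re-apply the rule sequentially as new samples arrive:

```python
import math

def ocba_ratios(means, stds):
    """Compute OCBA budget-allocation ratios for k designs (maximization).

    Classical OCBA rule: with best design b and delta_i = mu_b - mu_i,
      N_i / N_j = (sigma_i / delta_i)**2 / (sigma_j / delta_j)**2  for i, j != b,
      N_b = sigma_b * sqrt(sum over i != b of (N_i / sigma_i)**2).
    Returns the ratios normalized to sum to 1. Assumes distinct sample means.
    """
    k = len(means)
    b = max(range(k), key=lambda i: means[i])  # current best design
    alloc = [0.0] * k
    for i in range(k):
        if i != b:
            delta = means[b] - means[i]
            alloc[i] = (stds[i] / delta) ** 2
    alloc[b] = stds[b] * math.sqrt(sum((alloc[i] / stds[i]) ** 2
                                       for i in range(k) if i != b))
    total = sum(alloc)
    return [a / total for a in alloc]

# Three hypothetical designs: design 0 looks best, design 1 is a close competitor,
# design 2 is clearly inferior, so it receives almost no further budget.
print(ocba_ratios(means=[10.0, 9.5, 7.0], stds=[1.0, 1.0, 1.0]))
```

The rule concentrates simulation effort on the best design and its close, noisy competitors, which is precisely the "estimate order, not value" idea of ordinal optimization.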

The next example shows why softening the optimization goal reduces the computational burden.
1.3 Exercises
What is the objective of stochastic ranking and selection?
What are the difficulties caused by noise in solving optimization problems?
What are the advantages and constraints of resampling in noisy optimization?
What is the objective of a learning automaton?
What are the basic ideas of ordinal optimization?
For additive and multiplicative noise, which one is more difficult to handle and why?
In Example 1.5, how can the tallest student be found without accurate height data?
References
1 H.-G. Beyer and B. Sendhoff, Robust optimization-a comprehensive survey, Computer Methods in Applied Mechanics and Engineering, vol. 196, no. 33-34, pp. 3190-3218, 2007.
2 J. Rada-Vilela, Population statistics for particle swarm optimization on problems subject to noise, Ph.D. dissertation, 2014.
3 A. Di Pietro, L. While, and L. Barone, Applying evolutionary algorithms to problems with noisy, time-consuming fitness functions, in Proceedings of Congress on Evolutionary Computation, vol. 2. IEEE, 2004, pp. 1254-1261.
4 Y. Jin and J. Branke, Evolutionary optimization in uncertain environments-a survey, IEEE...
