Hugendubel.info - The B2B Online Bookshop

Advances in Data Analysis, Data Handling and Business Intelligence

E-book: PDF (PDF watermark)
695 pages
English
Springer Berlin Heidelberg, published 14.10.2009 (2010 edition)
Available formats
Book: paperback (softcover)
EUR 160.49
E-book: PDF (PDF watermark)
EUR 149.79

Product

Blurb
Data Analysis, Data Handling and Business Intelligence are research areas at the intersection of computer science, artificial intelligence, mathematics, and statistics. They cover general methods and techniques that can be applied to a vast range of applications in marketing, finance, economics, engineering, linguistics, archaeology, musicology, medical science, and biology. This volume contains the revised versions of selected papers presented during the 32nd Annual Conference of the German Classification Society (Gesellschaft für Klassifikation, GfKl). The conference, which was organized in cooperation with the British Classification Society (BCS) and the Dutch/Flemish Classification Society (VOC), was hosted by Helmut-Schmidt-University, Hamburg, Germany, in July 2008.
Details
Additional ISBN/GTIN: 9783642010446
Product type: E-book
Binding: E-book
Format: PDF
Format note: 1 - PDF Watermark
Format code: E107
Publication year: 2009
Publication date: 14.10.2009
Edition: 2010
Pages: 695
Language: English
Illustrations: XXVI, 695 p., 172 illus., 68 illus. in color.
Item no.: 1443846
Categories
Genre: 9200

Contents/Reviews

Table of Contents
1;Preface;5
2;Contents;8
3;Contributors;15
4;Part I Invited;25
4.1;Semi-supervised Probabilistic Distance Clustering and the Uncertainty of Classification;26
4.1.1;1 Introduction;26
4.1.1.1;1.1 Clustering;26
4.1.1.2;1.2 Classification;27
4.1.1.3;1.3 Learning;27
4.1.1.4;1.4 Semi-supervised Clustering;28
4.1.1.5;1.5 Matching Labels;28
4.1.1.6;1.6 Plan of This Paper;28
4.1.2;2 Probabilistic Distance Clustering;29
4.1.2.1;2.1 Notation;29
4.1.2.2;2.2 Probabilistic Clustering;29
4.1.2.3;2.3 Probabilistic Distance Clustering;29
4.1.2.4;2.4 Probabilities and the Joint Distance Function;30
4.1.2.5;2.5 The Classification Uncertainty Function;31
4.1.2.6;2.6 An Extremum Problem for the Cluster Probabilities at a Point;32
4.1.2.7;2.7 An Extremum Problem for Clustering the Data Set;32
4.1.2.8;2.8 An Outline of the Probabilistic Distance Clustering Algorithm of Ben-Israel and Iyigun (2008);33
4.1.3;3 Prior Information and Classification;34
4.1.3.1;3.1 Probabilistic Labels;34
4.1.3.2;3.2 An Extremum Problem for Classification;34
4.1.4;4 Semi-supervised Distance Clustering;34
4.1.4.1;4.1 An Extremum Problem for Semi-supervised Clustering;34
4.1.4.2;4.2 Probabilities;35
4.1.4.3;4.3 Cluster Centers;35
4.1.4.4;4.4 Algorithm;36
4.1.5;5 Examples;37
4.1.6;Appendix 1: The Membership Probabilities;39
4.1.7;Appendix 2: The Classification Uncertainty Function;41
4.1.8;References;42
4.2;Strategies of Model Construction for the Analysis of Judgment Data;44
4.2.1;1 Introduction;44
4.2.2;2 Strategies of Model Construction;45
4.2.3;3 Theories of Judgment and Empirical Findings;46
4.2.3.1;3.1 The Problem of Information Weighting;47
4.2.3.2;3.2 Illusory Correlations in Judgments;49
4.2.4;4 Three-Way Two-Mode Models;50
4.2.4.1;4.1 Application;52
4.2.4.1.1;4.1.1 Results;52
4.2.5;5 Conclusions;54
4.2.6;References;54
4.3;Clustering of High-Dimensional Data via Finite Mixture Models;56
4.3.1;1 Introduction;56
4.3.2;2 Definition of Mixture Models;57
4.3.3;3 Choice of Starting Values for the EM Algorithm;58
4.3.4;4 Clustering via Normal Mixtures;59
4.3.5;5 Some Recent Extensions for High-Dimensional Data;60
4.3.6;6 Factor Analysis Model for Dimension Reduction;61
4.3.7;7 Mixtures of Common Factor Analyzers;63
4.3.8;8 Fitting of Factor-Analytic Models;65
4.3.9;References;66
4.4;Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data;68
4.4.1;1 Introduction;68
4.4.2;2 Basic Notation and Definitions;69
4.4.3;3 Cluster Analysis of Binary Data;70
4.4.3.1;3.1 Dissimilarity Measures for Binary Data;70
4.4.4;4 Strategies for Patterns Identification;71
4.4.4.1;4.1 Column-Wise Quantification of Binary Attributes;72
4.4.4.2;4.2 Row-Wise Quantification of Binary Attributes;73
4.4.5;5 Empirical Evidence from a Real Data-Set;75
4.4.6;6 Conclusion;77
4.4.7;References;78
4.5;Kernel Methods for Detecting the Direction of Time Series;79
4.5.1;1 Introduction;79
4.5.2;2 Statistical Methods;81
4.5.2.1;2.1 A Hilbert Space Embedding for Distributions;81
4.5.2.2;2.2 Hilbert Schmidt Independence Criterion;82
4.5.2.3;2.3 Autoregressive Moving Average Models;82
4.5.3;3 Learning the True Time Direction;83
4.5.3.1;3.1 The Classification Method;83
4.5.3.2;3.2 The ARMA Method;84
4.5.4;4 Experiments;85
4.5.5;5 Conclusion and Discussion;86
4.5.6;References;88
4.6;Statistical Processes Under Change: Enhancing Data Quality with Pretests;89
4.6.1;1 Introduction;89
4.6.2;2 The Model of Multiple Sources Mixed-Mode Design;90
4.6.3;3 Qualitative and Quantitative Test Methods for Questionnaires;91
4.6.3.1;3.1 Qualitative Test Methods;92
4.6.3.1.1;3.1.1 Cognitive Interviews;92
4.6.3.1.2;3.1.2 Expert Discussion Groups;95
4.6.3.1.3;3.1.3 Observation;96
4.6.3.2;3.2 Quantitative Test Methods;96
4.6.3.2.1;3.2.1 Behaviour Coding;96
4.6.3.2.2;3.2.2 Interviewer and Interviewee Debriefing;97
4.6.3.2.3;3.2.3 Follow-up Interviews;97
4.6.3.2.4;3.2.4 Experiments;97
4.6.3.2.5;3.2.5 Post-evaluation Methods;98
4.6.4;4 The Pretest Concerning the Change-over to the 2008 Economic Sector Classification;99
4.6.5;References;101
5;Part II Clustering and Classification;102
5.1;Evaluation Strategies for Learning Algorithms of Hierarchies;103
5.1.1;1 Introduction;103
5.1.2;2 Evaluation Strategies in the Literature;105
5.1.3;3 Interdisciplinary Comparison of Evaluation Measures;107
5.1.4;4 Experiments and Conclusion;110
5.1.5;References;112
5.2;Fuzzy Subspace Clustering;113
5.2.1;1 Introduction;113
5.2.2;2 Preliminaries and Notation;114
5.2.3;3 Attribute Weighting;115
5.2.3.1;3.1 Axes-Parallel Gustafson-Kessel Fuzzy Clustering;115
5.2.3.2;3.2 Attribute Weighting Fuzzy Clustering;116
5.2.4;4 Attribute Selection;116
5.2.5;5 Principal Axes Weighting;118
5.2.5.1;5.1 Gustafson-Kessel Fuzzy Clustering;118
5.2.5.2;5.2 Reformulation of Gustafson-Kessel Fuzzy Clustering;119
5.2.6;6 Principal Axes Selection;120
5.2.7;7 Experiments;120
5.2.8;8 Summary;122
5.2.9;References;122
5.3;Motif-Based Classification of Time Series with Bayesian Networks and SVMs;124
5.3.1;1 Introduction;124
5.3.2;2 Related Work;125
5.3.3;3 Discovery of Generalized Semi-Continuous Motifs;127
5.3.4;4 Experimental Evaluation;131
5.3.5;5 Conclusion;132
5.3.6;References;132
5.4;A Novel Approach to Construct Discrete Support Vector Machine Classifiers;134
5.4.1;1 Introduction;134
5.4.2;2 Discrete Support Vector Machines;136
5.4.2.1;2.1 Motivation and Mathematical Formulation;136
5.4.2.2;2.2 Constructing DSVM Classifiers by Integer Programming;138
5.4.3;3 Empirical Evaluation;140
5.4.4;4 Conclusions;143
5.4.5;References;143
5.5;Predictive Classification Trees;145
5.5.1;1 Statement of the Problem;145
5.5.2;2 Factor Selection;146
5.5.2.1;2.1 Factor Reduction;147
5.5.2.2;2.2 Example;147
5.5.3;3 Predictive Measures of Association;148
5.5.4;4 Tree Induction;150
5.5.5;References;152
5.6;Isolated Vertices in Random Intersection Graphs;153
5.6.1;1 Introduction;153
5.6.2;2 Definitions and Main Results;154
5.6.3;3 Preliminaries;156
5.6.3.1;3.1 Edge Probability;156
5.6.3.2;3.2 Isolated Vertices in Gs(n,m,d);157
5.6.4;4 Proof of Theorem 1;161
5.6.5;5 Remarks on Other Distributions;163
5.6.6;References;163
5.7;Strengths and Weaknesses of Ant Colony Clustering;164
5.7.1;1 Introduction;164
5.7.2;2 Ant Colony Clustering;165
5.7.3;3 Analysis of Ant Colony Clustering by Means of Self-Organizing Batch Maps;166
5.7.4;4 Improvement of Ant Colony Clustering;168
5.7.5;5 Data Analysis with Emergent Ant Colony Clustering;170
5.7.6;6 Experimental Settings and Results;170
5.7.7;7 Discussion;172
5.7.8;8 Summary;172
5.7.9;References;173
5.8;Variable Selection for Kernel Classifiers: A Feature-to-Input Space Approach;174
5.8.1;1 Introduction;174
5.8.2;2 Feature-to-Input Space Variable Selection;175
5.8.2.1;2.1 FI-Selection Based on the Group Means;175
5.8.2.2;2.2 FI-Selection Based on the Kernel Weight Vector;176
5.8.3;3 Simulation Study;177
5.8.4;4 Simulation Results and Conclusions;179
5.8.5;5 Application to Data Sets;180
5.8.6;6 Summary;182
5.8.7;References;182
5.9;Finite Mixture and Genetic Algorithm Segmentation in Partial Least Squares Path Modeling: Identification of Multiple Segments in Complex Path Models;184
5.9.1;1 Introduction;184
5.9.2;2 Computational Experiment;186
5.9.3;3 Results;188
5.9.4;4 Summary and Conclusion;190
5.9.5;References;192
5.10;Cluster Ensemble Based on Co-occurrence Data;194
5.10.1;1 Introduction;194
5.10.2;2 The Algorithm;195
5.10.3;3 Benchmark Experiments;196
5.10.4;4 Results;199
5.10.5;5 Summary;200
5.10.6;References;201
5.11;Localized Logistic Regression for Categorical Influential Factors;202
5.11.1;1 Introduction;202
5.11.2;2 Analysis of SNP Data;203
5.11.3;3 Logistic Regression;204
5.11.4;4 Localized Logistic Regression;205
5.11.5;5 Calculation of Weights for Categorical Predictors;207
5.11.6;6 Application to SNP Data;209
5.11.7;7 Summary;211
5.11.8;References;211
5.12;Clustering Association Rules with Fuzzy Concepts;213
5.12.1;1 Introduction;213
5.12.2;2 Nomenclature;215
5.12.3;3 Linguistic Clustering;215
5.12.3.1;3.1 Rule Trajectory Visualization;215
5.12.3.2;3.2 Linguistic Concepts;216
5.12.4;4 Experiments;218
5.12.4.1;4.1 Artificial Data Set;218
5.12.4.2;4.2 Real-world Data Set;219
5.12.5;5 Conclusion and Future Work;220
5.12.6;References;221
5.13;Clustering with Repulsive Prototypes;222
5.13.1;1 Introduction;222
5.13.2;2 Fuzzy c-Means and Noise Clustering;223
5.13.3;3 Repulsive Prototypes;224
5.13.4;4 Experimental Results;227
5.13.5;5 Conclusions and Future Work;229
5.13.6;References;229
6;Part III Mixture Analysis;231
6.1;Weakly Homoscedastic Constraints for Mixtures of t-Distributions;232
6.1.1;1 Introduction;232
6.1.2;2 Preliminaries and Notation;233
6.1.2.1;2.1 The Crab Data Set;235
6.1.3;3 Weakly Homoscedastic Covariance Matrices;236
6.1.4;4 Numerical Studies;238
6.1.5;5 Conclusions;240
6.1.6;References;241
6.2;Bayesian Methods for Graph Clustering;242
6.2.1;1 Introduction;242
6.2.2;2 A Mixture Model for Networks;244
6.2.3;3 Bayesian View of MixNet;244
6.2.3.1;3.1 Bayesian Probabilistic Model;244
6.2.3.2;3.2 Variational Inference;245
6.2.3.2.1;3.2.1 Variational Bayes E-Step;246
6.2.3.2.2;3.2.2 Variational Bayes M-Step: Optimization of q();247
6.2.3.2.3;3.2.3 Variational Bayes M-Step: Optimization of q();247
6.2.3.2.4;3.2.4 Lower Bound;247
6.2.3.3;3.3 Model Selection;248
6.2.4;4 Experiments;248
6.2.4.1;4.1 Comparison of the Criteria;249
6.2.5;5 Conclusion;251
6.2.6;References;251
6.3;Determining the Number of Components in Mixture Models for Hierarchical Data;253
6.3.1;1 Introduction;254
6.3.2;2 Multilevel Latent Class Model;255
6.3.3;3 Design of the Simulation Study;257
6.3.4;4 Results of the Simulation Study;258
6.3.5;5 Conclusions;259
6.3.6;References;260
6.4;Testing Mixed Distributions when the Mixing Distribution Is Known;262
6.4.1;1 Introduction;262
6.4.2;2 Construction of the Test Statistics;264
6.4.2.1;2.1 The Score Statistics T(k);264
6.4.2.2;2.2 Schwarz Criteria Statistics;265
6.4.3;3 Simulation Study;267
6.4.3.1;3.1 Empirical Levels;268
6.4.3.2;3.2 Empirical Powers when the Mixed Density Is Known;269
6.4.4;References;270
6.5;Classification with a Mixture Model Having an Increasing Number of Components;271
6.5.1;1 Introduction;271
6.5.2;2 Main Notations and Assumptions;273
6.5.3;3 Convergence;276
6.5.4;4 Random Classification of the Observations;278
6.5.5;References;279
6.6;Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation;280
6.6.1;1 Introduction;280
6.6.2;2 Mixture-Based Data Transformations;283
6.6.3;3 Beta Kernel Density Estimation;284
6.6.4;4 Application to Non-Life Insurance Data;287
6.6.5;References;289
7;Part IV Linguistics and Text Analysis;291
7.1;Classification of Text Processing Components: The Tesla Role System;292
7.1.1;1 Introduction;292
7.1.2;2 Related Work;293
7.1.3;3 A Role System for Text Processing Components;294
7.1.3.1;3.1 The Tesla Framework;295
7.1.3.2;3.2 The Relation Between Components and Roles;295
7.1.3.3;3.3 An Example: Alignment;298
7.1.4;4 Discussion;299
7.1.5;References;300
7.2;Nonparametric Distribution Analysis for Text Mining;302
7.2.1;1 Introduction;302
7.2.2;2 Maximum Mean Discrepancy;303
7.2.3;3 String Kernels;304
7.2.4;4 R Infrastructure;305
7.2.4.1;4.1 tm;306
7.2.4.2;4.2 kernlab;306
7.2.4.3;4.3 Framework for Kernel Methods on Text in R;306
7.2.4.4;4.4 Kernel MMD;306
7.2.5;5 Experiments;307
7.2.5.1;5.1 Data;307
7.2.5.2;5.2 Results;307
7.2.6;6 Conclusion;311
7.2.7;References;311
7.3;Linear Coding of Non-linear Hierarchies: Revitalization of an Ancient Classification Method;313
7.3.1;1 Introduction;313
7.3.1.1;1.1 Why Are Linear Codings Desirable?;313
7.3.1.2;1.2 Panini's Sivasutra-Technique;314
7.3.2;2 Linear Coding of Non-linear Hierarchies: Generalizing Panini's Sivasutra-Technique;316
7.3.2.1;2.1 S-Orders and S-Sortability: Formal Foundations;316
7.3.2.2;2.2 Constructing S-Orders;317
7.3.2.3;2.3 The Problem of Identifying Elements for Duplication;320
7.3.3;References;322
7.4;Automatic Dictionary Expansion Using Non-parallel Corpora;323
7.4.1;1 Introduction;323
7.4.2;2 Approach;325
7.4.3;3 Language Resources;326
7.4.4;4 Results;327
7.4.5;5 Discussion and Future Work;330
7.4.6;References;331
7.5;Multilingual Knowledge-Based Concept Recognition in Textual Data;332
7.5.1;1 Introduction;332
7.5.2;2 Application in the Automotive Domain;333
7.5.3;3 Related Work;334
7.5.4;4 Requirements;334
7.5.5;5 Data Structure;335
7.5.6;6 Concept Recognition;337
7.5.6.1;6.1 Taxonomy Expansion;338
7.5.6.2;6.2 Matching Process;338
7.5.7;7 Evaluation;339
7.5.8;8 Conclusion and Future Work;340
7.5.9;References;340
8;Part V Pattern Recognition and Machine Learning;342
8.1;A Diversified Investment Strategy Using Autonomous Agents;343
8.1.1;1 Introduction;343
8.1.2;2 Agents' Implementation;344
8.1.2.1;2.1 Prediction Mechanism;345
8.1.2.2;2.2 Money Management Using Empirical Knowledge;348
8.1.2.3;2.3 Risk Management Using Domain Knowledge;350
8.1.2.4;2.4 Agents' Results;352
8.1.3;3 Final Remarks;353
8.1.4;References;353
8.2;Classification with Kernel Mahalanobis Distance Classifiers;354
8.2.1;1 Introduction;354
8.2.2;2 Kernels and Feature-Space Embedding;355
8.2.3;3 Kernel Mahalanobis Distance Classifiers;356
8.2.3.1;3.1 Kernel Mahalanobis Distances for Invertible Covariance;356
8.2.3.2;3.2 Kernel Mahalanobis Distance for Regularized Covariance;357
8.2.3.3;3.3 Classifiers Based on Kernel Mahalanobis Distances;358
8.2.4;4 Experiments;359
8.2.4.1;4.1 Experiments on 2D Toy Data;359
8.2.4.2;4.2 Real-World Experiments;361
8.2.5;5 Discussion and Theoretical Considerations;361
8.2.6;6 Conclusion;363
8.2.7;References;364
8.3;Identifying Influential Cases in Kernel Fisher Discriminant Analysis by Using the Smallest Enclosing Hypersphere;365
8.3.1;1 Introduction;366
8.3.2;2 Kernel Fisher Discriminant Analysis;367
8.3.3;3 Criteria for Identifying Influential Cases in KFDA;368
8.3.4;4 The Smallest Enclosing Hypersphere;369
8.3.5;5 Monte Carlo Simulation Study;370
8.3.6;6 Application to a Data Set;372
8.3.7;7 Conclusions and Open Problems;372
8.3.8;References;373
8.4;Self-Organising Maps for Image Segmentation;374
8.4.1;1 Introduction;374
8.4.2;2 Materials and Methods;375
8.4.2.1;2.1 Theory;375
8.4.2.2;2.2 Data;376
8.4.2.3;2.3 Software;377
8.4.3;3 SOMs for Image Segmentation;377
8.4.4;4 Supervised SOMs;380
8.4.5;5 Discussion;382
8.4.6;References;383
8.5;Image Based Mail Piece Identification Using Unsupervised Learning;385
8.5.1;1 Introduction;385
8.5.2;2 Motivation;386
8.5.3;3 Approach;387
8.5.4;4 Feature Extraction and Comparison;389
8.5.5;5 Search Area Consolidation;390
8.5.5.1;5.1 Mail Stream Analysis;390
8.5.5.2;5.2 Rejection Criteria Estimation;391
8.5.6;6 Mail Piece Identification;392
8.5.7;7 Experiments;393
8.5.8;8 Conclusion and Outlook;394
8.5.9;References;394
9;Part VI Statistical Musicology;396
9.1;Statistical Analysis of Human Body Movement and Group Interactions in Response to Music;397
9.1.1;1 Introduction;397
9.1.2;2 Experimental Design and Data Considerations;398
9.1.3;3 Analysis;399
9.1.4;4 Discussion;403
9.1.5;5 Conclusion;404
9.1.6;References;405
9.2;Applying Statistical Models and Parametric Distance Measures for Music Similarity Search;407
9.2.1;1 Introduction;407
9.2.2;2 Feature Extraction;408
9.2.3;3 Statistical Models;409
9.2.4;4 Parametric Distance Measures;410
9.2.5;5 Evaluation;412
9.2.5.1;5.1 Test Data and Evaluation Metric;412
9.2.5.2;5.2 Aggregation Process;413
9.2.6;6 Results;413
9.2.7;7 Conclusions;415
9.2.8;References;415
9.3;Finding Music Fads by Clustering Online Radio Data with Emergent Self Organizing Maps;417
9.3.1;1 Introduction;417
9.3.2;2 Related Works;418
9.3.3;3 Data;419
9.3.4;4 Frequential Genre Integration;419
9.3.5;5 Visualisation of Music Fads;420
9.3.6;6 Identification of Fads;421
9.3.7;7 Fads Characterisation;422
9.3.8;8 Results;423
9.3.9;9 Discussion;423
9.3.10;10 Summary;423
9.3.11;References;424
9.4;Analysis of Polyphonic Musical Time Series;426
9.4.1;1 Introduction;426
9.4.2;2 Model for Polyphonic Sound;427
9.4.3;3 Preprocessing;428
9.4.3.1;3.1 Alphabet;428
9.4.3.2;3.2 Distortion Measures;429
9.4.4;4 Results;429
9.4.4.1;4.1 Data;429
9.4.4.2;4.2 Construction of the Alphabet;430
9.4.4.3;4.3 First Results;431
9.4.4.4;4.4 Comparison of Distortion Measures;431
9.4.4.5;4.5 Halleluja;432
9.4.4.6;4.6 Instrument Tracking;433
9.4.5;5 Conclusion;434
9.4.6;References;434
10;Part VII Banking and Finance;435
10.1;Hedge Funds and Asset Allocation: Investor Confidence, Diversification Benefits, and a Change in Investment Style Composition;436
10.1.1;1 Introduction;436
10.1.2;2 Literature Review;437
10.1.3;3 Data and Descriptive Statistics;438
10.1.4;4 Portfolio Benefits and Capital Flows;440
10.1.4.1;4.1 Expected Alpha and Allocation into Hedge Funds;440
10.1.4.2;4.2 Reduction of Diversification Benefits Over Time;442
10.1.4.3;4.3 Structural Breaks;443
10.1.5;5 Conclusion;444
10.1.6;References;445
10.2;Mixture Hidden Markov Models in Finance Research;446
10.2.1;1 Introduction;446
10.2.2;2 The Mixture Hidden Markov Model;447
10.2.3;3 Data Set;449
10.2.4;4 Results;451
10.2.5;5 Conclusions;454
10.2.6;References;454
10.3;Multivariate Comparative Analysis of Stock Exchanges: The European Perspective;455
10.3.1;1 Introduction;455
10.3.2;2 Data Description;456
10.3.3;3 Cluster Analysis;457
10.3.4;4 K-means Grouping;459
10.3.5;5 Factor Analysis;461
10.3.6;6 Summary and Conclusions;462
10.3.7;References;462
10.4;Empirical Examination of Fundamental Indexation in the German Market;464
10.4.1;1 Introduction;464
10.4.2;2 Data and Index Methodology;465
10.4.3;3 Results;466
10.4.4;4 Analysis;468
10.4.4.1;4.1 Efficient Market;468
10.4.4.2;4.2 Inefficient Market;470
10.4.5;5 Conclusion;470
10.4.6;References;472
10.5;The Analysis of Power for Some Chosen VaR Backtesting Procedures;473
10.5.1;1 Introduction;473
10.5.2;2 Tests Based on the Frequency of Failures;475
10.5.3;3 Tests Based on Multiple VaR Levels;476
10.5.3.1;Backtesting Errors;477
10.5.4;4 Empirical Research: Simulation Approach;478
10.5.5;5 Some Final Conclusions;482
10.5.6;References;482
10.6;Extreme Unconditional Dependence Vs. Multivariate GARCH Effect in the Analysis of Dependence Between High Losses on Polish and German Stock Indexes;483
10.6.1;1 Introduction;483
10.6.2;2 Models to Be Compared;484
10.6.2.1;2.1 Extreme Dependence;484
10.6.2.1.1;2.1.1 Testing For;484
10.6.2.1.2;2.1.2 Models Used for Simulations;486
10.6.2.2;2.2 Varying Conditional Covariance;486
10.6.3;3 The Research;487
10.6.4;4 Summary;493
10.6.5;References;495
10.7;Is Log Ratio a Good Value for Measuring Return in Stock Investments?;496
10.7.1;1 Introduction;496
10.7.2;2 The Data;497
10.7.3;3 Measuring Daily Return;497
10.7.4;4 The Distribution of Daily Returns;499
10.7.5;5 Modeling the Distribution of Returns;499
10.7.6;6 Discussion;501
10.7.7;7 Summary;502
10.7.8;References;502
11;Part VIII Marketing, Management Science and Economics;503
11.1;Designing Products Using QFD and CA: A Comparison;504
11.1.1;1 Introduction;504
11.1.2;2 Product Design in a Climbing Harness Market;505
11.1.2.1;2.1 Application of Conjoint Analysis;506
11.1.2.2;2.2 Application of Quality Function Deployment;507
11.1.3;3 Product Design in a Mobile Phone Market;507
11.1.3.1;3.1 Application of Conjoint Analysis;507
11.1.3.2;3.2 Application of Quality Function Deployment;509
11.1.3.3;3.3 Comparing the CA and QFD Results;513
11.1.3.4;3.4 Comparing the Results with Pullman et al.'s Experiment;513
11.1.4;4 Conclusions and Outlook;514
11.1.5;References;514
11.2;Analyzing the Stability of Price Response Functions: Measuring the Influence of Different Parameters in a Monte Carlo Comparison;516
11.2.1;1 Introduction;516
11.2.2;2 Price Response Functions in Marketing;517
11.2.2.1;2.1 Alternatives of Price Response Functions;517
11.2.2.2;2.2 Price Response Functions and Connected Values;518
11.2.2.3;2.3 Instruments for Measuring Price Sensitivity;518
11.2.3;3 A Monte Carlo Comparison;520
11.2.3.1;3.1 Research Design;520
11.2.3.2;3.2 Results;521
11.2.4;4 Conclusion and Outlook;523
11.2.5;References;523
11.3;Real Options in the Assessment of New Products;525
11.3.1;1 Introduction;525
11.3.2;2 Uncertainties in Product Development;526
11.3.3;3 Real Options in NPD and R&D Projects;527
11.3.4;4 Real Options Assessment Using Excel Based Tools;528
11.3.5;5 Conclusions and Outlook;531
11.3.6;References;532
11.4;Exploring the Interaction Structure of Weblogs;533
11.4.1;1 Introduction;533
11.4.2;2 Identifying Blogs on the WWW;534
11.4.2.1;2.1 Social Networks of Blogs;534
11.4.2.2;2.2 Assessment of Egos and Ego Networks;535
11.4.3;3 Empirical Application;537
11.4.4;4 Conclusions and Future Work;539
11.4.5;References;540
11.5;Analyzing Preference Rankings when There Are Too Many Alternatives;541
11.5.1;1 Introduction and Motivation;541
11.5.2;2 Preliminaries;542
11.5.3;3 Methodology;543
11.5.3.1;3.1 Test Statistic;545
11.5.3.2;3.2 Multiple Comparisons;545
11.5.3.3;3.3 Rank Plots;546
11.5.3.4;3.4 Homogeneous Subsets;547
11.5.4;4 Illustration;547
11.5.4.1;4.1 Data;547
11.5.4.2;4.2 Results;548
11.5.5;5 Conclusion;550
11.5.6;References;550
11.6;Considerations on the Impact of Ill-Conditioned Configurations in the CML Approach;551
11.6.1;1 Introduction;551
11.6.2;2 The Partial Credit Model;553
11.6.3;3 CML Approach to Estimate Item Parameters;554
11.6.4;4 State of the Art Regarding Existence of ML Estimates;555
11.6.5;5 Analysis of Fixed Small-Dimensional Datasets;556
11.6.6;6 Concluding Remarks;559
11.6.7;References;559
11.7;Dyadic Interactions in Service Encounter: Bayesian SEM Approach;561
11.7.1;1 Introduction;561
11.7.1.1;1.1 Service Encounter in Relationship Marketing;561
11.7.1.2;1.2 Research Design;562
11.7.2;2 APIM Model: Bayesian SEM Approach;563
11.7.2.1;2.1 Assumptions of Bayesian SEM;563
11.7.2.2;2.2 APIM Structural Model;564
11.7.3;3 Final Remarks;569
11.7.4;References;569
12;Part IX Archaeology and Spatial Planning;571
12.1;Estimating the Number of Buildings in Germany;572
12.1.1;1 Introduction;572
12.1.2;2 Inspection and Transformation of Data;573
12.1.3;3 Estimation;575
12.1.4;4 Information Optimisation;578
12.1.5;5 Conclusion;579
12.1.6;References;580
12.2;Mapping Findspots of Roman Military Brickstamps in Mogontiacum (Mainz) and Archaeometrical Analysis;581
12.2.1;1 Introduction;581
12.2.2;2 Mapping of the Locations of Findspots;583
12.2.3;3 Smooth Mapping by Nonparametric Density Estimation;584
12.2.4;4 Comparison of Different Periods;585
12.2.5;5 Conclusions;589
12.2.6;References;589
12.3;Analysis of Guarantor and Warrantee Relationships Among Government Officials in the Eighth Century in the Old Capital of Japan by Using Asymmetric Multidimensional Scaling;590
12.3.1;1 Introduction;590
12.3.2;2 Data;591
12.3.3;3 The Method;592
12.3.4;4 The Analysis and the Result;593
12.3.5;5 Discussion;595
12.3.6;References;599
12.4;Analysis of Massive Emigration from Poland: The Model-Based Clustering Approach;600
12.4.1;1 Introduction;600
12.4.2;2 Model-Based Clustering;601
12.4.2.1;2.1 Mixture Models;601
12.4.2.2;2.2 Parameter Estimation and Model Selection;602
12.4.2.3;2.3 Model-Based Strategy for Clustering;603
12.4.3;3 Example;604
12.4.4;4 Conclusions;606
12.4.5;5 Discussion;608
12.4.6;References;608
13;Part X Bio- and Health Sciences;610
13.1;Systematics of Short-Range Correlations in Eukaryotic Genomes;611
13.1.1;1 Introduction;611
13.1.2;2 Systematics of Correlation Signatures;613
13.1.3;3 Algorithmic Challenges;617
13.1.3.1;3.1 Systematic Comparison of Many Trees: The Tree-Color Coding Method;617
13.1.3.2;3.2 Memory and Run Time Management for Large Genomes;618
13.1.4;4 Conclusion;620
13.1.5;References;621
13.2;On Classification of Molecules and Species of Representation Rings;622
13.2.1;1 Introduction;622
13.2.2;2 Classification of Molecules by Symmetry Groups;623
13.2.3;3 Ordinary Representations of Finite Groups;624
13.2.4;4 Modular Representations of Finite Groups;626
13.2.5;5 Species of Representation Rings;627
13.2.6;6 Conclusions;631
13.2.7;References;631
13.3;The Precise and Efficient Identification of Medical Order Forms Using Shape Trees;633
13.3.1;1 Introduction;633
13.3.2;2 Geometrical Shapes for Determining Similarity;634
13.3.2.1;2.1 Object Recognition;634
13.3.2.2;2.2 Shapes as Models for Regions;634
13.3.2.3;2.3 Modeling Regions as a Shape Tree;635
13.3.2.4;2.4 Shape Tree Structure;635
13.3.2.5;2.5 Searching in a Shape Tree;637
13.3.3;3 Document Identification of Specialized Order Forms;638
13.3.4;4 Experiments;640
13.3.5;5 Discussion;642
13.3.6;6 Summary;642
13.3.7;References;643
13.4;On the Prognostic Value of Gene Expression Signatures for Censored Data;644
13.4.1;1 Introduction;644
13.4.2;2 Prediction Accuracy of Survival Models;645
13.4.3;3 Measuring the Prognostic Value of Survival Models;647
13.4.4;4 Low-Dimensional Data: Simulation Example;649
13.4.5;5 High-Dimensional Data: Lymphoma Application;651
13.4.6;6 Conclusions;652
13.4.7;References;653
13.5;Quality-Based Clustering of Functional Data: Applications to Time Course Microarray Data;655
13.5.1;1 Introduction;655
13.5.2;2 Methods;657
13.5.2.1;2.1 K-Means Clustering of Functional Data;657
13.5.2.2;2.2 Quality-Based Clustering of Functional Data;657
13.5.3;3 Simulation Design;657
13.5.3.1;3.1 Integrated AR Processes for Simulated Data;658
13.5.4;4 Simulation Results;659
13.5.5;5 Summary;661
13.5.6;References;663
13.6;A Comparison of Algorithms to Find Differentially Expressed Genes in Microarray Data;665
13.6.1;1 Introduction;665
13.6.2;2 Benchmark Data Set;666
13.6.3;3 Popular Algorithms to Identify Differentially Expressed Genes;667
13.6.4;4 The PUL Method to Identify DE Genes;667
13.6.4.1;4.1 Unit Transformation;668
13.6.4.2;4.2 Modeling Expressed Genes as Log Normals;669
13.6.4.3;4.3 Bayes Posterior Probabilities;671
13.6.4.4;4.4 Gene Scoring in PUL;671
13.6.5;5 IR Methods for the Evaluation of DE Algorithms;671
13.6.6;6 Results;672
13.6.7;7 Applications of PUL;674
13.6.8;8 Discussion;674
13.6.9;9 Summary;676
13.6.10;References;676
14;Part XI Exploratory Data Analysis, Modeling and Applications;678
14.1;Data Compression and Regression Based on Local Principal Curves;679
14.1.1;1 Introduction;679
14.1.2;2 Data Compression with Local Principal Curves;680
14.1.2.1;2.1 Local Principal Curves;680
14.1.2.2;2.2 Simple Example: Speed-Flow Data;681
14.1.2.3;2.3 Parametrizations and Projections;681
14.1.3;3 Regression with Principal Curves;683
14.1.3.1;3.1 GAIA Data;683
14.1.3.2;3.2 Principal Component Regression;685
14.1.3.3;3.3 Dimension Reduction with Local Principal Curves;686
14.1.3.4;3.4 Direct Local Principal Curve Regression;687
14.1.3.5;3.5 Prediction and Comparison;687
14.1.4;4 Outlook;689
14.1.5;References;689
14.2;Optimization of Centrifugal Impeller Using Evolutionary Strategies and Artificial Neural Networks;691
14.2.1;1 Introduction;691
14.2.2;2 Optimization of a Centrifugal Impeller Geometry;692
14.2.3;3 Optimization Using Evolutionary Strategies;692
14.2.4;4 Performance Predictions;693
14.2.4.1;4.1 Method;694
14.2.4.2;4.2 Experimental Settings;695
14.2.4.3;4.3 Experimental Results;695
14.2.5;5 Conclusion;698
14.2.6;References;699
14.3;Efficient Media Exploitation Towards Collective Intelligence;700
14.3.1;1 Introduction;700
14.3.2;2 Progress Over Related Scientific Work;701
14.3.3;3 Intelligent Media Analysis;703
14.3.3.1;3.1 Text Analysis;704
14.3.3.2;3.2 Visual Information Analysis;705
14.3.3.3;3.3 Speech Analysis;706
14.3.4;4 Contextual Media Analysis and Fusion;706
14.3.5;5 Social Media Intelligence;707
14.3.6;6 Conclusions;708
14.3.7;References;708
14.4;Multi-class Extension of Verifiable Ensemble Models for Safety-Related Applications;710
14.4.1;1 Introduction;710
14.4.2;2 The Verifiable Ensemble;712
14.4.3;3 Common Multi-class Extensions;714
14.4.4;4 The Multi-class Ensemble;716
14.4.5;5 Conclusions;720
14.4.6;References;720
14.5;Dynamic Disturbances in BTA Deep-Hole Drilling: Modelling Chatter and Spiralling as Regenerative Effects;722
14.5.1;1 Introduction;722
14.5.2;2 Chatter and Spiralling as Regenerative Effects;723
14.5.3;3 Modelling Chatter;725
14.5.3.1;3.1 Torsional Vibration Model;725
14.5.3.2;3.2 Chatter Simulation;726
14.5.4;4 Modelling Spiralling;727
14.5.4.1;4.1 Bending Vibration Model;727
14.5.4.2;4.2 Clustering of Increasing Eigenfrequency Courses;727
14.5.5;5 Outlook;729
14.5.6;References;731
14.6;Nonnegative Matrix Factorization for Binary Data to Extract Elementary Failure Maps from Wafer Test Images;732
14.6.1;1 Introduction;732
14.6.1.1;1.1 Notation;733
14.6.2;2 Nonnegative Matrix Factorization;733
14.6.2.1;2.1 Alternating Least Squares Algorithm for NMF;733
14.6.3;3 NMF for Binary Datasets;734
14.6.3.1;3.1 Generative Model;734
14.6.3.2;3.2 Bernoulli Likelihood;735
14.6.3.3;3.3 Optimizing the Log-Likelihood;736
14.6.3.3.1;3.3.1 Alternating Gradient Ascent Algorithm;736
14.6.3.3.2;3.3.2 Alternating Least Squares on a Simplified Problem;737
14.6.3.3.3;3.3.3 Determining the Parameter ;737
14.6.3.3.4;3.3.4 Semi-supervised Mode;738
14.6.3.4;3.4 Other Cost Functions;738
14.6.4;4 Results;739
14.6.4.1;4.1 Toydata Example;739
14.6.4.2;4.2 Real World Example;740
14.6.5;5 Conclusion;741
14.6.6;References;741
14.7;Collective Intelligence Generation from User Contributed Content;742
14.7.1;1 Introduction;742
14.7.2;2 Collective Intelligence;744
14.7.2.1;2.1 Personal Intelligence;744
14.7.2.2;2.2 Media Intelligence;745
14.7.2.3;2.3 Mass Intelligence;747
14.7.2.4;2.4 Social Intelligence;747
14.7.2.5;2.5 Organizational Intelligence;748
14.7.3;3 Use Cases;749
14.7.3.1;3.1 Emergency Response Case Study;749
14.7.3.2;3.2 Consumers Social Group Case Study;750
14.7.4;4 Conclusions;751
14.7.5;References;751
14.8;Computation of the Molenaar Sijtsma Statistic;752
14.8.1;1 Introduction;752
14.8.2;2 Case I: The Computation of MS When No Provisional Measures Are Needed;754
14.8.3;3 Case II: The Computation of MS When ProvisionalMeasures Are Needed;757
14.8.4;4 Estimation of the Unobservable Joint Cumulative Probabilities in MSP5.0;759
14.8.5;5 Discussion;760
14.8.6;References;761
15;Keyword Index;762
16;Author Index;765

Author