Hugendubel.info - The B2B Online Bookstore

Human-Centric Interfaces for Ambient Intelligence

E-Book | EPUB | Adobe DRM
542 pages
English
Elsevier Science & Technology, published 25.09.2009
To create truly effective human-centric ambient intelligence systems, both engineering and computing methods are needed. This is the first book to bridge data processing and intelligent reasoning methods for the creation of human-centered ambient intelligence systems. Interdisciplinary in nature, it covers topics such as multimodal interfaces, human-computer interaction, smart environments, and pervasive computing, addressing principles, paradigms, methods, and applications. It will be an ideal reference for university researchers, R&D engineers, computer engineers, and graduate students working in signal, speech, and video processing, multimodal interfaces, human-computer interaction, and applications of ambient intelligence.

Hamid Aghajan is a Professor of Electrical Engineering (consulting) at Stanford University, USA. His research focuses on user-centric vision applications in smart homes, assisted living and well-being, smart meetings, and avatar-based social interactions. He is Editor-in-Chief of the Journal of Ambient Intelligence and Smart Environments, chaired ACM/IEEE ICDSC 2008, and has organized workshops, sessions, and tutorials at ECCV, ACM MM, FG, ECAI, ICASSP, and CVPR.



Juan Carlos Augusto is a Lecturer at the University of Ulster, UK. He conducts research on smart homes and classrooms, and has given tutorials at IJCAI'07 and AAAI'08. He is Editor-in-Chief of the Book Series on Ambient Intelligence and Smart Environments and of the Journal of Ambient Intelligence and Smart Environments. He co-chaired ICOST'06 and AITAmI'06/07/08, and is Workshops Chair for IE'09.



Ramón López-Cózar Delgado is a Professor at the Faculty of Computer Science and Telecommunications of the University of Granada, Spain. His research interests include speech recognition and understanding, dialogue management, and Ambient Intelligence. He is a member of ISCA (International Speech Communication Association), SEPLN (Spanish Society on Natural Language Processing), and AIPO (Spanish Society on HCI).



Integrates engineering and computing methods that are essential for designing and implementing highly effective ambient intelligence systems

Contains contributions from the world's leading experts in academia and industry

Gives a complete overview of the principles, paradigms and applications of human-centric ambient intelligence systems

Details
Additional ISBN/GTIN: 9780080878508
Product type: E-Book
Binding: E-Book
Format: EPUB
Format note: Adobe DRM
Year of publication: 2009
Publication date: 25.09.2009
Pages: 542
Language: English
File size: 7046 KB
Item no.: 2738937

Contents/Reviews

Table of Contents
1;Front Cover;1
2;Human-Centric Interfaces for Ambient Intelligence;4
3;Copyright Page;5
4;Contents;6
5;Foreword;18
6;Preface;20
6.1;Ambient Intelligence;20
6.2;Human-Centric Design;21
6.3;Vision and Visual Interfaces;23
6.4;Speech Processing and Dialogue Management;25
6.5;Multimodal Interfaces;26
6.6;Smart Environment Applications;27
6.7;Conclusions;28
6.8;Acknowledgments;28
7;Part 1: Vision and Visual Interfaces;30
7.1;Chapter 1: Face-to-Face Collaborative Interfaces;32
7.1.1;1.1 Introduction;33
7.1.2;1.2 Background;36
7.1.3;1.3 Surface User Interface;39
7.1.4;1.4 Multitouch;41
7.1.4.1;1.4.1 Camera-Based Systems;42
7.1.4.2;1.4.2 Capacitance-Based Systems;47
7.1.5;1.5 Gestural Interaction;49
7.1.6;1.6 Gestural Infrastructures;53
7.1.6.1;1.6.1 Gestural Software Support;54
7.1.7;1.7 Touch versus Mouse;55
7.1.8;1.8 Design Guidelines for SUIs for Collaboration;56
7.1.8.1;1.8.1 Designing the Collaborative Environment;57
7.1.9;1.9 Conclusions;58
7.1.10;References;58
7.2;Chapter 2: Computer Vision Interfaces for Interactive Art;62
7.2.1;2.1 Introduction;63
7.2.1.1;2.1.1 A Brief History of (Vision in) Art;63
7.2.2;2.2 A Taxonomy of Vision-Based Art;64
7.2.3;2.3 Paradigms for Vision-Based Interactive Art;66
7.2.3.1;2.3.1 Mirror Interfaces;67
7.2.3.2;2.3.2 Performance;71
7.2.4;2.4 Software Tools;73
7.2.4.1;2.4.1 Max/MSP, Jitter, and Puredata;73
7.2.4.2;2.4.2 EyesWeb;74
7.2.4.3;2.4.3 processing;74
7.2.4.4;2.4.4 OpenCV;74
7.2.5;2.5 Frontiers of Computer Vision;74
7.2.6;2.6 Sources of Information;75
7.2.7;2.7 Summary;76
7.2.8;Acknowledgments;76
7.2.9;References;77
7.3;Chapter 3: Ubiquitous Gaze: Using Gaze at the Interface;78
7.3.1;3.1 Introduction;79
7.3.2;3.2 The Role of Gaze in Interaction;79
7.3.3;3.3 Gaze as an Input Device;82
7.3.3.1;3.3.1 Eyes on the Desktop;84
7.3.3.2;3.3.2 Conversation-Style Interaction;86
7.3.3.3;3.3.3 Beyond the Desktop;87
7.3.3.3.1;Ambient Displays;88
7.3.3.3.2;Human-Human Interaction in Ambient Environments;89
7.3.3.3.2.1;Activity detection;89
7.3.3.3.2.2;Interest level;90
7.3.3.3.2.3;Hot spot detection;90
7.3.3.3.2.4;Participation status;90
7.3.3.3.2.5;Dialogue acts;90
7.3.3.3.2.6;Interaction structure;90
7.3.3.3.2.7;Dominance and influence;90
7.3.4;3.4 Mediated Communication;93
7.3.5;3.5 Conclusion;94
7.3.6;References;95
7.4;Chapter 4: Exploiting Natural Language Generation in Scene Interpretation;100
7.4.1;4.1 Introduction;101
7.4.2;4.2 Related Work;101
7.4.3;4.3 Ontology-Based User Interfaces;103
7.4.4;4.4 Vision and Conceptual Levels;104
7.4.5;4.5 The NLG Module;107
7.4.5.1;4.5.1 Representation of the Discourse;109
7.4.5.2;4.5.2 Lexicalization;111
7.4.5.3;4.5.3 Surface Realization;111
7.4.6;4.6 Experimental Results;112
7.4.7;4.7 Evaluation;114
7.4.7.1;4.7.1 Qualitative Results;116
7.4.7.2;4.7.2 Quantitative Results;116
7.4.8;4.8 Conclusions;118
7.4.9;Acknowledgments;119
7.4.10;Appendix Listing of Detected Facts Sorted by Frequency of Use;119
7.4.11;References;121
7.5;Chapter 5: The Language of Action: A New Tool for Human-Centric Interfaces;124
7.5.1;5.1 Introduction;125
7.5.2;5.2 Human Action;126
7.5.3;5.3 Learning the Languages of Human Action;128
7.5.3.1;5.3.1 Related Work;129
7.5.4;5.4 Grammars of Visual Human Movement;132
7.5.5;5.5 Grammars of Motoric Human Movement;137
7.5.5.1;5.5.1 Human Activity Language: A Symbolic Approach;141
7.5.5.2;5.5.2 A Spectral Approach: Synergies;150
7.5.6;5.6 Applications to Health;155
7.5.7;5.7 Applications to Artificial Intelligence and Cognitive Systems;156
7.5.8;5.8 Conclusions;157
7.5.9;Acknowledgments;158
7.5.10;References;158
8;Part 2: Speech Processing and Dialogue Management;162
8.1;Chapter 6: Robust Speech Recognition Under Noisy Ambient Conditions;164
8.1.1;6.1 Introduction;165
8.1.2;6.2 Speech Recognition Overview;167
8.1.3;6.3 Variability in the Speech Signal;170
8.1.4;6.4 Robust Speech Recognition Techniques;171
8.1.4.1;6.4.1 Speech Enhancement Techniques;172
8.1.4.2;6.4.2 Robust Feature Selection and Extraction Methods;174
8.1.4.3;6.4.3 Feature Normalization Techniques;176
8.1.4.4;6.4.4 Stereo Data-Based Feature Enhancement;176
8.1.4.5;6.4.5 The Stochastic Matching Framework;177
8.1.4.5.1;Model-Based Model Adaptation;178
8.1.4.5.2;Model-Based Feature Enhancement;180
8.1.4.5.3;Adaptation-Based Compensation;180
8.1.4.5.4;Uncertainty in Feature Enhancement;182
8.1.4.6;6.4.6 Special Transducer Arrangement to Solve the Cocktail Party Problem;184
8.1.5;6.5 Summary;184
8.1.6;References;185
8.2;Chapter 7: Speaker Recognition in Smart Environments;192
8.2.1;7.1 Principles and Applications of Speaker Recognition;193
8.2.1.1;7.1.1 Features Used for Speaker Recognition;194
8.2.1.2;7.1.2 Speaker Identification and Verification;195
8.2.1.3;7.1.3 Text-Dependent, Text-Independent, and Text-Prompted Methods;196
8.2.2;7.2 Text-Dependent Speaker Recognition Methods;197
8.2.2.1;7.2.1 DTW-Based Methods;197
8.2.2.2;7.2.2 HMM-Based Methods;197
8.2.3;7.3 Text-Independent Speaker Recognition Methods;198
8.2.3.1;7.3.1 Methods Based on Long-Term Statistics;198
8.2.3.2;7.3.2 VQ-Based Methods;198
8.2.3.3;7.3.3 Methods Based on Ergodic HMM;198
8.2.3.4;7.3.4 Methods Based on Speech Recognition;199
8.2.4;7.4 Text-Prompted Speaker Recognition;200
8.2.5;7.5 High-Level Speaker Recognition;201
8.2.6;7.6 Normalization and Adaptation Techniques;201
8.2.6.1;7.6.1 Parameter Domain Normalization;202
8.2.6.2;7.6.2 Likelihood Normalization;202
8.2.6.3;7.6.3 HMM Adaptation for Noisy Conditions;203
8.2.6.4;7.6.4 Updating Models and A Priori Thresholds for Speaker Verification;204
8.2.7;7.7 ROC and DET Curves;204
8.2.7.1;7.7.1 ROC Curves;204
8.2.7.2;7.7.2 DET Curves;205
8.2.8;7.8 Speaker Diarization;206
8.2.9;7.9 Multimodal Speaker Recognition;208
8.2.9.1;7.9.1 Combining Spectral Envelope and Fundamental Frequency Features;208
8.2.9.2;7.9.2 Combining Audio and Visual Features;209
8.2.10;7.10 Outstanding Issues;209
8.2.11;References;210
8.3;Chapter 8: Machine Learning Approaches to Spoken Language Understanding for Ambient Intelligence;214
8.3.1;8.1 Introduction;215
8.3.2;8.2 Statistical Spoken Language Understanding;217
8.3.2.1;8.2.1 Spoken Language Understanding for Slot-Filling Dialogue System;217
8.3.2.2;8.2.2 Sequential Supervised Learning;219
8.3.3;8.3 Conditional Random Fields;221
8.3.3.1;8.3.1 Linear-Chain CRFs;221
8.3.3.2;8.3.2 Parameter Estimation;222
8.3.3.3;8.3.3 Inference;223
8.3.4;8.4 Efficient Algorithms for Inference and Learning;225
8.3.4.1;8.4.1 Fast Inference for Saving Computation Time;225
8.3.4.2;8.4.2 Feature Selection for Saving Computation Memory;228
8.3.5;8.5 Transfer Learning for Spoken Language Understanding;230
8.3.5.1;8.5.1 Transfer Learning;230
8.3.5.2;8.5.2 Triangular-Chain Conditional Random Fields;231
8.3.5.2.1;Model 1;232
8.3.5.2.2;Model 2;233
8.3.5.3;8.5.3 Parameter Estimation and Inference;234
8.3.6;8.6 Joint Prediction of Dialogue Acts and Named Entities;235
8.3.6.1;8.6.1 Data Sets and Experiment Setup;235
8.3.6.2;8.6.2 Comparison Results for Text and Spoken Inputs;236
8.3.6.3;8.6.3 Comparison of Space and Time Complexity;239
8.3.7;8.7 Multi-Domain Spoken Language Understanding;240
8.3.7.1;8.7.1 Domain Adaptation;241
8.3.7.2;8.7.2 Data and Setup;242
8.3.7.3;8.7.3 Comparison Results;245
8.3.8;8.8 Conclusion and Future Direction;250
8.3.9;Acknowledgments;251
8.3.10;References;251
8.4;Chapter 9: The Role of Spoken Dialogue in User-Environment Interaction;254
8.4.1;9.1 Introduction;255
8.4.2;9.2 Types of Interactive Speech Systems;257
8.4.3;9.3 The Components of an Interactive Speech System;261
8.4.3.1;9.3.1 Input Interpretation;261
8.4.3.2;9.3.2 Output Generation;264
8.4.3.3;9.3.3 Dialogue Management;264
8.4.4;9.4 Examples of Spoken Dialogue Systems for Ambient Intelligence Environments;269
8.4.4.1;9.4.1 Chat;269
8.4.4.2;9.4.2 SmartKom and SmartWeb;270
8.4.4.3;9.4.3 Talk;274
8.4.4.4;9.4.4 Companions;275
8.4.5;9.5 Challenges for Spoken Dialogue Technology in Ambient Intelligence Environments;277
8.4.5.1;9.5.1 Infrastructural Challenges;277
8.4.5.2;9.5.2 Challenges for Spoken Dialogue Technology;278
8.4.6;9.6 Conclusions;279
8.4.7;References;279
8.5;Chapter 10: Speech Synthesis Systems in Ambient Intelligence Environments;284
8.5.1;10.1 Introduction;285
8.5.2;10.2 Speech Synthesis Interfaces for Ambient Intelligence;287
8.5.3;10.3 Speech Synthesis;290
8.5.3.1;10.3.1 Text Processing;290
8.5.3.2;10.3.2 Speech Signal Synthesis;292
8.5.3.2.1;Articulatory Synthesis;293
8.5.3.2.2;Formant Synthesis;293
8.5.3.2.3;Concatenative Synthesis;296
8.5.3.3;10.3.3 Prosody Generation;298
8.5.3.4;10.3.4 Evaluation of Synthetic Speech;299
8.5.4;10.4 Emotional Speech Synthesis;299
8.5.5;10.5 Discussion;301
8.5.5.1;10.5.1 Ambient Intelligence and Users;302
8.5.5.2;10.5.2 Future Directions and Challenges;302
8.5.6;10.6 Conclusions;303
8.5.7;Acknowledgments;303
8.5.8;References;304
9;Part 3: Multimodal Interfaces;308
9.1;Chapter 11: Tangible Interfaces for Ambient Augmented Reality Applications;310
9.1.1;11.1 Introduction;311
9.1.1.1;11.1.1 Rationale for Ambient AR Interfaces;311
9.1.1.2;11.1.2 Augmented Reality;313
9.1.2;11.2 Related Work;314
9.1.2.1;11.2.1 From Tangibility...;314
9.1.2.2;11.2.2 ...To the AR Tangible User Interface;315
9.1.3;11.3 Design Approach for Tangible AR Interfaces;316
9.1.3.1;11.3.1 The Tangible AR Interface Concept;316
9.1.4;11.4 Design Guidelines;317
9.1.5;11.5 Case Studies;318
9.1.5.1;11.5.1 AR Lens;318
9.1.5.2;11.5.2 AR Tennis;320
9.1.5.3;11.5.3 MagicBook;322
9.1.6;11.6 Tools for Ambient AR Interfaces;324
9.1.6.1;11.6.1 Software Authoring Tools;324
9.1.6.2;11.6.2 Hardware Authoring Tools;325
9.1.7;11.7 Conclusions;327
9.1.8;References;328
9.2;Chapter 12: Physical Browsing and Selection-Easy Interaction with Ambient Services;332
9.2.1;12.1 Introduction to Physical Browsing;333
9.2.2;12.2 Why Ambient Services Need Physical Browsing Solutions;334
9.2.3;12.3 Physical Selection;335
9.2.3.1;12.3.1 Concepts and Vocabulary;335
9.2.3.2;12.3.2 Touching;335
9.2.3.3;12.3.3 Pointing;336
9.2.3.4;12.3.4 Scanning;337
9.2.3.5;12.3.5 Visualizing Physical Hyperlinks;338
9.2.4;12.4 Selection as an Interaction Task;338
9.2.4.1;12.4.1 Selection in Desktop Computer Systems;339
9.2.4.2;12.4.2 About the Choice of Selection Technique;339
9.2.4.3;12.4.3 Selection in Immersive Virtual Environments;340
9.2.4.4;12.4.4 Selection with Laser Pointers;341
9.2.4.5;12.4.5 The Mobile Terminal as an Input Device;343
9.2.5;12.5 Implementing Physical Selection;344
9.2.5.1;12.5.1 Implementing Pointing;344
9.2.5.2;12.5.2 Implementing Touching;346
9.2.5.2.1;RFID as an Implementation Technology;346
9.2.5.2.2;User Interaction Considerations;347
9.2.5.3;12.5.3 Other Technologies for Connecting Physical and Digital Entities;348
9.2.5.3.1;Visual Technologies for Mobile Terminals;348
9.2.5.3.2;Body Communication;349
9.2.6;12.6 Indicating and Negotiating Actions After the Selection Event;350
9.2.6.1;12.6.1 Activation by Selection;350
9.2.6.2;12.6.2 Action Selection by a Different Modality;351
9.2.6.3;12.6.3 Actions by Combining Selection Events;351
9.2.6.4;12.6.4 Physical Selection in Establishing Communication;352
9.2.7;12.7 Conclusions;352
9.2.8;References;353
9.3;Chapter 13: Nonsymbolic Gestural Interaction for Ambient Intelligence;356
9.3.1;13.1 Introduction;357
9.3.2;13.2 Classifying Gestural Behavior for Human-Centric Ambient Intelligence;357
9.3.3;13.3 Emotions;361
9.3.4;13.4 Personality;365
9.3.5;13.5 Culture;366
9.3.6;13.6 Recognizing Gestural Behavior for Human-Centric Ambient Intelligence;369
9.3.6.1;13.6.1 Acceleration-Based Gesture Recognition;369
9.3.6.2;13.6.2 Gesture Recognition Based on Physiological Input;371
9.3.7;13.7 Conclusions;372
9.3.8;References;372
9.4;Chapter 14: Evaluation of Multimodal Interfaces for Ambient Intelligence;376
9.4.1;14.1 Introduction;377
9.4.2;14.2 Performance and Quality Taxonomy;379
9.4.3;14.3 Quality Factors;380
9.4.4;14.4 Interaction Performance Aspects;381
9.4.5;14.5 Quality Aspects;382
9.4.6;14.6 Application Examples;384
9.4.6.1;14.6.1 INSPIRE and MediaScout;384
9.4.6.2;14.6.2 Evaluation Constructs;385
9.4.6.3;14.6.3 Evaluation of Output Metaphors;387
9.4.6.3.1;Rationale;387
9.4.6.3.2;Experimental Design;387
9.4.6.3.3;Insights;389
9.4.6.4;14.6.4 Evaluation of the Quality of an Embodied Conversational Agent;390
9.4.6.4.1;Rationale;390
9.4.6.4.2;Experimental Design;391
9.4.6.4.3;Insights;392
9.4.6.5;14.6.5 Comparison of Questionnaires;392
9.4.6.5.1;Rationale;392
9.4.6.5.2;Experimental Design;393
9.4.6.5.3;Insights;394
9.4.6.5.3.1;Comparison of questionnaire results;394
9.4.6.5.3.2;Comparison of quality and performance metrics;395
9.4.7;14.7 Conclusions and Future Work;396
9.4.8;Acknowledgment;397
9.4.9;References;397
10;Part 4: Smart Environment Applications;400
10.1;Chapter 15: New Frontiers in Machine Learning for Predictive User Modeling;402
10.1.1;15.1 Introduction;403
10.1.1.1;15.1.1 Multimodal Affect Recognition;404
10.1.1.2;15.1.2 Modeling Interruptability;406
10.1.1.3;15.1.3 Classifying Voice Mails;406
10.1.1.4;15.1.4 Brain-Computer Interfaces for Visual Recognition;406
10.1.2;15.2 A Quick Primer: Gaussian Process Classification;407
10.1.3;15.3 Sensor Fusion;408
10.1.3.1;15.3.1 Multimodal Sensor Fusion for Affect Recognition;409
10.1.3.2;15.3.2 Combining Brain-Computer Interface with Computer Vision;410
10.1.4;15.4 Semisupervised Learning;412
10.1.4.1;15.4.1 Semisupervised Affect Recognition;413
10.1.5;15.5 Active Learning;415
10.1.5.1;15.5.1 Modeling Interruptability;415
10.1.5.2;15.5.2 Classifying Voice Mails;416
10.1.6;15.6 Conclusions;419
10.1.7;Acknowledgments;419
10.1.8;References;419
10.2;Chapter 16: Games and Entertainment in Ambient Intelligence Environments;422
10.2.1;16.1 Introduction;423
10.2.2;16.2 Ambient Entertainment Applications;423
10.2.2.1;16.2.1 Ubiquitous Devices;424
10.2.2.2;16.2.2 Exergames;424
10.2.2.3;16.2.3 Urban Gaming;425
10.2.2.4;16.2.4 Dancing in the Streets;426
10.2.3;16.3 Dimensions in Ambient Entertainment;426
10.2.3.1;16.3.1 Sensors and Control;426
10.2.3.2;16.3.2 Location;429
10.2.3.3;16.3.3 Social Aspects of Gaming;430
10.2.4;16.4 Designing for Ambient Entertainment and Experience;432
10.2.4.1;16.4.1 Emergent Games;432
10.2.4.2;16.4.2 Rhythm and Temporal Interaction;433
10.2.4.3;16.4.3 Performance in Play;435
10.2.4.4;16.4.4 Immersion and Flow;437
10.2.5;16.5 Conclusions;438
10.2.6;Acknowledgments;439
10.2.7;References;439
10.3;Chapter 17: Natural and Implicit Information-Seeking Cues in Responsive Technology;444
10.3.1;17.1 Introduction;445
10.3.2;17.2 Information Seeking and Indicative Cues;446
10.3.2.1;17.2.1 Analysis of the Hypothetical Shopping Scenario;446
10.3.2.2;17.2.2 A Framework for Information Seeking;447
10.3.2.3;17.2.3 Indicative Cues by Phase;449
10.3.3;17.3 Designing Systems for Natural and Implicit Interaction;451
10.3.3.1;17.3.1 Natural Interaction;451
10.3.3.2;17.3.2 Implicit Interaction;453
10.3.4;17.4 Clothes Shopping Support Technologies;454
10.3.4.1;17.4.1 Fitting Room Technologies;454
10.3.4.2;17.4.2 Virtual Fittings;455
10.3.4.3;17.4.3 Reactive Displays;455
10.3.5;17.5 Case Study: Responsive Mirror;455
10.3.5.1;17.5.1 Concept;455
10.3.5.2;17.5.2 Privacy Concerns;457
10.3.5.2.1;Disclosure;458
10.3.5.2.2;Identity;458
10.3.5.2.3;Temporal;458
10.3.5.3;17.5.3 Social Factors: Reflecting Images of Self and Others;458
10.3.5.4;17.5.4 Responsive Mirror Prototype;460
10.3.5.5;17.5.5 Vision System Description;461
10.3.5.5.1;Shopper Detection;461
10.3.5.5.2;Orientation Estimation;464
10.3.5.5.3;Clothes Recognition;465
10.3.5.5.3.1;Subjectivity of Clothing Similarity;466
10.3.5.5.3.2;Clothing Similarity Algorithm;468
10.3.5.5.3.3;Feature Extraction-Shirt Parts Segmentation;469
10.3.5.5.3.4;Feature Extraction-Sleeve Length Detection;469
10.3.5.5.3.5;Feature Extraction-Collar Detection;469
10.3.5.5.3.6;Feature Extraction-Button Detection;471
10.3.5.5.3.7;Feature Extraction-Pattern Detection;471
10.3.5.5.3.8;Feature Extraction-Emblem Detection;472
10.3.5.6;17.5.6 Design Evaluation;473
10.3.5.6.1;Method;473
10.3.5.6.2;Task and Procedure;474
10.3.5.6.3;Results;474
10.3.5.6.4;Fitting Room Behavior;475
10.3.5.6.5;User Suggestions for Enhancement;475
10.3.5.6.6;Use of Images of Other People;476
10.3.5.6.7;Results from Privacy-Related Questions;477
10.3.6;17.6 Lessons for Ambient Intelligence Designs of Natural and Implicit Interaction;478
10.3.7;Acknowledgments;479
10.3.8;References;480
10.4;Chapter 18: Spoken Dialogue Systems for Intelligent Environments;482
10.4.1;18.1 Introduction;483
10.4.2;18.2 Intelligent Environments;484
10.4.2.1;18.2.1 System Architecture;484
10.4.2.2;18.2.2 The Role of Spoken Dialogue;485
10.4.2.2.1;Network Speech Recognition;488
10.4.2.2.2;Distributed Speech Recognition;488
10.4.2.2.2.1;ETSI DSR front-end standards;490
10.4.2.2.2.2;A Java ME implementation of the DSR front-end;490
10.4.2.3;18.2.3 Proactiveness;492
10.4.3;18.3 Information Access in Intelligent Environments;493
10.4.3.1;18.3.1 Pedestrian Navigation System;493
10.4.3.1.1;System Description;494
10.4.3.1.2;Evaluation;495
10.4.3.2;18.3.2 Journey-Planning System;498
10.4.3.3;18.3.3 Independent Dialogue Partner;500
10.4.3.3.1;Proactive Dialogue Modeling;503
10.4.3.3.2;Usability Evaluation;504
10.4.4;18.4 Conclusions;504
10.4.5;Acknowledgments;505
10.4.6;References;505
10.5;Chapter 19: Deploying Context-Aware Health Technology at Home: Human-Centric Challenges;508
10.5.1;19.1 Introduction;509
10.5.2;19.2 The Opportunity: Context-Aware Home Health Applications;510
10.5.2.1;19.2.1 Medical Monitoring;510
10.5.2.2;19.2.2 Compensation;511
10.5.2.3;19.2.3 Prevention;511
10.5.2.4;19.2.4 Embedded Assessment;512
10.5.3;19.3 Case Study: Context-Aware Medication Adherence;512
10.5.3.1;19.3.1 Prototype System;513
10.5.3.2;19.3.2 Evaluation;513
10.5.3.3;19.3.3 Human-Centric Design Oversights;514
10.5.4;19.4 Detecting Context: Twelve Questions to Guide Research;516
10.5.4.1;19.4.1 Sensor Installation (Install It);516
10.5.4.1.1;Question 1: What Type of Sensors Will Be Used?;516
10.5.4.1.2;Question 2: Are the Sensors Professionally Installed or Self-Installed in the Home?;517
10.5.4.1.3;Question 3: What Is the Cost of (End-User) Installation?;518
10.5.4.1.4;Question 4: Where Do Sensors Need to Go?;518
10.5.4.1.5;Question 5: How Are Sensors Selected, Positioned, and Labeled?;519
10.5.4.1.5.1;Selection;519
10.5.4.1.5.2;Positioning;520
10.5.4.1.5.3;Labeling;520
10.5.4.2;19.4.2 Activity Model Training (Customize It);521
10.5.4.2.1;Question 6: What Type of Training Data Do the Activity Models Require?;521
10.5.4.2.2;Question 7: How Many Examples Are Needed?;523
10.5.4.3;19.4.3 Activity Model Maintenance (Fix It);524
10.5.4.3.1;Question 8: Who Will Maintain the System as Activities Change, the Environment Changes, and Sensors Break?;524
10.5.4.3.2;Question 9: How Does the User Know What Is Broken?;526
10.5.4.3.3;Question 10: Can the User Make Instantaneous, Nonoscillating Fixes?;526
10.5.4.3.4;Question 11: What Will Keep the User's Mental Model in Line with the Algorithmic Model?;527
10.5.4.3.5;Question 12: How Does a User Add a New Activity to Recognize?;527
10.5.5;19.5 Conclusions;527
10.5.6;Acknowledgments;528
10.5.7;References;528
11;Epilogue: Challenges and Outlook;534
12;Index;540