Human-Automation Collaboration Taxonomy

Autonomous Control Levels (ACL)

The fundamental questions of how autonomous a system is, and how this autonomy can be classified, have also been explored by researchers at the U.S. Air Force Research Laboratory in their publications on ACL [48]. Drawing on, but not limited to, the SV scale [211] and the 3D intelligence space [47], the ACL taxonomy increases the resolution of the scale to eleven levels, as illustrated in Table 2.3, while retaining the multi-dimensional facet of the 3D intelligence space [48]. The taxonomy also incorporates the four-stage human information processing model [48] reviewed in Section 2.1.2, characterising autonomy along four dimensions:
• Perception/Situational Awareness: The robustness of acquiring live information from the surroundings.
• Analysis/Coordination: The ability to adapt and coordinate with the rest of the team based on the acquired live information and the health of the system.
• Decision Making: The ability to make appropriate decisions based on the available data.
• Capability: The ability to carry out tasks as required by the scenario, based on the decision (made autonomously or manually).

Although this taxonomy is very extensive and considers information processing at each level of automation, its ability to classify a heterogeneous multi-agent autonomous system is not clearly captured.
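To make the multi-dimensional character of ACL concrete, the following is a minimal sketch of how a rating along the four dimensions might be recorded, assuming a simple 0 to 10 score per dimension. The class name, field names, and the rule of taking the weakest dimension as the overall level are illustrative assumptions, not part of the ACL specification [48].

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ACLRating:
    """Hypothetical record of an ACL assessment.

    Each field scores one of the four information-processing
    dimensions on an eleven-level scale (0 = fully manual,
    10 = fully autonomous). The summary rule below is an
    assumption made for illustration only.
    """
    perception: int        # Perception / Situational Awareness
    analysis: int          # Analysis / Coordination
    decision_making: int   # Decision Making
    capability: int        # Capability (task execution)

    def overall_level(self) -> int:
        # Illustrative rule: a system is only as autonomous as its
        # weakest dimension (not prescribed by the ACL literature).
        return min(self.perception, self.analysis,
                   self.decision_making, self.capability)

# Example: strong sensing but limited decision-making autonomy.
uav = ACLRating(perception=7, analysis=5, decision_making=3, capability=6)
print(uav.overall_level())  # -> 3
```

A min-based summary reflects the intuition that strong perception cannot compensate for weak decision making; other aggregation rules are equally defensible.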

Endsley and Kaber’s Level of Automation

Traditionally, automation design decisions are made to maximise the capability of the technology, which reduces cost and human error [78]. The aim of automation is therefore to minimise human involvement. Unfortunately, the cognitive capacity of a human operator performing monitoring or purely supervisory tasks is limited, as the operator's sense for SA decreases over time [78]. To this end, a fully autonomous system (with human operators completely excluded from the process) is not the optimal solution for maximising task efficiency; a balance of human and machine involvement has a more positive effect. To address this from the perspective of performance, SA, and CW in a dynamic control task scenario, Endsley and Kaber proposed a method of classifying levels of automation [78].
The Endsley and Kaber scale consists of ten autonomy levels and four roles. The roles are presented as processes, running from information collection through to performing selected decisions. Table 2.4 illustrates the hierarchy of levels of automation presented by Endsley and Kaber.
Each level of automation assigns responsibility for each role to the human, the machine, or both:
1. Manual Control: At this level, the human agent assumes responsibility for all roles.
2. Action Support: At this level, the majority of the roles are the human agent's responsibility, with the computer agent aiding in the information observation and implementation processes.
3. Batch Processing: It is the human agent's responsibility to generate options and make the appropriate decisions; the machine agent then carries out the selected action.
4. Shared Control: Option generation is carried out by both the machine and human agents, although the human agent retains full control over the selection and implementation of the selected action.
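The allocation pattern behind these levels can be captured as a lookup from level to role responsibilities. The sketch below encodes only the four level descriptions above; the role names follow Endsley and Kaber's four processes [78], while the dictionary layout, the identifiers, and any entry not stated explicitly in the text (e.g. monitoring at Batch Processing) are assumptions made for illustration.

```python
# Hypothetical encoding of the level descriptions above [78].
# Roles run from information collection (monitoring) through
# option generation and selection to implementation.
HUMAN, MACHINE, BOTH = "human", "machine", "human+machine"

LEVELS = {
    "Manual Control":   {"monitoring": HUMAN, "generating": HUMAN,
                         "selecting": HUMAN,  "implementing": HUMAN},
    "Action Support":   {"monitoring": BOTH,  "generating": HUMAN,
                         "selecting": HUMAN,  "implementing": BOTH},
    "Batch Processing": {"monitoring": BOTH,  "generating": HUMAN,
                         "selecting": HUMAN,  "implementing": MACHINE},
    "Shared Control":   {"monitoring": BOTH,  "generating": BOTH,
                         "selecting": HUMAN,  "implementing": HUMAN},
}

def responsible_agent(level: str, role: str) -> str:
    """Return which agent holds a given role at a given level."""
    return LEVELS[level][role]

# Example: at Batch Processing the machine carries out the action.
print(responsible_agent("Batch Processing", "implementing"))  # machine
```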

Autonomous Levels For Unmanned Systems

The Autonomous Levels For Unmanned Systems (ALFUS) framework, initially proposed by the National Institute of Standards and Technology (NIST), addresses the issues of a linear automation classification and the lack of information-processing-flow attributes, both of which were apparent in Sheridan and Verplank's Ten Levels of Automation.
In this taxonomy, an Unmanned System (UMS) is defined as a powered physical system, with no human operator aboard the principal components, that acts on the physical world to achieve assigned tasks. A UMS may be mobile or stationary and may include any and all associated supporting components. Examples include unmanned ground vehicles (UGV), unmanned aerial vehicles (UAV), unmanned underwater vehicles (UUV), unmanned water surface borne vehicles (USV), unattended munitions (UM), and unattended ground sensors (UGS). Missiles, rockets, their submunitions, and artillery are not considered UMSs [103].

ALFUS describes autonomy in three tiers: subsystems, systems, and systems of systems. Each tier is then broken into five LOAs, as illustrated in Table 2.5. These levels are defined according to the Contextual Autonomous Capability (CAC) model, which comprises three aspects: Mission Complexity (MC), Environmental Complexity (EC), and Human Independence (HI).

ALFUS uses the CAC model (presented in Figure 2.4) to define UMSs, with the three aspects forming its axes. MC concerns the details of the mission task that make a mission more difficult, such as mission time constraints, resource availability, asset availability, intelligence gathering, task planning, and mission planning. EC concerns the environment in which the mission takes place; environmental variables can greatly affect mission success, ranging from natural causes, atmospheric disturbances, meteorological conditions, and geological limitations to man-made obstacles. HI concerns the UMS's independence from human inputs and is also known as the LOA. The endpoints of the CAC scale are interpreted as follows:
• At level 0, the UMS performs the simplest tasks in terms of MC, faces static and simple environmental impacts in terms of EC, and has the least independence from human involvement (remote control) in terms of HI.
• At level 10, the UMS exhibits its highest level of capability: it adapts to the most difficult missions in terms of MC, perceives and handles the most difficult and dynamically changing environments in terms of EC, and operates with maximum autonomy, or complete human independence, in terms of HI.
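Reading the three CAC aspects as coordinates suggests a simple data structure for a UMS assessment. The sketch below is a hypothetical illustration only: ALFUS [103] defines detailed metrics for each axis, which are not reproduced here, and the averaged summary level is an assumed stand-in for those metrics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CACPoint:
    """Hypothetical point in the ALFUS Contextual Autonomous
    Capability space. Each axis is scored 0-10 following the
    anchors in the text: 0 = simplest mission, static environment,
    remote control; 10 = most difficult mission, most dynamic
    environment, full human independence. The scoring convention
    and summary rule are illustrative assumptions, not the
    detailed metrics ALFUS defines [103].
    """
    mission_complexity: float        # MC axis
    environmental_complexity: float  # EC axis
    human_independence: float        # HI axis (also the LOA)

    def summary_level(self) -> float:
        # One plausible summary: average the three axes, so a high
        # rating requires capability across mission, environment,
        # and independence simultaneously.
        return (self.mission_complexity
                + self.environmental_complexity
                + self.human_independence) / 3.0

# Example: a UAV on a moderately complex mission.
print(CACPoint(6.0, 4.0, 5.0).summary_level())  # -> 5.0
```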

Table of contents:

Abstract
Acknowledgments
List of Figures
List of Tables
List of Abbreviations
Statement of Original Authorship
Chapter 1 Introduction 
1.1 Multiple Heterogeneous UAVs
1.1.1 Benefits & Potentials
1.1.2 Issues & Challenges
1.1.3 Capability Transparency
1.2 Research Program
1.2.1 Scope
1.2.2 Research Objective and Questions
1.2.3 Contributions & Significance
1.3 Research Publications
1.4 Thesis Structure
Chapter 2 Literature Review 
2.1 Systems and Automation
2.1.1 Ten Levels of Automation (SV Scale)
2.1.2 Model for Human-Automation Interaction
2.1.3 Autonomy Spectrum
2.1.4 3D Intelligent Space
2.1.5 Autonomous Control Levels (ACL)
2.1.6 Endsley and Kaber’s Level of Automation
2.1.7 Autonomous Levels For Unmanned Systems
2.1.8 Human-Automation Collaboration Taxonomy
2.2 Interfaces and Interactions
2.2.1 User Interface Design Models
2.2.2 Ecological Interface Design
2.2.3 Dialogues
2.2.4 Authority Sharing
2.2.5 Belief-Desire-Intention Model
2.2.6 Adaptive Automation
2.2.7 Autonomy Transparency
2.2.8 System and Agent Transparency
2.3 Cognitive Constructs
2.3.1 Cognitive Workload
2.3.2 Situation Awareness
2.3.3 Automation Trust
2.4 Discussion
2.5 Conclusion
Chapter 3 Theoretical Foundation 
3.1 Capability Transparency
3.1.1 Environment Grouping
3.1.2 User Interface Grouping
3.2 Functional Capability Framework
3.2.1 Requirement 1: Functional Subsystem Abstraction
3.2.2 Requirement 2: Level Of Detail Indexing Method
3.2.3 Example 1: B-HUNTER UAV
3.2.4 Example 2: A Generic Tactical UAV for this Research
3.3 Autonomy Transparency in Hybrid Systems
3.3.1 Functional Level Of Autonomy
3.3.2 Information Transparency
3.3.3 Autonomy Spectrum
3.3.4 Model of Autonomy Transparency through Text-Based Representation
3.4 Implementing Capability Transparency
3.4.1 Mission Layer
3.4.2 Visualisation Layer (Display Interface)
3.4.3 Agent Layer
3.5 Conclusion
Chapter 4 Experiment Details 
4.1 Experiment Overview
4.1.1 Experiment 1
4.1.2 Experiment 2
4.1.3 Experiment 3
4.2 Experiment Software Prototype
4.2.1 Software System Design
4.2.2 LOA/LOD Visual Representation
4.2.3 Status Communication Feature
4.2.4 Interaction Design
4.3 Design and Apparatus
4.4 Procedure
4.4.1 Experiment Preparation
4.4.2 Subject Recruitment
4.4.3 Greet and Brief
4.4.4 Prototype Familiarisation
4.4.5 Experiment Trial
4.4.6 Data Collection
4.4.7 Post-Experiment Interview
4.4.8 Post Experiment
4.5 Conclusion
Chapter 5 Experiment 1: Functional Capability Framework Validation 
5.1 Scenario Description
5.1.1 Segment A: High LOD/Min Information
5.1.2 Segment B: Hybrid LOD/Mixed Information
5.1.3 Segment C: Low LOD/Max Information
5.2 Result and Analysis: Cognitive Workload
5.2.1 Mental Demand
5.2.2 Physical Demand
5.2.3 Temporal Demand
5.2.4 Performance
5.2.5 Effort
5.2.6 Frustration
5.2.7 Combined Cognitive Workload
5.2.8 Analysis Summary
5.3 Result and Analysis: Situation Awareness
5.3.1 Scoring Method
5.3.2 Analysis
5.4 Discussion and Conclusion
Chapter 6 Experiment 2: Partial Autonomy Transparency 
6.1 Scenario Description
6.1.1 Tasks and Objectives
6.1.2 Baseline Scenario (Opaque Autonomy Transparency)
6.1.3 Evaluation Scenario (Transparent Autonomy Transparency)
6.2 Result & Analysis: Cognitive Workload
6.2.1 Mental Demand
6.2.2 Physical Demand
6.2.3 Temporal Demand
6.2.4 Performance
6.2.5 Effort
6.2.6 Frustration
6.2.7 Combined Cognitive Workload
6.2.8 Analysis Summary
6.3 Result & Analysis: Situation Awareness
6.3.1 Level 1 SA
6.3.2 Level 2 SA
6.3.3 Combined SA
6.4 Discussion & Conclusion
Chapter 7 Experiment 3: Complete Autonomy Transparency 
7.1 Scenario Description
7.1.1 Tasks, Objectives and Modes-Of-Operations
7.1.2 Baseline Scenario (Opaque Autonomy Capability)
7.1.3 Evaluation Scenario (Transparent Autonomy Capability)
7.2 Result & Analysis: Cognitive Workload
7.2.1 Mental Demand
7.2.2 Physical Demand
7.2.3 Temporal Demand
7.2.4 Performance
7.2.5 Effort
7.2.6 Frustration
7.2.7 Combined Cognitive Workload
7.2.8 Analysis Summary
7.3 Result & Analysis: Situation Awareness
7.3.1 Level 1 SA
7.3.2 Level 2 SA
7.3.3 Combined SA
7.4 Result & Analysis: Trust in Automation
7.4.1 Competence
7.4.2 Predictability
7.4.3 Reliability
7.4.4 Faith
7.4.5 Overall Trust
7.5 Result & Analysis: Operator Performance
7.5.1 Initial Response Time
7.5.2 Event Response Time
7.5.3 Items-Of-Interest Found
7.6 Discussion & Conclusion
Chapter 8 Conclusion 
8.1 Addressing the Research Questions
8.1.1 Question 1: Functional Transparency
8.1.2 Question 2: Partial Autonomy Transparency
8.1.3 Question 3: Complete Autonomy Transparency
8.2 Contribution
8.3 Research Limitation
8.3.1 Transparency Visualisation and Interface Designs
8.3.2 Experiment Scenario Realism
8.3.3 Experiment Task Familiarisation
8.4 Recommendations & Future Work
Appendix A Experiment 1 Result Analysis 
A.1 Cognitive Workload Assumptions Testing
A.1.1 Mental Demand
A.1.2 Physical Demand Assumptions Testing
A.1.3 Temporal Demand
A.1.4 Performance
A.1.5 Effort
A.1.6 Frustration
A.1.7 Combined Cognitive Workload
A.2 Situation Awareness Assumptions Testing
Appendix B Experiment 2 Result Analysis 
B.1 Cognitive Workload Assumptions Testing
B.1.1 Mental Demand
B.1.2 Physical Demand
B.1.3 Temporal Demand
B.1.4 Performance
B.1.5 Effort
B.1.6 Frustration
B.1.7 Combined Cognitive Workload
B.2 Situation Awareness Assumptions Testing
B.2.1 Level 1 SA
B.2.2 Level 2 SA
B.2.3 Combined SA
Appendix C Experiment 3 Result Analysis 
C.1 Cognitive Workload Assumptions Testing
C.1.1 Mental Demand
C.1.2 Physical Demand
C.1.3 Temporal Demand
C.1.4 Performance
C.1.5 Effort
C.1.6 Frustration
C.1.7 Combined Cognitive Workload
C.2 Situation Awareness Assumptions Testing
C.2.1 Level 1 SA
C.2.2 Level 2 SA
C.2.3 Combined SA
C.3 Trust In Automation Assumptions Testing
C.3.1 Competence
C.3.2 Predictability
C.3.3 Reliability
C.3.4 Faith
C.3.5 Overall Trust
C.4 Operator Performance Assumptions Testing
C.4.1 Initial Response Time
C.4.2 Event Response Time
C.4.3 Items-Of-Interest Found
References 
