INTELLIGENT VIRTUAL LEARNING ENVIRONMENT (IVLE)

Significant cognitive agent architectures

Several models and architectures, such as BDI (Belief-Desire-Intention), SOAR (State, Operator, and Result) and ACT-R (Adaptive Control of Thought-Rational), have been developed as cognitive frameworks for intelligent agents, enabling them to plan human-like, natural behaviors across different domains and applications.
The Belief-Desire-Intention (BDI) architecture [51] [52] [53] is one of the leading approaches for building intelligent tutor agents that reproduce human reasoning models. It divides the knowledge base of these agents into short-term and long-term memories. The short-term memory in BDI holds the beliefs, i.e., the facts about the virtual environment [54]: a database stores the context of the virtual environment together with the continuously updated states of its virtual entities. The emotional [55] and social states of the agent, which form its internal state, are also estimated from this context. All patterns of knowledge are therefore updated whenever an activity is perceived.
Every agent also has a long-term memory [54] that contains the goals and desires it must attain, such as tutoring and pedagogical objectives. The agent relies on its beliefs to select suitable intentions; behaviors and actions are then assigned to the agent to achieve the selected desires.
Beliefs, desires and intentions are thus the mental attitudes carried by BDI agents, which cooperate to achieve the targeted objectives (Figure 14). The BDI architecture focuses on constructing the intentions of the agents, which represent their commitments to execute particular plans of actions [56].
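To make the interplay of these attitudes concrete, here is a minimal Python sketch of a BDI deliberation loop, assuming illustrative class and method names that do not come from any of the cited systems: perception updates the beliefs (short-term memory), deliberation commits to a desire whose plan is applicable, and the plan's actions are then executed.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Plan:
    """A plan of actions attached to one desire (illustrative)."""
    applicable: Callable[[dict], bool]                  # context condition over the beliefs
    actions: List[Callable[[dict], None]] = field(default_factory=list)

    def execute_next_action(self, beliefs: dict) -> None:
        if self.actions:
            self.actions.pop(0)(beliefs)                # an action may update the beliefs

class BDIAgent:
    """Minimal BDI deliberation loop (illustrative sketch)."""

    def __init__(self, desires: List[str], plan_library: Dict[str, Plan]):
        self.beliefs: dict = {}        # short-term memory: facts about the environment
        self.desires = desires         # long-term memory: goals such as pedagogical objectives
        self.plan_library = plan_library
        self.intention = None          # the desire the agent is currently committed to

    def step(self, percepts: dict) -> None:
        self.beliefs.update(percepts)  # perception: update beliefs about the environment
        for desire in self.desires:    # deliberation: find a desire with an applicable plan
            plan = self.plan_library.get(desire)
            if plan is not None and plan.applicable(self.beliefs):
                self.intention = desire
                plan.execute_next_action(self.beliefs)
                break
```

A tutor agent could, for instance, hold a desire such as "explain the next step" whose plan becomes applicable once its beliefs record that the learner is idle.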
Several specifications of the BDI architecture have already been released, such as formal specifications using standard software engineering tools [57] and procedural concepts for building procedural reasoning systems. BDI remains among the most durable agent architectures in use today [58].
Researchers in [59] improved the BDI architecture by connecting the knowledge represented in the framework to the knowledge expressed by experts, using dedicated learning techniques. Consequently, several agent-based systems, like Jadex [60], adopted the BDI model for implementing intelligent agents.
The State, Operator, and Result (SOAR) cognitive architecture [61] focuses on building the agent's intelligence from its acquired experience. Recent SOAR evolutions address human-like behaviors by adding learning mechanisms, long-term memory and various types of knowledge (Figure 15). These mechanisms play a major role in the reasoning, planning and decision-making processes in the VLE.
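As a rough illustration of the State, Operator, and Result cycle, the sketch below proposes operators from production rules that match the current state, lets a preference function select one, and applies it to obtain the result; all names are assumptions made for this example, not part of any SOAR implementation.

```python
from collections import namedtuple

# A production rule proposes an operator when its condition matches the state.
Rule = namedtuple("Rule", ["condition", "operator"])

def decision_cycle(state: dict, rules, prefer):
    """One SOAR-like decision cycle over a working-memory state (sketch).

    Proposal:    every rule whose condition matches proposes its operator.
    Decision:    the preference function selects a single operator.
    Application: the chosen operator transforms the state into the result.
    """
    proposed = [rule.operator for rule in rules if rule.condition(state)]
    if not proposed:
        return state        # in SOAR, an impasse here would trigger sub-goaling
    operator = prefer(proposed)
    return operator(state)  # the "result" becomes the next state
```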

Embodied Conversational Agent (ECA)

An Embodied Conversational Agent (ECA) is a computer interface represented as a human-embodied agent that can interact naturally with the user. An ECA can produce verbal and non-verbal behaviors, including vocal speech, facial expressions, hand gestures and other body movements. These behaviors allow the ECA to communicate with the user in a human-like manner, which can motivate the user to respond and interact [68].
To approach human-level intelligence in their interactions with the user, embodied agents must be endowed with several capabilities, such as planning and emotional reasoning [69]. The influence of an ECA on user interaction can be evaluated by reviewing user responses and comparing the results of the performed scenarios [68].

Interest of ECA for virtual learning environment

ECAs are progressively being developed to adopt increasingly realistic human visual representations and communication capabilities [70]. They can be seen as computer interfaces that could replace human tutors, for example in practice and learning scenarios. While interacting with the user and the virtual environment, these agents can execute verbal and non-verbal behaviors such as speaking, facial expressions, body gestures and locomotion. Experiments such as [26] show that involving ECAs in learning scenarios as tutor agents can motivate the user to accomplish the required tasks.
The capabilities of an ECA can draw the user's attention in common and natural manners, such as gaze and deictic gestures. For instance, the ECA may look at an object and point to it while moving in the virtual environment and discussing the required task with the user. Such human-like manners are essential to provide the user with realistic contexts, and to motivate her/him to interact naturally with the ECA and to consider it as a human tutor.
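Such coordination can be pictured as a small multimodal behavior plan in which locomotion, gaze, a deictic gesture and speech are scheduled on a shared timeline; the structure and all names below are a hypothetical sketch, not the API of any platform cited here.

```python
from dataclasses import dataclass

@dataclass
class BehaviorChunk:
    modality: str    # "locomotion", "gaze", "gesture" or "speech"
    content: str     # utterance text or target object identifier
    start: float     # onset in seconds, relative to the start of the plan
    duration: float

# Hypothetical plan: the tutor walks toward a valve, looks at it,
# points at it, and describes the required task, roughly in sync.
plan = [
    BehaviorChunk("locomotion", "valve_3", start=0.0, duration=2.0),
    BehaviorChunk("gaze", "valve_3", start=1.5, duration=3.0),
    BehaviorChunk("gesture", "point:valve_3", start=2.0, duration=2.0),
    BehaviorChunk("speech", "Now close this valve.", start=2.2, duration=1.8),
]

def realize(plan):
    # A behavior realizer would dispatch each chunk to the animation and
    # speech engines at its scheduled onset; here we simply print them.
    for chunk in sorted(plan, key=lambda c: c.start):
        print(f"t={chunk.start:4.1f}s  {chunk.modality:<10} -> {chunk.content}")

realize(plan)
```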
Several ECA projects have been developed so far; the earliest, like STEVE [23] and MAX [71], carried light, domain-specific knowledge. Recent ECA research focuses on achieving more credible intelligent ECAs by improving natural (human-like) interaction, intelligent capabilities, emotions and facial expressions. The Virtual Human Toolkit [72], Greta [73] and MARC [55] are examples of currently used ECA platforms.

ECA platforms

Among the first ECA platforms, STEVE (Soar Training Expert for Virtual Environments) [23] was used as a tutor agent to execute particular pedagogical scenarios. Built on SOAR, STEVE was developed to interact with and train the user on predefined operations of a ship's control panel. STEVE is an interactive system: it interprets various input sources from the user, such as keyboard strokes, mouse clicks and voice commands. It interacts with the user through an animated embodied agent with the physical representation of a human-like face and body, and uses gestures to communicate while navigating in the virtual environment.
Consequently, the primary tasks of STEVE revolve around demonstrating the required actions to the user and observing the actions the user performs. STEVE can support the user when needed by replying to inquiries about prior actions.
The domain knowledge of STEVE includes the initial states of the entities in the virtual environment and the procedural scenario to be followed. STEVE offers several human-like capabilities that earlier agents did not provide: it can perform human-like actions and movements, reply to the user's questions, use gestures and gaze, follow the sequence of actions of the assigned procedure, and use its memory components to record performed actions and the changed states of entities. For this purpose, the architecture of STEVE supplies several components (a sketch of their cooperation follows the list):
1- Simulator: applies the behaviors in the virtual environment.
2- Visual Interface: permits the user to interact with the virtual objects.
3- Audio Component: plays vocal output and accepts vocal messages from the user.
4- Speech Generation: transforms generated text messages into speech so they can be vocally transmitted to the user.
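The following hypothetical sketch wires the four components into a single interaction step; the object and method names are illustrative assumptions and do not reproduce STEVE's actual code.

```python
def steve_interaction_step(simulator, visual_interface, audio, speech_gen, agent):
    """One illustrative interaction step through STEVE's four components.

    The five arguments are placeholder objects standing in for the
    Simulator, Visual Interface, Audio Component, Speech Generation
    component and the agent itself; their methods are assumptions.
    """
    # The visual interface reports the user's manipulation of a virtual object,
    # while the audio component delivers any vocal message from the user.
    user_action = visual_interface.poll_user_action()
    user_utterance = audio.listen()
    # The agent records the performed action and the altered entity states,
    # then decides how to respond (demonstrate, observe, or answer a question).
    reply_text, behavior = agent.decide(user_action, user_utterance)
    # Speech generation turns the reply into speech, and the simulator
    # applies the agent's behavior in the virtual environment.
    audio.play(speech_gen.synthesize(reply_text))
    simulator.apply(behavior)
```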

Virtual Human Toolkit

The Virtual Human project was developed at the Institute for Creative Technologies (ICT) [80]. Its objective is to build embodied agents that can perform human-like actions while interacting with the user during social training in virtual environments (Figure 21).
Virtual Humans (VH) are autonomous agents that perceive their environment and recognize performed activities in order to update their beliefs accordingly. They model their own and others' beliefs, desires and intentions, and follow assigned plans to interact naturally with the user through verbal and non-verbal communication behaviors. These agents can also take on various roles to support the user in executing training scenarios [80]. To collaborate naturally with the user and other cooperating agents, VH agents provide several capabilities [69], such as automated speech recognition, perception using the Computer Expression Recognition Toolbox (CERT) [81], task modeling using DTask [82], natural language generation [83], and text-to-speech using the Festival engine [84].
Consequently, the ICT developed the architecture of the Virtual Human Toolkit (Figure 22), which defines, at an abstract level, the essential modules that realize the functionalities of the virtual human. The VH Toolkit is composed of several modules, tools, libraries and third-party software that cooperate to attain these functionalities.
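At that abstract level, the cooperation of the modules can be imagined as a pipeline from perception to speech output. The sketch below approximates one turn with placeholder stubs; none of the function names reflect the toolkit's real APIs.

```python
# Placeholder stubs standing in for the toolkit's modules: automated speech
# recognition, expression recognition (cf. CERT), task modeling (cf. DTask),
# natural language generation, and text-to-speech (cf. Festival).
# All of these names are assumptions; none reflect the real APIs.
def recognize_speech(audio_in):     return "what should I do next?"
def recognize_expression(video_in): return "confused"
def plan_task(state):               return "explain_step_2"
def generate_language(step):        return "Let me show you the next step."
def synthesize_speech(text):        return f"<audio: {text}>"

def virtual_human_turn(audio_in, video_in, state):
    """One perception-to-action turn of a virtual human (illustrative)."""
    state["utterance"] = recognize_speech(audio_in)        # speech recognition
    state["user_affect"] = recognize_expression(video_in)  # perception
    next_step = plan_task(state)                           # task modeling
    reply = generate_language(next_step)                   # language generation
    return synthesize_speech(reply)                        # text-to-speech output

print(virtual_human_turn(b"...", b"...", {}))
```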

Table of contents :

1 INTRODUCTION
1.1 VIRTUAL LEARNING ENVIRONMENT
1.2 EMBODIED CONVERSATIONAL AGENT
1.3 THESIS OBJECTIVES AND PLAN
2 BACKGROUND AND LITERATURE REVIEW
2.1 INTELLIGENT VIRTUAL LEARNING ENVIRONMENT (IVLE)
2.1.1 SELDON
2.1.2 #FIVE
2.1.3 MASCARET
2.2 COGNITIVE ARCHITECTURES
2.2.1 Significant cognitive agent architectures
2.3 EMBODIED CONVERSATIONAL AGENT (ECA)
2.3.1 Interest of ECA for virtual learning environment
2.3.2 ECA platforms
2.3.3 SAIBA Framework
3 MODEL
3.1 GLOBAL ARCHITECTURE
3.2 MODEL OF KNOWLEDGE
3.2.1 Components of Knowledge
3.2.2 Initial instantiation of knowledge base
3.3 BEHAVIORS AND ACTIONS
3.3.1 Perception behavior
3.3.2 Communication Action
3.3.3 Communication Behavior
3.3.4 Taxonomy of Questions
3.4 SAIBA INTEGRATION
3.4.1 Implementation of Behavior Planner and Behavior Realizer interfaces
3.4.2 Integrating ECA platforms
4 APPLICATION
4.1 TECHNICAL ARCHITECTURE
4.1.1 The virtual environment
4.1.2 Integrating the ECAs
4.1.3 Building the domain model
4.1.4 Importing MASCARET and the defined models
4.1.5 Communication methods for the user
4.2 TUTOR BEHAVIOR
4.3 IMPLEMENTED SCENARIO
5 EVALUATION
5.1 EXPERIMENT PROTOCOL
5.1.1 Description of the experiment
5.1.2 Log files
5.1.3 Questionnaire
5.2 RESULTS
6 CONCLUSION & PERSPECTIVES
6.1 PERSPECTIVES
6.1.1 Building an advanced Tutoring Behavior
6.1.2 Intelligent Tutoring System
7 BIBLIOGRAPHY
8 APPENDICES
