RuleWorks

A Framework for Model-Based Adaptive Training

System Realisation

In the training system architecture, the training agents replicate the problem-solving capabilities of idealised human experts. In this work, the approach to developing models of expertise is to describe both the expert agent(s) and the generic trainer agent at two levels: the knowledge level and the symbol level [Newell 1981]. In modern object-oriented software development, building models that describe aspects of reality is an acknowledged part of the work. Given that all software is a model, expressing the difference between a low-level implementation (the symbol level) and abstract models (the knowledge level) is important for gaining the right level of understanding of the application. In the MOBAT framework the knowledge level links to detail at the symbol level. Identifying an explicit set of modelling dimensions (at the knowledge level) and mapping them (to the symbol level) provides great flexibility in creating adaptive training applications that suit a wide range of industrial training problems. This section summarises the mapping of modelling dimensions (Section 7.8.1), system realisation at the knowledge level (Section 7.8.2) and system realisation at the symbol level (Section 7.8.3) within the MOBAT framework.
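The separation between the knowledge level and the symbol level can be sketched as an abstract task description with one concrete realisation. This is a minimal illustration, assuming a simple dictionary-based representation of the training situation; the class and field names are not taken from the MOBAT implementation:

```python
from abc import ABC, abstractmethod

# Knowledge level: what the agent can do, stated independently of any
# particular implementation.
class TrainerTask(ABC):
    """Abstract, knowledge-level description of a generic trainer task."""

    @abstractmethod
    def applies_to(self, situation: dict) -> bool:
        """Does this task apply to the current training situation?"""

    @abstractmethod
    def perform(self, situation: dict) -> dict:
        """Carry out the task, returning an updated situation."""

# Symbol level: one concrete realisation of the knowledge-level task.
class PresentAtLevel(TrainerTask):
    """Symbol-level realisation of the generic 'to present' task."""

    def __init__(self, model_kind: str):
        # 'procedural', 'associative' or 'principled', per the
        # generality modelling dimension.
        self.model_kind = model_kind

    def applies_to(self, situation: dict) -> bool:
        return situation.get("phase") == "presentation"

    def perform(self, situation: dict) -> dict:
        return {**situation, "presented_model": self.model_kind}
```

The point of the split is that the knowledge-level interface can be reasoned about (and reused across domains) without committing to any one symbol-level realisation.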

Mapping Modelling Dimensions to Modelling Specifications

To cope with complexity, the trainer agent must select and switch between multiple models in order to provide the most effective training for a given situation. In the MOBAT framework, a set of primitive attributes of training tasks has been defined that enables the trainer agent to pursue particular strategies. To operate on the domain-specific training models, the MOBAT experiments have used different combinations of generic trainer tasks with generic trainer methods and generic training heuristics. The trainer agent contains no domain-specific expertise, only training expertise (i.e., training strategies). The design and implementation approach for training expertise (the trainer agent) is similar to that for domain expertise (the expert agents). The set of primitive task attributes constitutes the generic tasks for the trainer agent. These generic trainer tasks in the MOBAT framework are described below.

  • To present: These tasks present the subject matter at different levels, resulting in inductive, deductive or rote learning modes (e.g., see Figure 6-8 To Present at Different Levels). This task uses the generality modelling dimension by selecting from procedural, associative or principled models. All training strategies make use of this task primitive. The amount of training unit structuring, the trainee's navigation choices and the help levels are determined by other task primitives.
  • To dictate: The tasks for this primitive realise the tutoring strategy. Typically, these tasks introduce the theoretical presentations before the practical exercises. In the MOBAT framework, tutoring tasks are used as follows: (1) to keep the trainee on track with step-by-step training unit control; (2) to select simple training tasks; (3) to select detailed problem solving methods; and (4) to frequently check trainee progress. The tutoring strategy is chosen when a trainee has a low self-supporting level or demonstrates poor progress.
  • To coach: The tasks using this primitive realise the coaching strategy. In general, these tasks focus on practical exercises. Hints are provided if the trainee shows any difficulty with the training material. In the MOBAT framework, coaching tasks are used as follows: (1) to guide the trainee with limited navigation choices; (2) to select moderately complex training tasks; (3) to select general problem solving methods; and (4) to provide a lot of feedback on trainee progress. The coaching strategy is chosen when a trainee’s profile indicates an average self-supporting level.
  • To facilitate: Tasks based on this primitive realise the facilitating strategy. The trainee is encouraged to experiment and is given many navigation choices, so as to learn as much as possible without losing track of the training goals. In the MOBAT framework, facilitating tasks are used as follows: (1) to guide the trainee while providing lots of navigation freedom; (2) to select abstract training tasks; (3) to select generic problem solving methods; and (4) to provide feedback on trainee progress when requested. The facilitating strategy is chosen when a trainee's profile indicates a high self-supporting level or the trainee demonstrates good progress.
  • To advise: The tasks providing advice to a trainee are based on current performance and diagnostic information from the trainee model. It should be noted that this ‘advice’ is different from ‘explanation’. Trainer ‘advice’ is intended to help a trainee either complete the current training unit or select another training unit. In the MOBAT framework, ‘explanation’ is a task for the expert agent(s) to justify their reasoning and explain aspects of the domain model.
  • To intervene: Tasks based on this primitive interrupt a training session when appropriate. Although training units run semi-autonomously, the trainer agent may intervene during a training session based on diagnostic information and adjust an appropriate modelling dimension; the feedback loop to the trainer agent may occur early enough to allow a modified approach. For example, the trainee may take longer than expected, or a trainee's actions may clearly demonstrate a bug or misconception.
  • To remediate: This task primitive is used to take appropriate actions when there are trainee errors. The feedback to the trainer agent after completion of a training unit may result in a modified plan or a modified training unit presentation. If the results of a training unit indicate that a trainee's actions differ from those of the expert, then the trainer needs to remediate the situation.
  • To assess: These tasks implement assessments that update the diagnostic information in the trainee model. The task primitives here are used to classify trainee errors in terms of slips, bugs and misconceptions (see Sections 6.5.2 and 7.6.5).
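The selection logic described for the dictate, coach and facilitate primitives can be sketched as a simple dispatch on the trainee model's diagnostics. The three-valued scales and their precedence are illustrative assumptions, not values from the MOBAT experiments:

```python
def select_strategy(self_supporting: str, progress: str) -> str:
    """Choose a training strategy from trainee-model diagnostics.

    self_supporting: 'low', 'average' or 'high' (illustrative scale)
    progress: 'poor', 'normal' or 'good' (illustrative scale)
    """
    # Tutoring: a low self-supporting level or poor progress.
    if self_supporting == "low" or progress == "poor":
        return "tutoring"
    # Facilitating: a high self-supporting level or good progress.
    if self_supporting == "high" or progress == "good":
        return "facilitating"
    # Coaching: the default for an average self-supporting level.
    return "coaching"
```

Placing the tutoring test first is a design choice: when the diagnostics conflict, the more supportive strategy wins.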
A library of specific tasks for the trainer agent can be built from these generic tasks, in the same way that the expert agent's generic tasks are used to categorise the subject-specific tasks. The combination of trainer tasks and trainer methods operates on distinct modelling dimensions (see Section 6.5 The Use of Multiple Models in a Training Framework). In addition to the trainer tasks and methods for the didactic and diagnostic problem spaces, the MOBAT experiments have been implemented with auxiliary tasks and methods that are useful in special circumstances. For example, the task ‘to generate’ is used in the Workmanship-MOBAT application to generate the different classes of training units in a special format for use by Internet navigation tools. A summary of the modelling elements that are manipulated by the (didactic tactician and diagnostic tactician) trainer tasks and trainer methods in the MOBAT framework is shown in Table 7-2.
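The idea of filing specific trainer tasks under the generic task primitives can be sketched as a simple registry. The specific task names registered below are hypothetical examples, not tasks from the MOBAT experiments:

```python
# The eight generic trainer task primitives of the MOBAT framework.
GENERIC_TASKS = (
    "present", "dictate", "coach", "facilitate",
    "advise", "intervene", "remediate", "assess",
)

# A library of specific tasks, each categorised under a generic primitive.
task_library: dict[str, list[str]] = {name: [] for name in GENERIC_TASKS}

def register_task(primitive: str, specific_task: str) -> None:
    """File a specific trainer task under its generic primitive."""
    if primitive not in task_library:
        raise ValueError(f"unknown generic task primitive: {primitive}")
    task_library[primitive].append(specific_task)

# Illustrative domain-specific tasks (names are assumptions).
register_task("present", "present_wiring_procedure")
register_task("assess", "classify_soldering_errors")
```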
Table 7-2 Mapping Modelling Dimensions to Modelling Specifications


Changing either the scope (problem-space), generality (method) or perspicuity (case-model) modelling dimension effectively results in a switch to a new model. The scope, generality and perspicuity dimensions enable the trainer agent to focus trainee attention and reduce problems, in terms of both representation and reasoning, to a manageable size. Changing precision (level), accuracy (range) or uncertainty (confidence) results in an adjustment of the current model. A model switch by the trainer agent effectively results in the generation of a new training unit. Training units are created as semi-autonomous objects in that, after being selected for execution, they run relatively independently. The modelling dimensions enable the trainer agent to switch or vary model properties and act appropriately on trainee slips, bugs and misconceptions. Varying these primitive modelling dimensions is only possible if the model representation has not been engineered too tightly. That is, if only the minimum information needed to execute a task has been represented, then a model is appropriate only for a limited purpose, and both the diagnostic actions from the trainer agent and the explanations from the expert agent are parsimonious.
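The distinction between switching to a new model and adjusting the current one can be sketched as follows. The dimension-to-specification pairs are taken from the text above; the dictionary-based training-unit representation is a placeholder assumption:

```python
# The first three dimensions select a model; the last three tune the
# current one (dimension -> modelling specification).
SWITCH_DIMENSIONS = {"scope": "problem-space",
                     "generality": "method",
                     "perspicuity": "case-model"}
ADJUST_DIMENSIONS = {"precision": "level",
                     "accuracy": "range",
                     "uncertainty": "confidence"}

def change_dimension(current_unit: dict, dimension: str, value) -> dict:
    """Apply a change along one modelling dimension.

    Changing scope, generality or perspicuity switches models, so a new
    training unit is generated; changing precision, accuracy or
    uncertainty only adjusts the current unit.
    """
    if dimension in SWITCH_DIMENSIONS:
        # Model switch: generate a fresh, semi-autonomous training unit.
        return {"unit_id": current_unit["unit_id"] + 1, dimension: value}
    if dimension in ADJUST_DIMENSIONS:
        # Model adjustment: tune the current training unit in place.
        return {**current_unit, dimension: value}
    raise ValueError(f"unknown modelling dimension: {dimension}")
```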

© RuleWorks.co.uk