
Modelling human decision-making

 

 

A number of projects modelling human decision-making within simulations have been conducted, and more are under way. The purpose of these simulations is to elicit valuable information about human decision-making. Artificial intelligence methods are then trained on this information to capture the decision-makers’ strategies, and the trained artificial intelligence is linked back to the simulation to evaluate the performance of the decision-maker. Results are presented from the most recent project. The major motivation of these projects is to further understand and improve human decision-making skills.

 

1. Introduction

 

The author has been investigating the use of artificial intelligence to represent human decision-making in simulations since the mid-1990s. The purpose of this paper is to report the history and results of this investigation and the future work it seeks to complete. The idea emerged from a project to model rail marshalling yards, was demonstrated through an artificial example linking a simulation with an expert system, and was then applied in a real setting: the maintenance operations of an engine assembly plant. Future work will examine the possibility of using simulations for knowledge elicitation. This paper briefly describes each phase of the work and concludes by discussing the motivation for modelling human decision-making.

 

2. Forming Ideas

 

During the mid-1990s, the author started a project to simulate industrial rail marshalling yards, funded by ESPRIT. The project, carried out with the aid of a Belgian consultancy, sought to identify the requirements for a rail yard simulator. At the time it was very difficult to simulate movements such as the shunting of individual wagons in a yard, largely because no commercial software adequate for the task was available.

 

As the nature of rail yard operations was studied, an important realisation arose. In the yard under investigation, a supervisor received incoming trains and directed the splitting up of the wagons within the yard; he was also responsible for selecting wagons from different locations to form outgoing trains. These tasks involved complex decision-making: the aim was not simply to move wagons, but to place them so as to minimise movement and disturbance within the yard. Because the supervisor’s work rested on human judgement, he found it very difficult to articulate the strategies he employed, and so there was no direct means of modelling his decision-making in a simulation. From this analysis it became clear that modelling the human decision-making was more complex than modelling the physical movement of the wagons.

 

3. Proof of Concept

 

The use of an expert system, or of other artificial intelligence technologies, seemed a promising solution to this problem. Previous research had attempted this with some success (Flitman and Hurrion, 1987; O’Keefe, 1989; Williams, 1996; Lyu and Gunasekaran, 1997). Most of these attempts, however, had been carried out some years earlier, and none appeared to make much use of commercial software, which was the focus of the rail yard study.

 

A small study was set up to explore two questions:

 

• Could human decision-making be modelled by linking commercial expert system software with commercial simulation software?

• Could a decision-maker’s interaction with a simulation model be used as a method for eliciting knowledge?

 

The second question asked whether the problem of tacit knowledge could be overcome by presenting decision-making scenarios in a simulation and having an expert respond to those scenarios.

 

A simple simulation, based on a real situation at a steel works, was developed in Witness (figure 1). Lorries arrived at a lorry park requiring loads of between 5 and 20 items. On arrival, a supervisor directed each lorry to a loading bay, if a suitable bay was available. In making this decision the supervisor had to take account of the restrictions on the bays and the lorry capacities: lorries requiring more than 10 items had to be allocated to bay 2 or 3, while bays 1 and 4 could take only lorries requiring up to 10 items. If no suitable bay was free, the lorry waited in the park until one became vacant. Once allocated, a lorry moved to its bay, was loaded and then left the system.
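As an illustration only, the allocation rule can be sketched in a few lines (Python here; the bay numbers and the 10-item threshold come from the description above, while the function and data structures are invented for this sketch):

```python
# Illustrative sketch of the bay-allocation rule described above. The
# original logic lived in the Witness model and the XpertRule decision
# tree; this Python rendering is an assumption-laden stand-in.

LARGE_LOAD_BAYS = {2, 3}   # lorries requiring more than 10 items
SMALL_LOAD_BAYS = {1, 4}   # lorries requiring up to 10 items

def allocate_bay(load_size, free_bays):
    """Return a suitable free bay for the lorry, or None (wait in the park)."""
    suitable = LARGE_LOAD_BAYS if load_size > 10 else SMALL_LOAD_BAYS
    candidates = suitable & set(free_bays)
    return min(candidates) if candidates else None

# Example: a lorry needing 15 items with bays 1 and 3 free goes to bay 3.
assert allocate_bay(15, {1, 3}) == 3
```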

XpertRule was used to develop the expert system representing the supervisor’s decisions in allocating the lorries. It was chosen for two main reasons: it adopts a rule induction approach, and it was one of the few expert system packages with a true Windows implementation that was also OLE compliant.

 

Because the link between the packages depended on OLE, a model controller (MC) was developed in Visual Basic (figure 2). The MC initiated the run of the simulation model. Whenever an allocation decision was required, the simulation model stopped and waited for the MC to return a decision before continuing its run. On detecting that the model had stopped, the MC extracted data from the model and passed it to the expert system for a decision; the decision was then returned to the simulation model via the MC. Some care was needed to ensure that this sequence of events was followed, the main difficulty being to monitor whether Witness had stopped running before seeking a decision from XpertRule. Had Witness been able to act as an OLE client, it could have called XpertRule directly, eliminating the need for the MC and considerably simplifying the linking of the packages.
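The sequence of events can be sketched as follows. This is a Python sketch for illustration only: the real MC was written in Visual Basic, and `start_run`, `stopped_for_decision` and the other method names are hypothetical stand-ins for the OLE automation calls, not the actual Witness or XpertRule API.

```python
# Hypothetical sketch of the model controller (MC) loop described above.
# All method names are invented placeholders for OLE automation calls.
import time

def run_controller(simulation, expert_system, run_length):
    simulation.start_run(run_length)                # MC initiates the simulation run
    while not simulation.run_complete():
        if simulation.stopped_for_decision():       # model halts at a decision point
            state = simulation.read_state()         # extract data from the model
            decision = expert_system.decide(state)  # pass it to the expert system
            simulation.apply_decision(decision)     # return the decision to the model
            simulation.resume()                     # simulation continues its run
        time.sleep(0.1)  # polling for the stop was the main practical difficulty
```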

 

Before the expert system decision tree could be developed, the simulation first had to be used as a knowledge elicitation engine. The simulation was run up to each decision point, where the user (the author) was prompted for the allocation decision. The decisions, along with variables describing the system’s state, were logged to a data file, and the expert system was trained on these data. The decision tree in figure 3 was developed using this method.
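A sketch of this logging step follows (illustrative only: the paper does not list the exact state variables recorded, so the field names here are invented):

```python
# Illustrative knowledge-elicitation logger: at each decision point the
# system state and the human's decision are appended to a data file that
# later becomes the expert system's training set. Field names are assumed.
import csv

FIELDS = ["load_size", "free_bays", "lorries_waiting", "decision"]

def log_decision(path, state, decision):
    """Append one (state, decision) training example to the data file."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:                 # empty file: write the header first
            writer.writeheader()
        writer.writerow({**state, "decision": decision})
```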

 

Once the decision tree had been defined, the expert system could operate the simulation. The expert system took the place of the decision-maker, establishing the effect of its decision-making strategy on the operation of the loading bays. Robinson et al. (1998) document this work in full.

 

4. Modelling Maintenance Decisions in a Manufacturing Plant

 

The example above provided proof that commercial software could be linked to represent human decision-making in a simulation, and that the simulation could serve useful purposes, among them knowledge elicitation. This had also been shown in previous work, but only with artificial examples. The question therefore arose of whether the approach could be applied in real-life situations, where more complex cases arise. In 1999 the EPSRC began funding a three-year collaborative project to investigate this issue, bringing together Warwick Business School, Aston University, Ford Motor Company and the Lanner Group. The case under consideration was an engine assembly plant and the decisions made by supervisors when faced with machine failures. A summary is presented below; a fuller description can be found in Robinson et al. (2001).

 

In the engine assembly plant, engine blocks were placed on a ‘platten’ and passed through a series of automated and manual processes. For the purposes of this research, attention focused on the maintenance operations of a self-contained section of the assembly line. A simulation model of the complete facility, developed in the WITNESS simulation software, already existed before the research began; it had been used to identify bottlenecks and to evaluate alternative operating policies. In that model, maintenance was represented simply: whenever a machine failed, a repair began immediately, with the skill level of the engineer performing the repair determined by random sampling. For the objectives of the original study these assumptions were adequate.
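A minimal sketch of those two assumptions (illustrative only: the skill categories, their probabilities and the `start_repair` call are invented, not taken from the Ford model):

```python
# Sketch of the original model's maintenance logic: repair starts
# immediately on failure, with the engineer's skill level drawn by random
# sampling. The categories and weights below are invented for illustration.
import random

SKILL_LEVELS = ["novice", "competent", "expert"]   # assumed categories
SKILL_WEIGHTS = [0.2, 0.5, 0.3]                    # assumed distribution

def on_machine_failure(machine):
    skill = random.choices(SKILL_LEVELS, weights=SKILL_WEIGHTS)[0]
    machine.start_repair(skill)                    # repair begins at once
```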

 

In reality, however, a maintenance supervisor has at his or her disposal many alternatives besides having the machine repaired immediately, for example:

 

• Stand-by: have an engineer process the parts manually until the end of the shift, and only then repair the machine.

• Stop the line.

• Do nothing at all.

 

Two questions arose to guide the work. Could the existing simulation be used to elicit knowledge from the maintenance supervisors about how they made these decisions? And could this information be used to develop an artificial intelligence representation of the decision-maker? The goal, in fact, was not so much to develop a better simulation model as to find a means of identifying and improving decision-making.

 

The knowledge based improvement (KBI) methodology was conceived to address this issue. KBI comprises five stages:

 

• Stage 1: Understanding the decision-making process

• Stage 2: Collecting data

• Stage 3: Determining the strategies employed by the expert decision-makers

• Stage 4: Determining the effects of the decision-making strategies

• Stage 5: Devising ways to improve the system

 

The five stages of KBI are described in fuller detail in Robinson et al. (2001).

 

Following a process of knowledge elicitation, up to 63 example decisions were collected from each of the three maintenance supervisors (one for each shift). The knowledge elicitation sessions lasted about one hour, this being both the time the supervisors had available and, it was judged, the limit of their ability to concentrate on making sound decisions for the model.

 

The examples obtained were used to train a set of artificial intelligence methods, with varying degrees of success. Table 1 compares the methods on the example decisions they misclassified; a score of zero reflects perfect classification. The neural network’s poor performance was expected, since neural networks are known to perform badly with small training sets.
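As an illustration of this train-and-score step, a minimal sketch follows. It is not the project’s implementation: scikit-learn’s tree learner is CART rather than the ID3 algorithm used in the project, and the file name and column layout are assumptions carried over from the logging sketch above.

```python
# Illustrative sketch only: train a decision tree on logged example
# decisions and count how many of those examples it misclassifies.
# scikit-learn implements CART, not the ID3 algorithm the project used.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("supervisor_decisions.csv")       # logged (state, decision) pairs
X = pd.get_dummies(data.drop(columns=["decision"]))  # encode attributes numerically
y = data["decision"]                                 # repair now / stand-by / stop line / ...

tree = DecisionTreeClassifier().fit(X, y)
misclassified = int((tree.predict(X) != y).sum())    # zero = perfect classification
print(f"{misclassified} of {len(y)} example decisions misclassified")
```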

 

 

The ID3 decision tree was then used to run the simulation. Figure 4 shows the daily throughput resulting from each of the three supervisors’ decision-making strategies, alongside results from the decision logic in the original model developed by Ford. There are some differences in plant throughput between the decision-making strategies.

 

 

5. Knowledge Elicitation through Simulation

 

The work on the engine assembly case showed the potential of the KBI methodology for use in real situations. It also revealed, however, some of the difficulties that can arise in its use. A particular difficulty lay in obtaining realistic decisions from the supervisors, and in obtaining enough example decisions to train valid artificial intelligence representations. A second three-year project, started in October 2002, addresses these knowledge elicitation issues. It is again funded by the EPSRC, in collaboration with Ford, the Lanner Group and Aston University.

 

The project’s specific objectives include:

 

• To determine alternative mechanisms for eliciting knowledge from decision-makers via a visual interactive simulation

• To compare the alternative methods for efficiency, measured by the speed of data collection

• To compare the alternative methods for effectiveness, measured by the accuracy of data collection

• To compare the data collection methods in terms of the ability to train various artificial intelligence methods from the data sets collected

 

In doing so, the following issues are being considered:

 

• Level of visual display: none, paper based, 2D, 2½D, 3D

• Interactive interface: the number of decision-making attributes reported to the decision-maker, these attributes being the key data upon which decisions are taken

• Scenario generation: use of historic scenarios, historic scenarios adapted to give more extreme examples, random sampling of scenarios, and random sampling adapted to give more extreme examples

• Self-learning: learning responses to specific scenarios as data collection progresses and automatically responding to future repetitions of similar scenarios (sketched below)
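A minimal sketch of the self-learning idea, assuming each scenario can be reduced to a hashable key of its decision-making attributes (that reduction is the hard part and is glossed over here):

```python
# Sketch of self-learning during data collection: remembered responses are
# replayed for repeated scenarios, so the expert is only asked about
# scenarios not seen before. Scenarios are assumed to be flat dicts of
# decision-making attributes with hashable values.
def collect_decisions(scenarios, ask_expert):
    learned = {}                                  # scenario key -> decision
    log = []
    for scenario in scenarios:
        key = tuple(sorted(scenario.items()))
        if key not in learned:
            learned[key] = ask_expert(scenario)   # novel scenario: ask the expert
        log.append((scenario, learned[key]))      # every case is still logged
    return log
```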

 

6. Conclusion: Why Model Human Decision-Making?

 

In short, the heart of the matter is the motivation for modelling human decision-making. Is it to enable the development of better models, or to help better understand and improve human decision-making?

 

Checkland (1981) describes four system types, two of which are significant here:

 

• Designed physical systems: human-designed systems that do not require human interaction in their operation, e.g. an automated warehouse.

• Human activity systems: systems of human activity that cannot operate without human interaction, e.g. political and social systems.

 

The significance of these two system types is that they represent the extremes of human interaction. In simulation modelling of the type considered in this paper (operations systems modelling), we rarely deal with either extreme, but with systems that lie somewhere between the two.

Operations systems are typically designed physical systems that involve human interaction, such as manufacturing lines or banks. Human interaction rarely occurs without human decision-making; it is an essential ingredient. One motivation for modelling human decision-making, therefore, is that human interaction and decision-making are at the core of almost all operations systems.

 

But does including elements of human decision-making lead to better models? There is a possible problem here, illustrated by Robinson (1994) in the diagram in figure 5: increasing a model’s level of complexity yields diminishing returns in accuracy. It can even be argued that beyond a certain point additional complexity reduces model accuracy, because there is insufficient knowledge to support the modelled detail. The optimum, or ‘best’, model is around point x, where the model is sufficiently accurate and beyond which there is little to gain from additional complexity. The exact location of point x depends on the model’s purpose, which also determines the required level of accuracy.

 

One driving force behind modelling human decision-making is improved model accuracy through increased complexity. The drawback of this approach is that it may amount to climbing the flat part of the curve in figure 5, and so give little gain. Indeed, it could be argued that although modelling human decision-making generates a slightly more accurate model, it does not necessarily generate a better one: a large amount of effort is expended for only a small step up in accuracy, and the added intricacy may itself make the model less reliable. This argument depends very much on the modelling context and the required level of accuracy; there are cases where a higher level of fidelity is required and the additional gains in accuracy are needed.

 

A second motivation is to understand and improve human decision-making itself. Here the aim is to improve the performance of systems that involve human interaction. The focus is no longer on building more accurate models, but on employing those models to analyse the effects of human interaction and to identify ways of changing its quality so as to improve system performance. Generating insight and understanding therefore takes precedence over model accuracy. This is the true motivation behind the KBI methodology.

 

Acknowledgements

The author wishes to acknowledge the support and collaboration of the EPSRC, Ford Motor Company (John Ladbrook), Lanner Group (Tony Waller) and Aston University (Professor John S. Edwards).

 

References

Checkland, P.B. (1981). Systems Thinking, Systems Practice. Wiley, Chichester, UK.

Flitman A.M. and Hurrion, R.D. (1987). Linking Discrete-Event Simulation Models with Expert Systems. J. Opl Res. Soc., 38 (8), pp. 723-734.

Lyu, J. and Gunasekaran, A. (1997). An Intelligent Simulation Model to Evaluate Scheduling Strategies in a Steel Company. International Journal of Systems Science, 28 (6), pp. 611-616.

O’Keefe, R.M. (1989). The Role of Artificial Intelligence in Discrete-Event Simulation. Artificial Intelligence, Simulation and Modeling (L. E. Widman, K.A. Loparo and N.R. Neilsen, eds.), pp. 359-379. Wiley, NY.

Robinson, S. (1994). Simulation Projects: Building the Right Conceptual Model. Industrial Engineering, 26 (9), pp. 34-36.

Robinson, S., Edwards, J.S. and Yongfa, W. (1998). An Expert Systems Approach to Simulating the Human Decision Maker. Proceedings of the 1998 Winter Simulation Conference (D.J. Medeiros, E.F. Watson, M. Manivannan and J. Carson, eds.), The Society for Computer Simulation, San Diego, CA, pp. 1541-1545.

Robinson, S., Alifantis, A., Edwards, J.S., Hurrion, R.D., Ladbrook, J. and Waller, T. (2001). Modelling and Improving Human Decision Making with Simulation. Proceedings of the 2001 Winter Simulation Conference (B.A. Peters, J.S. Smith, D.J. Medeiros and M.W. Rohrer, eds.), The Society for Computer Simulation, San Diego, CA, pp. 913-920.

Williams, T. (1996). Simulating the Man-in-the-Loop. OR Insight, 9 (4), pp. 17-21.

 
