A parsimonious explanation of observed biases when forecasting one’s own performance
- Document type: Journal article
- Language: English
- Publisher: Elsevier
- Year of publication: 2017
Details
Related fields: Management and Economics
Journal: International Journal of Forecasting
University: University of Bath, England
Publisher: Elsevier
Description
1. Introduction

Being able to forecast one’s future performance, based on an accurate perception of one’s abilities and skills, can be important in a number of contexts. These include career choices, making personal assessments of one’s need for education and training, and decisions where personal failures may lead to danger or extensive losses.

Forecasting one’s performance on different tasks can also be important to those involved in judgmental forecasting itself. For example, in sales forecasting, a tendency to over-forecast one’s future performance might lead to a lack of responsiveness to advice and feedback (e.g., Bonaccio & Dalal, 2006; Dunning, 2013; Lim & O’Connor, 1995). Similarly, in group forecasting situations, such as applications of the Delphi method, a propensity to over-predict one’s forecasting performance might lead one to overweight one’s own forecasts relative to those of the group. This may reduce a panel member’s willingness to change their judgment when they receive information on the forecasts of other group members. In contrast, under-forecasting one’s performance or underestimating one’s expertise may lead one to discount the potentially valuable inputs that one could add to the forecasting process (though Rowe & Wright, 1999, found the evidence linking confidence in one’s forecast and a willingness to change to be inconsistent). These potential problems would apply in particular to group-based forecasting methods that require group members to self-rate their expertise explicitly (e.g., DeGroot, 1974).

A number of studies have investigated how accurately people forecast their own performances on tasks, tests or examinations (Burson, Larrick, & Klayman, 2006; Clayson, 2005; Kennedy, Lawton, & Plumlee, 2002; Krueger & Mueller, 2002; Kruger & Dunning, 1999; Miller & Geraci, 2011). The results have varied from findings of no correlation between predicted and actual performances to findings of a significant correlation.
However, even when there is a significant positive correlation, a common finding is that, on average, relatively poor performers tend to over-forecast their performances, while high performers tend to under-forecast how well they will do. Several explanations have been put forward for this phenomenon, which we will term regressive forecasting. For example, Kruger and Dunning (1999) have argued that poor performers are unaware of their own incompetence, while high performers suffer from a false consensus effect, in that they assume that their abilities are shared by their peers. Others have suggested that the bias is merely an artefact of regression (Krueger & Mueller, 2002).

In this paper, we adopt a judgmental forecasting perspective in order to suggest an alternative explanation for this tendency, and test it empirically. Our explanation is more parsimonious than many others that have been suggested, and hence is consistent with Occam’s razor, which states that the simplest hypothesis, involving the fewest assumptions, should be favoured (see, for example, Domingos, 1999, for a discussion of Occam’s razor).

We begin by reviewing the literature relating to this topic, before developing a theoretical model to represent the forecasting process. We then present an analysis of data from six in-course multiple-choice tests of statistical and forecasting knowledge. This enables us to model the process used by people to forecast their own performances under conditions in which the outcome was important and consequential to the individuals involved. The consequences arose because the final grade of the students’ degrees, or whether they were able to progress to the later stages of the course, depended partly on their performances in these tests. A key advantage of using multiple-choice tests in this research is that the scores achieved are determined objectively.
The use of marks for an essay-based examination, for example, would introduce an additional element of variation, namely the subjective marking of the examiner. Thus, forecasting one’s performance would be confounded with forecasting the subjective (and probably inconsistent) scoring of the marker. Of course, the choice and nature of the questions on a multiple-choice test is based on the subjective judgment of the examiner, but the extent of the contribution of this subjectivity to the student’s mark is far less than in many other forms of performance assessment.
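The regression-artefact account mentioned above (Krueger & Mueller, 2002) can be illustrated with a small simulation. This is a minimal sketch with hypothetical numbers (the ability distribution, noise levels, and percentage scale are all assumptions for illustration, not taken from the paper): if a student's self-forecast and their actual test score are two independently noisy readings of the same underlying ability, then grouping students by their actual score alone reproduces the classic pattern, even though the simulated forecasts are unbiased.

```python
import random
import statistics

random.seed(42)
n = 10_000

# Hypothetical latent ability (percent scale), plus two independent
# noisy observations of it: a self-forecast and an actual test score.
ability = [random.gauss(60, 12) for _ in range(n)]
forecast = [a + random.gauss(0, 8) for a in ability]  # unbiased self-forecast
actual = [a + random.gauss(0, 8) for a in ability]    # objective test score

# Group students into quartiles of their *actual* score, as the studies do.
order = sorted(range(n), key=lambda i: actual[i])
quartiles = [order[k * n // 4:(k + 1) * n // 4] for k in range(4)]

for q, idx in enumerate(quartiles, start=1):
    err = statistics.mean(forecast[i] - actual[i] for i in idx)
    print(f"quartile {q}: mean forecast error = {err:+.1f}")
```

Because conditioning on a low actual score selects mostly negative score noise, the bottom quartile shows a positive mean forecast error (apparent over-forecasting) and the top quartile a negative one (apparent under-forecasting), despite the forecasts being unbiased by construction. This is the sense in which the observed bias could be a purely statistical regression effect rather than a cognitive one.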