Peer-reviewed article

A Model-Based Approach to Measuring Expertise in Ranking Tasks.

2011; Wiley; Volume: 33; Issue: 33; Language: English

ISSN

1551-6709

Autores

Michael D. Lee, Mark Steyvers, Mindy de Young, Brent J. Miller

Topic(s)

Expert finding and Q&A systems

Abstract

A Model-Based Approach to Measuring Expertise in Ranking Tasks

Michael D. Lee (mdlee@uci.edu), Mark Steyvers (msteyver@uci.edu), Mindy de Young (mdeyoung@uci.edu), Brent J. Miller (brentm@uci.edu)
Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA 92697-5100

Abstract

We apply a cognitive modeling approach to the problem of measuring expertise on rank ordering tasks. In these tasks, people must order a set of items in terms of a given criterion. Using a cognitive model of behavior on this task that allows for individual differences in knowledge, we are able to infer people's expertise directly from the rankings they provide. We show that our model-based measure of expertise outperforms self-report measures, taken both before and after doing the task, in terms of correlation with the actual accuracy of the answers. Based on these results, we discuss the potential and limitations of using cognitive models in assessing expertise.

Keywords: expertise, ordering task, wisdom of crowds, model-based measurement

Introduction

Understanding expertise is an important goal for cognitive science, for both theoretical and practical reasons. Theoretically, expertise is closely related to the structure of individual differences in knowledge, representation, decision-making, and a range of other cognitive capabilities (Wright & Bolger, 1992). Practically, the ability to identify and use experts is important in a wide range of real-world settings. There are many possible tasks that people could do to provide their expertise, including estimating numerical values (e.g., "what is the length of the Nile?"), predicting categorical future outcomes ("who will win the FIFA World Cup?"), and so on. In this paper, we focus on the task of ranking a set of given items in terms of some criterion, such as ordering a set of cities from most to least populous.

One prominent theory of expertise argues that the key requirements are discriminability and consistency (e.g., Shanteau, Weiss, Thomas, & Pounds, 2002). Experts must be able to discriminate between different stimuli, and they must be able to make these discriminations reliably or consistently. Protocols for measuring expertise in terms of these two properties are well developed, and have been applied in settings as diverse as audit judgment, livestock judgment, personnel hiring, and decision-making in the oil and gas industry (Malhotra, Lee, & Khurana, 2007). However, because these protocols need to assess discriminability and consistency, they have two features that will not work in all applied settings. First, they rely on knowing the answers to the discrimination questions, and so must have access to a ground truth. Second, they must ask the same (or very similar) questions of people repeatedly, and so are time consuming. Given these limitations, it is perhaps not surprising that expertise is often measured in simpler and cruder ways, such as by self-report.

In this paper, we approach the problem of expertise from the perspective of cognitive modeling. The basic idea is to build a model of how a number of people with different levels of expertise produce judgments or estimates that reflect their knowledge. This requires making assumptions about how individual differences in knowledge are structured, and about how people apply decision-making processes to their knowledge to produce answers.
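This abstract does not spell out the model's form, but a Thurstonian-style account is a natural sketch of the idea just described: each person draws one noisy mental sample per item and reports the resulting sort order. Everything below (the names mu, sigma, and simulate_rankings, and the Gaussian noise assumption) is an illustrative reading on our part, not the paper's specification; sigma stands in for a per-person knowledge-precision parameter.

```python
# Minimal generative sketch of a Thurstonian-style ranking model (assumed
# form, not the paper's): each person samples a noisy mental estimate of
# every item's value and reports the items sorted by those samples.
import numpy as np

def simulate_rankings(mu, sigma, rng):
    """Simulate one ranking per person.

    mu    : latent item values on the criterion, shape (n_items,)
    sigma : one noise level per person; smaller sigma = sharper knowledge
    Returns an (n_people, n_items) array; row j lists item indices in
    person j's reported order, best first.
    """
    return np.array([np.argsort(-rng.normal(mu, s)) for s in sigma])

rng = np.random.default_rng(0)
mu = np.arange(10.0, 0.0, -1.0)          # true values of 10 items
sigma = np.array([0.5, 1.0, 2.0, 4.0])   # four people, decreasing expertise
print(simulate_rankings(mu, sigma, rng))
```

Under a model of this kind, fitting observed rankings amounts to inferring each person's sigma, which is the sense in which the model can "do all of the work" of measuring expertise, as discussed next.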
There are two key attractive properties of this approach. The first is that, if a reasonable model can be formulated, the knowledge people have can be inferred by fitting the model to their behavior. This avoids the need to rely on self-reported measures of expertise, or to use elaborate protocols to extract a measure of expertise. The cognitive model does all of the work, providing an account of task behavior that is sensitive to the latent expertise of the people who do the task.

The second attraction is that expertise is determined by making inferences about the structure of the different answers provided by individuals. This means that performance does not have to be assessed in terms of an accuracy measure relative to the ground truth. It is possible to measure the relative expertise of individuals without already having the expertise to answer the question; the sketch at the end of this section illustrates the point on simulated data.

The structure of this paper is as follows. We first describe an experiment that asks people to rank order sets of items, and to rate their expertise both before and after having done the ranking. We then describe a simple cognitive model of the ranking task, and use the model to infer individual differences in the precision of the knowledge each person has. In the results section, we show that this individual-differences parameter provides a good measure of expertise, in the sense that it correlates well with actual performance. We also show that it outperforms the self-reported measures of expertise. We conclude with some discussion of the strengths and limitations of our cognitive modeling approach to assessing expertise.
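To make the second of those attractions concrete, here is a rough, self-contained sketch that scores each person by agreement with a Borda-style consensus ordering, with no reference to the ground truth, and then checks on made-up rankings that the score tracks true accuracy. The consensus heuristic and all helper names are illustrative stand-ins of ours, not the paper's model or code.

```python
# Estimating relative expertise from the structure of the answers alone:
# score each ranking by its agreement with the group consensus, then verify
# (possible here only because the data are simulated) that the score
# correlates with accuracy against the true order.
import numpy as np
from scipy.stats import kendalltau, pearsonr

def ranks_from_order(order):
    """Convert an ordering (item indices, best first) to per-item rank positions."""
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))
    return ranks

# Four hypothetical rankings of five items whose true order is 0,1,2,3,4.
rankings = np.array([
    [0, 1, 2, 3, 4],   # perfect
    [0, 2, 1, 3, 4],   # one adjacent swap
    [1, 0, 3, 2, 4],   # two adjacent swaps
    [4, 3, 0, 2, 1],   # mostly scrambled
])
rank_matrix = np.array([ranks_from_order(r) for r in rankings])

# Consensus-based expertise score: agreement with the mean (Borda) ranking.
consensus = rank_matrix.mean(axis=0)
scores = np.array([kendalltau(row, consensus)[0] for row in rank_matrix])

# Ground-truth accuracy, used only to validate the ground-truth-free score.
truth = np.arange(5)
accuracy = np.array([kendalltau(row, truth)[0] for row in rank_matrix])

print(scores)                      # higher = closer to the consensus
print(pearsonr(scores, accuracy))  # consensus agreement tracks true accuracy
```

In the paper's setting, inference over the model's per-person precision parameter plays this role, rather than a simple consensus score like the one above.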
