Redefining expertise and improving ecological judgment

Introduction

Applied ecology and conservation depend on expert scientific judgments (Burgman 2005; Patterson 2007). Recent developments in ecology and environmental management have explored different methods for formally obtaining and combining expert estimates (Martin 2005; MacMillan & Marshall 2006; James 2010). However, the questions of who should be included in the set of experts, and how expert judgments should be verified, remain open. A person's formal training and technical knowledge (their "substantive" expertise; Stern & Fineberg 1996; Walton 1997) often are contrasted with the knowledge of people with no formal training ("lay" knowledge). Expert judgments are attractive when time and resources are stretched, and are especially important where existing data are inadequate, circumstances are unique, or extrapolations are required for novel, future, and uncertain situations. Because decisions may create "winners" and "losers," both the decisions themselves and the expert judgments that support them may be controversial, prompting arguments about who is an expert and how experts' opinions should be used. This is especially so when experts are called upon to advocate on behalf of stakeholders (Dryzek 2005) and to contribute to legal proceedings.

Decisions involve matters of fact and matters of value (Stern & Fineberg 1996; Gregory 2002; Walshe & Burgman 2010). Although fact and value cannot be separated entirely, we are concerned primarily in this article with the role of experts in estimating facts. If decision making in conservation biology were an entirely objective, detached scientific process that led inexorably to a single, rational outcome, the definition of expert status would not be problematic. Ideally, there would be a pool of people with appropriate qualifications, extensive experience, and sound technical skills who could be called upon to dispense judgments in a consistent manner. Unfortunately, this is rarely if ever the case. Social theories take a wider view, seeing expertise as distributed beyond conventional experts and as sensitive to context (Carr 2004; Evans 2008). In most practical situations, the pool of potential technical experts is small and composed of people with overlapping training, knowledge, and experiences, so their judgments are not independent. In addition, expert judgments may be compromised by values and conflicts of interest (Krinitzsky 1993; Shrader-Frechette 1996; O'Brien 2000). For example, Campbell (2002) found clear evidence of value-laden biases in expert judgments for marine turtle conservation. Kahneman & Tversky (1982; see also Fischhoff 1982; Slovic 1999) demonstrated that experts and lay people are sensitive to a host of psychological idiosyncrasies and subjective biases, including framing, availability bias, and social context.

Despite these weaknesses, expert estimates of facts are generally better than lay estimates, within the expert's area of expertise (see Shanteau 1992; Slovic 1999; Burgman 2005; Garthwaite 2005; Chi 2006; Evans 2008 for reviews). Unfortunately, experts stray easily outside the narrow limits of their core knowledge, and once outside, an expert is no more effective than a layperson (Freudenburg 1999; Ayyub 2001).
Additionally, experts (and most other people) are overconfident, in the sense that they specify bounds for parameters that are too narrow, thereby placing greater confidence in their judgments than is warranted by data or experience (Fischhoff 1982; Speirs-Bridge 2010).

The purpose of this article is to address the problem of defining expertise in conservation and to suggest ways it could be improved. The conventional approach to defining experts is by their qualifications, track record, professional standing, and experience. We describe how these requirements can sometimes exclude people with useful knowledge, explaining how the frailties and biases of expert judgments interact with the social status sometimes afforded to experts (Evetts 2006) to produce judgments that are both unassailable and wrong. We then evaluate approaches to the use of experts that will improve the reliability of their judgments. These include widening the set of experiences and skills involved in deliberations, employing structured elicitations, and making experts more accountable through testing and training (we do not evaluate the literature on aggregating the opinions of different experts, which has previously received thoughtful reviews; Wallsten 1997; Clemen & Winkler 1999). Our ultimate aim is to identify tools and strategies that will improve the quality of scientific expert opinion in conservation, highlight impediments to their use, and suggest approaches that will encourage their routine deployment.

Scientific authority, objectivity, and trust

The public, the courts, statutory bodies, and other decision makers accept expert opinions because they believe experts have specialized knowledge not available to all, obtained through training and experience and proven by track records of efficient and effective application (Hart 1986; Gullet 2000). Scientific experts are a source of rules and standards (Peel 2005), they estimate facts, and they contribute to decisions to undertake activities (Gullet 2000). The US National Research Council, for instance, asserts that experts have indispensable substantive knowledge, methodological skills, and experience (Stern & Fineberg 1996). Yet despite the incorporation of advisory panels in legal frameworks, legislation rarely defines expertise or specifies the composition of expert panels. The question then arises: how is expert status decided and validated?

Expert opinions are sought in the trial and judicial determination of cases when the areas are specialized and held to be beyond lay knowledge (Fisk 1998, p. 3; Preston 2006) or in situations when direct evidence is unavailable or unattainable (e.g., Lawson 1900, p. 236). Tests used to separate expert opinion from lay knowledge to determine the admissibility of opinion evidence (Gans & Palmer 2004) are a combination of credentials, technical "knowledge," and reputation, reflecting conventional notions of expertise. Qualifications, reputation, and membership in professional groups are common guides to expert status (Collins & Evans 2007). The expert, recognized by professional membership, is assumed to have privileged access to knowledge and is deferred to in its interpretation (Barley & Kunda 2006). Some professional bodies are accorded the right of self-regulation in return for competence, integrity, and altruistic service (Cruess 2004).
Expertise includes the abilities to communicate technical information to laypersons, synthesize knowledge, understand the history and context of a debate, work effectively with a range of people, and be familiar with the conventions and jargon of a field (termed "interactional" expertise; Collins & Evans 2007). Critically, because scientific analysis requires robust discussion and (in legal contexts) cross-examination, substantive experts who are unable to communicate may be just as unqualified as those without any substantive expertise. Effective communication is especially important when interactions with stakeholders are designed to foster broad acceptance of a proposed action (the "instrumental" value of participation; Stern & Fineberg 1996).

One of the problems with this system is that experts assume a position of authority, reinforced by professional membership and status. This authority can intimidate people who wish to examine expert judgments critically (Walton 1997), leading to a culture of technical control in which expert opinions are rarely challenged successfully (Walton 1997). For instance, the Supreme Court of Canada noted that expert opinion "dressed up in scientific language" may appear "virtually infallible" (Gans & Palmer 2004, p. 244). Many people hold the view that knowledge held by suitably qualified experts is a clear, objectively defined "truth," while knowledge held by stakeholders and the public is fuzzy, oversimplified, or corrupt (Hilgartner 1990). We see this as a flawed characterization that may erode public trust in decisions, exacerbated by perceptions that experts are overconfident (see Krinitzsky 1993; O'Brien 2000; Yearley 2000; Cruess 2004; Barley & Kunda 2006). The remainder of this article examines ways of improving the definition and use of technical expertise.

Broadening the definition of expertise

Another common (and in our view, misconceived) distinction is that lay knowledge is grounded in real-world, operational conditions while technical expertise is based on narrow professional perspectives or theoretical assumptions (Sternberg 1993; Wynne 1996). Informed amateurs and "lay experts" feel as though their evidence is specific, concrete, and sensitive to local realities (Beck 1992; Yearley 2000; Irwin 2001), and they expect "outside" experts to be general and abstract (Gregory & Miller 1998; Leadbeater 2003). The broader view from social science is that differences between lay and expert knowledge depend on the type of problem, the person applying that knowledge, and the cultural context in which that knowledge is learned and applied (Verran 2002; Carr 2004). Knowledge can be classified as "expert" or "lay" depending on the interests it serves, the purposes for which it is harnessed, or the manner in which it is generated (Agrawal 1995). Motivational biases and conflicts of interest are difficult issues (Slovic 1999). Scientific experts advocate a scientific position, albeit one based upon an accepted range of data and methodologies, and they may do so on behalf of a client, such as a proponent of a particular project or decision (Barley & Kunda 2006). In other words, knowledge is contextual (see Broks 2006). We agree with Jasanoff (2006), Broks (2006), Evans (2008), and others that, in many cases, it is not possible to delineate sharply between expert and lay knowledge.
Collins & Evans (2007) classified several forms of expertise, ranging from specific instruction to contributory expertise, the pinnacle of substantive knowledge (Table 1). None of these categories depend exclusively on formal qualifications or professional membership. That is, the reviews and tests of expertise outlined above substantiate the view that expertise is real, but that it is more widely distributed than conventional qualifications suggest, and that it is often associated with membership of social groups (which may or may not be professional groups).

Table 1. A taxonomy of expertise (modified from Collins & Evans 2007)

Contributory expertise: Fully developed and internalized skills and knowledge, including an ability to contribute new knowledge and/or teach.
Interactional expertise: Knowledge gained from learning the language of specialist groups, without necessarily obtaining practical competence.
Primary source knowledge: Knowledge from the primary literature, including basic technical competence.
Popular understanding: Knowledge from media, with little detail and less complexity.
Specific instruction (a): Formulaic, rule-based knowledge, typically simple, context-specific, and local.

(a) Collins and Evans used the term "beer-mat knowledge" for this category.

Local residents and resource users often are potential experts in the context of environmental management planning efforts that involve conservation biologists, ecologists, and other technically trained scientists. Collateral benefits of broader definitions of expertise include both improved factual estimates and broader acceptance of decisions. Failing (2007), for example, demonstrated the benefit of considering local sources of knowledge in the context of relicensing a hydroelectric facility in British Columbia, Canada. Both conventionally defined technical expertise and "lay" knowledge, drawn from area residents and from members of a local aboriginal community, were used to construct values hierarchies, to understand causal pathways, and to evaluate the consequences of the response of the river system to proposed flow changes. This structured, deliberative effort led to an adaptive management approach attractive to a diverse group of technical and public stakeholders. We conclude that managers should avoid arbitrary, sharp delineations of expertise, and instead include a process to examine knowledge claims critically (Gregory 2006). We outline below three methods for testing claims of expert status.

Making the most of expert judgment

The importance and pervasiveness of expert judgments in conservation biology create an imperative for acquiring judgments that are as accurate and well calibrated as possible. Given the frailties and limitations of expert judgments and the narrow conventional definitions of expertise, what can be done to improve the situation? Essentially, there are three options: to use analytical tests to evaluate the skill and knowledge of potential contributors, to train experts, and to use elicitation procedures that encourage participation and cross-examination of evidence and that anticipate and deal with biases. These are outlined briefly below.

Analytical tests

Cooke (1991) pioneered the idea of using hypothetical and empirical data to measure objectively the knowledge of experts. Essentially, the approach involves asking experts for facts, a subset of which are known to the facilitator but not to the experts (for instance, facts from recent case studies, experiments, hypothetical scenarios, or simulations).
Answers to these questions provide information on the skill of the participants, including their reliability (the degree to which an expert's assessments are repeatable and stable across cases; Wallsten & Budescu 1983), accuracy, bias, and calibration (the frequency with which subjective intervals enclose the truth; Speirs-Bridge 2010). Test results may be used to evaluate knowledge, to weight opinions, or to exclude some opinions altogether (e.g., Cooke & Goossens 2000; see also Morgan & Henrion 1990; Hoffrage 2002; O'Hagan & Oakley 2004). Such appeals for more explicit testing have also appeared in legal academic reviews (Schum & Morris 2007). The prospect of doing this raises challenging questions. Who sets and administers the tests? Which elements of expertise should the tests examine? Where do the data come from to validate the answers? How does one overcome the fact that experts unused to being challenged are likely to be reluctant to be tested? These tools have been deployed, and many of these hurdles overcome, in applications in law, meteorology, and engineering (e.g., Cooke 1991; Murphy 1993; see also Fischhoff 1982; Murphy & Winkler 1977; Hora 1992). A complete review of these techniques and their implications is beyond the scope of this review.

Feedback and training

If people have the opportunity to learn how to improve their ability to judge, their performance generally improves (Cooke 1991; Cooke & Goossens 2000). Typically, training outlines a field's jargon and theoretical concepts, and uses case studies, experiments, hypothetical scenarios, and simulations to illustrate processes relevant to the questions at hand. This may include numerical and graphical output derived from similar assessments and different ways of representing uncertainty and probabilities (e.g., Kadane 1980; Cooke 1991; Chaloner 1993; Garthwaite 2005). For people who are involved routinely in expert judgment exercises, feedback allows them to see the results of their earlier assessments in relation to outcomes. Feedback protocols require procedures for administering and disseminating the results of professional judgments and test questions, so that experts improve their performance over time (Cooke & Goossens 2000). Yet some situations for which expertise is desired offer few or no opportunities for feedback. An example is predicting the social, environmental, or health impacts of an emerging technology (e.g., nanotechnologies; Wintle et al. 2007); there are few clear parallels, and the success of predictions will not be known for decades (Pidgeon 2008). While training and feedback generally improve expert performance, bias and overconfidence about facts may persist through many repetitions of an elicitation exercise. Practice and experience alone do not necessarily remove biases. Improvement is usually slow, and a large number of similar assessments is needed to generate substantial improvement. Feedback protocols have been deployed in engineering risk assessments in Europe, but it has taken many years to establish accepted procedures (Cooke & Goossens 2000). We conclude that, even though improvement is not instantaneous, systematic feedback is the single most important factor demarcating domains in which expertise develops and improves over time (e.g., chess playing, weather forecasting) from domains in which it does not (e.g., psychotherapy) (Dawes 1994).
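To make the testing and feedback ideas above concrete, the following minimal sketch (in Python) shows one simplified way that answers to seed questions with known answers could be scored for interval calibration, returned to experts as feedback, and used to weight a pooled estimate of an unknown quantity. It is an illustration in the spirit of Cooke's performance-based approach, not the classical model itself; the expert labels, the toy numbers, and the simple weighting rule are all hypothetical assumptions.

# Illustrative sketch only: toy seed-question data and a simplified
# calibration-based weighting rule (not Cooke's full classical model).

seed_truths = [12.0, 0.35, 180.0, 4.2]   # facts known to the facilitator (hypothetical)

# Each expert gives a (lower, best, upper) interval for every seed question
# and for one target question whose true value is unknown.
experts = {
    "A": {"seeds": [(8, 11, 15), (0.1, 0.3, 0.5), (150, 175, 210), (3.0, 4.0, 5.5)],
          "target": (20, 30, 45)},
    "B": {"seeds": [(11.5, 12.0, 12.5), (0.30, 0.33, 0.36), (100, 120, 140), (4.0, 4.1, 4.3)],
          "target": (10, 15, 25)},
}

def calibration(intervals, truths):
    """Fraction of known truths enclosed by the expert's intervals."""
    hits = sum(lo <= t <= hi for (lo, _, hi), t in zip(intervals, truths))
    return hits / len(truths)

# Weight each expert by calibration; pool best estimates for the target question.
weights = {name: calibration(e["seeds"], seed_truths) for name, e in experts.items()}
total = sum(weights.values())
pooled = sum(w * experts[n]["target"][1] for n, w in weights.items()) / total

for name, w in weights.items():
    print(f"Expert {name}: calibration score = {w:.2f}")   # feedback that could be returned to each expert
print(f"Performance-weighted pooled estimate: {pooled:.1f}")

In practice, weights may also reflect informativeness (how narrow the intervals are), poorly calibrated experts may receive zero weight, and routinely returning the calibration scores to participants is one simple form of the systematic feedback described above.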
Structured procedures

Structured elicitation procedures are explicit methods that anticipate and mitigate some of the most important and pervasive psychological and motivational biases. One of the earliest and still one of the most useful of these tools is the Delphi technique (see Burgman 2005). In it, experts make an initial judgment of a fact. The responses are shown to other participants, who then make a second, private judgment of the fact. The group average may be weighted by performance on test questions (Cooke 1991). The process circumvents or ameliorates many problems associated with dominance, availability bias, overconfidence (Speirs-Bridge 2010), and related effects. Participants may be given the opportunity to discuss differences of opinion, allowing people to reconcile the meanings of words and context (Regan 2002), thereby removing arbitrary language-based disagreements (a minimal sketch of such a procedure appears at the end of this subsection).

Law plays a critical role in challenging expert judgment when evidence is presented in support of adversarial positions (Christie 1991; Fisk 1998). Under cross-examination, an expert's efficiency, effectiveness, veracity, credibility, and character may be attacked (Christie 1991; Fisk 1998). The qualifications of an expert may be tested by the opinions of other experts. Experts may be tested by hypothetical questions or by proof that, on a former occasion, an expert expressed a different opinion. This questioning has its origins in medieval tests of peer judgment (Franklin 2001). We suggest that adversarial tests of expert evidence in domains outside law courts and tribunals will improve the reliability of expert judgments (Franklin 2008).

The structured elicitation processes outlined above provide a context in which opinions may be cross-examined effectively. Participants have an opportunity to hear and to weigh the opinions of others, integrating new information, improving understanding of the question, and evaluating the context and motivations of other participants, before arriving at their final judgment. This process will work best when, as noted above, people from a variety of social contexts and "positions" in a debate are involved, providing a measure of protection against motivational bias. Experts may be stratified by geography, technical background, experience, affiliations, or other relevant criteria. Stern & Fineberg (1996) outline objective methods for stratifying and selecting stakeholder participants. Structured elicitation protocols provide an environment in which vigorous peer review and debate, including cross-examination of competing claims, may be captured effectively. Cross-examination of data, models, and reasoning may then allow an independent adjudicator to reach a final synthesis of evidence and to form a conclusion (Franklin 2008).
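As promised above, here is a minimal sketch of a two-round, Delphi-style elicitation for a single quantity. The participant labels, the toy estimates, and the equal-weight aggregation are hypothetical; in a real exercise the second-round judgments come from the experts themselves after seeing anonymized first-round responses and discussing differences, and the aggregation may use performance-based weights such as those in the earlier sketch.

# Illustrative sketch of a two-round, Delphi-style elicitation (toy data).

# Round 1: each expert privately estimates the quantity of interest,
# for example the number of breeding pairs remaining in a reserve (hypothetical).
round1 = {"E1": 120, "E2": 300, "E3": 180, "E4": 150}

# Anonymized round-1 responses are shown to the group, differences are
# discussed, and each expert then makes a second, private judgment.
round2 = {"E1": 150, "E2": 220, "E3": 180, "E4": 160}  # hypothetical revisions

# Optional performance weights from seed-question testing (equal weights here).
weights = {name: 1.0 for name in round2}

def weighted_average(estimates, weights):
    """Weighted mean of the final-round judgments."""
    total = sum(weights[name] for name in estimates)
    return sum(weights[name] * value for name, value in estimates.items()) / total

print("Round-1 range:", min(round1.values()), "to", max(round1.values()))
print("Round-2 range:", min(round2.values()), "to", max(round2.values()))
print("Group estimate:", weighted_average(round2, weights))

The point of the sketch is the structure (private judgments, anonymized feedback, discussion, revision, then aggregation), which limits dominance and anchoring effects, rather than the particular numbers.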
Conclusions

Our review suggests that conservation biologists could contribute knowledge more effectively and enhance the credibility of their decisions by embracing a suite of new professional behaviors and wider definitions of expertise. Specifically, the review suggests that the credibility, accuracy, and reliability of expert deliberations will improve if explicit selection, testing, training, and feedback procedures are deployed. Opportunities for improved performance go beyond recommendations for individual experts. To work effectively, the system in which experts work should be structured to anticipate and deal with cognitive and motivational biases, as described above. In particular, it should ensure that the selection of experts is inclusive and transparent, and it should provide ample opportunity for experts to be questioned critically by analysts, other experts, stakeholders, and others.

Our review of scientific authority suggests that what counts as expertise depends on context. Expert performance is likely to be affected in subtle and unpredictable ways by motivations and psychology. If experts are tested, then expertise from all domains may be considered, including what may be considered lay knowledge. This accords with the sociological theory of expertise as real but "unequally distributed," not simply determined by formal qualifications and professional membership (Evans 2008). There are several models for engaging a wider cross-section of potential experts, and numerous collateral benefits may accrue from doing so (Carr 2004). These observations lead us to recommend the following general prescriptions for managers involving experts in conservation.

1. Identify core expertise requirements and the pool of potential experts, including lay expertise.
2. Create objective selection criteria and clear rules for engaging experts; stratify the pool of experts and select participants transparently based on the strata.
3. Evaluate the social and scientific context of the problem.
4. Identify potential conflicts of interest and motivational biases, and control bias by "balancing" the composition of expert groups with respect to the issue at hand (especially if the pool of experts is small).
5. Test expertise relevant to the issues.
6. Provide opportunities for stakeholders to cross-examine all expert opinions.
7. Train experts and provide routine, systematic, relevant feedback on their performance.

At a minimum, we recommend a formal, transparent process for defining and selecting those with relevant expertise, and the adoption of new professional standards that employ structured elicitation methods and the testing and feedback of expert judgments, aimed at improving the performance of both experts and elicitation methods over time.

Acknowledgments

We thank Tara Martin, Mark Colyvan, Fiona Fidler, Terry Walshe, Bonnie Wintle, Helen Regan, and three anonymous reviewers for their comments. The work was funded by ACERA Project 0611 and NSF Award (SES 0725025). The views expressed in this article are not necessarily endorsed by the authors' respective organizations.

Publisher: Wiley
Copyright: © 2011 Wiley Subscription Services, Inc., A Wiley Company
eISSN: 1755-263X
DOI: 10.1111/j.1755-263X.2011.00165.x

Journal: Conservation Letters (Wiley)

Published: Apr 1, 2011
