Racial and Ethnic Implicit Bias: Strategies for a Culturally Responsive Evaluation
Because racial and ethnic implicit biases are pervasive, the evaluator should consider their effects when conducting culturally responsive evaluations. The Kirwan Institute (Staats, Capatosto, Wright, & Jackson, 2016, p. 14) defined implicit bias as “the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner.” Unchecked racial and ethnic implicit bias has the potential to undermine the integrity of evaluation efforts because it may affect decisions, behaviors, and interactions with others involved in the evaluation. Based on a review of the implicit bias literature, the paper concludes with several strategies that could be used to combat the effects of implicit bias on the evaluator’s behavior and responses.
American society is racially and ethnically diverse. Throughout our history, that diversity has produced differences in power and privilege based on race and ethnicity, which shape and define how individuals and communities experience life in our nation. It is important that we, as evaluators, are responsive to these different ethnic and racial experiences. The Program Evaluation Standards (Yarbrough, Shulha, Hopson, & Caruthers, 2010) repeatedly mention the need for culturally responsive evaluations; Standard P1 states, “Evaluations should be responsive to stakeholders and their communities” (Yarbrough et al., 2010, p. 113). Culturally responsive evaluations allow evaluators to address inequalities due to race and ethnicity. Yet the culturally responsive approach has typically focused on addressing explicit bias, the concepts and associations of which we are aware (Frierson, Hood, Hughes, & Thomas, 2010; Kirkhart, 2010). Racial and ethnic implicit bias is potentially as damaging as explicit bias. Implicit biases often result in prejudicial attitudes of which individuals are unaware, and unchecked implicit attitudes in an evaluator may result in discriminatory responses and behaviors even in the face of positive racial or ethnic explicit attitudes.
Addressing racial and ethnic implicit bias at the individual and organizational levels should be an ongoing effort when conducting an evaluation. The focus of this paper is implicit bias as it operates at the individual level. Implicit biases are tenacious and pervasive in all social groupings, and the first step in combating them rests with the evaluator: gaining an understanding and awareness of how implicit attitudes and stereotypes form and operate. This paper begins with an overview of definitions and research studies of implicit bias and implicit attitudes, followed by a review of research-based implicit bias interventions recommended for evaluators and, possibly, stakeholders.
Defining and Understanding Implicit Bias
The Kirwan Institute defined implicit bias as “the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner” (Staats, Capatosto, Wright, & Jackson, 2016, p. 14). Unchecked racial and ethnic implicit bias has the potential to undermine the integrity of all evaluation efforts, because it affects decisions, behaviors, and interactions with others. The proclivity toward implicit bias begins with individuals’ need for categories for efficient cognitive and social functioning. Allport (1954), in his seminal study of the nature and roots of discrimination, wrote, “The human mind must think with the aid of categories. . . . Once formed, categories are the basis for normal prejudgment. We cannot possibly avoid this process. Orderly living depends on it.”
According to Banaji and Greenwald (2013), categories serve as the brain’s fodder for the formation of what usually are damaging stereotypes; in other words, stereotypes emerge from neutral social categories. So, before describing Banaji and Greenwald’s (2013) conceptualization of stereotypes and stereotyping, the paper outlines the cognitive role of categories. Categories can be defined as collections of concepts that share core commonalities in such a way that “it is convenient to treat them as kin” (Banaji & Greenwald, 2013). Social categories are then applied to the people in our lives: friends, family, business associates, or complete strangers. Mentally, individuals rapidly assemble different combinations of categories in a way that allows them to view a person as a unique and distinctive individual. Categories also enable us to infer characteristics about those we know less well. Consider the inferences individuals may make about someone they have categorized as a feminist: the person is female, liberal, and adult. Individuals also use categories to communicate their identity to others in terms of the categories to which they belong; these social categories serve as “personal identifiers.”
It is not difficult to see how categories can easily transmute into stereotypes. Stereotypes are insidiously activated whenever individuals interact with a person from a targeted group. For the most part, stereotypes are less than favorable toward the target group; typically, stereotypical descriptions of groups are more negative than positive. Banaji and Greenwald (2013) concluded, “group stereotypes typically consist of traits that are more noticeably negative than those we would attribute to our friends.”
The attribution of stereotypes is not evenly distributed. Those who possess the “default” characteristics of society are least likely to be stereotyped. Those who have few, if any, of the socially desirable characteristics are more likely to be the target of a stereotype and more likely to absorb the stereotype as part of their individual and group identity. That is, members of the target group not only believe the stereotype holds true for them as individuals and as a group, but are also more likely to allow the stereotype to become a self-fulfilling prophecy.
Banaji and Greenwald (2013) described different categories of conscious or explicit stereotypes: stereotypes may be known and endorsed, or known and not endorsed. That is, individuals may have knowledge of a racial stereotype but not endorse or consent to it; these people consciously reject attributing stereotypical characteristics to a group based on race. Others may be aware of a stereotype and endorse it. Implicit bias, by contrast, refers to unconscious stereotypes, those that are activated automatically without awareness or intent. Kang et al. (2012, p. 1132) suggested that these biases are “attitudes and stereotypes that are not consciously accessible through introspection.”
Rudman (2004) listed five sources that contribute to the formation of implicit biases: early experiences, affective experiences, cultural biases, cognitive balance principles, and the self. Kang et al. (2012) also noted that explicit biases, structurally endorsed biases, and implicit biases are strongly related. Rudman (2004, p. 139) offered the following caveat: “for a deep and lasting equality to evolve, implicit biases must be acknowledged and challenged; to do otherwise is to allow them to haunt our minds, our homes, and our society into the next millennium.”
So, what is the relationship between explicit and implicit bias? Generally, they are viewed as related, yet distinct, constructs (Staats & Patton, 2013). Petty, Fazio, and Brinol (2009) claimed the primary distinction between the two is one of awareness. Biases that “can be consciously detected and reported” (Amodio & Mendoza, 2010, p. 355) are considered explicit; otherwise they are considered implicit. Although implicit biases are unconscious, individuals are able to control their responses to some extent: the motivation to produce responses that are socially acceptable and aligned with one’s values enables some control.
Measuring Implicit Biases: Issues and Challenges
Measuring implicit biases is challenging because of the need to overcome respondents’ desire to appear socially acceptable. Research participants are often reluctant to share their true feelings in order to sustain the appearance of being politically correct. In addition, many are unaware of their own biases and so are unable to report them when asked (Staats & Patton, 2013; Staats, Capatosto, Wright, & Jackson, 2016). For these reasons, those who wish to detect implicit biases increasingly rely on indirect and unobtrusive measures, which are less likely to be affected by social desirability concerns (Staats & Patton, 2013).
Several indirect approaches to measuring implicit bias have been developed, including physiological approaches, priming methods, and response latency measures (Staats & Patton, 2013). One of the most widely used is a response latency measure, the Implicit Association Test (IAT) (Greenwald, McGhee, & Schwartz, 1998), which is typically taken on a computer (Bertrand, Chugh, & Mullainathan, 2005) and can be accessed online at http://implicit.harvard.edu. The website is sponsored by Project Implicit, a non-profit organization that fosters collaboration among researchers who study implicit social cognition. According to the website, “The goal of the organization is to educate the public about hidden bias and to provide a ‘virtual laboratory’ for collecting data on the internet.” The IAT allows for the measurement of a wide range of implicit attitudes about social groups, including race, age, disability, religion, and sexuality.
When completing the IAT, respondents are required to quickly classify words (such as happiness and sadness) and pictures of faces (such as Black and White) as good or bad. Bertrand, Chugh, and Mullainathan (2005, p. 94) provided a clear and succinct description of how the IAT works, referring to the race IAT:
The test-taker must quickly categorize words and pictures that appear in the center of the screen. Faces are to be categorized as African-American or White and words (such as happiness or tragedy) as good or bad. Pairs of categories appear on either side of the screen. If the stimulus belongs to the categories on the right (left), the test-taker hits a key on the right (left) side of the keyboard. Each test-taker completes two versions of the task, categorizing as many as 60 different stimuli. In one, the “compatible” version, the two categories on the same side are paired according to stereotype, such as “African-American” with “bad” in one corner and “White” with “good” in the other corner. In the “incompatible” version, the categories are paired counter-stereotypically, such as “African-American” with “good” and “White” with “bad.” . . . Most people respond more quickly in the compatible pairing, when African-American is paired with bad rather than with good, demonstrating a stronger mental response.
The underlying assumption is that responses are easier, faster, and more accurate when closely related categories share a response (Lane, Banaji, Nosek, & Greenwald, 2007). So, if a respondent holds an implicit racial bias against Blacks, her responses will be faster and more accurate when Black faces and bad words are paired. Greenwald et al. (1998) have shown that the IAT is fairly immune to the effects of social desirability. Numerous reliability and validity studies have shown that the IAT performs well psychometrically; for example, test-retest reliabilities across 20 studies range from .25 to .69, with a mean and median of .50 (Lane, Banaji, Nosek, & Greenwald, 2007; Staats & Patton, 2013; Staats et al., 2016). Kang and Lane (2010, p. 477) concluded, “After a decade of research, we believe that the IAT has demonstrated enough reliability and validity that total denial is implausible.”
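The response-latency logic described above can be illustrated with a simplified sketch. The snippet below is a hypothetical illustration only, with invented latency values; the published IAT scoring algorithm (Greenwald and colleagues’ improved D-score) adds trial filtering and error penalties that are omitted here.

```python
import statistics

def iat_effect(compatible_ms, incompatible_ms):
    """Simplified IAT-style effect: the mean latency difference between
    the incompatible and compatible blocks, scaled by the standard
    deviation of all trials. A rough analogue of the D-score; the real
    algorithm also filters extreme trials and penalizes errors."""
    diff = statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return diff / pooled_sd

# Hypothetical response latencies (milliseconds) for one respondent.
compatible = [612, 580, 655, 598, 630, 605]    # stereotype-consistent pairing
incompatible = [720, 698, 755, 710, 690, 735]  # counter-stereotypic pairing

d = iat_effect(compatible, incompatible)
# A positive score means faster responses in the stereotype-compatible
# block, i.e., an implicit association in that direction.
print(round(d, 2))
```

In this invented example the respondent is slower in the incompatible block, so the effect is positive; a score near zero would indicate no detectable implicit preference.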
Implicit Attitudes and Behavior
So, to what extent do implicit attitudes influence behavior? Bertrand et al. (2005) reported the results of a meta-analysis of 61 studies that suggested a .27 correlation between the IAT and outcome measures such as judgments, decisions, and physiological responses. According to Staats and Patton (2013), explicit and implicit attitudes toward race can be discrepant; those who hold non-prejudiced explicit attitudes may not even be aware of their racial implicit bias or, if aware, may be able to control their responses through motivation. Yet there are times and situations when motivation cannot control implicit bias responses. Bertrand et al. (2005) suggested that controlling implicit bias becomes nearly impossible under time constraints and stress, especially when social conditions are “messy, pressured, and distracting” (Chugh, 2004) and ambiguous, as often experienced in the work setting. Research shows individuals successfully control verbal responses much of the time, yet seem to have more difficulty controlling nonverbal responses and behaviors. The unsuccessful control of nonverbal behavior often results in the “leaking” of biased attitudes, revealing the individual’s true biases (Olson & Fazio, 2007; Staats & Patton, 2013; Stone & Moskowitz, 2011).
Several studies indicate implicit bias influences behaviors in ways that may result in implicit racial discrimination. One important study was conducted by McConnell and Leibold (2001), as summarized by Greenwald and Krieger (2006). In the study, White undergraduate students were videotaped while being interviewed by White and Black experimenters; the participants also completed race IAT measures. Those whose IAT scores suggested an implicit preference for White rather than Black exhibited fewer speech hesitations and errors when speaking to the White experimenter than to the Black experimenter. They also spoke more to and smiled more at the White experimenter. Such behaviors suggest a sense of greater comfort and ease with the White experimenters. Greenwald and Krieger (2006) also reported a finding concerning the administration of the race IAT to White Americans: the race IAT “predicts in white Americans the activation of the amygdala—a presumed indicator of fear or other negative emotional arousal—in response to photographic images of unfamiliar African American faces” (p. 962).
Implicit Bias Effects on Social Life
Studies have shown racial implicit bias permeates all aspects of American life, including criminal justice, health and health care, employment, education, and housing and neighborhoods (Staats et al., 2016). Grave racial injustices have occurred in criminal justice as a result of racial biases. For example, implicit associations relating Blackness and weapons have been shown to influence the speed and accuracy of shooting decisions, a phenomenon termed “shooter bias” (Correll, Hudson, Guillermo, & Ma, 2014; Correll, Park, Judd, & Wittenbrink, 2002). In a fairly recent review of police racial violence, Richardson (2015) outlined two types of racial bias at work: a negative bias against Black individuals and a positive bias toward White individuals, which together lead to very different policing outcomes for the two groups.
The next example of implicit bias pertains to housing and neighborhoods. Studies have reported racial discrimination in Airbnb and other online marketplace rentals on the basis of an applicant’s profile photograph or name (B. Edelman & Luca, 2014; B. G. Edelman). There are many other examples of the effects of implicit biases and racial discrimination in other sectors of American life.
Implicit Bias and Malleability
At one time researchers viewed implicit bias as a fixed trait, stable over time and unlikely to be influenced by policy, social, or individual interventions (Staats & Patton, 2013). The preponderance of research now suggests implicit biases are malleable rather than fixed (Blair, 2002; Blair, Ma, & Lenton, 2001; Dasgupta & Asgari, 2004; Dasgupta & Greenwald, 2001; Joy-Gaba & Nosek, 2010; Kang & Lane, 2010; Mann & Ferguson, 2015; Rudman, Ashmore, & Gary, 2001). Because implicit associations are “deeply entrenched in the subconscious,” unlearning them and forming new associations is, at best, an arduous feat (Staats & Patton, 2013, p. 53). The process of debiasing “can be likened to the breaking of a bad habit,” which requires “intention, attention, and time” to ensure the new responses are solidly entrenched, allowing them to compete with the originally activated stereotypical responses (Devine, 1989, pp. 15-16).
Individuals who recognize they hold implicit racial or ethnic biases often attempt to repress them, usually with little success. Staats and Patton (2013, p. 53) reminded the reader that “suppressing automatic stereotypes does not reduce them and may even amplify them by making them hyper-accessible.” Rather, studies have suggested one should accept the fact of one’s biases and make the effort to challenge and eliminate them (Bystan-‘dzin-rgya-mtsho & Cuttler, 2009). Mann and Ferguson (2015) reported the results of seven experiments focused on the malleability of implicit associations; the findings suggested individuals’ implicit associations can be changed or reduced by introducing new information that forces individuals to reconstruct their earlier stereotypical associations. Greenwald and Krieger (2006, p. 964), however, added a caveat: the longevity of intervention effects is typically rather modest. At best, eradicating implicit racial or ethnic associations is difficult, particularly in a cultural and societal context that “reinforces preexisting racial attitudes and stereotypes.”
Nonetheless, Greenwald and Krieger (2006, p. 964) offered hope, stating, “This skeptical appraisal does not imply that long-term changes in implicit biases are impossible.” The authors provided an example of such a situation: when an individual forms a relationship with a member of a “previously devalued” racial or ethnic group, his or her implicit associations may undergo a sudden metamorphosis, in which a positive implicit association replaces a negative one (Olsson, Ebert, Banaji, & Phelps, 2005, as cited in Greenwald & Krieger, 2006).
Promising Implicit Bias Interventions
Promising implicit bias interventions have been developed and reported by researchers; two papers presented summaries of research-based interventions (Godsil, Tropp, Goff, & Powell, 2014; Staats & Patton, 2013). In their review, Godsil et al. (2014) presented two categories of interventions: 1) interventions used to reduce or eliminate implicit biases, and 2) interventions used to eliminate the effects of implicit biases on decision making. The focus here is first on methods for reducing or eliminating implicit bias, sometimes referred to as “debiasing” interventions (Staats & Patton, 2013).
Counter-stereotype training. The goal is to provide interventions that replace racial and ethnic stereotypical associations with newly created cognitive associations. One example of this intervention was provided by Wittenbrink, Judd, and Park (2001). The intervention consisted of placing ordinary people together in a counter-stereotypic situation, “such as depicting young White and Black males in scenes that included a church and a graffiti-strewn street corner” (Staats & Patton, 2013, p. 54). Such research suggests that social context, which provides social category cues, may trigger individuals’ automatic stereotypical responses and attitudes. Other approaches to counter-stereotype training include using posters, pamphlets, and photographs in courtroom settings to elicit counter-stereotypic associations in the minds of judges and jurors (Kang et al., 2012).
Exposure to counter-stereotypic individuals. Staats and Patton (2013) reported this strategy as being effective in altering negative implicit stereotypes and attitudes. The strategy exposes participants to exemplars of individuals who “contradict widely-held stereotypes” (Staats & Patton, 2013, p. 56). Dasgupta and Greenwald (2001, p. 807) suggested that “creating environments that highlight admired and disliked members of various groups . . . may, over time, render these exemplars chronically accessible so that they can consistently and automatically override preexisting biases.” Examples of debiasing exemplars include Black doctors, male nurses, or elderly athletes. For the exemplars to be effective, they must connect the typically stereotyped individual to relevant non-stereotypic characteristics. Research results have been mixed in terms of the overall effectiveness of exposure to counter-stereotypic individuals, particularly in terms of the strength of the effects (Staats & Patton, 2013); the effectiveness may have been overstated, which suggests a need for additional research.
Intergroup Contact. Allport (1954) was a major proponent of this approach to debiasing and suggested four conditions that must be satisfied to ensure optimal effectiveness: 1) interaction groups consist of individuals of equal status; 2) individuals share a common goal; 3) for the most part, intergroup interactions take place within a cooperative rather than a competitive environment; and 4) the intergroup interaction receives support and endorsement from authority figures, laws, or customs. Pettigrew (1997, pp. 180-181) explored the effects of intergroup interactions in a multi-national study and discovered that “the reduction in prejudice among those with diverse friends generalizes to more positive feelings about a wide variety of outgroups.” The intergroup contact approach has been shown to be effective in two important societal domains: criminal justice and health care (Betancourt, 2004; Peruche & Plant, 2006).
Taking the Perspectives of Others. The assumption behind this approach is that taking the perspective of someone who is different may be an effective way to dispel implicit bias. A number of research studies support the effectiveness of this approach (Benforado & Hanson, 2008; Galinsky & Moskowitz, 2000; Todd, Bodenhausen, Richeson, & Galinsky, 2011). Galinsky and Moskowitz (2000, p. 720) described perspective-taking as effective because it “tended to decrease the expression of stereotypic content, and prevented the hyperaccessibility of the stereotype construct.” One way to implement this intervention is to ask individuals to imagine themselves in the position of a minority individual and then write a life story about that individual (Staats & Patton, 2013). Taking the perspective of others has limited success with those who are unable to judge accurately whether they have succeeded in taking on another’s perspective.
Next are descriptions of the interventions that are designed primarily to ameliorate the influence of implicit biases on decision-making.
Accountability. The goal of this approach is to create the expectation that one will be held accountable to others for his or her beliefs, actions, and feelings. Research suggests accountability is a powerful method for combating bias, particularly for those called on to make decisions (Staats & Patton, 2013): “When decision makers are not held accountable for their actions, they are less likely to self-check for how bias may affect their decision making” (Staats & Patton, 2013, p. 60). Kang et al. (2012) found accountability was particularly successful in helping jurors face and counteract their implicit biases.
Fostering Egalitarian Motivations. This approach is based on the assumption that implicit biases are not inevitable; rather, “When activated, egalitarian goals inhibit stereotypes by undermining and counteracting the implicit nature of stereotype activation, thereby cutting stereotypes off before they are brought to mind” (Stone & Moskowitz, 2011, p. 773).
Deliberative Processing. This approach encourages decision makers to slow down and “engage in effortful, deliberative processing” (Kang et al., 2012, p. 1177). Because hurried judgments activate a strong reliance on stereotypes, this approach is particularly apt for those who work under time constraints, high cognitive demands, and ambiguous and/or stressful conditions, such as doctors and judges (Staats & Patton, 2013).
Education about Implicit Bias. This approach, which has been used successfully with juries and judges, not only raises awareness about implicit bias but also has the potential to debias (Staats & Patton, 2013). It has been recommended that “this educational component be integrated into juror orientation, preferably with jurors receiving hands-on experiential learning that includes taking the IAT” (Staats & Patton, 2013, p. 59). Judges should also receive education about implicit bias (Kang et al., 2012). Unfortunately, research findings are mixed about the long-term effectiveness of education (Staats & Patton, 2013).
Both types of implicit bias interventions have the potential to minimize the effects of implicit bias. Yet sometimes the desire to appear egalitarian is more pressing to individuals than the need to shed or neutralize implicit biases (Godsil et al., 2014). But individuals’ need for impression management does not negate the potential benefits of implicit bias education. Godsil et al. (2014) suggested that teaching about implicit bias is of value and “should be accompanied by a discussion of the many factors that contribute to its development and the strategies people can employ to reduce its influence” (p. 48).
Recommendations for the Culturally Responsive Evaluator
Finally, the author presents basic recommendations for the culturally responsive evaluator. The focus is on racial and ethnic implicit bias as it occurs at the individual level within the culturally responsive evaluation context (Frierson, Hood, Hughes, & Thomas, 2010). Because culturally responsive evaluations incorporate the practice of self-reflection, evaluator engagement and collaboration with stakeholders, and the desire to be culturally responsive, the stage is set for addressing racial and ethnic implicit biases in evaluation. Frierson et al. (2010, p. 78) described what a culturally responsive evaluation entails:
“Culturally responsive evaluation does not consist of a distinct set of steps apart from any high-quality evaluation approach. Rather, it represents a holistic framework for thinking about and conducting evaluations in a culturally responsive manner. It is a process entailing the manner in which the evaluator plans the evaluation, engages the evaluand and its stakeholders, and takes into account the cultural and social milieu surrounding the program and its participants.”
That is, culturally responsive evaluators engage with culturally diverse stakeholders by taking a culturally sensitive stance, being open to all voices and perspectives, and showing respect for local customs, beliefs, and values. Being culturally responsive can be particularly challenging for an evaluator who possesses the “default” ethnic and racial social characteristics of American society and is working with those who belong to stereotyped racial or ethnic groups. In this situation, the evaluator’s racial and ethnic implicit biases could easily compromise his or her ability to be culturally responsive. Of course, racial and ethnic implicit biases may also hinder stakeholders’ ability to participate effectively in a more collaborative, culturally responsive evaluation. As a starting point, the author offers a brief list of recommendations for reducing the effects of implicit bias for the evaluator and the stakeholders. These recommendations draw from the types of interventions designed to reduce biased decision making. The hope is that the following recommendations could be easily incorporated into a self-help program for interested evaluators, serving as a first step in their quest to combat the personal effects of implicit bias.
Step One: Develop a self-awareness and understanding of racial and ethnic implicit bias. The culturally responsive evaluation literature emphasizes the importance of self-reflection in developing the evaluator’s awareness of his or her own explicit biases (Frierson et al., 2010; Kirkhart, 2010; Hopson & Kirkhart, 2012). In the case of implicit biases, self-reflection alone may not lead to greater awareness; implicit biases typically lie well below the conscious surface and are not readily accessible. In other words, individuals are usually unaware of their implicit biases. One of the best ways to develop greater awareness and understanding of implicit biases is through the recommended intervention “Education about Implicit Bias.” Taking the IAT is one of the most important steps of the education intervention, because it is most likely to impel the test taker to take prompt action. Following exposure to the educational intervention, evaluators are more likely to be aware of and have some understanding of implicit biases. As a result, they are ready for step two, which involves a change in how they approach decision making.
Step Two: Adopt decision-making approaches to reduce implicit biases. Two particularly promising interventions are “Fostering Egalitarian Motivations” and “Deliberative Processing”; it is recommended that evaluators attempt to incorporate both into their evaluation practice. The first, “Fostering Egalitarian Motivations,” has long been embraced by the evaluation profession. Since the inception of program evaluation as a profession, evaluators (such as Reid E. Jackson, Michael Scriven, Daniel Stufflebeam, and David Fetterman) have written about and embraced the need for social justice and equality. A professional and personal commitment to the egalitarian values espoused by the Program Evaluation Standards (Yarbrough et al., 2010) and the Guiding Principles for Evaluators (American Evaluation Association, 2004) helps foster egalitarian motivations.
The second intervention, “Deliberative Processing,” is essential for the reduction of racial and ethnic implicit biases. Implicit biases are most likely to be activated when making decisions or rendering judgments that draw heavily on cognitive resources under stressful, somewhat ambiguous, and time-limited conditions, resulting in decisions or judgments that rely on stereotypes (Staats & Patton, 2013). So, the recommendation is for decision makers to slow down and “engage in effortful, deliberative processing” (Kang et al., 2012, p. 1177). Part of deliberative processing includes a self-awareness of one’s emotional state, particularly if that emotion is anger or sadness; both emotional states have the potential to activate an individual’s implicit ethnic and racial stereotypes and attitudes (Staats & Patton, 2013).
More work needs to be done on how best to address racial and ethnic implicit biases in program evaluations. For instance, adopting a culturally responsive stance toward evaluation may place evaluators in a position to learn more about implicit biases and their effects on those involved in the evaluation. It is recommended, at the very least, that culturally responsive evaluators acquaint themselves with the implicit bias literature and take the IAT as a prompt for greater self-reflection. The early social and cultural experiences of individual evaluators will likely determine the extent to which they would benefit from exposure to additional implicit bias interventions. At this point it is unclear to what extent racial and ethnic implicit biases influence decisions and relationships in an evaluation situation. More work is needed to explore the nature of racial and ethnic implicit biases in evaluators and the extent to which implicit biases compromise culturally responsive evaluations. Because most of the intervention research was conducted in lab settings or in applied settings that may not generalize to typical evaluation situations, additional studies are needed that explore the effectiveness of these and other interventions in typical evaluation settings.
Allport, G.W. (1954). The nature of prejudice. Cambridge, MA: Addison-Wesley.
American Evaluation Association. (2004). Guiding principles for evaluators. Washington, DC: Author. Retrieved from http://www.archive.eval.org/Publications/GuidingPrinciples.asp
Amodio, D. M., & Mendoza, S. A. (2010). Implicit intergroup bias: Cognitive, affective, and motivational underpinnings. In B. Gawronski & B. K. Payne (Eds.), Handbook of implicit social cognition (pp. 353-374). New York, NY: The Guilford Press.
Banaji, M. R., & Greenwald, A. G. (2013). Blindspot: Hidden biases of good people. New York, NY: Delacorte Press.
Benforado, A., & Hanson, J. (2008). The great attributional divide: How divergent views of human behavior are shaping legal policy. Emory Law Journal, 57(2), 311-408.
Bertrand, M., Chugh, D., & Mullainathan, S. (2005). Implicit discrimination. The American Economic Review, 95(2). Proceedings of the American Economic Association, Philadelphia, PA.
Betancourt, J. R. (2004). Not me!: Doctors, decisions, and disparities in health care. Cardiovascular Reviews and Reports, 25(3), 105-109.
Blair, I. V. (2002). The malleability of automatic stereotypes and prejudice. Personality and Social Psychology Review, 6(3), 242-261.
Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). Imagining stereotypes away: The moderation of implicit stereotypes through mental imagery. Journal of Personality and Social Psychology, 81(5), 828-841.
Bstan-’dzin-rgya-mtsho, & Cutler, H. C. (2009). The art of happiness: A handbook for living. New York, NY: Penguin Group.
Chugh, D. (2004). Why milliseconds matter: Societal and managerial implications of implicit social cognition. Social Justice Research, 17(2), 203-222.
Correll, J., Hudson, S. M., Guillermo, S., & Ma, D. S. (2014). The police officer’s dilemma: A decade of research on racial bias in the decision to shoot. Social and Personality Psychology Compass, 8(5), 201-213.
Correll, J., Park, B., Judd, C. M., & Wittenbrink, B. (2007). Across the thin blue line: Police officers and racial bias in the decision to shoot. Journal of Personality and Social Psychology, 92(6), 1006-1023.
Dasgupta, N., & Asgari, S. (2004). Seeing is believing: Exposure to counterstereotypic women leaders and its effect on the malleability of automatic gender stereotyping. Journal of Experimental Social Psychology, 40(5), 642-658.
Dasgupta, N., & Greenwald, A. G. (2001). On the malleability of automatic attitudes: Combating automatic prejudice with images of admired and disliked individuals. Journal of Personality and Social Psychology, 81(5), 800-814.
Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled components. Journal of Personality and Social Psychology, 56(1), 5-18.
Edelman, B., & Luca, M. (2014). Digital discrimination: The case of Airbnb.com. Harvard Business School NOM Unit Working Paper No. 14-054.
Edelman, B. G., Luca, M., & Svirsky, D. (2016). Racial discrimination in the sharing economy: Evidence from a field experiment. Harvard Business School NOM Unit Working Paper No. 16-069.
Frierson, H. T., Hood, S., Hughes, G. B., & Thomas, V. G. (2010). A guide to conducting culturally responsive evaluations. In J. Frechtling (Ed.), The 2010 user friendly handbook for project evaluation (pp. 75-96). Arlington, VA: National Science Foundation.
Galinsky, A. D., & Moskowitz, G. B. (2000). Perspective-taking: Decreasing stereotype expression, stereotype accessibility, and in-group favoritism. Journal of Personality and Social Psychology, 78(4), 708-724.
Godsil, R.D., Tropp, L. R., Goff, P. A., & Powell, J. A. (2014). The science of equality, Volume 1: Addressing implicit bias, racial anxiety, and stereotype threat in education and health care. Retrieved from http://perception.org/wp-content/uploads/2014/11/Science-of-Equality.pdf
Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74(6), 1464-1480.
Greenwald, A. G., & Krieger, L. H. (2006). Implicit bias: Scientific foundations. California Law Review, 94(4), 945-967.
Hood, S., Hopson, R. K., & Kirkhart, K. E. (2015). Culturally responsive evaluation. In K. E. Newcomer, H. P. Hatry, & J. S. Wholey (Eds.), Handbook of practical program evaluation (4th ed., pp. 281-317). San Francisco, CA: Jossey-Bass.
Joy-Gaba, J. A., & Nosek, B. A. (2010). The surprisingly limited malleability of implicit racial evaluations. Social Psychology, 41(3), 137-146.
Kang, J., Bennett, M., Carbado, D., Casey, P., Dasgupta, N., Faigman, D., et al. (2012). Implicit bias in the courtroom. UCLA Law Review, 59(5), 1124-1186.
Kang, J., & Lane, K. (2010). Seeing through colorblindness: Implicit bias and the law. UCLA Law Review, 58(2), 465-520.
Kirkhart, K.E. (2010). Eyes on the prize: Multicultural validity and evaluation theory. American Journal of Evaluation, 31(3), 400-413.
Lane, K. A., Banaji, M. R., Nosek, B. A., & Greenwald, A. G. (2007). Understanding and using the Implicit Association Test: IV. What we know (so far) about the method. In B. Wittenbrink & N. Schwarz (Eds.), Implicit measures of attitudes. New York, NY: Guilford Press.
Mann, T. C., & Ferguson, M.J. (2015). Can we undo our first impressions? The role of reinterpretation in reversing implicit evaluations. Journal of Personality and Social Psychology, 108(6), 823-849.
Olsson, A., Ebert, J. P., Banaji, M. R., & Phelps, E. A. (2005). The role of social groups in the persistence of learned fear. Science, 309, 785-787.
Peruche, B. M., & Plant, E. A. (2006). The correlates of law enforcement officers’ automatic and controlled race-based responses to criminal suspects. Basic and Applied Social Psychology, 28(2), 193-199.
Pettigrew, T. F. (1997). Generalized intergroup contact effects on prejudice. Personality and Social Psychology Bulletin, 23(2), 173-185.
Petty, R. E., Fazio, R. H., & Briñol, P. (2009). The new implicit measures: An overview. In R. E. Petty, R. H. Fazio, & P. Briñol (Eds.), Attitudes: Insights from the new implicit measures (pp. 3-18). New York, NY: Psychology Press.
Richardson, L. S. (2015). Police racial violence: Lessons from social psychology. Fordham Law Review, 83(6), 2961-2976.
Rudman, L. A. (2004). Social justice in our minds, homes, and society: The nature, causes, and consequences of implicit bias. Social Justice Research, 17(2), 129-142.
Rudman, L. A., Ashmore, R. D., & Gary, M. L. (2001). “Unlearning” automatic biases: The malleability of implicit prejudice and stereotypes. Journal of Personality and Social Psychology, 81(5), 856-868.
Staats, C., Capatosto, K., Wright, R. A., & Jackson, V. W. (2016). State of the science: Implicit bias review, 2016 edition. Retrieved from http://www.KirwanInstitute.osu.edu
Staats, C., & Patton, C. (2013). State of the science: Implicit bias review, 2013 edition. Retrieved from http://www.KirwanInstitute.osu.edu
Todd, A. R., Bodenhausen, G. V., Richeson, J. A., & Galinsky, A. D. (2011). Perspective taking combats automatic expressions of racial bias. Journal of Personality and Social Psychology, 100(6), 1027-1042.
Wittenbrink, B., Judd, C. M., & Park, B. (2001). Spontaneous prejudice in context: Variability in automatically activated attitudes. Journal of Personality and Social Psychology, 81(5), 815-827.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2010). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Corwin Press.