Assessing the effectiveness of Integrative Complexity interventions: a Dehumanization scale
Integrative Complexity (IC) is a psychological construct which denotes the extent to which one is able to differentiate and integrate multiple perspectives and values. Low IC indicates a simplified, black-and-white perception of the social world. This can increase vulnerability to radicalisation and involvement in violent extremism (RIVE). The ICThinking® Research Group has developed an empirically-validated set of interventions to increase participants’ IC levels, with the goals of reducing and preventing extremism, and promoting prosocial conflict resolution. The interventions are currently assessed for effectiveness by measuring IC pre- and post-course delivery. However, this method has a number of limitations as an indicator of reduced vulnerability to RIVE in the long-term. Therefore, IC researchers are also looking for other, complementary methods of assessing the effectiveness of IC courses.
This dissertation proposes the introduction of a new tool to supplement current assessments of the effectiveness of IC interventions: a ‘Dehumanization scale’. In Section 1, I outline the concepts of IC and dehumanization. In Section 2, I advance the proposal that a ‘Dehumanization scale’ may be suitable as a novel measure of the success of IC interventions. I justify this by arguing that dehumanization has important links with low IC and RIVE, and also draw attention to ways in which IC interventions may reduce dehumanization. In Section 3 I outline and evaluate a suggested design for the scale, and provide some guidance for practical implementation. The Conclusion flags certain limitations and identifies directions for future research.
In February 2015, the ISIS-owned magazine Dabiq published an editorial entitled ‘The Extinction of the Gray Zone,’ exhorting people to engage with the world in terms of starkly opposed categories of believers and kuffar (non-believers). This black-and-white worldview can be characterised using the psychological construct of Integrative Complexity (IC). IC denotes the complexity of a person’s thinking structure. High IC indicates the ability to recognise, understand and integrate multiple dissonant perspectives on an issue – skills which are fundamental to conflict resolution and productive interfaith dialogue. Conversely, low IC involves an uncompromising, simplistic perception of the social world, and of the specific requirements of one’s own values. Low IC can render a person vulnerable to ‘psychological extremism’ – the adoption of a polarised ideological position. Extremist groups capitalise on this vulnerability, communicating by means of messages displaying strikingly low IC (Conway et al 2011).
The concept of IC was originally developed by Suedfeld and colleagues (Suedfeld and Rank, 1976). Founded in 2011, the ICThinking® Research Group at the University of Cambridge has developed a suite of empirically-validated ‘IC interventions’ inspired by the key role of high IC in intergroup negotiation and sustained cooperation. IC interventions typically comprise eight two-hour sessions. They employ experiential, embodied activities, including roleplay, multi-media DVDs, and prompted critical reflection on topics such as identity and values. IC intervention courses have now been applied to hundreds of diverse communities struggling with conflict across the world. They aim to ‘reduce and prevent’ conflict and extremism (Boyd-MacMillan et al 2016a).
The success of the courses is measured using pre/post IC coding. However, the use of IC coding to assess the effectiveness of IC interventions has a number of limitations. First, IC encompasses both dialectical and elaborative complexity. Dialectical complexity is a willingness to incorporate multiple perspectives as valid, whereas elaborative complexity refers to the complex elaboration of a specific viewpoint. A ‘high IC’ score is ambiguous between dialectical and elaborative complexity. The conflation of these two into a single measure has led to the seemingly paradoxical result that when IC is only elaborative in nature, psychological extremism can be positively related to IC (Conway et al 2008). Rather than maximising IC in general, the courses should specifically aim to maximise dialectical complexity.
Secondly, while high IC is an important psychological resource in conflict resolution, even dialectical complexity is not always conducive to peace. In war, for example, generals exhibiting high IC are more successful military strategists (Suedfeld et al 1986). Rather, it is the combination of cognitive empathy (as indicated by IC, denoting the understanding of another’s viewpoints) and affective empathy (which IC cannot measure, denoting an emotional connection) that reduces a thinker’s vulnerability to committing violence (Oswald 1996). The courses should target affective rather than merely cognitive skills.
Thirdly, IC is a state rather than trait variable. Thus, a post-course high IC score alone does not necessarily indicate a better conflict resolution capacity – it must also be combined with the ability to manage IC in the long-term and in response to unavoidable stress. In addition to a pre/post course assessment of a labile variable, the courses should aim for longer-term assessment techniques or assessments of other variables which are likely to buffer high IC in situations of stress.
Consideration of these limitations motivates the search for a novel way to assess IC interventions. This dissertation proposes, designs and evaluates one such new measure: a ‘Dehumanization scale’. §1 describes the psychological constructs of IC and dehumanization. §2 advances and justifies the proposal that a Dehumanization scale may serve well as a supplementary indicator of the success of IC interventions. §3 outlines and evaluates a design for the scale, and suggests prospects for implementation.
§1: Integrative Complexity and Dehumanization
1.1 Integrative Complexity
The concept of Integrative Complexity has two components: differentiation and integration. Differentiation is the degree to which one identifies different and/or opposing values pertaining to an issue. Integration is the ability to connect and reconcile those perspectives in an overarching framework. When people are in situations of stress or (perceived) threat, particularly over long periods, IC tends to fall below their habitual level of functioning, or ‘baseline’ (Suedfeld et al 2006). When this change in IC occurs during intergroup conflict, violence often follows – but when IC is raised, conflict resolution and cooperation are reliably predicted.
IC can be measured by thematic content analysis of verbal communication, and scored on a scale from one to seven (Baker-Brown et al 1992). A score of one indicates ‘black-and-white’ thinking, or the perception of the world in sharply-defined dichotomous categories. Moderate scores indicate differentiation between multiple perspectives. Scores of four and above indicate integration of those perspectives.
High IC is one psychological building block which reduces the propensity to resort to violence. Another is the tendency to view one’s opponent as wholly ‘human’. Dehumanization – the perception that a person is less than fully ‘human’ – is rife in situations of intergroup hostility. The Nazis openly dehumanized Jews; in Rwanda in 1994, Hutu leaders called for machete attacks on the taller and thinner Tutsis by urging people to ‘cut the tall trees’. But dehumanization also abounds in today’s headlines – western media representations of the ‘war on terror’ feature systematic dehumanization of immigrants, likening them to animals, cancer and viruses (Steuter and Wills 2010). The recurrence of dehumanization throughout history makes sense in light of research suggesting that it is a fundamental dimension of human perception. Indeed, dehumanization is not reducible to prejudice against, or lack of familiarity with, the outgroup: its effects are robust even when these are controlled for (Cortes et al 2005; Paladino and Vaes 2009).
At least two conceptual distinctions have been drawn within the multidimensional construct of dehumanization. The first distinction concerns the type of humanness being denied. Haslam’s (2006) framework distinguishes two types of dehumanization, each defined by the denial of a different type of humanness. Animalistic dehumanization involves the under-attribution of characteristics considered to be ‘uniquely human’ (UH) such as cognitive aptitude and cultural refinement. Mechanistic dehumanization involves the denial of traits considered to be ‘essentially human’, or a part of human nature (HN) – the target is represented as cold and robotic. These two dimensions have been repeatedly found to be features of lay conceptions of humanness, in diverse cultural settings (Haslam et al 2008; Park et al 2011).
The second distinction is between subtle and blatant forms of dehumanization. This distinction reflects a dual-process model of the human mind. Concepts such as ‘humanity’ can be embedded both in unconscious associative networks, which account for automatic processing, and also in propositional representations, which guide reflective action (Kofta et al 2014). Subtle dehumanization occurs when subjects do not directly report a target’s lack of humanness, or provide evidence of dehumanization only implicitly: dehumanization is rooted within the unconscious semantic network. By contrast, blatant dehumanization involves a direct evaluation of a target’s lack of humanness, with propositional or reflective awareness. Kteily et al (2015) compared the relative behavioural effects of subtle and blatant dehumanization. They found, across a range of contexts, that subtle and blatant dehumanization separately predict behavioural outcomes. Blatant dehumanization was consistently a stronger predictor of negative intergroup attitudes and behaviour than subtle dehumanization, controlling for prejudice.
§2: Could a Dehumanization scale be used to assess IC interventions?
Though the two fields have not previously been compared, there are illuminating commonalities between dehumanization and IC. There are links between (1) the cognitive mechanisms of dehumanization and IC, (2) dehumanization and IC interventions, and (3) dehumanization and RIVE. Given (2), we may expect a pre/post-intervention reduction in dehumanization. Given (1) and (3), detection of this change would point towards the success of courses in their aim of reducing and preventing RIVE. These links therefore suggest that a Dehumanization scale could be used as a novel measure of the success of courses.
2.1 Dehumanization and IC
Consideration of the cognitive-perceptual mechanisms driving dehumanization reveals parallels between dehumanization and low IC.
First, social categorization processes underpin dehumanized perception (Hodson et al 2014). The low IC categorization of individuals into ‘us’ and ‘them’ sets the scene for an undifferentiated, ‘othered’ perception of members of a particular outgroup. According to the continuum model of impression formation (Fiske and Neuberg 1990), social targets are perceived anywhere on a continuum from more categorical to more individuated. The formation of a categorical impression of an outgroup member may be passive and automatic, occurring within 100ms of viewing them, and is less cognitively demanding than appreciating the individuating characteristics of each person (Lee and Harris, 2014). Social Identity Theorists place dehumanization on a continuum with depersonalisation (a de-individuated perception of a person), arguing that categorical perceptions of the outgroup are necessary (though not sufficient) for dehumanization (Tajfel 1981). A rigid, depersonalised categorical perception of the outgroup is in fact a feature of low IC.
A second mechanism of dehumanization is the belief that outgroup members do not share one’s values. Schwartz and Struch (1989) argue that outgroup membership, by definition, implies different value priorities from those of the ingroup: people’s values (for example, altruism) ‘express their distinctive humanity’. Perceiving between-group differences in values may therefore lay the ground for dehumanizing perceptions. The failure to integrate ingroup and outgroup values – as well as being a precursor of dehumanization – is a feature of low IC. Specifically, it is a lack of dialectical (rather than elaborative) complexity: the kind most relevant to psychological extremism.
A further insight is provided by consideration of the cognitive mechanisms which cause IC to change. Tetlock’s (1986) Value Pluralism model of IC holds that what motivates people to engage the extra cognitive energy required for complex (high IC) thinking is the drive to maximise the satisfaction of multiple values of personal importance when these are in tension. Dehumanization has been conceptualised as a mechanism of ‘moral exclusion’ (Opotow 1990), and empirical studies have shown that people dehumanize victims of past atrocities committed by their ingroup (Castano and Giner-Sorolla 2006), a process which reduces feelings of collective guilt (Zebel et al 2008). In this way, dehumanization may buffer a thinker from the need to consider the humanity and rights of an opponent, pre-empting the value tension that such consideration would create and, in consequence, allowing low IC to prevail.
The parallels between the cognitive mechanisms of dehumanization and IC suggest a mutually-reinforcing role for the two psychological constructs. It follows that a measure of dehumanization may be a proximal indicator of progress in IC management.
2.2 Dehumanization and IC interventions
The links between dehumanization and low IC extend beyond the cognitive level, to the interventions which may be effective in counteracting them. Three strategies which have proven successful in reducing dehumanization involve conditions which are facilitated by current IC interventions, suggesting that the interventions will initiate outgroup re-humanization.
One strategy to reduce dehumanization is intergroup contact. Tam et al (2007; 2008) demonstrated with Catholics and Protestants in sectarian Northern Ireland that reductions in dehumanization occurred as a result of intergroup contact, as measured by the quantity and quality of the contact. Not only face-to-face contact, but also imagined contact increases outgroup re-humanization (Vezzali et al 2012). Fittingly, both imagined and face-to-face contact are features of the IC courses: perspective-taking and social exercises encourage positive, high-quality engagement with members of the outgroup.
A second type of intervention involves the encouragement of multiple categorization (Prati et al 2016; Albarello and Rubini 2012). Multiple categorization is the perception of someone as having multiple social group memberships. Therefore, intergroup relations are defined as complex and multifaceted – rather than simple and clear-cut. We may reconceptualise this as a ‘high IC’ perception of the outgroup. Prati et al found that encouraging multiple categorization, as opposed to simple categorization, in perception of the outgroup led to humanizing outcomes. Through a sequential mediational model, multiple categorization increased the individuation of outgroup members, which in turn reduced threat responses and thus reduced dehumanization. Again, encouragingly, this strategy of re-humanization is stimulated by IC interventions: they result in a statistically significant rise in IC when thinking about the outgroup – a multiple-categorized perception (Boyd-MacMillan et al 2016a).
A third type of re-humanizing intervention is the prompting of mentalizing – the consideration of others’ mental states. In a study, participants who mentalized about others (hypothetically) were subsequently more likely to act pro-socially towards them (Gray et al 2007). Similarly, when people were asked to engage directly with the minds of outgroup members, such as by answering questions about their food preferences, the medial pre-frontal cortex – the brain region thought to be necessary for social cognition – increased in activity compared with a control task where participants made superficial categorical estimations about the targets’ ages (Harris and Fiske 2007). These convergent behavioural and neural findings attest to the effectiveness of mentalizing as a re-humanizing intervention. The process of mentalizing is facilitated in IC courses. Physically-enacted roleplay enables participants to place themselves in an outgroup member’s shoes, considering their thoughts and feelings, while activities such as ‘active listening’ encourage mutual consideration and understanding.
2.3 Dehumanization and RIVE
A further consideration is whether strategies countering dehumanization are conducive to the aim of reducing and preventing extremism. A Dehumanization scale will show predictive validity for use in IC interventions if high scores on the scale are correlated with subsequent vulnerability to RIVE. Such a correlation would depend on three elements: an association between higher outgroup dehumanization and increased vulnerability to RIVE; an association between reduced dehumanization scores (‘re-humanization’) and decreased vulnerability to RIVE; and the robustness of these associations in the conditions of stress or threat which typically characterise situations of conflict.
As respects the first element, there is evidence that dehumanization leads to increased vulnerability to violence and intergroup hostility. Dehumanization of victims leads to decreased empathy (Nagar and Maoz 2017), increased willingness to engage in (hypothetical) torture (Viki et al 2013), and greater support for war against the victim outgroup (Jackson and Gaertner 2010).
Turning to the second element, there is also evidence to suggest re-humanization reduces vulnerability to RIVE. Costello and Hodson (2010) found that experimentally-induced re-humanization of an (immigrant) outgroup predicted significantly lowered levels of prejudice. Majdandzic et al (2012) found that when hypothetical targets were ‘humanized’ through mentalizing about them, participants were less willing to sacrifice the targets’ lives under utilitarian principles. This indicates that they saw the targets’ lives as more individually valuable, implying they would be less likely to be persuaded by extremist arguments advocating violence against them. This finding is particularly relevant to the context under consideration, as the re-humanization technique, mentalizing, occurs in IC interventions.
The third element is vital: IC interventions are only of practical use if their effects can be maintained in real-life stressful situations. We must not only raise participants’ IC and outgroup humanization, but also equip people to retain these cognitive skills in the face of inevitable stress. While IC is known to decrease during stress, the experience of positive emotion may be key to maintaining high IC in these conditions (Andrews Fearon and Boyd-MacMillan 2016). Supporting this hypothesis (although with only correlational rather than causal evidence), Saslow (2014) found that two measures of higher positive emotion under stress (self-reports of current state and self-reports of general dispositions towards positive emotion) correlated positively and significantly with cognitive complexity. Findings from the dehumanization literature may shed light on how to promote these conditions. Intergroup dehumanization is associated with negative affect: dehumanized targets are associated with disgust (Harris and Fiske 2006) and anxiety (Capozza et al 2013), whereas humanized targets are associated with affective empathy (Andrighetto et al 2014). Thus a reduced propensity to dehumanize an outgroup may facilitate the positive affect required to maintain high IC in stressful intergroup contact, by suppressing the anxiety or ‘threat’ response and directing an individual towards a positive ‘challenge’ response. We may therefore hypothesise that re-humanization enables the conditions (positive affect) needed to maintain high IC during stress, making it a significant complement to IC in reducing vulnerability to extremism.
We therefore have strong theoretical grounds to expect that dehumanization exhibits predictive validity for RIVE on all three criteria (although the claim currently lacks specific empirical backing: see the Conclusion below).
Analysis of dehumanization and IC uncovers parallels and links between the two. Encouragingly, these links exist in places which suggest a Dehumanization scale may help address the shortcomings of IC coding. First, as perceiving another person as human is intimately connected to sharing their values, re-humanization may help ensure that the type of IC which is raised is dialectical rather than elaborative. Secondly, re-humanization is empirically associated with affective empathy, complementing the cognitive empathy measured by IC coding. Thirdly, re-humanization may help to maintain high IC under stress. A Dehumanization scale may therefore be a valuable novel measure of progress in IC management, and decreased vulnerability to RIVE.
§3: A design for the Dehumanization scale
The links discussed in §2 lend plausibility to the notion that a ‘Dehumanization scale’ could work well as a novel measure of the effectiveness of IC courses. But what might a ‘Dehumanization scale’ actually look like? In this Section, I recommend a scale combining the two measures which seem best to maximise power, construct validity, external validity and cross-cultural validity. I then suggest how the scale might practically be integrated into IC interventions.
3.1 Recommended scales
Unlike the IC research field, the dehumanization literature lacks a standardised method of measurement. Different measures target different forms of dehumanization. Dehumanizing perceptions may be manifested both implicitly and explicitly, and these forms have differential behavioural consequences (Kteily et al 2015). The combination of a subtle and a blatant measure will therefore increase the power of a test to detect dehumanization in whatever form it exists.
The ‘subtle’ measure that I propose employing relates to infra-humanization. Infra-humanization refers to the denial of humanness to an outgroup by the differential attribution of complex emotional states to ingroup and outgroup members (Leyens et al 2000). Typically, ‘secondary’ emotions – those often regarded as uniquely human, such as shame, resentment and hope – are preferentially attributed to the ingroup, whereas ‘primary’ emotions such as anger, pain and pleasure are attributed to both the ingroup and the outgroup. The basic framework has also been extended beyond emotions to stimuli such as values (Bain et al 2006) and personality traits (Paladino and Vaes 2007). Many different paradigms have been used in infra-humanization research. In considering which to adopt in the context of IC interventions, two particular dimensions must be considered: method of computing an infra-humanization value, and type of stimulus.
An appropriate method of computation is based on the approach of Vaes et al (2010), whereby participants rate the humanness (and desirability) of attributes and the extent to which those attributes typify the outgroup. An infra-humanization value can then be calculated as the within-participant correlation between ‘outgroup typicality’ and ‘humanity’ ratings of the given characteristics. To control for valence, the ‘desirability’ ratings can be used to partial valence out of the correlations. This method of computation corrects for individual variation in perceptions of humanness and trait desirability.
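To make the computation concrete, the following is a minimal sketch in Python of the within-participant partial correlation described above. The function names, the use of NumPy, and the example rating arrays are illustrative assumptions for this dissertation, not part of Vaes et al’s published materials; the partial correlation uses the standard first-order formula.

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation between x and y, controlling for z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

def infrahumanization_index(humanity, typicality, desirability):
    """Within-participant correlation between 'outgroup typicality' and
    'humanity' ratings, with 'desirability' (valence) partialled out.
    A more negative index indicates stronger infra-humanization: the more
    human a trait is judged to be, the less typical of the outgroup."""
    return partial_corr(np.asarray(humanity),
                        np.asarray(typicality),
                        np.asarray(desirability))

# Hypothetical ratings from one participant for five traits (1-7 scales):
humanity = [7, 6, 5, 2, 1]
typicality = [1, 2, 3, 6, 7]     # traits judged most human are least typical
desirability = [5, 4, 3, 2, 1]
index = infrahumanization_index(humanity, typicality, desirability)
```

In this illustrative case the index is strongly negative, the pattern the subtle measure is designed to detect; a participant who attributes humanness and outgroup typicality independently would score near zero.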
Turning to consideration of the type of stimulus, we may restrict the stimuli to emotions, and employ lists of emotions which have been generated in a pre-test by participants from the same cultural background and demographic as the course participants. This recommendation is justified below as a method of maximising cross-cultural validity.
The measurement of blatant dehumanization is taken from Kteily et al (2015). The ‘Ascent Scale’ employs the image of Darwin’s Ascent of Man, with five figures depicting the progress of human evolution (Figure 1). Participants complete the scale by indicating with continuous sliders where on the scale the outgroup lies. The positions of the sliders generate a numerical value which can then be used as an indicator of dehumanization.
Figure 1. The Ascent measure used in Kteily et al (2015). Responses are made for each target group using sliders next to the groups, which generate numerical values. In the IC interventions, it is recommended to provide measures of both the ingroup and outgroup.
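Since the recommendation above is to measure both the ingroup and the outgroup, the Ascent responses can be reduced to a difference score. The sketch below (hypothetical Python; the 0–100 slider range and function names are assumptions for illustration, following the difference-score logic of Kteily et al 2015) shows one way to compute a per-participant score and a cohort-level pre/post change:

```python
from statistics import mean

def ascent_score(ingroup_slider, outgroup_slider):
    """Per-participant blatant dehumanization score: ingroup minus outgroup
    slider position (0 = leftmost figure, 100 = rightmost, 'fully evolved').
    Positive values mean the outgroup is rated as less 'evolved' than the
    ingroup."""
    return ingroup_slider - outgroup_slider

def cohort_change(pre_scores, post_scores):
    """Mean pre-to-post change in Ascent difference scores across a course
    cohort. A negative value indicates re-humanization."""
    return mean(post_scores) - mean(pre_scores)

# Hypothetical cohort of two participants, before and after a course:
pre = [ascent_score(90, 60), ascent_score(85, 65)]    # [30, 20]
post = [ascent_score(90, 80), ascent_score(85, 75)]   # [10, 10]
change = cohort_change(pre, post)                     # negative: re-humanization
```

Using the ingroup–outgroup gap, rather than the raw outgroup position, helps separate dehumanization of the outgroup from a participant’s general response style on the slider.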
It should be noted at the outset that although both these measures straddle the animalistic/mechanistic dehumanization divide, they may relate more to animalistic dehumanization (Haslam 2006). Haslam, Leyens and other infra-humanization researchers tend to conceptualise infra-humanization as a form of animalistic rather than mechanistic dehumanization, without empirically verifying this assumption. However, recent work has found a bidirectional causal relationship between infra-humanization and both animalistic and mechanistic dehumanization, implying instead that both forms are relevant (Martinez et al 2017). The Ascent scale is empirically significantly more associated with animalistic than mechanistic dehumanization (Kteily et al 2015).
In view of the demonstrated behavioural effects of animalistic dehumanization, this is no initial drawback. However, a study examining the distinct effects of the two types of dehumanization found that Italian participants were more likely to perceive Japanese people as lacking in HN, and Haitians as lacking UH. Correlatively, mechanistic dehumanization led to decreased willingness to help Japanese people after a natural disaster, and animalistic dehumanization led to decreased willingness to help Haitians (Andrighetto et al 2014). Therefore, future interventions may benefit from employing mechanistic measures, such as devising a pictorial analogue of the Ascent scale specifically depicting mechanistic aspects of blatant dehumanization (Kteily et al 2015), in cultural contexts where they are expected to be more relevant.
3.2 Construct validity
To ensure our measurement of dehumanization exhibits construct validity, we must evaluate the extent to which it purely captures dehumanization, rather than tapping into related concepts such as prejudice and moral disengagement. Measures of dehumanization used in previous studies have not always achieved this. For example, Bandura’s (1996) scale asked participants to rate items such as ‘Some people deserve to be treated like animals’ and ‘Someone who is obnoxious does not deserve to be treated like a human being’. This confounds measurement of dehumanization with valence and moral desert.
We may never be able perfectly to dissociate valence and moral desert from dehumanization. In their generation of ‘human potentials’ – propositions designed to capture the naïve propositional theory of humanness – Kofta et al (2014) found it impossible to identify human potentials that were simultaneously high on the humanity dimension and neutrally valenced: all ‘potentials’ participants characterised as distinctively human were also positively valenced. This may be because the humanity dimension is itself inherently positively valenced. Similarly, dehumanization is inherently associated with moral exclusion. Laham and Haslam (2009) found that participants were more likely to attribute greater amounts of ‘humanness’ to creatures they had previously included in their ‘moral circle’ (the set of individuals considered worthy of moral concern). This was the case even though whether or not a creature was included in the moral circle was experimentally manipulated by priming participants with an ‘inclusion’ or ‘exclusion’ mindset. This implies that targets are dehumanized as a result of moral disengagement, not merely as a cause of it. This bidirectional relationship implies that moral disengagement and the tendency to see another as human are intimately linked and not necessarily fully dissociable.
The Ascent scale, being non-linguistic, minimises the risk of confounding blatant dehumanization with moral desert or valence. The infra-humanization measure can be experimentally controlled for valence, by partialling it out of the within-participant humanity-typicality correlations, as recommended above. Thus although it may be conceptually impossible completely to dissociate measures of dehumanization from the constructs of moral disengagement and general negative appraisal, the suggested measures minimise confounds, maximising construct validity.
3.3 External validity
As well as assessing whether the intervention has been effective in raising outgroup humanization, it is important to ensure that the progress made under experimental conditions is applicable to ecological situations outside the course. A reduction in dehumanization in this context is externally valid if it generalises to members of the outgroup other than those taking part in the course, and if its effects are long-term.
There are both theoretical and empirical reasons to expect a dehumanization intervention to exhibit external validity. I turn first to the theoretical reason. Dehumanized perception engages deep, associative processes which ground thinking and action in largely automatic ways (§1 above). Dehumanization processes are often metaphorical in nature. Metaphorical associations become internalised through repeated exposure to source-target pairings. Individuals then unconsciously allow their perceptions and opinions to be guided by these metaphors (Tipler and Ruscher 2014). IC courses encourage embodied enactment and explicit talking through of these cultural assumptions, bringing them to the level of conscious awareness and thus potentially re-calibrating the associations.
There is also sparse but promising empirical evidence for the long-term durability of reductions in dehumanization. The Ascent scale showed test-retest reliability over a period of four months (Kteily et al 2015). There has, however, been no substantive testing of the long-term robustness of infra-humanization interventions.
Although we have theoretical reasons to be optimistic, and some empirical support, both the dehumanization and IC fields will benefit from further empirical testing of the long-term robustness of the measurements. We may operationalize ‘long-term robustness’ in the context of IC interventions, with reference to the aim of increasing resilience to RIVE, by distinguishing two types of indicator of the long-term success of interventions.
A direct indicator would be a sustained high score on the Dehumanization scale, IC coding measures and other behavioural measures (such as resilience) after a given period of time. An indirect indicator would be a longer-term reduced vulnerability to RIVE for those individuals involved in the courses. While examination of the extent to which individuals become involved in conflict in the distant future after course participation would be difficult, we can look at more near-term predictors of reduced vulnerability. An example would be positive behavioural changes, such as increased cooperation with others and improved academic performance at school, as reported by family, friends and colleagues (or teachers). The indirect indicators have the advantage of being ecological. They are therefore both of more immediate practical relevance and less vulnerable to Hawthorne effects (whereby participants change their behaviour simply because they know they are taking part in a study, rather than because of the intervention itself). Future research measuring both types of indicator will strengthen the claim of a Dehumanization scale in particular, and IC interventions in general, to long-term robustness.
3.4 Cross-cultural validity
Behavioural measures of IC interventions must be applicable to the wide range of heterogeneous, culture-specific forms of conflict in which they are used. There are two ways in which a Dehumanization scale may fail to be cross-culturally valid: it may employ dehumanizing metaphors with different connotations in different cultures; or it may capture a type of dehumanization which applies only between culture-pairs with a particular relationship, for example from a higher-status to a lower-status group.
Most infra-humanization research has relied on experimenters’ interpretations of the humanness of traits or metaphors. Thus in their infra-humanization studies, Leyens et al (2007) assumed that ‘secondary emotions’ were uniquely human, while ‘primary emotions’ would be attributed to both humans and animals. They then interpreted the differential attribution of secondary emotions to ingroups and outgroups in terms of differential attributions of ‘humanness’. This assumption was based on ‘cross-cultural’ evidence showing ‘substantial’ consensus as to the extent to which various characteristics were ‘uniquely human’ (Miranda and Gouveia-Pereira 2006; Demoulin et al 2004). There are, however, three difficulties with work based on this method.
First, the cultures used in these studies were Portuguese, English, Belgian, French and Spanish – all western, European samples. However, a study of cross-cultural conceptions of the ‘humanness’ category using more diverse cultures – Australia, China and Italy – revealed significant differences between the cultural prototypes of ‘humanness’ (Bain et al 2012). Indeed, different metaphors have different connotations in different cultures: while snakes are symbols of evil in Judeo-Christian cultures, they are objects of worship in Cambodian mythology. Therefore we cannot assume cross-cultural consistency in the connotations of ‘humanness’. IC interventions are used with communities not only from Europe, but also America, Africa and Asia, so methods of measurement must be appropriate for a diverse range of cultural prototypes of ‘humanness’ – for which the assumptions of Leyens et al may not be valid.
A second difficulty is the assumption that the perceived ‘humanness’ of a characteristic is static. Paladino and Vaes (2009) demonstrate instead that the relationship between perceived ‘humanness’ of a characteristic and its attribution to the ingroup is bidirectional. Across three studies, characteristics were judged as significantly more human when previously associated with the ingroup rather than the outgroup. Thus, the ‘humanness’ prototype can be liable to idiosyncratic, cultural and dynamic variation.
A third difficulty is that the interpretation relies on a relative rather than an absolute measure of dehumanization – the dehumanization of the outgroup is recorded relative to the humanization of the ingroup rather than in absolute terms. In this way, the measure confounds ingroup humanization and outgroup dehumanization. However, ingroup humanization and outgroup dehumanization are dissociable processes (Demoulin et al 2005). If the processes are dissociable, and we are interested only in outgroup dehumanization, then we should be wary of relying on a measure which is also sensitive to changes in (the potentially independently varying) ingroup humanization.
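The confound can be made concrete with a small arithmetic sketch. The numbers below are assumed for illustration only: mean ‘humanness’ attributions on a 1–7 scale, with the relative measure defined as the ingroup score minus the outgroup score.

```python
def relative_dehumanization(ingroup, outgroup):
    """Relative measure: outgroup humanness recorded against the ingroup's."""
    return ingroup - outgroup

# Wave 1: moderately humanized ingroup, dehumanized outgroup.
wave1 = relative_dehumanization(ingroup=5.0, outgroup=3.0)  # 2.0

# Wave 2: the outgroup score is UNCHANGED, but ingroup humanization rises.
wave2 = relative_dehumanization(ingroup=6.0, outgroup=3.0)  # 3.0

# The relative measure reports increased 'dehumanization' even though
# perception of the outgroup did not change. An absolute measure
# (the outgroup score alone) would correctly stay flat at 3.0.
assert wave2 > wave1
```

This is the sense in which a relative measure is also sensitive to independently varying ingroup humanization: the score can move without any change in how the outgroup is perceived.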
The methodological innovation of Vaes et al (2010) frees interpretations of infra-humanization from the experimenters’ associations with the ‘humanness’ prototype, relativizing the attributions of characteristics to participants’ own assessment of their humanness. This minimises bias from idiosyncratic connotations of characteristics employed in the scale, and is also absolute rather than relative.
The Ascent measure of blatant dehumanization employs pictorial representations of ‘evolvedness’, avoiding linguistic markers of humanness, which are more vulnerable to the charge of cultural specificity. The pictures are silhouettes, which minimises association biases such as those based on skin colour. However, the interpretation of the images relies to some extent on the cultural connotations of the ‘Ascent’ image and its colloquial usage to signify evolutionary hierarchy. The measure may therefore be more applicable to cultures in which this image has significance, limiting its cross-cultural validity.
A second form of cultural specificity may arise if the scales apply only between groups with a particular relationship. The relative status of groups has been hypothesised to influence the occurrence of dehumanization. Many infra-humanization studies have found that status does not predict dehumanization (Delgado et al 2006; Demoulin et al 2005). However, Capozza et al (2012) showed that higher-status groups dehumanized lower-status outgroups, but not vice versa. The studies that failed to find predictive value in status used secondary emotions as predictors of infra-humanization, whereas studies that included a wider set of attributes have found status effects. This may be because other qualities associated with ‘humanness’ – such as intelligence, rationality and talent – are taken to be more reflective of status than emotions are. Supporting this conclusion, Leyens et al (2001) found that while a higher-status group infra-humanized the outgroup through both secondary emotions and intelligence, the lower-status group infra-humanized the higher-status group only through secondary emotions. I have therefore recommended the use of an infra-humanization measure which includes only emotions, as they are more independent of the structural dimensions of society than other traits.
A shortcoming of the Ascent scale is that it may not be applicable to dehumanization of ‘high status’ groups: the notion of ‘evolvedness’ seems inherently related to status, therefore Americans, even if prejudiced against relatively high status groups such as Europeans, are unlikely openly to label them as less ‘evolved’ (Kteily et al 2015). However, Ascent dehumanization has empirically been shown to apply to both higher and lower status groups – in Gaza in 2014, Palestinians blatantly dehumanized ‘higher status’ Israelis (Bruneau and Kteily 2015). Thus the evidence for the effect of status on the utility of the Ascent scale is inconclusive. This may conceal a complex interaction between specific aspects of a particular cultural context and the applicability of the scale. The cross-cultural validity of IC has never been explicitly tested, but rather has been inferred from the wide-ranging successes of applications of IC. Analogously, though we may have provisional doubts about the cross-cultural applicability of the Ascent scale, only empirical implementation will give us the material to confirm or disconfirm these doubts.
3.5 Prospects for Implementation
As can be seen from the literature reviewed above, a Dehumanization scale which combines a subtle and a blatant measure of dehumanization is, in its essentials, valid for use in IC interventions. A further issue is practical implementation.
The practical implementation of scales is constrained by the need to minimise demand characteristics – circumstantial cues which may influence the results of an experiment. These can bias the results of dehumanization measures on both a conscious and a less-than-conscious level. On a conscious level, social desirability motives may conceal dehumanization: individuals with a high motivation to control the appearance of prejudice suppress prejudice on explicit, but not implicit, measures (Dunton and Fazio 1997). We can expect that in the context of the IC courses, individuals will experience a high motivation to control prejudice. There are also less-than-conscious demand characteristics: encouraging participants to complete a measure of outgroup dehumanization before participation in the course may increase the accessibility of dehumanizing perceptions of the outgroup. Answering an evaluative question can activate information consistent with the proposition being evaluated, which may in turn influence subsequent attitudes and behaviour (Mussweiler 2007). Thus the completion of a scale of blatant dehumanization may influence participants’ attitudes during the course, interfering with their ability to interact productively with the outgroup.
To minimise these effects, first, the timing of the tests of dehumanization may be manipulated relative to the timing of the course in order to avoid the effects of salience of dehumanizing thoughts. The pre-intervention test of dehumanization should take place at least one week prior to the course, so that any effects of dehumanization priming are offset by the time the course takes place.
Secondly, dehumanization tests should be taken in relative (apparent) privacy and anonymity – filled out in a room alone – to minimise social desirability motivation and experimenter effects. The subtle infra-humanization measure is obscure and indirect – people are probably not aware that they express the superiority of their ingroup through the differential attribution of secondary emotions – and is therefore less vulnerable to these effects. A disadvantage of the blatant measure, in contrast, is that it is more easily faked. Though these effects can be reduced, they cannot be eradicated.
Motivated by limitations of the current method of assessing IC interventions, I have recommended the implementation of a Dehumanization scale as a novel measure, and outlined and evaluated a basic design for this. Table 1 summarises the ways in which a Dehumanization scale addresses the limitations of IC coding.
| Limitation of IC coding | How a Dehumanization scale compensates |
| --- | --- |
| High IC is ambiguous between dialectical and elaborative complexity. | Re-humanization is specifically linked to dialectical complexity. |
| IC coding does not measure affective empathy. | Dehumanization is linked to affective empathy. |
| An IC score at one time does not necessarily indicate the ability to maintain high IC in situations of stress. | Re-humanization may facilitate the positive affect needed to maintain high IC in situations of stress. |

Table 1. Ways in which a Dehumanization scale may compensate for limitations of IC coding.
The most serious lacunae in the research are the lack of empirical evidence for the claim to predictive validity of dehumanization for RIVE, for the long-term robustness of re-humanization effects, and for the cross-cultural applicability of the scales (particularly the blatant scale).
The most significant limitation of the implementation of a Dehumanization scale is demand characteristics. IC coding is less susceptible to demand characteristics: its results are difficult to fake, because it measures the structure rather than the content of thinking. The problem of fakeability is most serious for the blatant measure of dehumanization. I have outlined recommendations to minimise these effects; however, only practical implementation of the scale in the context of actual interventions can reveal how damaging this potential drawback is.
Future research should focus on the above-mentioned lacunae in the current research. First, research is required on the predictive validity of dehumanization for RIVE. This could employ a paradigm with an independent measure of dehumanization, and a dependent measure of sympathy for extremist statements on a Likert-type scale. Future tests could also compare the relative predictive validity of different measures of dehumanization, as Kteily et al (2015) first did for a subtle and blatant measure.
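A paradigm of this kind would, at its simplest, correlate each dehumanization measure with the Likert-scale sympathy ratings and compare the resulting coefficients. The sketch below illustrates this with entirely made-up scores: a blatant (Ascent-style, 0–100) and a subtle (infra-humanization index) measure for the same participants, and sympathy for extremist statements on a 1–5 Likert scale.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for eight participants (illustration only).
blatant  = [80, 20, 65, 90, 35, 50, 75, 10]       # Ascent-style, 0-100
subtle   = [3.1, 1.2, 2.0, 3.5, 2.2, 1.8, 2.9, 1.0]  # infra-humanization index
sympathy = [4, 1, 3, 5, 2, 2, 4, 1]                # Likert, 1-5

r_blatant = pearson_r(blatant, sympathy)
r_subtle = pearson_r(subtle, sympathy)

# Comparing the two coefficients indicates which measure better
# predicts sympathy for extremism in this (fabricated) sample.
print(f"blatant r = {r_blatant:.2f}, subtle r = {r_subtle:.2f}")
```

A real study would of course require an adequately powered sample and significance tests for the difference between the coefficients; the sketch shows only the basic logic of comparing relative predictive validity.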
Secondly, research on the long-term durability of changes in dehumanization and IC is needed. Empirical evidence for the long-term robustness of the effects of IC courses comes from studies of IC interventions which have had six-month and nearly two-year follow-up interviews, both of which confirmed the robustness of changes in IC management through self- and observer-reports (Boyd-MacMillan et al 2016a). However, most IC interventions have lacked the funding for long-term follow-ups, and as the interventions are relatively new, research on their long-term robustness is lacking. Future research could compensate for this by looking for both direct and indirect indicators of success, and by using control groups in varying conditions (for example, a multi-faith as compared with a religious school environment) to analyse the extent to which different conditions reinforce or hinder the positive effects of IC courses. For the first time, a 39-month research study in Northern Ireland secondary schools will aim to do this (Boyd-MacMillan et al 2016b). This study will employ a longitudinal, multi-wave design with follow-ups at intervals of one, two and three years. Including a Dehumanization scale alongside the other behavioural measures in this study, and in subsequent ones of the same type, may be informative.
Thirdly, research on the cross-cultural validity of Dehumanization scales is needed. This can be directly measured by experimental studies using participants from diverse cultures (cf. Bain et al 2012), or indirectly in the same way as IC has been taken as cross-culturally validated, by inferring validity from the success of disparate interventions across wide-ranging cultures. The IC research field may also benefit from a meta-analysis of cross-cultural studies, systematically identifying co-variation of culture and sensitivity to interventions. This would both reinforce the claim of IC coding to ‘cross-cultural validity’, and make it more specific.
The infancy of the dehumanization research field makes the gaps in our current knowledge unsurprising – the term ‘dehumanization’ appeared as frequently in psychology articles between 2007 and 2011 as in the previous 40 years combined (Haslam 2014). Equally, as the positive results of IC interventions accumulate, there has been an explosion of research into new ways to improve and monitor their progress. In this exciting context, the implementation of the proposed Dehumanization scale would have dual practical and academic advantages. It presents not only a complementary assessment of the success of IC interventions, but also the opportunity for new empirical material to benefit the research fields of both dehumanization and IC, linking the two for the first time.