UN-BIASING THE WORLD

Togetherness by Anna Lopes, published under a Creative Commons license

Everyone is biased; it’s built into the way we experience the world. But that doesn’t mean we are slaves to our instincts.

By Corey S. Powell

We all have a natural tendency to view the world in black and white—to the extent that it’s hard not to hear “black” and immediately think “white.” Fortunately, there are ways to activate the more subtle shadings in our minds. Kristin Pauker is a professor of psychology at the University of Hawaiʻi at Mānoa who studies stereotyping and prejudice, with a focus on how our environment shapes our biases. In this podcast and Q&A, she tells OpenMind co-editor Corey S. Powell how researchers measure and study bias, and how we can use their findings to make a more equitable world. (This conversation has been edited for length and clarity.)


LISTEN TO THE PODCAST

OpenMind · Moving Beyond Bias



READ THE INTERVIEW


When I hear “bias,” the first thing I think of is a conscious prejudice. But you study something a lot more subtle, which researchers call “implicit bias.” What is it, and how does it affect us?

Implicit bias is a form of bias that influences our decision-making, our interactions and our behaviors. It can be based on any social group membership, like race, gender, age, sexual orientation or even the color of your shirt. Often we’re not aware of the ways in which these biases are influencing us. Sometimes implicit bias gets called unconscious bias, which is a little bit of a misnomer. We can be aware of these biases, so it’s not necessarily unconscious. But we often are not aware of the way in which they’re influencing our behaviors and thoughts.

You make it sound like almost anything can set us off. Why is bias so deeply ingrained in our heads?

Our brain likes to categorize things because it makes our world easier to process. We make categories as soon as we start learning about something. So we categorize fruits, we categorize vegetables, we categorize chairs, we categorize tables for their function—and we also categorize people. We know from research that categorization happens early in life, as early as 5 or 6, in some cases even 3 or 4. Categorization creates shortcuts that help us process information faster, but it can also lead us to make assumptions that may or may not hold in particular situations. The categories we use are directed by the environment we’re in. Our environment has already told us that certain categories are really important, such as gender, age, race and ethnicity. We quickly form an association when we’re assigned to a particular group.

In your research, you use a diagnostic tool called an “implicit association test.” How does it work, and what does it tell you?

Typically someone would show you examples of individuals who belong to categories, and then ask you to categorize those individuals. For example, you would see faces and you would categorize them as black or white, as fast as you can. Then you are presented with words that could be categorized as good or bad, like “hero” and “evil,” and again asked to categorize the words quickly. The complicated part happens when, say, good and white are paired together or bad and black are paired together. You’re asked to categorize the faces and the words as you were before. Then it’s flipped, so that bad and white are paired together, and good and black are paired together. You’re asked to make the categorizations once again with the new pairings.

The point of the test is, how quickly do you associate certain concepts together? Oftentimes if certain concepts are more closely paired in your mind, then it will be easier for you to make that association. Your response will be faster. When the pairing is less familiar to you or less closely associated, it takes you longer to respond. Additional processing needs to occur.
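To make that scoring idea concrete, here is a minimal, purely illustrative Python sketch. It compares average reaction times between the two pairings and scales the difference by response variability, loosely in the spirit of the IAT’s D score. The reaction times below are invented, and the real scoring algorithm also filters trials and penalizes errors.

```python
# A minimal, purely illustrative sketch of how an implicit association test
# might be scored. Reaction times (in milliseconds) are invented; the real
# IAT scoring algorithm also filters trials and penalizes errors.
from statistics import mean, stdev

white_good_rts = [620, 580, 610, 650, 595]   # block where "white" and "good" share a response key
black_good_rts = [740, 790, 705, 760, 730]   # block where "black" and "good" share a response key

# Faster responses in one pairing suggest those concepts are more closely
# associated in memory. A positive difference here means the respondent was
# quicker when "white" and "good" were paired.
difference_ms = mean(black_good_rts) - mean(white_good_rts)

# Scale by overall response variability to get a rough effect size.
pooled_sd = stdev(white_good_rts + black_good_rts)
d_like_score = difference_ms / pooled_sd

print(f"Mean difference: {difference_ms:.0f} ms, D-like score: {d_like_score:.2f}")
```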

When you run this implicit association test on your test subjects or your students, are they often surprised by the results?

We’ve done it as a demonstration in the classroom, and I’ve had students come up and complain saying, “There’s something wrong with this test. I don’t believe it.” They’ll try to poke all kinds of holes in the test because it gave them a score that wasn’t what they felt it should be according to what they think about themselves. This is the case, I think, for almost anyone. I’ve taken an implicit association test and found that I have a stronger association with men in science than women in science. And I’m a woman scientist! We can have and hold these biases because they’re prevalent in society, even if they’re biases that may not be beneficial to the group we belong to.

Studies show that even after you make people aware of their implicit biases, they can’t necessarily get rid of them. So are we stuck with our biases?

Those biases are hard to change and control, but that doesn’t mean that they are uncontrollable and unchangeable. It’s just that oftentimes there are many features in our environment that reinforce those biases. I was thinking about an analogy. Right now I’m struggling with weeds growing in my yard, invasive vines. It’s hard because there are so many things supporting the growth of these vines. I live in a place that has lots of sun and rain. Similarly, there’s so much in our environment that is supporting our biases. It’s hard to just cut them off and be like, OK, they’re gone. We have to think about ways in which we can change the features of our environment—so that our weeds aren’t so prolific.


Common programs aimed at reducing bias, such as corporate diversity training workshops, often seem to stop at the stage of making people aware that bias exists. Is that why they haven’t worked very well?

If people are told that they’re biased, the reaction that many of them have is, “Oh, that means I’m a racist? I’m not a racist!” Very defensive, because we associate this idea of being biased with a moral judgment that I’m a bad person. Because of that, awareness-raising can have the opposite of the intended effect. Being told that they’re biased can make people worried and defensive, and they push back against that idea. They’re not willing to accept it.

A lot of the diversity training models are based on the idea that you can just tell people about their biases and then get them to accept them and work on them. But, A, some people don’t want to accept their biases. B, some people don’t want to work on them. And C, the messaging around how we talk about these biases creates a misunderstanding that they can’t be changed. We talk about biases that are unconscious, biases that we all hold, that are formed early in life—it creates the idea, “Well, there’s nothing I can do, so why should I even try?”

How can we do better in talking about bias, so that people are more likely to embrace change instead of becoming defensive or defeated?

Some of it is about messaging. Biases are hard to change, but we should be discussing the ways in which these biases can change, even though it might take some time and work. You have to emphasize the idea that these things can change, or else why would we try? There is research showing that if you just give people their bias score, normally that doesn’t result in them becoming more aware of their bias. But if you combine that score with a message that this is something controllable, people are less defensive and more willing to accept their biases.

What about concrete actions we can take to reduce the negative impact of implicit bias?

One thing is thinking about when we do interventions. A lot of times we’re trying to make changes in the workplace. We should be thinking more about how we’re raising our children. The types of environments we’re exposing them to, and the features that are in our schools, are good places to think about creating change. Prejudice is something that’s malleable.

Another thing is not always focusing on the person. So much of what we do in these interventions is try to change individual people’s biases. But we can also think about our environment. What are the ways in which our environments are communicating these biases, and how can we make changes there? A clever idea people have been thinking about is trying to change consequences of biases. There’s a researcher, Jason A. Okonofua, who talks about this and calls it “sidelining bias.” You’re not targeting the person and trying to get rid of their biases. You’re targeting the situations that support those biases. If you can change that situation and kind of cut it off, then the consequences of bias might not be as bad. It could lead to a judgment that is not so influenced by those biases.

There’s research showing that people make fairer hiring decisions when they work off tightly structured interviews and qualification checklists, which leave less room for subjective reactions. Is that the kind of “sidelining” strategy you’re talking about?

Yes, that’s been shown to be an effective way to sideline bias. If you set those criteria ahead of time, it’s harder for you to shift a preference based on the person that you would like to hire. Another good example is finding ways to slow down the processes we’re working on. Biases are more likely to influence our decision-making when we have to make really quick decisions or when we are stressed—which is the case for a lot of important decisions that we make.

Jennifer Eberhardt does research on these kinds of implicit biases. She worked with Nextdoor (a neighborhood monitoring app) when they noticed a lot of racial profiling in the things people were reporting in their neighborhood. She worked with them to change the way that people report a suspicious person. Basically they added some extra steps to the checklist when you report something. Rather than just reporting that someone looks suspicious, a user had to indicate what about the behavior itself was suspicious. And then there was an explicit warning that they couldn’t just say the reason for the suspicious behavior was someone’s race. Including extra check steps slowed down the process and reduced the profiling.
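As a purely hypothetical illustration of that kind of friction (not Nextdoor’s actual implementation), a reporting form might refuse to post a “suspicious person” report until the poster describes a specific behavior, and might warn when race appears to be the stated reason. Function and field names in this Python sketch are invented.

```python
# Hypothetical sketch of "adding friction" to a reporting flow: require a
# behavioral description and warn when race is cited as the reason.
# All names and wording are invented for illustration.

RACE_TERMS = {"black", "white", "asian", "hispanic", "race", "skin color"}

def validate_report(description: str) -> list[str]:
    """Return a list of prompts the poster must address before the report posts."""
    problems = []
    words = description.lower().split()

    # Step 1: require a concrete behavioral description, not just a label.
    if len(words) < 10:
        problems.append("Describe what the person was doing, not just how they looked.")

    # Step 2: add an explicit warning when the description mentions race,
    # echoing the idea that race by itself cannot be the reason.
    if any(term in description.lower() for term in RACE_TERMS):
        problems.append("Race alone is not a suspicious behavior; describe the behavior itself.")

    return problems

# Example: this report would be sent back to the poster with both prompts.
print(validate_report("Saw a Black man on the street"))
```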

It does feel like we’re making progress in addressing bias but, damn, it’s been a slow process. Where can we go from here?

A big part that’s missing in the research on implicit bias is creating tools that are useful for people. We still don’t know a lot about bias, but we know a lot more than we’re willing to put into practice. For instance, creating resources for parents to be able to have conversations about bias, and to be aware that the everyday things we do are really important. This is something that many people want to tackle, but they don’t know how to do it. Just asking questions about what is usual and what is unusual has really interesting effects. We’ve done that with our son. He’d say something and I would ask, “Why is that something that only boys can do? You say girls can’t do that, is that really the case? Can you think of examples where the opposite is true?”


This Q&A is part of a series of OpenMind essays, podcasts and videos supported by a generous grant from the Pulitzer Center’s Truth Decay initiative.

This story originally appeared on OpenMind, a digital magazine tackling science controversies and deceptions.


We are proud to report that SyndicatedNews.NET is now republishing OpenMind Magazine at https://SNN.BZ under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.