We are all biased, so what can we do about it?

 

We are all biased. You, and me, and everyone we know. We naturally prefer certain people or things over others and often make decisions based on subjective beliefs or gut instincts rather than actual facts. Bias is a lens that distorts how we understand the world, warping our perception of reality in ways that are often unconscious and deeply ingrained. But where does it come from? And what can we do about it?

We understand the world in terms of mental models. These models, known as heuristics, are a kind of pattern recognition that allows us to process new information by associating it with patterns we’ve encountered in the past. These mental shortcuts help us navigate new and uncertain situations more quickly and make decisions more efficiently, but they can also be misleading, producing errors in our thinking and judgment. Humans are so predisposed towards recognizing patterns that we often make connections between things that have no relationship to one another. 

This tendency to find meaning in random data is known as “apophenia,” a type of cognitive bias that causes us to see shapes in the stars or the image of the Virgin Mary in a piece of burnt toast. In the work Cloud Face, the Korean artist duo Shinseungback Kimyonghun presents images of clouds that have been identified as faces by a machine learning algorithm. Humans often see faces in clouds too, but we know better than to mistake them for actual faces, whereas machines can’t tell the difference. The work explores similarities and differences between human and machine intelligence and illustrates how a deeply ingrained pattern or heuristic—in this case, the ability to recognize faces—can be misapplied to trick people (and machines) into seeing something that isn’t really there.
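The duo’s exact pipeline isn’t documented here, but the effect is easy to reproduce: point any off-the-shelf face detector at photographs of clouds and it will, sooner or later, report faces that aren’t there. A minimal sketch using OpenCV’s stock detector (the image path is a placeholder):

```python
# A minimal sketch of machine apophenia: run a stock face detector on a cloud photo.
# This is not the artists' actual pipeline; "clouds.jpg" is a placeholder image path.
import cv2

image = cv2.imread("clouds.jpg")                 # photo of a cloudy sky
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # the detector works on grayscale

# OpenCV ships a pre-trained frontal-face Haar cascade.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Looser parameters make the detector more eager, and more prone to false positives.
faces = detector.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=2)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"'Face' found at ({x}, {y}), size {w}x{h}")

cv2.imwrite("cloud_faces.jpg", image)
```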

Much of our implicit bias is tied up in the construction of social identity, in the way we create social in-groups and out-groups and draw distinctions between an “us” and a “them”. While this no doubt stems from our need for community, safety, and belonging, it also leads to various forms of social segmentation, exclusion, stereotyping, and prejudice based on gender, race, ability, class, or any other metric of social difference. Those we identify with most strongly tend to influence our attitudes, behaviors, and ideas about the world, and come to define our sense of what is normal. Mushon Zer-Aviv’s mobile app and installation, Normalizi.ng, brings the issue of socially defined normalcy into focus by asking participants to choose the most “normal” facial features and faces from a line-up of previous participants. The project is informed by the work of French criminologist Alphonse Bertillon, who invented the criminal mugshot in the 1880s and introduced the idea of confirming identity through the analysis and categorization of bodily measurements. That approach is mirrored by contemporary biometric profiling and facial recognition technology, which claims that the objectivity of data, calculation, and statistical analysis is a rational way to circumvent human bias. But does it really work that way?

Increasingly, algorithms and forms of artificial intelligence such as machine learning are being used to automate decision-making in hiring, healthcare, criminal justice, insurance, education, and virtually every other industry. The goal is to make systems more standardized, scalable, efficient, and cost-effective—and, supposedly, fair—because unlike humans, AI makes decisions based on data and statistical analysis, not favoritism or prejudice. Yet recent studies have shown that AI is not immune to bias either. There is no shortage of stories about AI hiring software that discriminates against applicants who attended women’s colleges or have ethnic-sounding names, or about facial recognition software that has led to the wrongful arrest of innocent civilians, most often from marginalized communities.

Bias in AI arises from the way AI is designed and trained. AI researchers Vladan Joler and Matteo Pasquinelli identify three types of bias in AI: historical bias, which reflects existing social biases and inequities; dataset bias, which is introduced through the selection, preparation, and labeling of data by human intermediaries; and algorithmic bias, which is produced through the kinds of statistical analysis, or pattern-finding, involved in the construction of algorithmic models. As artist Anna Ridler highlights in her piece Laws of Ordered Form, the process of selection (what to include or exclude) and organization (the creation of labels, hierarchies, and taxonomies) is deeply subjective and political, reflecting both cultural and individual biases. Ridler turns to Victorian and Edwardian encyclopedias for the construction of her dataset, manually scanning and re-classifying images of the natural world found inside these compendia of knowledge. In the video, we see her hands at work, flipping through the pages to crop, cut, and classify, revealing both the tedious labor and the consequences of invisible choices made by invisible hands. Ridler’s use of encyclopedias underscores the way bias, values, and beliefs become encoded in knowledge production—in the creation of both mental models and algorithmic models—which, in turn, shapes institutions of power, cultural norms, and technological systems.

In today’s media environment, our every action becomes a data point captured and logged by the dozens of digital platforms and networked devices we interface with each day. Each scroll through the feed or trip to the supermarket generates hundreds of insights about our interests, relationships, likes and dislikes, hopes and fears—all of which are algorithmically analyzed to produce highly detailed individual profiles. Our personal data is mapped onto a massive social graph that lets companies compare our behavior against that of millions of other users in order to predict what we might do next. In an attempt to make services that are easy and convenient, and to serve up targeted content and advertisements, companies have created a surveillance apparatus that purports to know us even better than we know ourselves. These issues are satirized in SKU-Market, a new installation from artist Laura Allcorn and Trinity College professor Jennifer Edmond that explores how computational capitalism and algorithmic profiling create new forms of social stratification. The project takes the form of a shop where each purchase is used to determine the buyer’s character and make future decisions on their behalf. Visitors are asked to pick five things that “give them life” from the shelving display, which contains items like rosé wine, football jerseys, VR headsets, and dating apps, as well as human rights t-shirts and reusable plastic bags. The SKU-Market algorithm uses each selection to build a profile of the shopper and make predictions about their future needs and desires. It highlights the way digital platforms exploit and reinforce our biases through discrete forms of social engineering, initially developed by advertising and marketing industries in the 20th century and perfected by companies like Facebook, Google, and Amazon to optimize their advertising-supported business models.
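The profiling SKU-Market satirizes is, at its core, a similarity calculation: represent each shopper as a vector of choices, find the people whose vectors look most alike, and assume they will want the same things next. A deliberately simplified sketch of that logic follows; the shoppers and items are invented for illustration and this is not the artists’ actual algorithm.

```python
# A deliberately simplified sketch of behavioral profiling by user similarity.
# Shoppers and items are invented for illustration; real systems use millions of
# users and far richer signals.
import numpy as np

items = ["rose_wine", "football_jersey", "vr_headset", "dating_app", "reusable_bag"]

# Rows are shoppers, columns are items; 1 means the shopper picked that item.
purchases = np.array([
    [1, 0, 1, 1, 0],   # shopper A
    [1, 0, 1, 0, 0],   # shopper B
    [0, 1, 0, 0, 1],   # shopper C
])

def cosine(u, v):
    """Similarity between two shoppers' purchase vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def predict_next(user, others):
    """Score unpicked items by how popular they are among similar shoppers."""
    weights = np.array([cosine(user, o) for o in others])
    scores = weights @ others            # similarity-weighted item popularity
    scores[user == 1] = -np.inf          # ignore items already chosen
    return items[int(np.argmax(scores))]

new_shopper = np.array([1, 0, 0, 1, 0])  # picked rose wine and a dating app
print("Predicted next pick:", predict_next(new_shopper, purchases))
```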

Machine learning systems can replicate and strengthen existing human biases. They are also prone to unique forms of machine bias. Algorithmic processes are bound by the limits of computation and, as such, anything that is not machine readable or cannot be captured in 1s and 0s will be inscrutable to a machine intelligence. In Uncanny Valley of Breath, artists Carlotta Aoun and Antony Nevin worked with Trinity College researchers Ed Storey and Patrick Cormac English to explore the breakdown in communication between humans and machines that occurs in speech recognition technology. There are countless variations to human speech, from different languages and dialects to regional accents and pronunciation, and virtual assistants like Siri, Alexa, or Google Home often struggle to understand speech that differs from the examples they’ve been trained on. Breath is a key factor in the subtle nuances of language, giving shape to vowels and consonants, cadence and tonality through hundreds of micro-exhalations and inhalations. Yet for a natural language processing AI, breath is a meaningless interruption of data, mere noise or interference. The installation highlights this machine bias by contrasting human and AI-generated speech through video projections of spectrograms, visual representations of audio frequencies that illustrate fluctuations in sound and breath. The projections trigger walls of PC fans to turn on and off in a way that mimics the sensation of breath, making the differences between human and machine perception both visible and palpable.
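A spectrogram of the kind projected in the installation takes only a few lines to produce. In a minimal sketch using SciPy (the audio file name is a placeholder), the breaths a speech model treats as noise appear as faint, broadband smears between words:

```python
# A minimal sketch of a speech spectrogram: time on one axis, frequency on the
# other, brightness showing energy. "speech.wav" is a placeholder file name.
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("speech.wav")
if samples.ndim > 1:                       # mix stereo down to mono
    samples = samples.mean(axis=1)

freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)

# Plot power on a decibel scale so quiet breaths remain visible.
plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Speech spectrogram")
plt.savefig("spectrogram.png", dpi=150)
```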

While an AI that can’t understand your accent may seem like an amusing inconvenience, when such technology is used to screen job applicants, process insurance claims, or provide telemedicine support, it’s hardly funny. Bias is especially harmful when embedded within institutions of power and social governance, where it extends beyond the individual and becomes the operational logic of a larger system, resulting in discrimination against entire social groups and producing or compounding inequality. As Ruha Benjamin, author of Race After Technology, reminds us, “much of what is routine, reasonable, intuitive, and codified reproduces unjust social arrangements.” Since machine learning systems are trained on data from the past, we risk reproducing histories of discrimination in today’s digital platforms. These tools have the potential to cause real harm, particularly when used in high-stakes sectors like healthcare or criminal justice. In one famous example uncovered by ProPublica in 2016, the COMPAS algorithm, which was used to predict rates of recidivism in former convicts and help determine the length of their jail sentences, was found to incorrectly label Black defendants as “high-risk” at nearly twice the rate it mislabeled white defendants.
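The disparity ProPublica measured is, in statistical terms, a gap in false-positive rates: among people who did not go on to reoffend, how often was each group nonetheless labeled “high-risk”? A minimal sketch of that calculation follows; the records are invented for illustration and are not the COMPAS data.

```python
# A minimal sketch of the fairness check behind the COMPAS finding: compare
# false-positive rates across groups. The records below are invented, not real data.
from collections import defaultdict

# Each record: (group, labeled_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("white", False, False), ("white", True, True), ("white", False, False),
]

counts = defaultdict(lambda: {"false_pos": 0, "negatives": 0})
for group, high_risk, reoffended in records:
    if not reoffended:                    # only people who did NOT reoffend
        counts[group]["negatives"] += 1
        if high_risk:                     # ...but were flagged high-risk anyway
            counts[group]["false_pos"] += 1

for group, c in counts.items():
    fpr = c["false_pos"] / c["negatives"]
    print(f"{group}: false-positive rate = {fpr:.0%}")
```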

In the interactive film installation Perception iO, artist and storyteller Karen Palmer speculates on the possible evolution of AI in policing if it continues along its current vector of change. The branching film narrative is shot from the point of view of a police bodycam, and the viewer assumes the vantage point of a police officer watching a training video. On screen, we witness an officer responding to a distress call and encountering, in turn, both a white protagonist and a Black protagonist. It is not clear whether either protagonist is experiencing a mental health crisis or is a crime suspect. The action unfolds, and escalates, according to the viewer’s emotional response to the events on screen, as determined by AI software that analyzes their eye movement and facial expressions. The viewer’s level of anxiety determines whether the officer in the scene will arrest, call for backup, or shoot the protagonist. An emotionally charged and confrontational work, Perception iO reflects on how bias in policing disproportionately impacts the lives of people of color. It also asks viewers to examine their own implicit bias and demonstrates the feedback loop at work between human biases and technological systems. Yet the work’s reliance on emotion recognition technology, which is highly contested in the field of AI for being demonstrably flawed and unreliable, also calls into question whether we can blindly trust the accuracy of these tools.

The question of how to deal with the problem of bias, in AI and in society, has no easy answers. Some artists, like Jake Elwes, seek to disrupt the normativity of the model by adding new training data. In Zizi – Queering the Dataset, Elwes trains a facial recognition algorithm on images of drag and gender-fluid individuals to move the model towards more ambiguous expressions of identity. Similarly, in the video essay Classes, Libby Heaney concludes her investigation into anti-working-class bias in AI systems with a call for “rewilding the algorithm,” culminating in a playful refusal of the disciplinary, prescriptive effects of classification that instead imagines systems of relation that are “not static but dynamic, not captive but adaptive.” Conversely, Johann Diedrick’s work questions the possibility of reforming these tools at all, even (and, perhaps, especially) from within the vaunted halls of tech corporations. Diedrick’s Dark Matters, which explores the absence of Black speech in voice training datasets, does not call for better representation, particularly when such tech can just as easily be weaponized as a surveillance tool against the Black community. Instead, the piece leads us towards a speculative narrative told in poetic, stream-of-consciousness prose, in which the AI is not saved, but sabotaged.

The question remains: can AI “nudge” us into becoming better versions of ourselves? If we understand the ways in which our perception is skewed or existing systems are flawed, perhaps we can use AI to help correct this imbalance, to provide a check on our unconscious biases. In the installation Your Balanced Media Diet, artist Ross Dowd and Trinity researcher Brendan Spillane take on the issue of bias in media consumption. Many news outlets tailor their coverage to a specific demographic, often skewing left or right in their reporting. This gap in objectivity has grown and intensified alongside political polarization and has been further exacerbated by algorithmic echo chambers, filter bubbles, and the spread of misinformation and disinformation online. No media source is perfectly objective, but outlets vary greatly in credibility and fact-checking. Playing with the visual aesthetic and language of healthy eating campaigns, Dowd and Spillane ask visitors to share their favorite media outlets and to consider which ones are most “nutritious” and which ones are more akin to “junk food.” They created an algorithm that gives visitors recommendations for how they might make better, more credible, and more diverse media choices to achieve a more balanced (and, hopefully, less biased) perspective. The playful nature of the work belies the dangers of a media landscape optimized for attention and engagement, particularly online, where algorithms have been shown to favor inflammatory, controversial, and factually incorrect content, helping it spread farther and faster. This contributes to ideological radicalization and the spread of conspiracy theories that have real-world consequences, something we have seen plenty of during the Covid-19 pandemic.

In an age of information overload, where the volume, speed, and scale of information flows far exceed the grasp of human processing capacity, we rely on AI more and more to support our sensemaking abilities. AI helps us find the signal in the noise; it does the pattern recognition for us. But there are many things that humans still do far better than AI, such as understanding meaning and context—skills that are vital for interpreting information accurately. Some believe that the best approach is one that combines the strengths of both human intelligence and AI to achieve an “augmented intelligence” that does not attempt to replace the human but rather keeps the “human in the loop.” Caroline Sinders and Aphra Kerr explore how AI and humans might work together to make the internet a safer place through content moderation. Social media companies like Facebook, Twitter, YouTube, and Instagram process billions of posts and days’ worth of content on any given day and are facing mounting pressure to take responsibility for the content shared on their sites. They use AI to automate the screening and moderation of this material, but still rely on human moderators to validate and ensure the accuracy of the algorithm. This approach may seem to offer the best of both worlds, but what is the experience of the human worker in a system designed to function at the speed of AI? Their interactive game draws attention to the high-pressure working environment that content moderators, and other precarious workers who labor alongside AI systems (and, ultimately, train them to replace themselves), experience on the job.
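One common way to keep a “human in the loop” at this scale is confidence-based routing: the model acts on its own only when it is very sure, and everything in the uncertain middle lands in a moderator’s queue. The sketch below shows the general pattern, not any particular platform’s system, and the toy classifier stands in for a real model.

```python
# A generic sketch of confidence-based routing for content moderation, not any
# particular platform's system. `toxicity_score` stands in for a trained classifier.
from dataclasses import dataclass

@dataclass
class Decision:
    post_id: int
    action: str      # "remove", "approve", or "human_review"
    score: float

def toxicity_score(text: str) -> float:
    """Stand-in for a model returning a probability that a post violates policy."""
    flagged_words = {"hate", "threat", "attack"}
    hits = sum(word in text.lower() for word in flagged_words)
    return min(1.0, 0.2 + 0.4 * hits)

def route(post_id: int, text: str,
          remove_above: float = 0.9, approve_below: float = 0.3) -> Decision:
    score = toxicity_score(text)
    if score >= remove_above:             # model is confident: act automatically
        return Decision(post_id, "remove", score)
    if score <= approve_below:
        return Decision(post_id, "approve", score)
    return Decision(post_id, "human_review", score)  # uncertain: a person decides

queue = [route(i, t) for i, t in enumerate([
    "lovely day at the park",
    "this is a hate filled threat",
    "they deserve to be attacked for this",
])]
for decision in queue:
    print(decision)
```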

With AI rapidly expanding into just about every sector of life and impacting decision-making that has far-reaching ripple effects and long-term implications, the need to scrutinize and interrogate algorithmic models to ensure they produce fair and equitable systems is vital, particularly during this moment of transition. In recent years, there has been a big push to develop tools for Explainable AI with the aim of making black-box algorithms more transparent and giving developers and, in some cases, end users the means to probe the logic behind AI systems or challenge their predictions. Methods like Counterfactual Explanations, a form of explainability developed at places like Accenture, Google, and IBM, use “what if?” scenarios to see how an AI’s analysis changes with different data inputs and to assess whether it treats various social groups fairly. But this raises the question: how should we define what is “fair”? When exploring different metrics of fairness, Kate Crawford, author of Atlas of AI, described a Google image search for CEOs in which women comprised only 1% of the results, even though they account for roughly 8% of CEOs globally. How should we tweak the algorithm to make the results more fair? Should it reflect the percentage of women CEOs we have today, even though that number is produced through generations of discrimination against women? Should women comprise 50% of the results, even though that does not reflect our current demographics? Should an AI system be optimized to correct for historical bias, perhaps weighting a model in favor of a group that has been subject to prejudice in the past, or treat everyone the same and risk reinforcing existing social inequality? These are political and philosophical questions that exceed the scope of technical representations of neutrality.
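Still, the “what if?” logic of a counterfactual explanation itself can be made concrete with a toy model: take an applicant the system rejects, then search for the smallest change to one input that flips the outcome. The scoring rule and numbers below are invented for illustration and are not any vendor’s method.

```python
# A toy counterfactual explanation: find the smallest change to one input that
# flips a model's decision. The scoring rule and numbers are invented for illustration.

def approve_loan(income: float, debt: float) -> bool:
    """Stand-in for a trained model's decision function."""
    score = 0.002 * income - 0.004 * debt
    return score >= 50

def counterfactual_income(income: float, debt: float, step: float = 500.0) -> float:
    """Smallest income increase that would flip a rejection into an approval."""
    if approve_loan(income, debt):
        return 0.0
    increase = 0.0
    while not approve_loan(income + increase, debt) and increase < 200_000:
        increase += step
    return increase

income, debt = 24_000, 3_000
print("Approved:", approve_loan(income, debt))
print("Counterfactual: approved if income were higher by",
      counterfactual_income(income, debt))
```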

Artists Risa Puno and Alexander Taylor explore questions of fairness, explainability, and agency in the interactive installation Most Likely to Succeed. To play the installation’s tabletop maze game, visitors must first undergo a virtual assessment in which an AI determines their probability of success in the game. Based on the player’s perceived level of skill and ability, as determined by their performance in the assessment, the AI assigns a certain amount of playing time, allocating more time to those it deems most likely to succeed. After playing, visitors receive a report that gives them insight into how the AI scored their assessment and what they could do to potentially improve their score and increase their playing time: a kind of counterfactual explanation that reveals what the user would need to change for the AI system to arrive at a different conclusion. To create Most Likely to Succeed, Puno and Taylor worked with researchers at Accenture Labs to understand the challenges and possible solutions of making AI fairer and less biased. How fairness should be defined within the algorithmic model was a central concern: should the model be optimized based on need, equity, or deservedness? These are questions AI researchers and developers must wrestle with as they create these systems. The model in the installation is optimized for deservedness, or meritocracy, wherein the system rewards those it deems most capable in an effort to improve efficiency and success rates while mitigating risk. But, as visitors will see, this model of fairness doesn’t necessarily work for everyone.

Such debates over the design and implementation of AI systems force us to contend with existing social bias and inequality, offering us a rare opportunity to choose what to value and optimize for as our world is rapidly changed by this powerful technology. As Kate Crawford points out, “when we consider bias purely as a technical problem, we are already missing part of the picture.” Crawford maintains that structural bias is a social issue first and a technical issue second: we must consider the social problems alongside the technical ones, as sociotechnical problems that rely on forms of classification which are always necessarily subjective, products of the social, political, and cultural biases of their time. The problem of bias, in our culture and in our AI systems, presents thorny political and philosophical questions that cannot be solved through computation alone; it requires an interdisciplinary dialogue that asks us to look closely at the values we are encoding into technology today and how they will shape our societies in the future.

 
 