We entrust machines with our lives and livelihoods as we offload further decision-making onto artificial intelligence. Modem worked together with Saba Babas-Zadeh, researcher and DE&I catalyst, and Monique Todd, artist and writer, to explore the unconscious bias built into AI systems and map potential strategies for a more inclusive tech landscape.
In the opening two minutes of Alex Garland’s 2014 thriller Ex Machina, Caleb Smith – a young programmer employed at a giant internet search firm – wins a trip to a remote mountain estate owned by the company’s CEO. As Caleb discovers the news via email, a slew of congratulatory messages streams through his phone before he is applauded by the office team. All eyes are on him at this glorious moment, including the eyes of his cell phone and desktop cameras, whose face-tracking features splice this short montage. These brief clips foreshadow Caleb’s eventual entanglement with the nearly “human” android he’s enlisted to analyze; an android that can return his gaze.
AI fantasies fixate on this idea of machine sentience, and sci-fi tropes weigh heavily on the fear that machines might eventually be able to look back at us. The reality of our day-to-day lives, however, reveals just how extensively smart algorithms already analyze our interactions and respond to our habits. From virtual assistants to facial recognition, machine-learning technology shapes how we see the world based on the way it sees us. With this in mind, the failures of such intelligence in sensitive fields such as healthcare, employment, human rights and law expose the biased data feeding technologies that are otherwise paraded as impartial. Facebook’s AI labeled Black men as primates in a video circulating on its feed, and the company later apologized for the “algorithmic error”. Speech recognition technology, now ubiquitous across most devices, still struggles to register accents and speech patterns that differ from Western tonalities. These developments, pioneered by an ever-shrinking list of platform providers, have shaped a terrain where “convenience” is the presiding moral guidepost, at the cost of the safety and wellbeing of those who are most marginalized in society. But who can in fact make something applicable and accessible to all? And what of the inadequate efforts at diversity, where rigged systems attempt to account for, include and acknowledge marginalized groups only as an afterthought? These issues are undoubtedly urgent, but speed is hardly an effective balm on its own. As we offload further decision-making processes onto AI, entrusting machines with our lives and livelihoods, it is imperative that we address the embedded logics that lead to violence.
PATTERN MATCHING
Joy Buolamwini, computer scientist and founder of the Algorithmic Justice League, homes in on the ways technology can reiterate the questionable motivations of its creators in her Time article “Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It”. Seductive aesthetics and the appeal of efficiency might overshadow these biases, but technology cannot escape them. “We often assume machines are neutral, but they aren’t. It is time to re-examine how these systems are built and who they truly serve,” writes Buolamwini. “When people’s lives, livelihoods, and dignity are on the line, AI must be developed and deployed with care and oversight.” Innovations that we take for granted, such as the camera, still uphold damaging ideals. Upwards of five billion images are taken daily, and each of those images inherits a history of embedded racial preference. Photographs made with color-film technology from the 1950s onwards, for instance, were corrected against an image of a white brunette woman. This visual benchmark, dubbed the Shirley card, was an industry staple for Kodak lab technicians until a multi-racial version was introduced in the 1990s. Digital photography has since removed the need for such crass calibration, but the process haunts image production to this day, with excess labor applied in some pre- and post-production stages to amend the bias of the lens. As Harvard Professor Sarah Lewis writes, “by categorizing light skin as the norm and other skin tones as needing special corrective care, photography has altered how we interact with each other without us realizing it. Analogue and digital.”
The Shirley card is, in essence, an example of pattern matching, where a system is optimized to carry out its task through the repeated recognition of a set of reference inputs. AI algorithms build their knowledge on this process of data analysis in order to function. Machine-learning expert Lance Eliot stresses concern over the learned biases of devices through his analysis of the self-driving car – a sci-fi dream made real through algorithmic learning. According to Eliot, the self-driving car will need to contend with the biases that drivers already internalize, as this is the data that will inevitably provide the basis for its knowledge. For example, Black pedestrians are twice as likely to be driven past when attempting to enter a crosswalk, a prejudice so entangled with general crosswalk data that it is likely to inform the everyday functioning of a “smart” vehicle. More immediately, medical devices such as pulse oximeters – which shine light through the skin to measure blood-oxygen levels – are three times more likely to miss dangerously low levels in Black patients than in their white counterparts.
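To make the mechanics concrete, the sketch below is a purely illustrative toy model – it is not drawn from Eliot’s work, and every group, number and variable in it is hypothetical. A classifier trained on synthetic, historically skewed labels quietly reproduces that skew, wrongly rejecting qualified members of the under-recorded group far more often, even though the underlying “qualification” is generated identically for both groups.

```python
# "Bias in, bias out" in miniature: a toy classifier trained on
# historically skewed labels reproduces that skew at prediction time.
# All data here is synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: 0 = majority group, 1 = marginalized group.
group = rng.integers(0, 2, size=n)

# True underlying qualification is identical across groups...
qualified = rng.random(n) < 0.5

# ...but the historical labels the model learns from under-record
# qualified members of group 1 (a stand-in for biased past decisions).
label = qualified & ~((group == 1) & (rng.random(n) < 0.3))

# A crude "model": a single noisy feature correlated with the biased
# label, mimicking pattern matching on flawed training data.
feature = label + rng.normal(0, 0.4, size=n)
threshold = feature.mean()
predicted = feature > threshold

for g in (0, 1):
    mask = (group == g) & qualified
    fnr = 1 - predicted[mask].mean()  # qualified people wrongly rejected
    print(f"group {g}: false-negative rate = {fnr:.2f}")
```

Run as written, the two printed false-negative rates diverge sharply: the skew lives in the labels, not in any explicit rule about group membership.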
Large language models, like OpenAI’s GPT-4, possess inherent flaws too, displaying biases that result from training on vast quantities of text riddled with societal prejudices and stereotypes. These biases can inadvertently reinforce and intensify existing discriminatory beliefs, as demonstrated by Microsoft’s Bing AI, which recently “hallucinated” antisemitic remarks. Add to this the surveillance strategies compromising civil liberties across varying geographies – including South Africa, where over six thousand CCTV cameras track and trace individual movement in Johannesburg – and we have a recipe for digital apartheid. It’s evident that computational bias is not a glitch in the system but a continuation of social and cultural bigotry with extensive historical weight. If we are to tackle this, we must dig behind the scenes to survey the teams and data that feed these infrastructures.
BIAS IN, BIAS OUT
Technology is a social currency. Those who can obtain and use the newest innovations are more likely to navigate day-to-day life with minimal friction. The visual language of such tech, sleek with chrome and glass fittings, upholds (and boosts) the status of the user. But here lies the great deceit: use and ownership aren’t the only requirements for tech to serve its advertised purpose. More often than not, devices perform optimally only if their users are white cisgender men. “Whiteness and maleness are implicit,” writes Caroline Criado Perez, drawing on sociologist Pierre Bourdieu in her book Invisible Women. “They are unquestioned. They are the default. And this reality is inescapable for anyone whose needs and perspective are routinely forgotten.” Naturally, it follows that the composition of design and development teams across most industries is demographically unvaried, and AI is no exception. Less than 22% of AI researchers and only 5% of software developers are women. What’s more, according to the People of Color Tech Report, an average of just 2.7% of executives in senior roles at 10 major tech companies are Black.
These statistics are mirrored by a similar dearth of diverse data, which Criado Perez names the “data gap”, whereby Black women, women of color, disabled women, working-class women and the LGBTQIA+ community are rendered invisible whilst also being highly surveilled. Meta is currently looking to combat such bias with Casual Conversations, an open-sourced data set of over forty-five thousand videos of people having non-scripted conversations. Such data will allow researchers to test and create more inclusive speech recognition tools. The company’s investment in the deceptively straightforward concept of fairness aims to promote a humane evaluative model that operates at a technical level but also takes into account the cultures, communities and politics that shape how a product or system is used. “A holistic approach to fairness considers not only whether algorithmic rules are being applied appropriately to all, but also whether the rules themselves — or the structure in which they are situated — are fair, just, and reasonable.” AI cannot perceive all life as equally valuable when the data it is fed, and the teams that feed it, account for everyone but marginalized groups. It becomes increasingly clear that an attentive and sensitive approach to data set compilation is a structural imperative for inclusive tech and real accountability.
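As an illustration of what such data makes possible – and only as a hedged sketch, with hypothetical groups, results and numbers rather than anything drawn from Meta’s own tooling – disaggregated evaluation simply means reporting a model’s performance per group rather than as a single average:

```python
# A minimal sketch of disaggregated evaluation: the kind of per-group
# audit that data sets like Casual Conversations are meant to enable.
# The records and group labels below are hypothetical placeholders.
from collections import defaultdict

# Each record: (group_label, model_was_correct) - illustrative results.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

by_group = defaultdict(list)
for group, correct in results:
    by_group[group].append(correct)

# Report accuracy per group instead of a single aggregate number,
# so disparities are surfaced rather than averaged away.
for group, outcomes in sorted(by_group.items()):
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{group}: accuracy = {accuracy:.2f} (n = {len(outcomes)})")

overall = sum(correct for _, correct in results) / len(results)
print(f"overall: accuracy = {overall:.2f}")
```

An aggregate score can hide the fact that one group is served far worse than another; breaking the number apart is the minimum technical gesture the holistic approach described above demands.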
THE RIGHT TO VISIBILITY AND OPACITY
“Can we make the world a better, less fascist and more sensually appealing place from our devices?” asks theorist, artist, educator and writer Mandy Harris Williams. This question frames the current tech landscape as a space for reparative and transgressive schemes, of which Williams’s own social media intervention #BrownUpYourFeed is an example. A hashtag that now yields upwards of 36,000 posts, #BrownUpYourFeed was conceived in 2018 to interrupt the slew of cis-hetero-white content that overwhelms our feeds. Sociologist and Princeton University Professor Ruha Benjamin gives the forces Williams contends with a name: the New Jim Code, a concept that riffs off writer and civil rights activist Michelle Alexander’s analysis in The New Jim Crow, articulating the “combination of coded bias and imagined objectivity” in contemporary innovations. Like Williams, Benjamin favors accessible action, pointing to the freely available Our Data Bodies Digital Defense Playbook as a point of reference for activities and skill sharing that “engender power, not paranoia”. Girls Who Code sits adjacent to such efforts, providing educational programs to girls across the world to combat the gender gap in tech.
When navigating and upending such fraught terrain, multiple approaches must run in parallel across a variety of levels – which undoubtedly includes big money. The world of startups is a hotbed for new developments that can either advance progress or add fuel to an already furious fire. BBG Ventures and GOODsoil are examples of venture capital firms looking to nurture women-led ideas and diverse global talent respectively, valuing the disruptive potential of underrepresented visionaries. As for AI innovations that present alternative approaches, Q – a gender-neutral voice assistant – interrupts the binary world set up by Siri, Alexa and Google Assistant, whilst Gender Shades takes a step back to evaluate coded bias in facial analysis algorithms in order to dismantle embedded phenotype hierarchies. The Monk Skin Tone (MST) scale is a recent intervention from Google to evaluate the fairness of its recognition AI: a 10-skin-tone standard will be adopted across all its products, replacing the industry-standard 6-tone range of the Fitzpatrick scale established in the 1970s – a switch that other tech behemoths may duplicate.
Such developments inspire hope for a networked future that does away with binaries and bro-led visioning. It is worth acknowledging, however, that our networked future will not be so easily remedied through ambitions for inclusivity and visibility alone. We could argue that this mode of thought in itself enforces another binary, whereby opacity is automatically cast as a threat. The right not to be seen, or to be seen on one’s own terms, is also worth aspiring towards. “We are encrypted: how we are coded is not meant to be easily read. We reject the conflation of legibility and humanity,” states curator and writer Legacy Russell in her tech manifesto Glitch Feminism. “Our unreadable bodies can render us invisible and hyper-visible at the same time. As a response to this, we work together to create secure passageways both on- and offline to travel, conspire, collaborate.” Here we land on a less straightforward yet energizing point of departure. For AI to truly work in the favor of the most marginalized, we must navigate desires for inclusion whilst also engaging with the disruptive potential of opacity as a means to trouble the excessive surveillance that visibility naturally promotes. After all, we are not binary beings, and our machines don’t have to be either.