Innovation Monitor: Bias, Algorithms & Unjust Systems
Welcome to the NYC Media Lab Innovation Monitor.
We were supposed to focus on the future of payments this week, but given what’s happening in the U.S. today, I wanted to dive deep into algorithmic bias and its impact on everything from the criminal justice system to healthcare. Here we go.
Algorithmic bias runs far deeper than AI’s opaque, “black box” tech and neglected datasets. MIT Tech Review published a piece this week on how predictive modeling technology has been used to perpetuate racism for decades.
The piece begins at a similar point in history, the protests and civil rights movement of the late 1960s, to untangle why today’s algorithms are quietly destroying the lives of people of color. We can start with Simulmatics Corporation, a data company founded by MIT political scientist Ithiel de Sola Pool. Simulmatics was part of a DARPA project aimed at spreading propaganda about the Vietnam War. Later, President Johnson wanted to deploy the company’s “behavioral influence technology” to “quell the nation’s domestic threat, not just its foreign enemies.”
The MIT Tech piece continues, “Under the guise of what they called a ‘media study,’ Simulmatics built a team for what amounted to a large-scale surveillance campaign in the ‘riot-affected areas.’ [The team] identified and interviewed strategically important black people…. [using the data] to trace information flow during protests to identify influencers and decapitate the protests’ leadership.”
It was Cambridge Analytica before Cambridge Analytica. And this is the foundation of our criminal justice information systems.
For decades, algorithms have been left largely unchecked. Companies like Palantir, Clearview, Banjo, and the social media platforms we all use have handed authorities Orwellian powers to track us everywhere we go, online and off, while proprietary algorithms have unfairly decided the fates of those arrested.
With no governing body to enforce fair algorithmic practices, no regulatory or compliance laws to ensure the AI on the market is fair, and surveillance companies operating in secrecy, we’re stuck with opaque tech and nobody to hold accountable.
Researchers and journalists have played a vital role in highlighting the lack of transparency and the algorithmic unfairness across the country. ProPublica’s recent look at machine bias in criminal justice algorithms showed that only 20% of people predicted to commit violent crimes actually went on to do so; a coin flip in the courtroom would have been nearly as accurate at predicting recidivism. “The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.”
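For readers curious about the mechanics behind findings like that, here’s a minimal Python sketch, using entirely made-up numbers rather than ProPublica’s data, of how a false positive rate can be compared across groups; a higher rate for one group means its members are more often wrongly flagged as future criminals:

```python
# A hypothetical sketch, not ProPublica's code or data: compare how often
# each group's non-reoffenders were still flagged as high risk.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were flagged high risk anyway."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Made-up records for two groups: (flagged_high_risk, reoffended)
group_a = [{"flagged_high_risk": f, "reoffended": r}
           for f, r in [(True, False), (True, False), (False, False),
                        (False, False), (True, True)]]
group_b = [{"flagged_high_risk": f, "reoffended": r}
           for f, r in [(True, False), (False, False), (False, False),
                        (False, False), (True, True)]]

print(f"Group A false positive rate: {false_positive_rate(group_a):.0%}")  # 50%
print(f"Group B false positive rate: {false_positive_rate(group_b):.0%}")  # 25%
```

ProPublica’s analysis is of course far more rigorous than this toy example, but the core comparison, error rates broken out by group, is the same.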
NYU AI Now’s Algorithmic Accountability Policy Toolkit consists of FAQs, definitions, resources, and links to deep dives around the US government’s use of these systems. It offers the type of transparency we need when surveillance companies, largely shielded from oversight, are selling their questionable products to law enforcement agencies and the government. When researchers and journalists are granted access to these algorithms, they tend to find bias the company was oblivious to. One example was published in Science last October:
“The study… concluded that the algorithm was less likely to refer black people than white people who were equally sick to programmes that aim to improve care for patients with complex medical needs. This type of study is rare, because researchers often cannot gain access to proprietary algorithms and the reams of sensitive health data needed to fully test them.”
Institutions around the world are now trying to publish and enforce ethical algorithm guidelines; you can read more about these efforts in Meeri Haataja’s great Next Web piece below. And be sure to check out the ProPublica piece and AI Now toolkit in today’s features, all incredibly important reads.
Next week, we will be covering the use of technology in surveillance. This newsletter generally seeks to strike an optimistic and positive tone on the impact of technology, but we feel it’s also important to spend a bit of time learning more about the history and applications of technology to civil society, especially given our current moment. We understand these may be uncomfortable and difficult topics to raise and discuss. Please reply or email me at email@example.com with any thoughts or feedback.
We wish you and your community safety, calm, and solidarity as we support each other through this unprecedented time. Thank you again for reading.
ProPublica’s dive into criminal risk assessment algorithms. It’s a frightening look at the government’s lack of proper vetting when employing algorithms that change people’s lives. It’s made more impactful by account after account of these systems predicting recidivist behavior in people of color, even when their charges are petty compared to those of white defendants with extensive criminal backgrounds who receive lower “scores”:
“Jones, who had never been arrested before, was rated a medium risk. She completed probation and got the felony burglary charge reduced to misdemeanor trespassing, but she has still struggled to find work. ‘I went to McDonald’s and a dollar store, and they all said no because of my background,’ she said. ‘It’s all kind of difficult and unnecessary.’”
ProPublica — 22 min read
Read More
Algorithmic Accountability Policy Toolkit
AI Now’s Algorithmic Accountability Policy Toolkit is a compendium of FAQs, definitions, resources, and links to deep dives around the US government’s use of these systems. All the information is highly accessible — definitely bookmark this as a reference. We also liked the mention of LittleSis, a “free and open database detailing the connections between people and organizations. Little Sis can be used to track the relationships between government officials, vendors, lobbyists, business leaders, philanthropic organizations, and independent donors.”
AI Now Institute — 34 pages
Read More
What Do Coronavirus Racial Disparities Look Like State by State?
NPR analyzed COVID-19 demographic data from the COVID Racial Tracker, finding that:
- “Nationally, African-American deaths from COVID-19 are nearly two times greater than would be expected based on their share of the population. In four states, the rate is three or more times greater.”
- “In 42 states plus Washington D.C., Hispanics/Latinos make up a greater share of confirmed cases than their share of the population. In eight states, it’s more than four times greater.”
- “White deaths from COVID-19 are lower than their share of the population in 37 states and the District of Columbia.”
NPR — 11 min read
Read More
How Certification Can Promote Responsible Innovation in the Algorithmic Age
We tend to expect FDA-approved drugs to be safe for use — that trust partly stems from regulatory and compliance laws keeping big pharma in check. Why isn’t there an FDA for algorithms? Meeri Haataja, CEO and co-founder at Saidot — a company that helps firms deploy responsible AI — notes that one of the challenges holding back the autonomous and intelligent systems (A/IS) industry is the lack of an oversight body that can communicate to the public that products are safe and trustworthy via certifications.
The Next Web — 6 min read
Read More
Walmart Employees Are Out to Show Its Anti-Shoplifting AI Doesn’t Work
Walmart has been working with Everseen, a small Ireland-based AI firm, since 2017. The retail giant must have seen some value in the tech, because it has deployed it in thousands of stores to prevent shoplifting at self-checkout counters. So it was peculiar when Walmart employees (under the banner “Concerned Home Office Associates”) reached out to Wired with complaints about the software, going as far as to call it “Neverseen” because of its frequent mistakes.
The Associates were so frustrated they produced a video demonstrating the tech’s shortcomings at the checkout point (complete with elevator music and captions). Especially concerning during the pandemic are the software’s false positives, which force employees to confront customers up close. “AI is now creating a public health risk,” said one worker.
Ars Technica — 2 min read
Read More
How Well Do IBM, Microsoft, and Face++ AI Services Guess the Gender of a Face?
MIT Media Lab analyzed three gender classification products from IBM, Face++, and Microsoft, finding that:
- “All companies perform better on males than females with an 8.1% — 20.6% difference in error rates.”
- “All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% — 19.2% difference in error rates.”
- “IBM had the largest gap in accuracy, with a difference of 34.4% in error rate between lighter males and darker females.”
Gender Shades — 5 min watch
Watch Now
Voicing Erasure
A recent study by Stanford PhD student Allison Koenecke revealed racial disparities in five popular speech recognition systems, “with the worst performance on African American Vernacular English speakers.” The research inspired “Voicing Erasure,” a spoken word piece recited by “champions of women’s empowerment and leading scholars on race, gender, and technology.” Watch it below!
Algorithmic Justice League — 3 min watch
Watch Now
This Week in Business History
June 8, 1912: Universal Pictures is created by Carl Laemmle
Carl Laemmle, known affectionately as “Uncle Carl,” had a reputation as the “best natured and least neurotic” of the studio bosses. He merged his Independent Moving Pictures Co. with a number of smaller film companies. Legend has it that the name was chosen after Laemmle saw a Universal Pipe Fittings truck drive by.
The company effectively began Hollywood’s “star system” by giving actress Mary Pickford a huge salary and top billing. The conventional wisdom of the time was to withhold prominent naming and billing of stars because it might lead to demands for huge salaries; Laemmle gave her both.
Carl Laemmle gives an update on Universal via the newsreel.