DataDownload: Tech companies and BLM. Actions or Words?

NYC Media Lab
7 min readJun 13, 2020


A weekly summary of all things Media, Data, Emerging Tech

Bad news. No one likes to read it. It’s depressing. This is why even as the number of cases and hospitalizations goes higher, the headlines still herald the re-opening of hair salons, tattoo parlors, and beaches.

Somehow the coverage of the massive nationwide marches seems to rise above, providing a groundswell of calls for social change and a deep inward look at longstanding inequities.

So, please do dig in. Read, share, comment, explore. Ask the owners of Reddit about racism, as The Atlantic did with such clarity. Ask about the AI blind spots in COVID-19, as the World Economic Forum did. Ask about the disparities between what tech companies say about racial justice and what they do about it, as the LA Times reports.

Tech is at the forefront of changes and challenges ahead. And solutions. At the NYC Media Lab we’re focused on finding and sharing solutions — and empowering the companies, students, and faculty who build them.

Onward.

Steven Rosenbaum
Managing Director
The NYC Media Lab

Must-Read

Tech Companies Say They Support Racial Justice. Their Actions Raise Questions.

The distance between what tech companies say and what they do has become a widening gulf in the wake of nationwide calls for racial justice. The LA Times provides a detailed walkthrough of what Nextdoor, Amazon, Google, Facebook, and Reddit have publicly stated, and how their underlying policies often conflict. For example:

Nextdoor

What it said: “Black Lives Matter. You are not alone. Everyone should feel safe in their neighborhood. Reach out. Listen. Take action.”

What the record shows: “Nextdoor users, who use the service to share and read information about their immediate neighborhoods, often post unverified or unsubstantiated reports of ‘suspicious’ people of color and black people on its ‘crime and safety’ pages.”

6 min read

Read More

Amazon Bans Police From Using Its Facial Recognition Technology for the Next Year

Just two days after IBM said it would stop offering its facial recognition products, citing concerns about misuse, Amazon announced a one-year moratorium on law enforcement use of its Rekognition platform. This is less an ethical stance than an overdue concession: Amazon has dodged backlash for years.

When MIT Media Lab researcher and Algorithmic Justice League founder Joy Buolamwini and AI Now tech fellow Deborah Raji released research indicating Amazon’s system had difficulty identifying women and darker-skinned faces, AWS VP of AI Matt Wood issued a response, concluding that “the answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media.” (You can read Buolamwini’s response here.)

Soon after, Bloomberg reported that Yoshua Bengio and 25 other researchers had asked Amazon to stop selling Rekognition to police. The work of Buolamwini, Raji, and other researchers, external and internal backlash, the current protests, and last year's failed pilot in Orlando may all have tipped the scale.

4 min read

Read More

Tech+Media

Machine Bias

ProPublica's dive into criminal risk assessment systems is a frightening look at institutions' lack of accountability and vetting for algorithms that can easily destroy a person's life. There's account after account of these systems predicting recidivist behavior in people of color — even when their charges are petty compared to those of white defendants with extensive criminal backgrounds, who receive lower "scores":

“Jones, who had never been arrested before, was rated a medium risk. She completed probation and got the felony burglary charge reduced to misdemeanor trespassing, but she has still struggled to find work. ‘I went to McDonald’s and a dollar store, and they all said no because of my background,’ she said. ‘It’s all kind of difficult and unnecessary.’”

On a related note, AI Now's Algorithmic Accountability Policy Toolkit is a compendium of FAQs, definitions, resources, and links to deep dives on the US government's use of such systems. The Toolkit also highlights LittleSis, a "free and open database detailing the connections between people and organizations. LittleSis can be used to track the relationships between government officials, vendors, lobbyists, business leaders, philanthropic organizations, and independent donors."

22 min read

Read More

Reddit Is Finally Facing Its Legacy of Racism

Some of Reddit's executive team have been staunch supporters of radical free speech — to the point where CEO Steve Huffman said that "obvious open racism" was not against the platform's rules. Last week Huffman did an about-face, issuing a company letter stating that Reddit does "not tolerate hate, racism, and violence," and sharing the letter on Twitter. The backlash on Twitter was immediate and intense — and included a response from former interim CEO Ellen Pao.

The response from redditors was even larger: an open letter to Huffman garnered over 24k upvotes. The letter points out that "nearly six years ago, dozens of subreddits signed the original open letter to the Reddit admins calling for action. While the Reddit admins acknowledged the letter and said it was a high priority to address this issue, extremely little has been done in the intervening years." (You can read Huffman's comment reply here.)

11 min read

Read More

OpenAI's Text Generator Is Going Commercial

“OpenAI’s leaders claim that only by commercializing its research for the benefit of investors can it raise the billions needed to keep pace on the frontiers of AI.”

OpenAI’s capped-profit about-face is well documented. But it’s still a bit of a surprise to see how an organization that once said its full GPT-2 model was too dangerous to be released… is now opening up a private beta API based on its GPT-3 model. The API link provides a sample request and a cached response: you input some text and get some back.

“That may sound limiting, but by crafting the right input it’s possible to steer the software to perform different tasks. The goal is to try and massage it to riff on the statistical language patterns from a particular part of the internet.”
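That "text in, text out" interface can be sketched in a few lines. Note that the endpoint URL, header, and parameter names below are assumptions for illustration only — they are not OpenAI's documented beta API.

```python
import json

# Assumed endpoint for a text-completion API of this kind (hypothetical).
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, max_tokens=64):
    """Package a prompt as a JSON request: text goes in, text comes back."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
            "Content-Type": "application/json",
        },
        "body": json.dumps({"prompt": prompt, "max_tokens": max_tokens}),
    }

# "Crafting the right input" to steer the model: a Q&A-shaped prompt nudges
# the model toward the statistical patterns of question-answering text.
req = build_request("Q: What is the capital of France?\nA:")
print(json.loads(req["body"])["prompt"])
```

The steering happens entirely in the prompt: there is no task-specific endpoint, just different shapes of input text.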

6 min read

Read More

What We're Watching

The News Industry Is Being Destroyed

On a recent episode of Patriot Act, host Hasan Minhaj described the near-unwatchability of TV news, including local TV stations. Some of the last bastions of truth are local papers, which continue to produce high-impact investigative journalism (the Epstein story came from the Miami Herald; the Catholic Church scandal from the Boston Globe). Despite this, local newspapers are struggling. The threat isn't just declining revenue — it's also the vulture funds buying up these dying papers.

21 min watch

Watch Now

What We're Listening To

Podcast: Techmeme Ride Home

Brian McCullough is an author, podcaster, and tech omnivore. Every day he absorbs current tech news and turns it into a fun, expansive, and engaging collage of what’s going on in the industry. The Ride Home podcast is produced with Techmeme, and Brian tells me that even though people aren’t “riding home” right now, appetite for a concise digest of tech news remains strong. We think it’s a must-listen!

Listen Now

Virtual Events

Virtual Event: F50 Global Capital Summit
Date: June 16–17
Silicon Valley’s largest international investor conference. The Summit finds and connects the next generation of world-changing tech innovators with partnerships to power their long-term impact. Register Here.

Virtual Event: Virtual Ghost Road — Beyond the Driverless Car
Date: June 16, 5PM-6PM
Ghost Road explains where we might be headed together in driverless vehicles, and the choices we must make as societies and individuals to shape that future. Register Here.

A Deeper Look

The "Flatten the Curve" Chart Was Ugly and Not Scientifically Rigorous. Why Did It Work So Well?

The now-famous graph is from a 2007 CDC report on the "Early, Targeted, Layered Use of Nonpharmaceutical Interventions" in a pandemic. It wasn't particularly remarkable — "two ugly humps, one purple, one steep…. Too artless to be art but lacking the hard empiricism we expect of science."

So why has it been so effective in 2020? Mother Jones thinks it's this in-betweenness that helped people take the threat seriously. Real data "would have detracted from the simplicity of the graph's message," notes Google data editor Simon Rogers.

4 min read

Read More

Transactions & Announcements

Flatfile Raises $7.6M From Two Sigma Ventures, Google's AI Fund, and Others to Make Data Onboarding Easy for Enterprises

Domino Data Lab Lands $43M in Funding, Launches Model Monitoring Tool

Just Eat Takeaway to Acquire Grubhub for $7.3 Billion


NYC Media Lab · 370 Jay Street, 3rd floor · Brooklyn, New York 11201 · USA



Written by NYC Media Lab

NYC Media Lab connects university researchers and NYC’s media tech companies to create a new community of digital media & tech innovators in New York City.
