DataDownload: Remembering RBG
A weekly summary of all things Media, Data, Emerging Tech
RBG.
Her extraordinary intellectual firepower and her relentless drive to fight for justice echo in my head. If you haven’t watched the Notorious RBG documentary, it’s linked below. If you have, maybe watch it again.
We’ve got strong material in this newsletter. Andrew Yang is keynoting our SUMMIT — and he is a passionate public servant and the winner of the Vilcek Prize at NYU. Jay Rosen interviews Alex Stamos about threat modeling. And Reed Hastings talks about a workplace without rules in his new book.
But what’s on my mind today is RBG: honoring her career and remembering her passing. Shana Tova to our Jewish friends.
Please reach out with ideas, suggestions, or feedback; all of it is welcome: Steve@nycmedialab.org.
Steve
Steven Rosenbaum
Managing Director
The NYC Media Lab
Steve@NYCMediaLab.org
Must-Read
Ruth Bader Ginsburg Changed America Long Before She Joined the Supreme Court
Supreme Court Justice Ruth Bader Ginsburg has died of pancreatic cancer, the court said Friday. She was 87. Ginsburg was the second woman ever appointed to the court and became a liberal icon for her sharp questioning from the bench and her intellectually rigorous defenses of civil liberties, reproductive rights, First Amendment rights, and equal protection under the law.
13 min read
Read More
Andrew Yang on 2020, UBI, and Fixing Government
Andrew Yang made a second appearance on The Ezra Klein Show to discuss universal basic income, AI’s effect on the economy, and his political future. You can listen to the 90-minute podcast here, and the linked Vox piece has the full transcript. Here’s Yang’s comment on the impact of AI on the labor force:
“One of the reasons why I was so concerned about the impact of AI on our labor force as well is that I know how many, frankly, inefficient jobs there are in a lot of these major companies. If you gobble up another company and you have two different systems, you might keep dozens, even hundreds of folks around just to keep the systems talking to each other. A lot of our work is more replaceable than we like to think. And what’s funny is if you ask Americans about this, they will actually say a majority of other people’s jobs are automatable and subject to technological replacement. And then if you ask them about their own job, the vast majority will say, not my job. You know, that’s just the way we’re wired.”
34 min read
Read More
Tech+Media
What Newsrooms Can Learn From Threat Modeling at Facebook
Jay Rosen — who will be presenting his “Big Idea To Fix the Internet” on Day 1 of the NYC Media Lab Summit, on Sarah Fischer’s panel along with Cory Doctorow and Nina Jankowicz — recently interviewed Alex Stamos, former chief security officer at Facebook. Before we get into Rosen and Stamos’s insights on applying threat modeling to the newsroom, let’s define what threat modeling and threat ideation actually are.
Threat modeling “is a formal process by which a team maps out the potential adversaries to a system and the capabilities of those adversaries, maps the attack surfaces of the system and the potential vulnerabilities in those attack surfaces, and then matches those two sets together to build a model of likely vulnerabilities and attacks.” Threat ideation is what people often mean when they say threat modeling, according to Stamos. This is a process “where you explore potential risks from known adversaries by effectively putting yourself in their shoes.”
When Rosen suggests employing threat models in the newsroom, he doesn’t mean protecting against attacks on IT systems, but against broader threats: a constitutional crisis, or the manipulation of the news system by external forces. Rosen suggests a simple hierarchy of threats as a start. Here’s Stamos’s take: “I think an industry-wide threat ideation and modeling exercise would be great. And super useful for the smaller outlets. One of the things I’ve said to my Times / Post / NBC friends is that they really need to both create internal guidelines on how they will handle manipulation but then publish them for everybody else. This is effectively what happens in InfoSec with the various information sharing and collaboration groups.”
21 min read
Read More
Researchers Made a QAnon AI Bot Because Things Aren’t Already Bad Enough, Apparently
We’ve oohed and ahhed at GPT-3’s impressive but ultimately flawed output, which, frankly, can be hilarious. Beyond the hype and entertainment, some have pointed out the model’s potential to perpetuate discrimination, though those discussions have drawn far less attention. Fortunately, they’re getting more focus as the hype levels off.
Researchers at the Middlebury Institute have published a report on how extremists could weaponize powerful natural language technology. The team fed GPT-3 conspiracy-theory material and tracked whether the model regurgitated any of it back at them. It worked… really well. Check out co-author Alex Newhouse’s tweet thread for some examples.
3 min read
Read More
AI Ruined Chess. Now, It’s Making the Game Beautiful Again
Because computers have helped expose countless chess strategies, “for quite a number of games on the highest level, half of the game — sometimes a full game — is played out of memory. You don’t even play your own preparation; you play your computer’s preparation,” says former world chess champion Vladimir Kramnik. Somewhat counterintuitively, Kramnik recently teamed up with DeepMind to bring back some creativity that he believes chess has lost (Bobby Fischer complained about this in 1996 as well).
Kramnik and DeepMind used AlphaZero to explore new variants of the game. “Kramnik saw flashes of beauty in how AlphaZero adapted to the new rules. No-castling chess provoked rich new patterns for keeping the king safe, he says. A more extreme change, self-capture chess, in which a player can take their own pieces, proved even more alluring. The rule effectively gives a player more opportunities to sacrifice a piece to get ahead, Kramnik says, a tactic considered a hallmark of elegant play for centuries.”
7 min read
Read More
What We’re Watching
RBG
At the age of 85, U.S. Supreme Court Justice Ruth Bader Ginsburg has developed a lengthy legal legacy while becoming an unexpected pop culture icon. But the unique personal journey of her rise to the nation’s highest court has been largely unknown, even to some of her biggest fans — until now. RBG explores Ginsburg’s life and career. From Betsy West and Julie Cohen, and co-produced by Storyville Films and CNN Films.
2 min watch
Watch Now
What We’re Listening To
Podcast: What if Your Company Had No Rules?
Can corporate rules kill creativity and innovation? Netflix co-founder Reed Hastings thinks so — he even wrote a book about it, No Rules Rules. Freakonomics Radio speaks with Hastings on his company’s unorthodox culture, his new book, and “why for some companies the greatest risk is taking no risks at all.”
55 min listen
Listen Now
Virtual Events
Virtual Event: The NYC Media Lab SUMMIT 2020. Building the Future Together
Date: October 7–9
We’ll bring together 1,000+ virtual attendees from NYC Media Lab’s core community — including media and tech executives, university faculty, students, investors, and entrepreneurs — to explore the future of media and tech in New York City and beyond. Register Here.
Virtual Event: How the COVID-19 lockdown impacts data analysis and AI services
Date: September 22, 5:30PM-6:30PM
Afshin Goodarzi of 1010 Data chats with Ahmed Elsamadisi of Narrator.Ai about how the COVID-19 lockdown impacts data analysis and AI services. Register Here.
Virtual Event: ETL Speaker Series — Shellye Archambeau, Verizon
Date: September 23, 4:30PM
Shellye Archambeau is an experienced CEO and board director with a track record of building brands, organizations, and high-performance teams. She currently serves on the boards of Verizon, Nordstrom, Roper Technologies, and Okta. Register Here.
A Deeper Look
Gary Marcus: COVID-19 Should Be a Wake-Up Call for AI
Gary Marcus recently spoke at the Intelligent Health AI conference, arguing that there’s been too much attention on AI research that doesn’t change the world for the better. As this is Marcus talking, we get a critical view of deep learning’s role in diverting that attention. “The reality is that deep learning works best in a regime of big data, but it’s worse in unusual cases … so if you have a lot of routine data then you’re fine. But if you have something unusual and important, which is everything about COVID since there is no historical data, then deep learning is just not a very good tool.”
Ultimately, Marcus’s takeaway was that the pandemic should spur researchers to rethink the problems they’re trying to solve with AI. “COVID-19 is a wake-up call, it’s motivation for us to stop building AI for ad tech, news feeds, and things like that, and make AI that can really make a difference.”
4 min read