DataDownload — California bans deepfakes, Netflix employs astrophysicists & more

NYC Media Lab
6 min readOct 13, 2019



Hi there. Glad you clicked to open. I’m your new DataDownload Newsletter guide. Some of you know me, and for some — I’m a new voice. I’m Steve, and I’m the new Media Lab Managing Director. You can check out my spiffy intro here.

This week, the Tech/Media space is pretty darn frothy. Slam journalism is worth paying attention to, because it breaks trust in a way that is hard to repair. Deepfakes in politics and porn (complete with P-P alliteration) are being addressed, but can a law stop them? But first…

Must-Read

What Is CTRL-Labs, and Why Did Facebook Buy It?
If you’ve seen CTRL-Labs CEO Thomas Reardon speak, you’re familiar with his adept ability to woo audiences with futuristic brain interface tech. But CTRL-Labs was a little-known player in the relatively little-known brain-computer interface (BCI) space until Facebook decided to buy Reardon’s startup for somewhere between $500M and $1B. Now everybody wants to know a bit more. NYC Media Lab’s Steve Rosenbaum collates the material floating around online about CTRL-Labs.

Our favorite quote, from Andrew Bosworth, Facebook’s head of XR, on Reardon’s wristband tech: “Technology like this has the potential to open up new creative possibilities and reimagine 19th-century inventions in a 21st-century world.”
Read More

The Rise of “Slam” Journalism
There’s a lot of “slamming” going on in the political world — just skim the headlines on Google News. Surely this violent headline language is a byproduct of the 2016 election? Turns out, there’s a bigger trend. Textio analyzed 140k headlines from 2015–2017, and they found some interesting stuff:
- Violent headline language doesn’t seem to correlate with political bias.
- More conservative outlets are less likely to have the president on the “slammed” end.
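Textio hasn’t published its exact methodology, but the flavor of the analysis is easy to sketch: tally “violent” verbs across a headline corpus. Everything below (the verb list and the mini-corpus) is made up for illustration:

```python
from collections import Counter

# Hypothetical mini-corpus; Textio's 140k-headline dataset is not public.
HEADLINES = [
    "Senator slams new budget proposal",
    "President blasts critics over trade deal",
    "Governor praises infrastructure plan",
    "Mayor rips opponent in fiery debate",
]

# Illustrative list of "violent" headline verbs.
VIOLENT_VERBS = {"slams", "blasts", "rips", "attacks", "torches"}

def violent_verb_counts(headlines):
    """Count occurrences of violent verbs across a list of headlines."""
    counts = Counter()
    for headline in headlines:
        for word in headline.lower().split():
            if word in VIOLENT_VERBS:
                counts[word] += 1
    return counts

print(violent_verb_counts(HEADLINES))
# Counter({'slams': 1, 'blasts': 1, 'rips': 1})
```

At Textio’s scale the interesting part is what you group these counts by: outlet, date, and who sits on each end of the verb.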

Read More

California Laws Seek to Crack Down on Deepfakes in Politics and Porn
California is one of the more progressive states when it comes to tech regulation (San Francisco banned facial recognition use by authorities). It’s no surprise, then, that the state has recently passed two laws prohibiting malicious use of manipulated videos in certain cases.

The first deepfakes ban “makes it illegal to distribute manipulated videos that aim to discredit a political candidate and deceive voters within 60 days of an election,” and the second “gives Californians the right to sue someone who creates deepfakes that place them in pornographic material without consent.”
Read More

For the Media

Can a Machine Learn to Write For The New Yorker?
A lyrical New Yorker dive into the workings and implications of advanced natural language processing systems — think GPT-2, Smart Compose, and Smart Reply — complete with elucidations from Joel Tetreault, computational linguist and former director of research at Grammarly, Dario Amodei, OpenAI’s director of research, Paul Lambert, who oversees Smart Compose at Google, and plenty more.

It’s also one of the few (or only) AI pieces we’ve seen use the word “Sisyphean”: “In an e-mail, [Joel Tetreault] described the Sisyphean nature of rule-based language processing. Rules can ‘cover a lot of low-hanging fruit and common patterns, but it doesn’t take long to find edge and corner cases.’”
Read More

Don’t Want to Read Privacy Policies? This AI Tool Will Do It for You.
Guard is a bit like Rotten Tomatoes for privacy policies, analyzing policies from popular apps and assigning a grade and percentage score based on the number of threats to your privacy it detects. For example, Twitter’s privacy policy has a score of 27%, or an “F”, while Mozilla gets an “A”.

Vox spoke with Guard developer Javi Rameerez, who outlines how the app’s NLP system is able to detect threats amidst all that legalese. Rameerez and his team are also planning to release an app that scans and rates your other apps — you can sign up for the beta here.
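Guard hasn’t published its scoring formula, but the Rotten Tomatoes analogy suggests a simple shape: count detected threats, normalize to a percentage, and bucket into a letter grade. The linear mapping and thresholds below are purely hypothetical:

```python
def privacy_grade(threats_found, max_threats=20):
    """Map a count of detected privacy threats to a percentage and letter grade.

    Guard's real scoring formula isn't published; this linear mapping and the
    max_threats normalizer are illustrative assumptions only.
    """
    score = max(0.0, 1.0 - threats_found / max_threats)
    pct = round(score * 100)
    if pct >= 90:
        grade = "A"
    elif pct >= 75:
        grade = "B"
    elif pct >= 60:
        grade = "C"
    elif pct >= 40:
        grade = "D"
    else:
        grade = "F"
    return pct, grade

print(privacy_grade(2))   # (90, 'A')
print(privacy_grade(15))  # (25, 'F')
```

The hard part, of course, is the NLP that produces `threats_found` from pages of legalese; the grading on top is the easy bit.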
Read More

The Style-Quantifying Astrophysicists of Silicon Valley
It turns out quantifying someone’s fashion style is eerily similar to an astrophysicist’s PhD work. So is quantifying movies and songs. Wired asked why so many astrophysicists were leaving their academic positions and beelining straight to Silicon Valley giants. There are a few reasons: first, astrophysicists were entrenched in big data before big data became a thing.

They’re using supercomputers to model universe expansion and how galaxies crash into one another, gleaning patterns from terabyte-size datasets. This kind of work translates well to machine learning teams. (For example, Stitch Fix uses eigenvector decomposition, a linear-algebra workhorse familiar from quantum mechanics, to model a user’s unique style.) Second? The pay is better and the jobs are plentiful.
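Stitch Fix hasn’t detailed its pipeline, but the general trick is standard: decompose a client-by-item preference matrix so that the leading eigenvectors become latent “style” axes. The toy ratings matrix below is invented for illustration:

```python
import numpy as np

# Hypothetical clients x items preference matrix (Stitch Fix's real data and
# pipeline are not public); rows are clients, columns are clothing items.
ratings = np.array([
    [5.0, 4.0, 1.0, 0.0],
    [4.0, 5.0, 0.0, 1.0],
    [1.0, 0.0, 5.0, 4.0],
    [0.0, 1.0, 4.0, 5.0],
])

# Eigenvector decomposition of the item-item covariance matrix: the leading
# eigenvectors are latent "style" directions, and each client's coordinates
# along them form a compact style profile (this is PCA in disguise).
centered = ratings - ratings.mean(axis=0)
cov = centered.T @ centered / (len(ratings) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigh: ascending eigenvalues
top = eigvecs[:, np.argsort(eigvals)[::-1][:2]]  # two strongest style axes
style_profiles = centered @ top                  # one 2-D profile per client

print(style_profiles.round(2))
```

In this toy example the first axis cleanly separates the two taste clusters (clients 0–1 vs. clients 2–3), which is exactly the kind of structure a stylist-matching system would feed on.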
Read More

Events & Announcements

Event: Spatial Data Science Conference
Date: October 16
Location: Columbia University, NYC
Founded in 2017, the Spatial Data Science Conference (#SDSC19) brings together organizations that are pushing the boundaries of spatial data modeling — ranging from large enterprises to cities and governments, as well as thought leaders from academic institutions. The agenda will be packed with keynotes, panels, technical workshops, and opportunities to network with experts in spatial data from across the globe. Register Here

Event: 2019 Brown Institute Showcase
Date: October 17, 6PM-8PM
Location: Brown Institute at Columbia University, NYC
Established in 2012, the David and Helen Gurley Brown Institute is a collaboration between Columbia University and Stanford University, designed to encourage and support new endeavors in media innovation. This event is the institute’s annual showcase of Magic Grant projects (which you can read about here). Register Here

Internship: Cyber NYC Inventors to Founders’ Student Venture Associate
Cyber NYC Inventors to Founders, a $100M public-private partnership, is launching its Student Venture Associate program: a paid internship for NYC university students (undergrad and graduate) to work with the Inventors to Founders team during the fall and spring semesters. Register Here

Event: All Tech Is Human NYC
Date: November 9, 9AM
Location: Thoughtworks, Madison Ave.
An all-day ethical tech summit with 200 technologists, academics, advocates, students, org leaders, artists, designers, policymakers, and YOU. Join for an impactful mix of lightning talks, topical panels, strategy sessions, and tech/humanity art performance. Register Here

Event: Natural Language, Dialog and Speech (NDS) Symposium
Date: November 22, 9AM-6PM
Location: The New York Academy of Sciences
NDS2019 will convene leading researchers from academia and industry to discuss cutting-edge methodologies and computational approaches to applied and theoretical problems in dialog systems, spoken and natural language understanding, natural language generation, and speech synthesis. Register Here

A Deeper Look
Machine Learning Deployment — Benedict Evans
“The tech industry has been hitting everything with a hammer to see if it’s an AI problem, or can be made into one. Generally, it is.”

Benedict Evans walks us through the life-cycle of AI tech deployment, which in many ways mirrors the life-cycle of relational databases. At some point in the near future, AI will fade into the background and become “boring,” just as relational databases, a revolutionary concept 40 years ago, are now ubiquitous and “boring.”

“ML is the new SQL,” says Evans, laying out the three phases of AI that are leading us through familiar tech deployment territory: first there were “primitives” companies, building platforms for low-level ML tasks like sentiment analysis and computer vision; then, most tech companies went around looking for problems to apply AI to; and third, companies are now using AI to solve “complete problems in new ways,” without really having to say they’re using AI to do it.
Read More

Using Machine Learning to Hunt Down Cybercriminals
Researchers at MIT and the University of California at San Diego have developed a new ML system able to identify suspicious networks that have been hijacking IP addresses for years — a malicious practice used for a number of purposes, like sending malware, stealing Bitcoin, and even regaining control of computers monitored in a police investigation.

The researchers are hoping to take a proactive approach to a crime that is usually dealt with reactively. The MIT news piece explains what goes into a Border Gateway Protocol hijack, and how the team was able to train a system that flagged networks with suspicious activity by observing the actions of “serial hijackers.”
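The write-up highlights two telltale behaviors of serial hijackers: their prefix announcements are short-lived, and they touch many distinct prefixes. The researchers train a classifier on features like these; the tiny rule-based scorer below only mimics that idea, with weights and thresholds that are invented for illustration:

```python
def hijacker_risk(avg_announcement_days, distinct_prefixes):
    """Score how much an AS's routing behavior resembles a serial hijacker's.

    The real MIT/UCSD system learns from data; the two features here
    (short-lived announcements, many distinct prefixes) come from the
    paper's description, but the weights and scaling are illustrative
    assumptions, not the actual model.
    """
    short_lived = 1.0 / (1.0 + avg_announcement_days)   # shorter -> higher
    prefix_spread = min(distinct_prefixes / 100.0, 1.0)  # capped at 1.0
    return 0.5 * short_lived + 0.5 * prefix_spread

# A legitimate AS announces a handful of prefixes for years; a suspected
# serial hijacker announces many prefixes for days at a time.
print(hijacker_risk(avg_announcement_days=365 * 3, distinct_prefixes=5))   # low
print(hijacker_risk(avg_announcement_days=7, distinct_prefixes=200))       # high
```

The proactive angle is that a score like this can be computed from historical BGP data before the next hijack, rather than after victims notice stolen traffic.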
Read More

Transactions & Announcements
AI Sales Software Firm Clari Raises $60M in Late-Stage Funding Round


