DataDownload: Technology in the cross-hairs. Can the internet survive the attack?

NYC Media Lab
9 min read · May 30, 2020


A weekly summary of all things Media, Data, Emerging Tech

As technologists, it’s hard to know where to stand at the moment. When the NYC Media Lab invited Federal Election Commissioner Ellen Weintraub to join our panel at SXSW, it was to talk about Section 230 of the Communications Decency Act of 1996.

It was, then, super geeky stuff. The good news is that the panel went virtual, and you can watch it HERE. The bad news is that the safe harbor provision, as it’s known, was the building block of what we know today as the internet. Platforms were allowed to be treated as neutral parties, not publishers, and that gave companies like Friendster, YouTube, Reddit, 4chan, and Facebook immunity from publisher liability.

You can read about this history in The Twenty-Six Words That Created the Internet, a nuanced and engaging look at the complicated history of Section 230.

So the problem has gotten sticky. The platforms, now mostly Facebook, Instagram, Twitter, Reddit, and YouTube in the US, have built businesses on acting like publishers without being held responsible as publishers. Jack Dorsey famously danced around the “health and safety” of Twitter in a TED talk that might have turned combative had he actually answered any questions.

So today, we’re all in it. Can we rein in the internet? Do we want to? Can we hold platforms responsible? Can AI and machine learning moderate the tone and content of hateful speech?

We’re at a technology crossroads. Twitter is labeling the president’s tweets as unsafe. Mark Zuckerberg is going on Fox News to take shots at Dorsey. And the election is five months away, with social distancing pushing us onto our screens and away from face-to-face conversations with friends and thoughtful dissenters.

It’s time for tech to stand up.

If you don’t know what Section 230 is, it’s time to learn about it, because the President has it in his cross-hairs, and without a robust discussion the internet may find its content regulated by the FCC.

Are you ok with that?

Steven Rosenbaum
Managing Director
The NYC Media Lab

Must-Read

A Case for Cooperation Between Machines and Humans

The hype around fully autonomous vehicles has been floating around for a few years now, as has the skepticism. Critics cite jobs and safety as the top reasons the world needs to rethink AVs: the popular belief is that automation will take jobs and drive down wages, and recent failures of autonomous control systems, such as Boeing’s MCAS, Uber’s fatal crash, and the 2018 incident in which a Tesla on Autopilot crashed into a stationary fire engine, have given AV safety concerns even more relevance.

Ben Shneiderman, professor at the University of Maryland and human-computer interaction expert, believes that robots should be designed to work with humans rather than replace them. He argues that when humans can’t control systems, designers risk creating unsafe machines and absolving humans of ethical responsibility for their actions. Shneiderman challenges the engineering community to drop the current “one-dimensional” approach to machine automation for a two-dimensional alternative that allows for both high levels of machine automation and human control.

6 min read

Read More

Is the Brain a Useful Model for Artificial Intelligence?

It’s a bit of a joke in the AI and neuroscience communities that the only similarity between the two fields is that researchers still don’t know why the brain or artificial neural networks work. This hasn’t stopped billions of dollars from flowing into attempts to digitally recreate the human brain — something that Wired author and neuroscientist Kelly Clancy compares to Lewis Carroll’s imaginary nation that created a mile-for-mile map of its territory: “Even if neuroscientists can re-create intelligence by faithfully simulating every molecule in the brain, they won’t have found the underlying principles of cognition.”

The piece is part of Wired’s The Future of Thinking Machines — other interesting reads include It’s Called Artificial Intelligence — but What Is Intelligence? and As Machines Get Smarter, How Will We Relate to Them?

5 min read

Read More

Tech+Media

Trump vs. Twitter

There’s a lot to talk about here, so let’s start with the FT. The publication’s editorial board posted an opinion piece Friday describing Twitter’s decision on Tuesday to put a fact-check link on the president’s tweet about mail-in votes being “fraudulent” (screenshot from mediapost.com).

The president, of course, retaliated. On Thursday, he signed an executive order targeting Section 230, which essentially “provides immunity to social media companies… against being sued over the content on their site. This allows them to operate and flourish without needing to moderate content,” according to Forbes. FT also noted that “Twitter is a private company and can host the president or not in whatever way it feels appropriate. It is not violating principles of freedom of speech by providing a fact check, and has a right to remove even Mr Trump’s tweets if they infringe its standards.”

Thursday night, the president posted a tweet with the phrase, “When the looting starts, the shooting starts,” a reference to a Miami police chief’s infamous 1967 warning. Again, Twitter responded, flagging the president’s tweet with a warning for glorifying violence (screenshot from The Guardian). Meanwhile, Facebook has been purposefully distancing itself — Mark Zuckerberg posted a statement that “our position is that we should enable as much expression as possible.” (According to leaked posts from employees, this isn’t the majority consensus.)

As the conflict unfolds, Twitter is experiencing its own internal battles. Republican mega-donor Paul Singer recently bought a large stake in the company and reportedly plans to oust current CEO Jack Dorsey. Singer was fiercely anti-Trump in 2016 but did a complete U-turn by 2017.

Zooming out, it’s striking how human all of the decisions above are, given how much talk there has been in the past few years about AI content moderation.

5 min read

Read More

AI Proves It’s a Poor Substitute for Human Content Checkers During Lockdown

Despite attempts to automate fact-checking, hate speech removal, and the detection of misleading political messages, humans are still the best line of defense against online disinformation. Just days after Facebook sent thousands of content moderators home and replaced them with AI, users complained that the platform was taking down critical coronavirus posts. YouTube mistakenly categorized videos from NGOs documenting rights abuses in Syria as extremist — its systems have also failed to filter out fraudulent COVID-19 ads.

Moderating online content is difficult for algorithms — researchers work with limited data (really, how much data is enough to capture the immense variety of disinformation?) and models have trouble reading authors’ intent. Some platforms have been keen to address the risks of AI content policing up front; last month, Twitter advised users that its systems could “sometimes lack the context that humans bring.”

6 min read

Read More

Amazon Sent Out a Scripted News Segment, and 11 Stations Aired It

Editorial judgment dictates that a company’s self-promoting PR shouldn’t be disseminated as if it were actual journalism — yet 11 local news stations have already broadcast an Amazon-scripted video touting its effective delivery of essential products while “keeping its employees safe and healthy.” The video offers an inside look at Amazon’s fulfillment centers and features segments where employees share their experiences.

The timing was no coincidence — employees and rights advocates have assailed Amazon’s workplace safety efforts after workers tested positive for the coronavirus. Over a dozen states’ attorneys general have sent a letter to Amazon demanding the release of statistics on employees who were infected or died during the pandemic. An Amazon spokesman remarked that the script “was intended for reporters who for a variety of reasons weren’t able to come to one of our sites themselves.”

3 min read

Read More

What We’re Watching

Minnesota Police Arrest CNN Team on Live Television

You know it’s 2020 when you’re… featuring Zoom potato overlays one week and reporters of color being arrested on live TV the next. Minnesota police arrested a fully compliant CNN team this week before releasing them an hour later. “CNN’s Josh Campbell, who was in the area but not standing with the on-air crew, said he, too, was approached by police, but was allowed to remain. ‘I identified myself … they said, OK, you’re permitted to be in the area,’ recounted Campbell, who is white. ‘I was treated much differently than [Omar Jimenez] was.’”

This isn’t the only incident of police action against journalists this week: a police officer recently fired pepper balls at a journalist and her crew from local news station WAVE 3.

6 min watch

Watch Now

What We’re Listening To

Podcast: On the Media — Boiling Point

In WNYC’s latest On the Media podcast, Boiling Point, University of Michigan’s Apryl Williams, CUNY Graduate Center’s Jessie Daniels, and Recode Decode’s Kara Swisher explore the Karen meme, the history of white women in racial dynamics, and the Trump+Twitter battle.

50 min listen

Listen Now

Virtual Events

Virtual Event: Emerge Conference
Date: June 1–3
An online tech product conference, made for makers and doers in tech, streamed live from the Minsk timezone with love. Register Here.

Virtual Event: Growing as a Data Scientist and the Role of Communication
Date: June 3, 2PM-3:30PM
Join Alexander Statnikov, Head of Data Science, Machine Learning, and Automation at Square, as he makes the case that excellent communication skills are essential to being a successful data scientist. Register Here.

Virtual Event: 5G & The Future of Real Time Sports
Date: June 4, 2PM-3PM
Join Alley CEO Noelle Tassey as she explores questions like: How will future audiences stay engaged during sporting events? And how will the viewing experience be transformed in a world of increased connectivity and technological leaps forward? Register Here.

A Deeper Look

Rediscovering the Small Web

An homage to the creativity of the early HTML-only internet, a history of GeoCities-era personal expression, and a straight-up time capsule, Parimal Satyal’s latest blog post explores the ’90s small web, before sleek, SEO-optimized (and often ad-bloated) sites became the standard. Satyal’s excitement and nostalgia are palpable — it would fit snugly in a Tedium newsletter.

Satyal is even more excited about “restorative” projects like Wiby (“a search engine for old-school, interesting and informative webpages”), Neocities (“a modern web host that lets anyone create a basic website for free”), and Curlie (“the largest human-edited directory of the Web”).

29 min read

Read More

Transactions & Announcements

Exclusive: Machine Learning Company Insitro Raises $143M to Bridge Biology and AI

Bluecore Raises $50M for Its First-Party, AI-based Marketing Automation Tools

AI Speech Data Startup DefinedCrowd Closes $50.5M Funding Round

AI Drug Company Exscientia Raises $60M in Series C Funding

Alibaba Invests Strategic Round Into Smart Automobile Cleaning Company 1KMXC


Written by NYC Media Lab

NYC Media Lab connects university researchers and NYC’s media tech companies to create a new community of digital media & tech innovators in New York City.
