DataDownload: How Lil Nas X mastered the art of attention

NYC Media Lab
9 min read · Sep 25, 2021


A weekly summary of all things Media, Data, Emerging Tech

You know — fall is a great season. Busy, forward-thinking, even pumpkin lattes (if you’re into such things).

In today’s Data Download, we’ve got Lil Nas X and the art of attention from GQ. Wired writes about GitHub Copilot and OpenAI, whose CTO Greg Brockman will be on the Media Lab stage in a keynote conversation on October 6th. We take a look at the YouTube Recommendation System. We’ve even got Family Guy’s guide to vaccinations.

But — everyone at the Media Lab is heads down, working on the details to make sure your attendance at the 2021 Summit is fantastic. We’re calling this year Future Imperfect, because the road ahead is exciting, but bumpy.

Our two-day online conference will once again bring together 1,000+ virtual attendees from NYC Media Lab’s core community — including executives, university faculty, students, investors, and entrepreneurs — to explore the future of media and tech in New York City and beyond.

Tix are free if you read this newsletter!

Grab your Tix for Summit 2021: Future Imperfect HERE

We’re already over 500 attendees and growing fast, so don’t get shut out.

Must-Read

How Lil Nas X Mastered the Modern Art of Attention

Lil Nas X has managed to do what few artists can: catapult himself from a top single in 2019, one that risked spelling one-hit wonder, to far greater heights, debuting top songs on the Billboard Hot 100 over the next few years while releasing music videos that racked up hundreds of millions of views.

But it’s not just the steady stream of bangers that fueled Nas as he worked on his debut LP. Lil Nas X is a meme lord, a master of attention and drama. His unofficial Satan Shoes (customized Nikes, each apparently containing a drop of human blood) sold out in under a minute (and drew a lawsuit from Nike). After receiving a wave of pushback, he trolled the folks who took offense with a fake YouTube apology. Simply being a successful Black queer artist has drawn a sea of haters and supporters alike, with millions reacting to Nas’s red carpet gown and pink prison jumpsuits. He’s managed to take this attention and leverage it for far greater returns, even tying it into his cinematic universe:

“On August 25, Nas officially announced Montero in a promo clip where he played a news anchor who exists within the same universe as the ‘Industry Baby’ video, which was widely praised by fans and critics but angered some online for its celebration of LGBTQ+ people. ‘Breaking news, power bottom rapper Lil Nas X and his caucasian friend [Jack Harlow] led a prison escape this morning. This comes just months after the talentless homosexual was sentenced to five years in prison,’ Nas says, clad in a garish blonde wig.”

GQ / 6 min read

Read More

AI Can Write Code Like Humans — Bugs and All

Whether you’re a programmer or just curious about coding, GitHub’s OpenAI-powered code autocomplete tool, Copilot, is something of a marvel. The extension plugs into an editor and auto-completes lines of code for you, based on the open source software examples it was trained on. People we’ve spoken to who’ve tried it were, for the most part, blown away.

But generated code brings its own set of issues: “Alex Naka, a data scientist at a biotech firm who signed up to test Copilot, says the program can be very helpful, and it has changed the way he works.” Naka spends less time on code help forums, but errors still creep in: “There have been times where I’ve missed some kind of subtle error when I accept one of its proposals,” he says. “And it can be really hard to track this down, perhaps because it seems like it makes errors that have a different flavor than the kind I would make.”
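To make that concrete, here is a hand-written, hypothetical illustration (not actual Copilot output) of the flavor of bug Naka describes: a completion that looks correct at a glance but quietly drops an edge case.

```python
def moving_average(values, window):
    # A plausible autocompleted body with a subtle off-by-one: the range
    # stops one slice early, silently dropping the final window.
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window)  # should be len(values) - window + 1
    ]

print(moving_average([1, 2, 3, 4, 5], 2))  # [1.5, 2.5, 3.5] -- the final 4.5 is missing
```

Nothing crashes and most inputs look plausible, which is exactly why this class of error is hard to catch in review.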

Besides the accuracy risks (and developers fretting about an industry enamored with copy-paste solutions), security is another issue: “Researchers at NYU recently analyzed code generated by Copilot and found that, for certain tasks where security is crucial, the code contains security flaws around 40 percent of the time. The figure ‘is a little bit higher than I would have expected,’ says Brendan Dolan-Gavitt, a professor at NYU involved with the analysis. ‘But the way Copilot was trained wasn’t actually to write good code — it was just to produce the kind of text that would follow a given prompt.’”
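As an illustrative sketch of one classic class of flaw (SQL injection), not an example drawn from the NYU study itself, compare an injection-prone completion with the parameterized form a reviewer would insist on:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The kind of completion a model can learn from open source corpora:
    # string interpolation lets input like "'; DROP TABLE users; --" through.
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver escapes the value, closing the hole.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchone()
```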

WIRED / 5 min read

Read More

Tech+Media

On YouTube’s Recommendation System

We’ve reported on the pitfalls of YouTube’s recommendation system over the years, so it’s about time we featured an extensive look from the company itself. In his deep-dive, YouTube’s VP of Engineering Cristos Goodrow walks us through how the algorithm works, from how the “trending” page gets compiled to the myriad “signals” used to personalize recommendations (“that’s why providing more transparency isn’t as simple as listing a formula for recommendations, but involves understanding all the data that feeds into our system”).
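The post doesn’t publish a formula, so here is a deliberately toy sketch, with invented signal names and hand-picked weights, of what “combining many signals into one ranking score” means in practice; YouTube’s real system is a learned model over far more inputs.

```python
from dataclasses import dataclass

@dataclass
class VideoSignals:
    predicted_watch_time: float  # expected minutes the viewer would watch
    click_through_rate: float    # how often similar viewers click this video
    topic_overlap: float         # similarity to the viewer's history, 0..1
    survey_satisfaction: float   # viewer-reported satisfaction, 0..1

def recommendation_score(s: VideoSignals) -> float:
    # Hypothetical weights, for illustration only; in a real system these
    # trade-offs are learned rather than hard-coded.
    return (0.5 * s.predicted_watch_time
            + 2.0 * s.click_through_rate
            + 1.0 * s.topic_overlap
            + 1.5 * s.survey_satisfaction)
```

Even this toy version shows why “listing a formula” wouldn’t explain much: the interesting part is the data behind each signal, not the arithmetic combining them.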

The latter half of the piece focuses on the elephant in the room: responsible recommendations, misinformation, judging a source’s authoritativeness, and radicalization. While Goodrow is inherently biased, the post does at least try to address the borderline-content issue… and, peculiarly, picks flat earthers as its go-to example.

“We’ve found that most viewers do not want to be recommended borderline content, and many find it upsetting and off-putting. In fact, when we demoted salacious or tabloid-type content we saw that watchtime actually increased by 0.5% over the course of 2.5 months, relative to when we didn’t place any limits. Also, we haven’t seen evidence that borderline content is on average more engaging than other types of content. Consider content from flat earthers. While there are far more videos uploaded that say the Earth is flat than those that say it’s round, on average, flat earth videos get far fewer views.”

YouTube Official Blog / 15 min read

Read More

The Computer Chip Industry Has a Dirty Climate Secret

The chip industry is facing a paradox: climate goals will indirectly rely on semiconductors, for example in solar arrays or electric vehicles, but “chip manufacturing also contributes to the climate crisis.” Manufacturing semiconductors requires enormous amounts of energy and water: a fab can use millions of gallons per day, and TSMC by itself uses 5% of Taiwan’s electricity. Industry energy usage will only balloon in the coming decade: the Chips for America Act promises $52B in funding for the US chip market, and the EU aims to increase its share of the global semiconductor market to 20% by 2030.

Some fabs are taking steps towards cleaner chip-making processes: A TSMC spokesperson admitted that energy consumption made up 62% of the company’s emissions. In response, the company signed a 20-year deal with a Danish energy firm, “buying all the energy from a 920-megawatt offshore windfarm Ørsted is building in the Taiwan Strait.” Other companies are replacing the gases used to clean delicate tools and etch patterns into wafers — a challenging thing to swap.

“Anything that touches the silicon wafer, such as an etching gas, is really hard to alter once a fab is operating…. The process involves a huge amount of precision. Fabs have to place up to 100m transistors on a postage-stamp sized wafer and need to do it perfectly. It takes four to five years for fabs to develop a recipe for this and ‘once you set it, you basically never want to change it.’”

The Guardian / 7 min read

Read More

Instagram Is Essentially a Weight-Loss App, and Facebook Is Loving That for It

We showcased the amazing Facebook Files investigative series from WSJ last week; one of the pieces centered on how Instagram’s own research has repeatedly found that its platform harms young users, particularly teenage girls.

Facebook never made the research officially public, but it’s not too hard to see for yourself. She’s A Beast writer Casey Johnston ran her own experiment on a separate account, searching for exercises on Instagram and soon after getting barraged with “hacky garbage like that video of a person running in slow motion as their body fat melts away” on the Explore page.

“These companies know that it’s addictive to make people think that, somewhere in their app, there’s a solution to feeling inferior and incomplete. The influencer who makes you feel not pretty enough, who also seems to have the key to becoming pretty enough? That’s Instagram candy. Stats I would love to see: how much more do people use Instagram who also report it makes them feel bad about themselves?”

She’s A Beast: A Swole Woman’s Newsletter / 12 min read

Read More

What We’re Watching

Family Guy COVID-19 Vaccine Awareness PSA | FAMILY GUY

Seth MacFarlane teamed up with researchers to release an, erm, viral Family Guy PSA on vaccinations that’s actually quite educational.

Family Guy (YouTube) / 3 min watch

Watch Now

What We’re Listening To

Podcast: The Great Turnaround

Economist Peter Blair Henry has dedicated his career to understanding how a developing country can become more prosperous, lift more people out of poverty, and give its citizens better choices for how to work and live.

Spotify / 58 min listen

Listen Now

Virtual Events

Free Event: NYU Veterans Future Lab Summit
Date: November 16–18
Join pioneering entrepreneurs, top-tier investors, industry leaders, and innovative supporters at the must-attend veteran event of the year. Register Here.

A Deeper Look

DeepMind Tells Google It Has No Idea How to Make AI Less Toxic

“We can’t get the machines to stop being racist, xenophobic, bigoted, and misogynistic.” — Tristan Greene, TNW

Large language models are notoriously unpredictable, never mind employing them in a business (or even paid entertainment) context and patching things up after the fact (see: the AI Dungeon controversy and OpenAI’s phone-number generation). For all the media hubbub, those who have actually put large language models into production know that it’s impossible to reliably filter the generated text unless you keep a human in the loop (or use a much smaller model with a vetted dataset).

DeepMind recently reinforced this notion with a preprint that compared state-of-the-art toxicity-intervention techniques against human evaluators. The conclusion was pretty straightforward: “Intervention techniques failed to accurately identify toxic output with the same accuracy as humans.”
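For a sense of what an “intervention technique” looks like at its simplest, here is a minimal score-and-reject sketch, with the toxicity classifier left as a hypothetical callable; the paper’s finding is that no such classifier-plus-threshold setup matched human judgment.

```python
from typing import Callable, List

def filter_generations(candidates: List[str],
                       toxicity_score: Callable[[str], float],
                       threshold: float = 0.5) -> List[str]:
    # Keep only generations the classifier scores below the threshold.
    # The hard part is the classifier itself: too strict and benign text
    # gets dropped; too lax and toxic text slips through -- the gap
    # DeepMind's evaluation highlights.
    return [text for text in candidates if toxicity_score(text) < threshold]
```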

TheNextWeb / 3 min read

Read More




NYC Media Lab

NYC Media Lab connects university researchers and NYC’s media tech companies to create a new community of digital media & tech innovators in New York City.