DataDownload: What’s your type? The dark side of personality testing
A weekly summary of all things Media, Data, Emerging Tech
Myers-Briggs. You know me, I’m ENFJ-A. The Guardian has a great piece about the test and how it may, in fact, be nonsense. But then again, I like reading my horoscope too. Then we travel to the metaverse and the potential for it to be beautiful, if it’s built right.
Facebook owns up to the fact that political content in its News Feed may be bad for users. OK, now what? China is banning gaming for minors (really); Ars Technica has the scoop. LinkedIn is bailing on its video Stories product, which seems kinda odd to me. And it turns out fact-checkers may not be any better at fact-checking than average folks.
That’s what we’ve got this week. If you haven’t signed up for the NYC Media Lab Summit 2021, what are you waiting for?
See you soon — in our spiffy virtual space.
The NYC Media Lab
Invitation: Purchase Your Discounted Early Bird Tix for Summit 2021: Future Imperfect
New York City’s tech and media sectors have been fielding many curveballs. And yet, in the face of disruption we continue to innovate, no matter how uncertain the future may seem at times. In the spirit of endless innovation and exploration, NYC Media Lab is thrilled to host “Summit 2021: Future Imperfect” from October 6–7, 2021.
Our two-day online conference will once again bring together 1,000+ virtual attendees from NYC Media Lab’s core community — including executives, university faculty, students, investors, and entrepreneurs — to explore the future of media and tech in New York City and beyond.
Discounted early bird tickets are now available through September 15 (they’ll save you $10 per ticket). Are you a current student, faculty member, or friend of the NYC Media Lab? Go ahead and reserve your free ticket (and be sure to register using your .edu email address). Register here.
From online dating platforms to enterprise-level recruiting, the Myers-Briggs Type Indicator (MBTI) — commonly known as the Myers-Briggs test — is ubiquitous. Up to 89% of Fortune 100 companies use the Myers-Briggs test for hiring or for team-building and management programs. According to Myers-Briggs, we are born with “a preference for extroversion or introversion, intuition or sensing, thinking or feeling, and judging or perceiving.” The different permutations boil down to 16 personality types, typically expressed with acronyms like INTP and ENFJ.
The Myers-Briggs company reports $20M in annual revenue from “typing” people, and there are countless knockoffs of the test online. Approximately 50M people have taken the MBTI since the 1960s — the possibility that human personalities can be codified into just 16 types obviously has enormous appeal. According to Merve Emre, author of The Personality Brokers, most people’s motivation to ‘type’ is a “utopian impulse. This desire not only to know yourself, but to be able to express yourself to people (of the same type) in a language that you share — that’s an incredibly powerful fantasy.”
Despite the MBTI’s massive popularity, most psychologists believe it to be “deeply flawed,” if not meaningless. You’ve probably never heard of the Big Five, but it is considered a “far and away more scientifically valid” method of predicting human behavior. “It stands to reason that it would be attractive for those looking for a clean, simple narrative around our complex, messy, modern-day existence,” says tech ethicist David Ryan Polgar. Despite the Myers-Briggs Company forbidding unethical use, such as in recruiting, sophisticated psychometric testing is increasingly used to streamline hiring processes and filter candidates. According to Emre, also an executive producer of the recent documentary Persona, “It’s almost impossible not to be critical of them as a mechanism of exploitation.”
The Guardian / 15 min read
Darren Shou, CTO at NortonLifeLock, thinks the metaverse could be beautiful, but only if it’s built and governed ethically. According to Shou: “Media fragmentation and echo chambers have already shattered our common reality. If left unchecked, the metaverse may only make things worse. It won’t be long until each of us is able to live in an entire world tailored to our own personalities, interests, and tastes, which may further erode our shared experiences and make it harder for us to meaningfully connect.”
Shou offers up several cybersafety frameworks he believes could make the metaverse a better place, including a virtual “public square” where there is a “single shared space that is provably the same for everyone.” Or “introducing a cue that actively notifies people if they’re experiencing something radically different from what others are experiencing.”
WIRED / 7 min read
Tech+Media
Facebook Quietly Makes a Big Admission
For years, Facebook has been embroiled in one political scandal or controversy after another — its impact on the 2016 Presidential election and its role in the January 6th insurrection early this year being two glaring examples. Despite Mark Zuckerberg’s statement to Congress earlier this year that “meaningful social interactions” are Facebook’s goal, it’s almost universally accepted that Facebook’s NewsFeed algorithm is primarily driven by engagement.
According to author Gilad Edelman: “An algorithm that’s too focused on engagement might encourage the viral proliferation of material that’s false or harmful, because the system is selecting first for what will trigger engagement, rather than what ought to be seen.”
In February, Facebook announced a pilot program that would reduce how much political content a small subset of users in various countries — including the US — would see in their NewsFeed — then survey them about their experience. Last week, Facebook announced that it had received positive feedback from users and was expanding the testing to additional countries.
Will this lead to a partial depoliticization of users’ News Feeds and less reliance on engagement as a metric determining what content people see? Maybe. But according to Edelman, it’s also possible Facebook is “using some vague research findings as an excuse to lower its own political risk profile, rather than to improve users’ experience.”
WIRED / 6 min read
China Bans Online Gaming for Minors Except From 8 PM-9 PM Friday to Sunday
Many parents worldwide have concerns about the number of hours their kids spend playing video games. But China has just taken screen time limits to the next level for every under-18 in the country. Children can now only play online video games from 8 pm-9 pm Fridays, Saturdays, and Sundays — for a total of three hours a week.
According to The Wall Street Journal, China’s government blames video game addiction “for a host of societal ills, including distracting young people from school and family responsibilities.” And a state-run media outlet described online gaming as “opium for the mind.”
22% of China’s population was under 19 in 2019 — that’s over 300M game-starved kids. The draconian measure has also potentially erased more than $1T in value for China’s biggest tech firms.
Ars Technica / 2 min read
LinkedIn Tells Advertisers It Is Shutting Down Stories Videos
While the impermanence of Stories has worked well for Snapchat and Instagram, it clearly isn’t a surefire bet for other social media platforms. Twitter also recently abandoned its Fleets video format.
That hasn’t stopped other platforms from trying, though — TikTok recently started experimenting with its own version of Stories-style video content.
Ad Age / 2 min read
What We’re Watching
A Visit to a Chinese “Detox” Center for Video Game Addicts
As noted above, the Chinese government recently announced that under-18s are limited to just three hours of online video games a week.
According to CBS Mornings: “One of the methods China is using as part of its crackdown on video game use is a rehab center for kids who are addicted to gaming.” Take a look inside a gaming rehab in this video.
CBS Mornings (YouTube) / 3 min watch
“Elizabeth Holmes, founder at Theranos, is headed to trial for allegedly defrauding investors and patients by misrepresenting the capabilities and accuracy of her blood-testing technology. Winning the jury’s sympathy is her best option to get acquitted. There are several ways she can do that.”
Check out this new podcast hosted by John Carreyrou, Pulitzer Prize-winning investigative journalist who first exposed Holmes in 2015 and best-selling author of “Bad Blood: Secrets and Lies in a Silicon Valley Startup.”
Spotify / 44 min listen
Virtual Events
Free Event: Creator Economy x Future of Work
Date: September 8, 12PM EDT
Are you a creator, or do you aspire to be? Who is a creator anyway? Let’s explore the past, present, and future of the Creator Economy x Future of Work with an impressive panel, including representatives from Bloomberg and TikTok. Topics include how the Creator Economy came to be, the State of the Creator Union, and how things may look in 5–10 years. Register here.
A Deeper Look
Experiment Shows Groups of Laypeople Reliably Rate Stories as Effectively as Fact-Checkers Do
In this era of “alternative facts,” social media platforms and newsrooms often turn to professional fact-checkers to help stem the ever-rising tide of misinformation. But, according to Jennifer Allen, co-author of a new MIT study, “One problem with fact-checking is that there is just way too much content for professional fact-checkers to be able to cover, especially within a reasonable time frame.”
The study suggests an alternative approach: the wisdom of crowds. Using small groups of 10–15 politically balanced lay readers to evaluate 200+ news stories flagged for scrutiny by Facebook’s algorithm, the researchers found that, on average, the group was just as effective as professional fact-checkers. “This helps with the scalability problem because these raters were regular people without fact-checking training, and they just read the headlines and lead sentences without spending the time to do any research,” says Allen.
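To see why averaging a small panel can rival an expert, here’s a minimal simulation sketch (ours, not the researchers’ code). It assumes each lay rater scores a story as its “true” accuracy plus independent noise, then compares the average error of a single rater against a panel of 15 across 200 hypothetical stories — illustrating how independent errors cancel when ratings are averaged:

```python
import random
import statistics

def crowd_rating(true_score, n_raters, noise=1.5, rng=None):
    """Average n_raters noisy individual ratings of one story.

    true_score: the story's 'real' accuracy on a 1-7 scale (hypothetical).
    Each rater reports the truth plus independent Gaussian noise.
    """
    rng = rng or random.Random(0)
    ratings = [true_score + rng.gauss(0, noise) for _ in range(n_raters)]
    return statistics.mean(ratings)

rng = random.Random(42)
true_scores = [rng.uniform(1, 7) for _ in range(200)]  # 200 flagged stories

# Average absolute error for one rater vs. a panel of 15
solo_err = statistics.mean(
    abs(crowd_rating(t, 1, rng=random.Random(i)) - t)
    for i, t in enumerate(true_scores)
)
panel_err = statistics.mean(
    abs(crowd_rating(t, 15, rng=random.Random(i)) - t)
    for i, t in enumerate(true_scores)
)
print(f"avg error, one rater:   {solo_err:.2f}")
print(f"avg error, panel of 15: {panel_err:.2f}")
```

Under these assumptions the panel’s error shrinks roughly with the square root of its size, which is the statistical intuition behind the “wisdom of crowds” result — provided the raters’ errors are independent, which is why the study balanced the panels politically.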
While crowdsourcing fact-checking shows promise, incentivizing people to perform the required tasks remains a vexing problem. According to David Rand, a professor at MIT Sloan and senior co-author of the study, “It is a classic public goods problem: Society at large benefits from people identifying misinformation, but why should users bother to invest the time and effort to give ratings?”
Phys.org / 5 min read