AI Runs the World
AI is reaching into many spheres, including government.
When governments get techy
AI, machine learning, computer vision: with all the AI buzz in industry, it’s not surprising that governments are looking to AI, too. But important considerations — civil rights, decision making, and margins of error — come into play when assessing the potential and pitfalls of AI in service of the public interest.
This week we’re exploring how governments can use AI more responsibly to run more efficiently and effectively. We’ll begin with the American AI Initiative announced last February by the White House. We’ll then examine the initiative’s current gaps and inconsistencies, and consider instances in which AI ingrains existing biases and systemic inequities, issues that must be addressed before governments put AI to work.
We’re also thinking about the legacy of Clayton Christensen, author of The Innovator’s Dilemma, looking at a potential city of the future (Merwede in the Netherlands, where cars and bicycles will be communally, rather than privately, owned and accessible to all) and exploring how digital design can positively shape users’ behavior.
We hope you’ve been enjoying this newsletter and would love any feedback (erica@nycmedialab.org). Thank you again for reading!
Best,
Erica Matsumoto
NYC Media Lab
AI’s grand vision is to reach objective truths and reduce human bias. The promise rests on the idea that “the machines are smarter than us” and will make better, more consistent decisions.
As AI becomes increasingly incorporated into workflows and systems across industries, it’s important to recognize that the machines comprising AI are still, in many ways, significantly imperfect. Systems, datasets, and algorithms are developed, designed, and tested by humans, who capture and analyze data and create the models that power AI. Because of this human element, AI reflects the biases and flaws of our society and of individual and collective human psychology.
If all this sounds like gibberish to you, here’s a quick primer on how to think about AI:
As many readers may remember, AI isn’t particularly new. The term “artificial intelligence” was coined in the 1950s among university academics and industry leaders, and a machine — Deep Blue — that could beat a human in a chess match was developed in 1997. Today, work on AI applications that replace humans in logical systems marches apace. Last year, IBM’s Project Debater faced off against a champion debater, Harish Natarajan.
Given the continued growth of AI, it’s unsurprising that governments are getting in on the AI game. However, some unique challenges and considerations must be taken into account if the government gets into the business of using AI.
A FRAMEWORK FOR GOVERNMENT AI
To start, it’s important to understand how governments are considering AI deployment. We’ve covered how cities and local governments are responding to AI and facial recognition tech. This week, we’re looking at the federal level.
In the U.S., the Trump Administration’s February 11, 2019 Executive Order on AI (Executive Order 13859) announced an American AI Initiative, which is now the United States’ national strategy on artificial intelligence.
Per this Executive Order, the American AI Initiative implements a whole-of-government strategy to pursue five pillars for advancing AI, including:
- Promoting sustained AI R&D investment
- Unleashing federal AI resources
- Removing barriers to AI innovation
- Empowering American workers with AI-focused education and training opportunities
- Promoting an international environment that supports American AI innovation and its responsible use
In September 2019, the White House hosted a Summit on Artificial Intelligence in Government to discuss ideas for adopting AI to make the federal government more effective. Among the topics discussed were the idea of using a Center of Excellence Model (COE) to catalyze interagency information-sharing on AI and the need to hire, train, and reskill workers to use AI in the federal government.
Outside the federal government, organizations like OpenAI (a San Francisco-based research laboratory whose mission is to ensure that AI benefits all of humanity) and the Partnership on AI (which seeks to bring diverse global voices together to realize the promise of AI) are also actively working on developing frameworks and best practices for governments’ use of AI. We are also fans of the thought leadership work by NYU’s AI Now Institute (an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence).

THE GOOD

There are numerous potentially positive uses of AI in the government context. AI could make governments more efficient, reduce bureaucracy and duplication of efforts, possibly increase national security, and more. By increasing transparency, it could potentially even improve trust in government.
Some of the key applications of AI for the public interest include:
- Creating intelligent chatbots to answer routine questions in call centers, check callers’ qualifications and refer complex questions to human operators
- Improving solar forecasting accuracy by up to 30% using self-learning weather forecasting technology powered by machine learning
- Mining and developing insights from social media posts about restaurants in order to make intelligent decisions about when and how health departments inspect restaurants (historically, these inspections have been random; see the sketch after this list)
- Analyzing historical crime data to narrow suspect searches down more quickly, reducing the time-intensive nature of investigations and giving workers more time to spend with victims and families instead of sifting through paperwork and hunting down information (caveat: more on this below).
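As a concrete illustration of the restaurant-inspection example above, here’s a minimal sketch of a text classifier that flags posts suggesting foodborne illness so inspectors can prioritize visits. All data, labels, and names here are hypothetical placeholders, not any health department’s actual system:

```python
# Minimal sketch: flag social media posts that may describe foodborne
# illness so inspectors can prioritize visits. Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = possible illness report, 0 = unrelated
reviews = [
    "got food poisoning after eating here, terrible stomach ache",
    "my whole family felt sick the day after our dinner",
    "great service and the pasta was delicious",
    "cozy atmosphere, will come back for the dessert",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# Score new posts; high-probability hits go to a human inspector for
# follow-up rather than triggering any automatic enforcement.
new_posts = ["I think the chicken here made me sick"]
print(model.predict_proba(new_posts)[:, 1])
```

Note that even in this toy version, the model only reorders a human queue; it doesn’t replace the inspector’s judgment.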
THE BAD
Given the potential of AI, it is of paramount importance to recognize that AI today has serious, real drawbacks. An overreliance on AI to make objective judgments risks reifying the biases ingrained in the technology itself, and fails to recognize the deeply complex nature of human decision making. It overlooks the fact that some aspects of decision making cannot, and should not, be replicated or replaced by a machine.
We must also recognize the data constraints in AI technology today. At present, datasets contain large gaps or are missing entirely, and the data that does exist carries errors that reflect societal and human biases. In addition, IP restrictions make it nearly impossible for outside experts to assess and address algorithmic bias. Requests to access and understand algorithms developed by companies are routinely denied, as companies are protected by legal clauses that keep trade secrets and proprietary information private. For advocates, this presents a significant barrier to addressing the problem of black-box tech.
Consider the AI used for risk-based assessments in criminal sentencing: a recent ProPublica investigation revealed racial bias in this AI, owing in part to the fact that men of color are disproportionately arrested and jailed. For more on this, read the NYU Law Review’s excellent articles on the topic (1, 2).
As ProPublica also explored, parole decisions are currently informed by an algorithm (a tool called COMPAS) that inaccurately predicts rates of recidivism. Check out the documentary “Algorithms rule us all” linked below to see how one person made the case against COMPAS to win his parole.
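ProPublica’s core finding was a gap in error rates: Black defendants who did not reoffend were roughly twice as likely to be labeled high risk as white defendants who did not reoffend. Here’s a minimal sketch, with invented numbers rather than ProPublica’s data, of how that false positive rate comparison is computed:

```python
# Toy illustration of the disparity ProPublica measured: compare false
# positive rates (labeled "high risk" but did not reoffend) across groups.
# All numbers below are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   0,   0,   0,   1,   1,   0,   1],
    "reoffended": [0,   0,   1,   0,   0,   1,   0,   0],
})

# False positive rate per group: P(high_risk = 1 | reoffended = 0)
non_reoffenders = df[df["reoffended"] == 0]
fpr = non_reoffenders.groupby("group")["high_risk"].mean()
print(fpr)  # unequal rates across groups signal disparate impact
```

Equalizing this one metric is itself contested, since different statistical definitions of fairness generally cannot all be satisfied at once, which is part of why outside access to these systems matters.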
There are also concerns about how AI could amplify information warfare and create echo chambers that allow misinformation to spread.
As AI is increasingly used to automate certain workflows, there’s also risk that it could worsen economic disparities both within and between countries.
Finally, where AI is used for autonomous weapons, there’s the risk that it could give wealthy nations too much power to project military force over increasingly long distances and exert undue influence beyond their own borders.
At a broader level, there are also implementation challenges associated with using AI in government. By their very nature, governments are often slower than the private sector at adopting and understanding new technology. Given how quickly the AI landscape is changing, it’s difficult for governments to implement this technology and stay abreast of all the developments around it.
THE “SO WHAT” FOR US
AI’s impact on our lives is important for everyone to understand. As former President Barack Obama noted in 2018, the algorithms powering ubiquitous search platforms like Google and social media often reinforce existing biases instead of providing a nuanced, objective view of the world (around 1:14). The full clip is worth a watch if you haven’t seen it before:
On a broader level, the idea that algorithms are coming to rule the world raises concerns about whether this is, on the whole, better or worse for society. It’s worth taking a pause to ask, “Should we let algorithms define our lives?”
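To make that reinforcement dynamic concrete, here’s a toy simulation, a minimal sketch with invented numbers rather than any real platform’s algorithm, of how a feed that ranks items by past clicks amplifies whatever it already shows:

```python
# Toy feedback loop: a recommender that ranks stories by past clicks
# keeps amplifying whatever it has already shown. Numbers are invented.
import random

random.seed(0)
clicks = {"story_a": 1, "story_b": 1}  # two identical stories, equal start

for _ in range(1000):
    # Show a story with probability proportional to its past clicks,
    # and assume every impression yields another click.
    shown = random.choices(list(clicks), weights=list(clicks.values()))[0]
    clicks[shown] += 1

# Small early leads tend to get locked in: the final split is usually far
# from 50/50 even though the two stories are interchangeable.
print(clicks)
```

The point isn’t that real ranking systems are this crude; it’s that popularity-driven feedback alone, with no difference in quality, can entrench whatever the system happened to surface first.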
If you’re interested in reading more about this topic, we recommend Weapons of Math Destruction by Cathy O’Neil, which explores the societal impact of algorithms in a smart, accessible way.
While it may not seem this way, everyone can take steps to raise awareness and voice their concerns about a dystopian world dominated by flawed, black-boxed AI. One small step everyone can take today is to set DuckDuckGo as their default search engine. This excellent search tool pulls results that are just as good as, if not better than, those Google returns — and unlike Google, it doesn’t store users’ search history to repackage as data.
Below the Fold

The man who changed disruption — and saw his own theories get disrupted

Clayton Christensen, author of The Innovator’s Dilemma, inspired a generation of business leaders and scholars to think seriously about technology’s impact on businesses. He lived long enough to see his concept of “disruptive innovation” become omnipresent, at the expense of its original meaning and identity. However, his most important legacy is his own orientation toward self-reflection, thoughtfulness and self-criticism — all of which are critical for business leaders seeking to emulate Christensen’s example.

6 min read

In this new Dutch neighborhood, there will be 1 shared car for every 3 households

Merwede — a newly proposed neighborhood in Utrecht — will be home to 12,000 people on a nearly 60-acre site. It will be focused on pedestrians and cyclists, with a public transportation system connected to all parts of the Netherlands. Instead of privately owned cars and bikes, it will boast a fleet of shared cars and bicycles available to all residents.

3 min read

How Digital Design Drives User Behavior
New research shows that many organizations undervalue the importance of digital design and should invest more in behaviorally informed designs to help people make better choices. Better digital design could help people pay closer attention to a business’s message, travel further down its purchase funnel and even make more efficient decisions.
8 min read

This Week in Business History
February 3, 1690: the British colony of Massachusetts establishes a provincial bank
This bank will soon go on to print the New World’s first paper money. The notes, which are issued in denominations of two shillings to £5, are meant to quell the dissatisfaction of unpaid soldiers who’d unsuccessfully attacked Quebec with the expectation that they would be paid in plunder from the city.
When other colonies see how easy it is to print money, they follow Massachusetts’ example and begin producing their own paper money, as well. However, the system winds up being abused through overprinting. By 1780, there are about £40 of notes in circulation for every £1 of silver. This leads to massive inflation: for example, a pair of shoes costs about £5,000 in Virginia.