Google’s Quantum Computing Achievement
Welcome to the 30th edition of the NYCML Innovation Monitor.
Quantum computing… explained
Last week, Google published an article in Nature (one of the most prestigious science journals around) claiming “quantum supremacy using a programmable superconducting processor.”
For many of the topics we cover, our knowledge feels binary: either we understand it or we don’t — with the caveat that there is always more to learn. With quantum computing, sometimes it feels like we completely get it, yet also have no clue, at the exact same time. This week, we’ll do our best to explain what quantum computing is and what Google’s announcement means for the space.
We’ll also see what Gartner believes next year’s 10 big tech trends will be, learn what a chief decision scientist does, and remind ourselves that at the end of the day, management is about people, not just data.
We hope you’ve been enjoying this newsletter and would love any feedback (erica@nycmedialab.org), especially in these early stages. Thank you again for reading!
Best,
Erica Matsumoto
NYC Media Lab
To understand quantum computing, we first need a basic understanding of quantum mechanics. Quantum mechanics holds that a quantum system can exist in multiple states — even multiple places — at the same time (this is called superposition).
A quantum computer uses the same principle. Rather than being encoded in bits that are each definitively a 0 or a 1, data is encoded in qubits that can hold a combination of both at once, exploiting superposition.
In just a few words: calculations can be significantly faster, because many possibilities are explored simultaneously instead of one by one (the way traditional computers work).
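If you’d rather see the idea in code, here’s a minimal sketch, assuming Python and Cirq (Google’s open-source quantum programming framework); the variable names are ours. A single qubit is put into superposition and then measured over and over: on each run it “chooses” 0 or 1, roughly half the time each.

```python
import cirq

# One qubit, which starts in the definite state |0>.
qubit = cirq.LineQubit(0)

# A Hadamard gate puts the qubit into an equal superposition of 0 and 1;
# measuring it then forces it to "pick" one of the two outcomes.
circuit = cirq.Circuit(
    cirq.H(qubit),
    cirq.measure(qubit, key='m'),
)

# Simulate 1,000 repetitions: expect roughly 500 zeros and 500 ones.
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key='m'))  # e.g. Counter({0: 493, 1: 507})
```

The power comes from scaling this up: the joint state of n qubits tracks 2^n possible outcomes at once, which is what quantum algorithms exploit.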
If you’re more of a visual learner (or are just still lost), try this video:
So, Google built a computer that puts these ideas into action. Why does that matter? It matters because Google’s computer worked, and it achieved quantum supremacy.
GOOGLE’S BIG ANNOUNCEMENT
“Quantum supremacy” means that a quantum computer has done something (even if something not very useful) that a classical digital computer can’t simulate in a given period of time. In practical terms, this isn’t necessarily world-altering news. However, it’s an early milestone for quantum computing’s development into what promises to become a powerful computing paradigm.
What Google and its team of affiliated university researchers accomplished:
- They built a chip called Sycamore and wired it into a massive exoskeleton enabling it to run at super-cooled temperatures and execute programs — called circuits — loaded from a control computer.
- Then, they programmed the computer’s 53 (working) qubits randomly, using both single- and two-qubit gates (operations).
- Finally, they ran the random circuit (program) a million times and recorded the outputs. They were able to do that in about 200 seconds. By their estimation, simulating this process on a uniquely powerful classical supercomputer called Summit would have taken 10,000 years. (A toy code sketch of this kind of random-circuit sampling follows this list.)
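For the curious, here’s what that “random circuit sampling” recipe looks like as code. This is a toy sketch, assuming Python and Cirq (Google’s open-source quantum framework): it uses 4 simulated qubits instead of Sycamore’s 53 and a generic gate set rather than Sycamore’s actual one, so treat it as an illustration of the structure, not a recreation of the experiment.

```python
import random
import cirq

# Toy stand-in for Sycamore: 4 simulated qubits instead of 53.
qubits = cirq.LineQubit.range(4)

# A generic gate set for illustration (not Sycamore's actual gates).
single_qubit_gates = [cirq.X**0.5, cirq.Y**0.5, cirq.Z**0.5]

ops = []
for layer in range(6):
    # A random single-qubit gate on every qubit...
    ops += [random.choice(single_qubit_gates)(q) for q in qubits]
    # ...followed by two-qubit gates between neighboring pairs.
    start = layer % 2
    ops += [cirq.CZ(a, b)
            for a, b in zip(qubits[start::2], qubits[start + 1::2])]

circuit = cirq.Circuit(ops)
circuit.append(cirq.measure(*qubits, key='m'))

# Run the random circuit many times and record the output bitstrings.
# Google did this a million times on hardware; a few thousand simulated
# runs are enough to see some bitstrings showing up more often than others.
result = cirq.Simulator().run(circuit, repetitions=10_000)
print(result.histogram(key='m').most_common(5))
```

On real hardware, the interesting question is how close the measured distribution of bitstrings comes to the one theory predicts, which is exactly where Google’s error-control work (below) comes in.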
This is a unique achievement for two reasons:
- Google was able to control errors in the system (a notorious issue with quantum computers) well enough for their outputs to come close to the theoretical results.
- Google did the math and simulation at smaller qubit counts to be fairly sure that their error estimates were realistic (this is key because their full 53-qubit results can’t be verified on a traditional computer).
IBM’S RESPONSE, AND GOOGLE’S RESPONSE TO IBM’S RESPONSE
IBM researchers were quick to point out that Google’s achievement isn’t world-altering. They estimated that by using the roughly 250 petabytes of disk storage attached to the world’s most powerful classical supercomputer, Summit (OLCF-4) at Oak Ridge National Laboratory, as additional memory, they could do the same calculation in 2.5 days, not 10,000 years.
Subsequently, Google responded by challenging IBM to make their theoretical response reality. IBM hasn’t yet… but don’t discount the possibility. Google Quantum AI lab founder Hartmut Neven also defended the scale of Google’s quantum computing claim by comparing it to Sputnik: “Sputnik didn’t do much either. It just circled the Earth and beeped.” And yet, we talk about an industry having its “Sputnik moment” as shorthand for the momentous occasion when an idea crosses over from the theoretical to the real.
WHAT DOES THIS ALL MEAN?
By all indications, this is an important moment in computing. The best historical parallel is the decision in the 1960s to build computers with electronic transistors; that decision was the catalyst for the entire Information Age.
Quantum computing, when realized, will catalyze another leap in computers’ abilities. Consider, for example, simulation of quantum systems. Classical computers can’t do this at any useful scale, but Richard Feynman recognized quantum computing as the solution back in the early 1980s. He said, “I’m not happy with all the analyses that go with just the classical theory, because nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical.”
Once quantum computing massively improves ordinary computing power, resources on that scale — such as databases with yottabytes of storage (one yottabyte is one septillion, or 10^24, bytes) and exaflops of processing (a quintillion, or a billion billion, calculations per second) — might allow much more sophisticated models. Those, in turn, might make it possible to answer previously impossible questions.
Gartner Top 10 Strategic Technology Trends for 2020
Gartner’s top 10 strategic technology trends that will drive significant disruption and opportunity over the next five to 10 years are structured around the idea of “people-centric smart spaces.” Gartner research VP Brian Burke explains, “Rather than building a technology stack and then exploring the potential applications, organizations must consider the business and human context first.” Among those listed are edge computing, distributed cloud computing, automation and human augmentation.
Source: Gartner
5 min read
Google’s got a chief decision scientist. Here’s what she does.
Google’s chief decision scientist, Cassie Kozyrkov, sees artificial intelligence and big data analytics as tools to improve human decision-making. She’s coined the term “decision intelligence” to describe combining applied data science, AI and analytics to create better tools and products.
2 min read
Leaders Don’t Hide Behind Data
altMBA founder Seth Godin argues that managers have turned digital tools into a crutch. Because it’s now easier than ever to do A/B testing and to stay busy, it’s become too easy to focus on lowering costs rather than improving management practices. Rather than focusing on digital tools, Godin argues that managers need to focus on fostering human connection.
7 min read
This Week in Business History
November 1, 1927
The Ford Motor Company begins production of the Model A, its first new model in nearly two decades. The hugely successful Model T had outlasted its usefulness and was losing market share to more powerful, stylish, comfortable and option-rich cars from rivals. Although Henry Ford had essentially created the modern automobile industry, he’d missed one of its offshoot trends: the desire for continual upgrades, especially trading up to more expensive models as one’s economic status improved.
Ford’s changeover from Model T to Model A production was ungainly: in May 1927, production lines had to be shut down abruptly, and they ultimately sat idle for half a year while the Model A was developed and the lines were retooled accordingly.
On the first day Model As appeared for sale in December, after a nationwide advertising blitz, anticipation was sky-high: 50,000 cash deposits were received in New York alone, and mob scenes at dealerships broke showroom windows. Basically, it was the Supreme drop of its day. By the time production ended in early 1932, Ford had sold five million Model As.