“Quantum Computing and the Entanglement Frontier” John Preskill, Caltech


(ambient music) – Good evening, everyone. I’m Joel Moore, the interim chair of the
Berkeley Physics Department, and it is my great pleasure
to welcome you this evening to the annual J Robert
Oppenheimer Lecture. Since 1998, Berkeley Physics
has had the opportunity to bring truly world-renowned theoretical
physicists to campus to speak in honor of J Robert
Oppenheimer and his legacy. This lecture series occurs every spring, and it highlights trends, discoveries and groundbreaking research in theoretical physics, and it was made possible
through the generosity of Jane and Robert Wilson. Before introducing tonight’s lecturer, I would like to say a little bit about the Berkeley Physics
Department and some news, and say a little bit more about Oppenheimer and his legacy
and what he particularly means in the present time. To start with, Oppenheimer
was a theoretical physicist and created the first school
of theoretical physics in the US at Berkeley. He came to Berkeley one year after Ernest Lawrence, who was maybe the largest figure in Berkeley’s experimental
physics program, and between them, they turned Berkeley into one of the best departments
to do physics in the world. Oppenheimer’s achievements in physics, I would normally tell you a lot about. These include the
Born-Oppenheimer approximation for molecular wave functions and work on the theory of electrons and positrons, the Oppenheimer-Phillips process in fusion, and various other things. I would like to focus tonight on another aspect of
Oppenheimer’s career, which is what he did after he was at Berkeley. You may have heard that he
was the scientific leader of the Manhattan Project, the project to create the atomic bomb. And in that capacity, he was an incredible scientific manager. And normally, management is
not the most exciting thing to talk about. The particular point I would make is that Oppenheimer put together an unbelievable collection of talent. And this is not so much a scientific thing to me as it is being able to recognize very smart people and get them together and
then get out of the way, I think is Oppenheimer’s
great achievement. And one point I would like
to make about that talent is, some of it was born in the US, like Richard Feynman, for example, but a great deal of it was not. Much of it was immigrant, and, of that part, much
of that was refugee. And I think a great deal
of America’s leadership in post-war physics came
from that Manhattan Project generation. And that’s why if you try
to talk with physicists about politics, you’ll get
widely different opinions. Even with Oppenheimer, you
will get different opinions on, was Oppenheimer sufficiently
careful about security? Was the atomic bomb a
worthwhile exercise, and so on. Physicists are very
capable of disagreeing. You will very rarely hear a physicist say, in fact, never in my experience, that American physicists
have not benefited greatly from international collaboration and from people coming
from other countries, and I think that’s
probably worth remembering. So, Oppenheimer’s practice, as the founding father of the American School of Theoretical Physics, was to ask what a talk was about, or what a piece of physics was about, what was learned by it, and what were the remaining
unsolved problems, and we continue to ask
those same questions, and that’s the theme of tonight’s lecture. So, as I mentioned, the
Oppenheimer Lecture we’ve had since 1998, and it’s become
a very nice tradition in our physics department, and there are new things
happening that I believe will last at least as well and become new traditions, and I wanted to call your
attention to a couple of those. One is, tonight is a night for theory, but we have created a
new experimental program at the undergraduate level, Physics 5, with a beautiful new laboratory that I think is going to make Berkeley, if it isn’t already,
the best place to learn experimental physics as an
undergraduate in the world. We have a new center for
quantum coherence science, which is actually very connected to the kind of work that
you’ll hear about tonight. It’s very much in the same theme, that certain fundamental ideas
of quantum mechanics unify a vast number of different
areas of physics. So that’s one of our main priorities in research at the moment. And then lastly, in order to link physics with the outside world in the same way that Oppenheimer did, we have a new industrial
partnership program, called Berkeley Physics Partners, or BP2, and I would be happy to talk with you about
any of these things, but I think, with that, let me move on to a little bit more about science and the Oppenheimer Lecture, and our distinguished guest
tonight, Professor Preskill. So, Oppenheimer lecturers,
since 1998, have included six Nobel laureates and distinguished figures from all areas of theoretical physics, ranging from astrophysics,
to condensed matter, to cosmology, to atomic and molecular physics. Tonight, we have an
unusually broad speaker in that Professor Preskill’s lecture on quantum computing and
the entanglement frontier will take us on a journey
into quantum entanglement and the various aspects of
physics that it unifies. So, Professor Preskill comes to us from the California
Institute of Technology, better known as Caltech. He is the Richard P Feynman professor of theoretical physics there, and he’s also director of The Institute for Quantum Information
and Matter at Caltech, and that institute, which has existed for quite some time now, I believe it started in 2000, was one of the first to
recognize that notions of quantum information are very powerful in linking the work of physicists
in different disciplines. Getting back to Professor Preskill, he received his PhD in
physics in 1980 from Harvard and moved rather quickly
to Caltech in 1983. He is a member of the National Academy. He is a two-time recipient of the Associated Students
of Caltech Teaching Award. He’s mentored more than 50 PhD students and more than 45 postdoctoral
scholars at Caltech, and many of those, a few of those are here, I believe, and many of those have
gone on to be leaders in their research areas. So if I had to pick a few
sentences to sort of summarize the theme of his research,
at least since 2000 or so, he’s especially intrigued
by the ways that our deepening understanding
of quantum information and quantum computing can be applied to other
fundamental issues in physics, such as the quantum
structure of space and time. Aside from his research papers, his celebrated lecture notes
from his Caltech course on quantum computation, which, by this time, includes
a great deal of things that I wouldn’t necessarily
call computation, they’ve exerted a profound influence on the development of the subject. And I would say that Caltech has become one of the leading centers
for theoretical research on quantum information
and quantum computing. Our own center for quantum
coherence science has a different emphasis in some ways, it’s based on what Berkeley leads in, but it’s fair to say that one of our intellectual progenitors in setting up this new center was what’s been done at Caltech. So, Preskill has been described as less weird than a quantum
computer and easier to understand. I agree with the second part, and the first, I’ll reserve
judgment until after the talk, but we are thrilled to
add Oppenheimer lecturer to his very long list of accolades. Please join me in welcoming
Professor John Preskill. (applause) – Thank you very much, Joel, for the beautiful introduction. And I’m deeply honored to be here to carry on the tradition of the Oppenheimer Lecture and to join the roster of great scientists who
have preceded me here. I’m going to be talking about quantum physics, but also about information. Everybody knows that
information technology has had a huge impact on our everyday lives, but we also recognize that
information technology that seems impressive to us
today is going to be surpassed in the future by new technology that we can’t really
expect to imagine today. It’s interesting just the same to speculate about future technologies, and I may not be the
ideal person to engage in that type of speculation. I’m not an engineer, I’m
a theoretical physicist and I can’t really claim to
be deeply knowledgeable about how computers really
work, but as a physicist, I do know that the crowning
intellectual achievement of the 20th century was the
development of quantum theory, and it’s natural for a
physicist to wonder how the development of quantum theory in the 20th century will impact 21st-century technology. Quantum theory is, of course,
an old subject by now, but some of the deep ways
in which quantum systems are different from classical systems we’ve only come to appreciate
relatively recently. And a lot of those differences have to do with the properties of information encoded in physical systems. To a physicist, information
is something we can encode and store in the state
of a physical system, like, for example, the pages of a book, but fundamentally, all physical systems are really quantum systems governed by quantum mechanics, and so information is something
that we can encode and store in a quantum state. And physicists have
appreciated, for a long time, that information carried
by quantum systems has some notoriously
counterintuitive properties. That’s why we like to
speak about the weirdness of quantum theory, and we relish that weirdness and find great enjoyment in it. But we’re also starting
to ask more seriously in recent years whether it’s possible to put the weirdness to work to exploit the unusual
properties of quantum information to perform tasks that wouldn’t be possible if this were a less weird classical world. And that desire to put
weirdness to work has driven the emergence of a field we call
quantum information science, which derives much of
its intellectual vitality from three central ideas, which are quantum entanglement,
quantum computing, and quantum error correction, and my goal in the talk is to introduce you to these ideas. I’d like to start at the beginning. We all know that any amount of
digital classical information can be expressed in terms
of indivisible units, bits of information, and we might think of a
bit as a physical object, like a ball, which can be
either one of two colors. Now if I want to, I can store a bit inside a box, and then later on, if I
want to recover the bit, I can open the box, and
the color that I put in comes out again, so I can
read the bit accurately. And when I speak of quantum
information, what I mean is information carried in a quantum system, and it, too, can be expressed in terms of indivisible units,
what we call quantum bits, or qubits for short. And for many purposes,
it’s useful or instructive to envision a qubit as an
object stored inside a box. But now we have the
option of opening the box through two complementary doors, which correspond to two different ways in which we can prepare or observe the state of the qubit. And you can put information
in door number one of the box or door number two, and if, later on, you open that same door again,
the color ball that you put in comes out again, just as though the
information were classical. But if I put information into a qubit through door number one, for example, and then later on, I observe the qubit through door number two, observe it in the complementary way, then no one can predict what we’ll find. There’s a 50% probability
that the ball is red and 50% that it’s green. So if you want to read
quantum information, you have to do it the right way. If you do it the wrong way, then you will unavoidably
damage the information. And one consequence of
that we can appreciate, if we think about copying a quantum state. If I had a quantum copy machine, that would mean that if I happen to have put information through door
number one of our qubit, I can make a copy of the qubit, and then if I open the
original and the copy through door number one, then the color ball that I put in would come out of both boxes. And likewise, if I happen
to have put information in door number two of the original Qubit, once I build a copy, I
could open door number two on the original and the duplicate, and the color that I put in
would come out of both boxes. But, in fact, no such quantum
copying machine is possible. It’s not allowed by the laws of physics. We can’t make high-fidelity copies of unknown quantum states. And the reason why not is that in order to make the
copy, the copy machine has to probe inside the box, and if it guesses right and
uses the same door that I did, then it will be able
to copy the information just as though it were classical, but if it guesses wrong
and opens the wrong door, that will damage the information and there won’t be any way to build a high-fidelity copy. So although we might be
able to clone a sheep, we can’t clone a qubit. Now I’ve described qubits
in an abstract way, which I think is a useful
way to think about them, but a qubit always has
some physical realization, and I’ll give a few other examples later, but just so you’ll have something concrete to think about. We could consider, for example, the qubit to be a polarization state of a single particle of light, a photon. A photon has an electric
field, and if it’s oriented either horizontally or vertically, that corresponds to looking through door number one of the box, and if the polarization is tilted to the 45-degree rotated axes, that corresponds to door number two. So, for example, we could make a horizontally polarized
state of a single photon and observe it through the tilted axes, and what we would generate is just a random bit.
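As a rough sketch of that behavior, assuming nothing beyond standard NumPy (the names here are just for illustration), the two doors look like this:

import numpy as np

# A qubit prepared through door one (say, horizontal polarization) and then
# observed through either door one or door two.
ket_H = np.array([1.0, 0.0])                      # the prepared state
door_one = [np.array([1.0, 0.0]),                 # horizontal / vertical
            np.array([0.0, 1.0])]
door_two = [np.array([1.0, 1.0]) / np.sqrt(2),    # +45 degrees / -45 degrees
            np.array([1.0, -1.0]) / np.sqrt(2)]

def outcome_frequencies(state, door, shots=10000, seed=0):
    """Simulate repeatedly opening one door on identically prepared qubits."""
    probs = [abs(np.dot(b, state)) ** 2 for b in door]
    rng = np.random.default_rng(seed)
    return np.bincount(rng.choice(2, size=shots, p=probs), minlength=2) / shots

print(outcome_frequencies(ket_H, door_one))   # ~[1.0, 0.0]: the stored bit comes back
print(outcome_frequencies(ket_H, door_two))   # ~[0.5, 0.5]: the wrong door gives a random bit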
But the really interesting ways in which quantum information is different from classical information can only be appreciated if we consider states of more than one qubit. So let’s imagine we have two qubits, and they could be far
apart from one another. One at Caltech in Pasadena, the other in the custody of my friend in the Andromeda Galaxy. And some time ago, these two
qubits were both on earth and they interacted in a
certain way that prepared a correlated state of the two qubits which has some unusual properties. Namely, I can open my box in Pasadena through either door number
one or door number two, and either way, what I
find is just a random color with the 50% probability of
being either red or green, and the same thing is true
for my friend in Andromeda. He can open the box through
either door number one or door number two and just finds a random bit. So neither one of us finds any information in the boxes by opening a box in Pasadena or Andromeda, which seems kind of funny, because with two boxes,
we should have been able to store two bits of information. But where has that
information been hidden? The answer in this case is that all the information is actually encoded in the correlations between what happens when
you open the box in Pasadena and when you open it in Andromeda. Because it turns out, for this
particular correlated state of the two qubits, if I open door number
one, what I find might be red or green, but if
my friend in Andromeda also opens door number one for
that particular qubit pair, he’s guaranteed to find
the same color that I do. And the same thing is true if we both open door number two. As long as we open the same door, we’re guaranteed to find the same color.
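A minimal numerical sketch of those correlations, again assuming only standard NumPy, uses the entangled two-qubit state (|00> + |11>)/√2:

import numpy as np

# One qubit in Pasadena, one in Andromeda, prepared in the entangled state
# (|00> + |11>)/sqrt(2).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
door_one = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
door_two = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def joint_probabilities(state, door):
    """Probability of each pair of colors when both parties open the same door."""
    p = np.zeros((2, 2))
    for i, a in enumerate(door):
        for j, b in enumerate(door):
            p[i, j] = abs(np.dot(np.kron(a, b), state)) ** 2
    return p

for name, door in [("door one", door_one), ("door two", door_two)]:
    p = joint_probabilities(bell, door)
    print(name, "same color with probability", p[0, 0] + p[1, 1])   # 1.0 either way
    print(name, "what Pasadena alone sees", p.sum(axis=1))          # [0.5, 0.5]: random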
And there are four perfectly distinguishable ways in which a box in Pasadena could be correlated with a box in Andromeda. We could see either the same
color or opposite colors when we both open door number one or both open door number two, and by choosing one of those four ways, we’ve encoded two bits of information in our pair of qubits. But what’s unusual in this
case is that that information is completely inaccessible locally, it’s a property stored non-locally, shared by the two
distantly separated qubits. And this property, that
information can be shared non-locally between
distantly separated objects is what we call quantum entanglement, and it’s the really important way in which quantum information is different from classical information. Correlations themselves
are nothing unusual. We encounter them all
the time in daily life. My socks are normally the same color. So if you look at my left
foot and observe my sock, then you know, without looking, what color you expect when
you look at my right foot. And it’s kind of like that
with the quantum boxes. If I want to know what my
friend is going to see when he opens door number one in Andromeda, I can open door number one
in Pasadena to find out. And if I want to know what
he’ll see when he opens door number two in Andromeda, then I can open door number
two in Pasadena to find out. So it might seem to you that
it’s really the same thing that the boxes are just like the soxes, but I claim that, in fact,
they’re fundamentally different. The boxes are not like the soxes, and the essence of the difference is there’s just one way to look at a sock, but because we have these
two complementary ways of observing the qubit, the correlations among qubits are richer and more interesting than the correlations among ordinary bits. This phenomenon of quantum
entanglement is an old subject. It was first explicitly discussed in a paper by Einstein,
Podolsky, and Rosen in 1935. And to Einstein, entanglement
was so unsettling as to indicate that something is missing from our current understanding of the quantum description of nature. And that paper elicited
some thoughtful responses, including a particularly
interesting one from Schrodinger. The way Schrodinger put it was, “The best possible knowledge of a whole “does not necessarily indicate
the best possible knowledge “of its parts.” What Schrodinger meant was
that even if we had the most complete description that the
laws of physics will allow of a pair of qubits, we’re still powerless to
predict what we’ll find when we open door number
one or door number two of one of those two qubits. And it was Schrodinger who suggested using the word entanglement to describe these unusual correlations. He also said, “It is rather discomforting “that the theory should
allow a system to be steered “or piloted into one or
the other type of state “at the experimenter’s mercy “in spite of his having no access to it.” And what Schrodinger
meant is it seems funny that it’s up to me to decide, by either opening door
number one or door number two in Pasadena, whether
I’ll know what my friend will find when he opens door
number one or door number two in Andromeda. But Schrodinger understood
that these correlations, though different from
ordinary correlations, don’t allow us to send
an instantaneous message from Pasadena to Andromeda. When my friend in Andromeda opens his box, he just finds a random bit, and the probability distribution governing what he finds is not affected by what I
choose to do in Pasadena. So no message is sent from
one party to the other. Now this theory of quantum
entanglement really didn’t advance very much
for the next 30 years, until the work of John Bell in the 1960s. And beginning with Bell,
we started to think about entanglement in
a rather different way, not just as something weird,
unsettling, and surprising, but as something potentially useful; a resource that we can
use to perform tasks that wouldn’t otherwise be possible. We don’t have to go into the details, but what Bell described can be thought of as a
game that two players play. Alice and Bob, it’s a cooperative game. Alice and Bob are on the same side. They’re trying to help each other win. And the way the game works is that Alice and Bob receive inputs, and their task is to
produce outputs which are correlated in a way that
depends on the inputs that they both receive. But under the rules of the game, Alice and Bob are not
allowed to communicate with one another between
when they receive the inputs and when they produce their outputs. And for this particular
version of the game, if Alice and Bob played
the best possible strategy, they’ll be able to win the game with a success probability of 75% if we average uniformly over the inputs that they could receive. But there’s also a quantum
version of this game, where the rules are exactly the same, except that, now, Alice
and Bob are allowed to use entangled pairs of qubits which have been distributed
to them before the game began. And with those shared qubits, they can play a better quantum strategy, which allows them to win the game with a higher success probability, about 85% rather than 75%. So they can use entanglement as a resource to perform a task, winning the game, better than they could using just the classical correlations that they share.
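Here is a small sketch of where the roughly 85% comes from, assuming the usual setup for this game: a shared (|00> + |11>)/√2 pair and the standard choice of measurement angles, which are conventions of this illustration rather than anything specific to the lecture.

import numpy as np

# Alice and Bob each receive a random input bit x, y and must output bits a, b;
# they win if (a XOR b) equals (x AND y).  Classically the best average success
# probability is 0.75; with the shared entangled pair it is cos^2(pi/8) ~ 0.85.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def basis(theta):
    """A measurement basis rotated by angle theta in the real plane."""
    return [np.array([np.cos(theta), np.sin(theta)]),
            np.array([-np.sin(theta), np.cos(theta)])]

alice_angles = {0: 0.0, 1: np.pi / 4}        # standard optimal choices
bob_angles = {0: np.pi / 8, 1: -np.pi / 8}

def win_probability():
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            A, B = basis(alice_angles[x]), basis(bob_angles[y])
            for a in (0, 1):
                for b in (0, 1):
                    p = abs(np.dot(np.kron(A[a], B[b]), bell)) ** 2
                    if (a ^ b) == (x & y):
                        total += 0.25 * p    # the four inputs are equally likely
    return total

print(win_probability())   # ~0.8536, versus 0.75 for the best classical strategy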
And experimental physicists have been playing this game for decades now, and winning with the higher probability of success which, as Bell pointed out, the laws of quantum mechanics allow. So it seems that these
super strong correlations really are part of nature’s design. Einstein didn’t like quantum entanglement. He called it spooky action at a distance. This sounds even more derisive
when you say it in German. But it doesn’t even matter
what Einstein thinks. Nature is the way
experiments reveal her to be, and we should all learn
to love her as she is. So, boxes are not like soxes. Quantum correlations are
different from classical ones. You can use them to win a game with an 85% success probability instead
of a 75% success probability. Is that a really big deal? Yeah, it’s really a big deal. And we can appreciate
better why it’s a big deal if we think about more complex
systems with more qubits. We can think about quantum
entanglement this way. Imagine a book that’s 100 pages long. If this were an ordinary
book, written in bits, you could read the pages one at a time, and every time you read another page, you’ll know another 1% of
the content of the book, and after you’ve read all 100 pages, you know everything that’s in the book. But suppose it’s a quantum
book, written in qubits, and suppose the pages are highly
entangled with one another, then when you look at
the pages one at a time, all you see is random gibberish, revealing almost no
information that distinguishes one highly entangled book from another. And that’s because the
information in the quantum book is not written in the individual pages. It’s stored almost entirely
in the correlations among the pages. That’s quantum entanglement. And these correlations can be very complex and are hard to describe
in terms of classical bits. So, for a modest number of
qubits, just a few hundred, if I wanted to give a
complete description, in classical language,
of all the correlations among 300 qubits, I
would have to write down more bits than the number of atoms in the visible universe.
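A quick back-of-the-envelope check of that claim (the 10^80 figure for atoms in the visible universe is the usual rough estimate):

# A general state of n qubits needs about 2**n complex amplitudes, so the
# classical description grows exponentially with the number of qubits.
for n in (30, 100, 300):
    print(f"{n} qubits: about {float(2 ** n):.1e} amplitudes")
# 300 qubits: about 2.0e+90 amplitudes, far more than the roughly 1e80 atoms
# usually estimated for the visible universe.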
So it will never be possible, even in principle, to write down that complete description of all the correlations. And that property of quantum information was very intriguing to the physicist Richard Feynman. It led him to make the suggestion in the early 1980s that if we could build
a computer that operates on qubits instead of
bits, a quantum computer, we’d be able to perform tasks that are beyond the reach of any
conceivable digital computer. Feynman’s idea was that
if we can’t even express, in terms of ordinary bits, the information content
of a few hundred qubits, then by processing the
qubits, we ought to be able to perform tasks that a digital computer would never be able to emulate. And at the time Feynman
was making this suggestion in the early 1980s, there was
an undergraduate at Caltech studying mathematics. Like all of our undergraduates,
he studied quantum physics as part of our core curriculum. And like most of our undergraduates, he retained what he learned and later put it to good use when he made a remarkable discovery. Shor thought about the problem
of finding the prime factors of a composite integer. This is a problem which we think is hard for classical computers, though there’s no
mathematical proof of that. And what Shor found is that if we had a quantum computer, the factoring problem would be easy. It wouldn’t be much
harder than multiplying two numbers together to find their product.
And when I heard about this in 1994, when Shor made the discovery, I was really awestruck, because what it means is that the difference between hard and easy problems, the difference between problems that we’ll be able to solve some
day with advanced technologies and the problems that we’ll
never be able to solve because they’re just too hard, that that boundary between hard and easy is different than it otherwise would be because this is a quantum
world, not a classical world. And I thought that was one
of the most interesting ideas I had heard in my scientific life, and thinking about it eventually led me to change the direction of my own research from elementary particle
physics to quantum computing. Now does anybody care whether
factoring is a hard problem? Yeah, in fact, a lot of people care, because the security of the
protocols that we use everyday to protect our privacy when we
communicate over the internet are based on the presumed hardness of factoring and other similar
number theoretic problems. And in a few decades, when
everybody has a quantum computer, we won’t be able to protect our privacy using these protocols. We’ll have to do something else. Alternatives exist, but it’s still not exactly clear what will be the best
way to protect privacy in the coming post-quantum world. The important thing that we
learn from Shor and others is that there is an interesting
classification of problems: there are problems that are hard classically but quantumly easy. They can’t be solved by ordinary digital computers, but could be solved if we
had quantum computers, and it becomes a compelling
research question to understand better what are the problems which are of such intermediate difficulty. And we’ve learned a lot of things about
that in the last 20 years, but I think the most
important thing we know, from a physicist’s point of
view, about quantum computers is that, although we can’t say this for sure, we think that, with a quantum computer, we’d be able to simulate efficiently any
process that occurs in nature, which isn’t the case with digital computers, which are unable to simulate highly entangled systems. And that means with a quantum computer, we’d be able to explore
physics in new ways. For example, by simulating
strongly coupled field theories, we’d be able to compute the
properties of complex molecules, study exotic quantum materials, and study fundamental processes, like the formation and
evaporation of a black hole or the properties of the universe
right after the big bang. So a lot of people work
on developing applications for quantum computers
even though we don’t have large-scale quantum computers yet. One of them is my friend, Eddie Farhi, who, like me, is a lapsed
particle physicist, and when he wrote one of his
billion papers a few years ago, it inspired me to send him a poem, which read, in part, “We’re
very sorry, Eddie Farhi. “Your algorithm’s quantum. “Can’t run it on those mean machines “until we’ve actually got ’em.” And the poem goes on, but the point is that we have a lot of interesting ideas about what to do with quantum computers, but we don’t have quantum computers yet that can run those applications. So why not? What is it that’s taking so long? Well, it’s really hard to
make a quantum computer. And one of the difficulties is the phenomenon we call decoherence. Physicists like to imagine
a quantum state of a cat, which is simultaneously dead and alive. And we never observe,
in everyday experience, that type of superposition of macroscopically distinguishable
states of a system. And we understand the reason why not. It’s because no real cat
can be perfectly isolated from its surroundings. And the interactions with
the environment, in effect, immediately measure the cat, projecting it onto a state, which is either completely
dead or completely alive. That’s the phenomenon of decoherence, and decoherence helps us to understand why even though quantum physics holds sway at the microscopic scale, still, classical physics is quite adequate for describing most of the processes of our everyday experience. A quantum computer won’t be
much like a cat otherwise, but it, too, will be hard to perfectly isolate
from its surroundings. And interactions with
the environment can cause the quantum information
stored in a quantum computer to be damaged, and that will cause the
computation to fail. So if we’re going to operate a
large-scale quantum computer, we have to figure out how to protect it from the damaging effects of decoherence and other sources of error. Errors can be a problem even in the classical world. We all
have bits that we cherish, but everywhere there
are dragons lurking, who take pleasure in damaging our
bits, flipping their color. We learn, in the classical world, some ways to protect our information. The important concept is
that we can redundantly encode the information so that
if it’s partially damaged, we can still recover the information. So if I want to store a bit,
which is one that I cherish, I can store backup copies of the bit, and then a dragon might
come along from time to time and change the color of one of the balls. But I can also ask a busy beaver to frequently check the balls, and whenever she sees that one is a different color from the other two, she repaints it so all three match again. So unless the dragon has had a chance to damage two out of the three balls, the information is well protected because of the redundant storage.
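A minimal sketch of the repetition code the beaver is running, in plain Python:

import random

# Store three copies of a cherished bit, let the dragon flip at most one copy,
# then recover the bit by majority vote.
def encode(bit):
    return [bit, bit, bit]

def dragon(codeword, rng):
    """Flip one randomly chosen copy (at most one error)."""
    codeword[rng.randrange(3)] ^= 1
    return codeword

def majority_vote(codeword):
    return int(sum(codeword) >= 2)

rng = random.Random(0)
damaged = dragon(encode(1), rng)
print(damaged, "->", majority_vote(damaged))   # the cherished bit survives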
Now we’d like to use the same idea, that redundancy provides protection, for quantum states. But at first, there seem to be difficulties because, as already discussed, we can’t
copy unknown quantum states. So I can’t, for example,
make a backup copy of the state of a quantum computer in the middle of a computation in case my original gets damaged. And furthermore, with
the quantum computer, there are more things that can go wrong with the information. It might be that a dragon opens
door number one of the box and flips the color of the
ball and then recloses the box. That would be like a bit flip that occurs for classical information. But instead, the dragon
could open door number two and change the color of the
ball and reclose the box. That’s what we call a
phase error on a qubit, and it really has no classical analog. We need to be able to protect
against both the bit flips and the phase errors to make sure our quantum
information is undamaged. There’s another way of thinking
about these phase errors, which is we might imagine that the dragon opens door number one and, instead of flipping the color of the ball, just observes the color and remembers it. That nevertheless has the effect of changing the color, as observed through door number two. And in many physical settings, it’s easier for the environment to
remember the state of a qubit than to flip it, and that makes phase errors particularly pervasive in some physical settings.
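A small sketch of that effect, assuming standard NumPy: if the environment merely remembers the door-one color of a qubit that was prepared through door two, the door-two outcome becomes a coin flip.

import numpy as np

# Prepare the door-two state |+>; the environment then observes door one
# (the 0/1 basis) and remembers the result, which we never learn.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)                          # density matrix of |+>
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # door-one projectors
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1           # averaged over the unknown result
print(plus @ rho @ plus)        # 1.0: door two returns the color we put in
print(plus @ rho_after @ plus)  # 0.5: after the environment looks, door two is random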
So the key thing is that if you look at quantum information, you disturb it, and so if we want to
protect quantum information, we have to keep it almost perfectly isolated
from the environment. So there’s no leakage of information about the state of our quantum computer to the outside world. And that sounds impossible
because our hardware will never be perfect. So how can we perfectly isolate a quantum computer from the outside? But we learned, in principle, how to do it through the concept we call
quantum error correction, and the essential trick
is to use entanglement to protect the information. So if I have one qubit
that I want to protect, I can encode that one qubit of information in an entangled state of five qubits, which is chosen in such a way
that if the dragon comes along and observes or performs any action on one of the five boxes, that dragon doesn’t
acquire any information about what the encoded state is. Because the information doesn’t reside in that individual box, it’s a collective property
of the five qubits. It’s just like that 100-page book. When you look at one of
the qubits at a time, the information is completely hidden. And so it’s possible
then to ask the beaver, after the dragon has
acted on one of the boxes, to make some collective observations of the five qubits and restore the right kind of entanglement. And, in the process, the
beaver doesn’t learn anything either about the protected encoded state, and so that state can be undamaged. So the basic idea of quantum
error correction is that we can use redundancy to
protect quantum states, but we have to do it the right way, and the right way to do it
is to encode the information in the form of entanglement among many parts of the system.
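As a hedged illustration of that idea, here is the simplest example, the three-qubit bit-flip code (a baby cousin of the five-qubit code described above), in plain NumPy:

import numpy as np

# The logical qubit a|0> + b|1> is stored as the entangled state a|000> + b|111>.
a, b = 0.6, 0.8                                    # any logical qubit with |a|^2 + |b|^2 = 1
logical = np.zeros(8); logical[0b000] = a; logical[0b111] = b

def flip(state, qubit):
    """A bit-flip (X) error on one of the three qubits (qubit 0 is leftmost)."""
    out = np.zeros_like(state)
    for index, amp in enumerate(state):
        out[index ^ (1 << (2 - qubit))] = amp
    return out

def syndrome(state):
    """Compare neighbouring qubits.  The two parities are the same for every
    basis state in the damaged superposition, so measuring them reveals nothing
    about a and b, only which qubit (if any) needs to be repaired."""
    index = next(i for i, amp in enumerate(state) if abs(amp) > 1e-12)
    bits = [(index >> 2) & 1, (index >> 1) & 1, index & 1]
    parities = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[parities]

damaged = flip(logical, 1)                         # the dragon flips the middle qubit
repair = syndrome(damaged)
recovered = flip(damaged, repair) if repair is not None else damaged
print(np.allclose(recovered, logical))             # True: the encoded qubit survives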
So just like that 100-page book, which reveals no information when you look at one page at a time, the environment will interact locally with the parts of the system, one page at a time, and, in doing so, won’t be able to detect the encoded information or damage it. And we’ve also learned how to process information which is encoded
in this entangled form, and so operate a robust quantum computer, at least in principle. So, although we may never see a real cat in a superposition of
the dead and alive state, we should be able to prepare
an encoded state of a cat and maintain it in that
delicate superposition state for as long as we please. Well, we understood these principles of quantum error correction
about 20 years ago. We were very excited. And so my then-student, Daniel Gottesman, wrote a sonnet. And I’ll just read the beginning of it. “We cannot clone,
perforce; instead, we split “coherence to protect it from that wrong “that would destroy our valued quantum bit “and make our computation take too long.” And so on. The point is we were excited, because we had understood that, at least in principle, we could make a quantum computer resistant to the effects of noise and decoherence. Now another hero of this story is my Caltech colleague, Alexei Kitaev. The day when we met, which
was about 20 years ago, was one of the most exciting
days in my scientific life. When I heard his seminar
and took these notes, I thought that I was hearing, from Kitaev, ideas about quantum error correction which are potentially transformative. And what I learned from him
was the connection between quantum error correction and topology. Topology means the properties
of a mathematical object which remain invariant when
we smoothly deform the object without ripping or tearing it. And when we think of operating
a robust quantum computer, what we want is for the
protected information that’s being processed to remain invariant even as we deform the computer
by introducing some noise. So we would like to use interactions which take advantage of
topological principles for the purpose of information processing. And physicists now have such
topological interactions. For example, the Aharonov-Bohm effect. I can imagine transporting
a charged particle, like an electron, around
a magnetic flux tube. And then the quantum state
of that electron is modified in a way that depends
on the magnetic flux, which is enclosed in the
tube even though the electron never directly visits the region where the magnetic field is non-zero. And that change, that interaction
is a topological property. If we deform the trajectory of the electron, the effect of circling the
flux tube doesn’t change; the only thing that matters is the topological property, the winding number of the electron around the flux tube.
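In standard notation, what is being described is the Aharonov-Bohm phase: after n windings around a tube enclosing magnetic flux Φ, the electron’s wavefunction is multiplied by

    exp( i n q Φ / ħ ),

where q is the electron’s charge. The phase depends only on the winding number n and the enclosed flux, not on the detailed shape of the path.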
Now if we can engineer two-dimensional systems, for example, in a layer separating two slabs of semiconductor, then there’s a very rich family of possible topological
interactions that can be realized. In these systems, if properly designed, we can have what we call anyons. And anyons have the interesting property that if I have a system of
many of these particles, that the quantum information
carried by the particles can be very complex, but when we visit the
particles one at a time, that information is completely invisible. Because it’s not a property
of the individual particles, but a collective property
of all the particles. And that’s just the type
of encoding of information that we want to protect against noise. That information will be well hidden from the influence of the environment. And furthermore, we can
process the information by performing exchanges of the particles in which they swap places. So we can imagine operating a topological quantum computer, which we would initialize by
preparing pairs of anyons in some two-dimensional medium, then processing the
state of those anyons by successively exchanging pairs of particles so that their world lines in
two-plus-one-dimensional spacetime trace out the braid, and then we could read out a final result say by bringing the
particles together in pairs and observing whether the
pairs of particles annihilate and disappear or not. So what’s beautiful about this idea is that, in principle, we can do any computation we want this way, and the computation is
intrinsically resistant to decoherence if we keep
the temperature low enough so we have no unwanted
anyons diffusing around, and if we keep the particles
far apart from one another, except at the very
beginning and the very end so there’s no unwanted exchange of charges between the particles, then as long as the world
lines execute the right braid, then we’ll do the right
computation and get the right answer. So I really like this idea, which led me to write a poem about it. And I won’t read you the whole thing, but part of it reads this way. Alexei exhibits a knack
for persuading that someday we’ll crunch quantum data by braiding, with quantum states hidden
where no one can see, protected from damage through topology. Anyon, anyon, where do you roam? Braid for a while before you go home. And there’s more to it than that, but the point is, it’s a really beautiful, exciting idea. But it’s a theorist’s dream; is it something that we can really realize in hardware that can be built? Well, here, too, Kitaev
had a seminal idea, which is to use the principle that, under the right circumstances, we can divide an electron in half. That sounds ridiculous
because we know an electron is a fundamental elementary
particle and it’s indivisible, but in a highly entangled environment, in the right kind of
two-dimensional medium, electrons can split into pieces, and anyons can arise that way. Here’s one relatively simple setting in which that can happen, actually. In a one-dimensional wire, it’s possible for the wire
to be superconducting. That means it conducts electricity without any resistance. And there are two types of superconductor: what we might call the conventional type, and a more exotic type, called the topological superconductor. And at the boundary between the two types, there resides an object that we call a Majorana fermion. And now it’s possible
to add a single electron to this finite segment of
topological superconductor, and that electron will, in effect, dissolve and disappear. So we can’t tell whether
it’s been added or not. But in the process, the state
of these two Majorana fermions at the two end points of the
segment will have changed. But that change in the state
of the Majorana fermions is not locally visible; we can’t see it if we visit the endpoints of the
segment one at a time. It’s a collective property of the two. So that’s the type of non-local
encoding of information that we want to protect against
errors in a quantum system. And this type of Majorana fermion in a superconducting wire, well, we have some very
interesting evidence that it can be realized experimentally, though more experiments will be needed to make that case completely ironclad. Of course, we’d like to be
able to do more than just store information reliably, we’d like to be able to process it. And using quantum wires, one way to do that would be
to build a network of wires so that if I had two Majorana fermions, I would be able to change their positions, let’s say with voltage
gates underneath the sample, so that one Majorana
fermion could be parked around the corner, the other move from right to left, and then the first one unparked, and that would perform an exchange of the positions of the two particles, which would be a kind
of quantum operation, one step in a quantum computation which is protected from decoherence. That type of experiment
hasn’t been attempted yet, but I expect it will be in
the next couple of years, and when done successfully, that will not just be an interesting step towards a future technology,
but a real milestone in basic physics. Now I don’t want to give
the impression that this exotic topological
approach is the only way that we can build large-scale
quantum computers. No, that’s not at all the case. There are a number of ways
of building quantum hardware, which are currently being developed and are making impressive progress. I already mentioned one
way of encoding a qubit using the polarization
state of a single photon. There are a number of other ways. One is we could store our qubit in the state of a single atom, which could be, say, in either
its internal ground state or some long-lived metastable state corresponding to the
two states of the qubit. Or we could encode a qubit
in a single electron, which has a magnetic moment, or spin, which could be oriented either up or down. So these are two remarkable encodings, because in each case, we are encoding the information which is to be processed in a truly microscopic system, either a single atom or a single electron. Another possibility, though, is to use superconducting circuits, not the exotic topological type that I
just mentioned a minute ago, but conventional superconductors, where, although in practice there are better ways of doing it, you could imagine encoding a qubit by choosing a state in which the current in the circuit circulates either clockwise
or counterclockwise. That’s a remarkable encoding, too, because, in this case, the qubit involves the collective motion of billions of electrons, and yet, for information
processing purposes, it behaves like a single atom or electron and can be quite well-controlled. We’re not far away. I expect, in the next couple of years, we will have quantum computers with more than 50 qubits, and these will be systems
which are sufficiently complex that we can’t simulate them with the digital computers that exist today.
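A rough sketch of why that is, in terms of memory alone:

# Storing the full state vector of n qubits classically takes 2**n complex
# amplitudes at 16 bytes each.
for n in (30, 40, 50):
    print(f"{n} qubits: about {2 ** n * 16 / 1e9:,.0f} GB")
# 30 qubits: ~17 GB (a laptop); 40 qubits: ~18,000 GB;
# 50 qubits: ~18,000,000 GB, more than the memory of today's largest supercomputers.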
So this will be the onset of the age of quantum supremacy, in which quantum systems perform tasks that go beyond what we can achieve in the classical world. And I think we should
view that as the opening of a new frontier in
the physical sciences, what we could call the
complexity frontier, or entanglement frontier. This is different from the frontier we explore in particle
physics at short distances, or in cosmology at long distances, but, like those, very
fundamental and exciting, and, like those, in order to make advances, we
need more and more powerful instruments. We are now in the process of developing
and perfecting the ability to prepare and precisely control highly entangled states of many particles, which go beyond what we can simulate. We don’t have the theoretical tools to predict very well the behavior of these systems, and that’s going to open new
opportunities for discovery. What are the things that
we’ll be able to do with a quantum computer, which we hope we’ll have
in a couple of years, with 50 to 100 qubits? Well, maybe one of the most
important things is we’ll use these smaller quantum computers to learn how to make rather
big quantum computers, in particular, by testing and perfecting our procedures for doing
quantum error correction. But we’ll also be able to run, at relatively small scales, new
kinds of algorithms, which will already surpass what we
can do with digital computers, study certain quantum simulation problems, for example, to investigate
quantum chaos in new ways, or to simulate complex
molecules going beyond what we can do classically. But once we have quantum
computers that we can try out and play around with, I expect we’ll discover a number of new applications
which we haven’t anticipated. Now how far off is it that we’ll have scalable quantum computers that can, for example, break the RSA
public-key cryptosystem? Oh, that’s farther away, perhaps decades. You know, I said earlier that
you can’t solve this problem using digital computers, but that’s not strictly true, it’s just a question of resources. So if you wanted to break RSA
as it’s typically used today, it’s possible, but you would have to cover about a quarter of the land area of North America with a server farm, and then you’d be able to
solve the problem in about 10 years, but the catch is that, with
existing computing technology, the power consumption would burn up the world’s supply of fossil
fuels in just one day. So, from that perspective, the quantum computer looks pretty good. If we just took the technology
we have today and sort of brute force scaled it up, it’s not quite as simple as it sounds, but suppose we did that, and this estimate was done
by John Martinis, who’s an experimentalist who works
in superconducting qubits, well, in order to have
sufficient redundancy to do error correction,
we’d probably need about 10-million physical qubits, and then we’d be able to
run the algorithm that factors a number and breaks
RSA in less than a day, and the power we would
need is just 10 megawatts. The thing is, at the current cost of making a very good qubit, it would cost tens of billions of dollars. So, the cost is gonna have
to come down, and it will. So there are three questions
about quantum computers that I’ve been emphasizing. One is, what will we do
with quantum computers? Why build one? And I think the best
answer we have to that is that, with a quantum computer, we’d be able to simulate, we think, any process that occurs in nature, which we can’t do with
digital computers, which are unable to simulate
highly entangled systems. Can we really build one? Well, we know of no
insurmountable obstacles to doing so now that we
understand the principles of quantum error correction. And how will we do it? Well, as I’ve emphasized, there
are a number of approaches to building quantum hardware that are under development and
making good progress. And it’s important to continue
those different paths because different quantum technologies may find different applications, and we don’t really know which technology will ultimately have the best
prospects for scalability to large devices. What I really find interesting is the ways in which our
ideas about quantum computing are giving us new approaches to some of the other
fundamental problems in physics, particularly quantum
condensed matter physics, and also elementary particle physics. There’s been a surge of
interest in recent years among the community of people who work on quantum field theory
and quantum gravity in quantum information concepts. These people feel that
quantum information ideas are highly relevant and useful for addressing the problems
that they’re interested in. And in a way, that’s not so surprising, because the quantum gravity
community has been struggling for 40 years with a very deep puzzle, whose origin really has to
do with quantum entanglement, specifically, the quantum entanglement between the inside and the
outside of a black hole. A black hole is a wonderful object, and one of the seminal papers
on the subject, by the way, was by J Robert Oppenheimer. It’s an extremely simple object. It’s composed of nothing but warped spacetime geometry. Its defining property
is its event horizon. If you are foolish enough
to cross the event horizon and enter a black hole, you’ll be unable to return
to the outside or even communicate with your
friend who stays outside. But the inside and the
outside of a black hole can be and will be
entangled with one another, and Stephen Hawking
understood in the 1970s that, as a result, a black hole will emit radiation due to quantum effects and eventually radiate away
all its mass and disappear. And that creates a
puzzle, because we can ask about what happened to
any information that fell into a black hole
during its lifetime. It’s a foundational principle
of quantum mechanics that information is not destroyed, though it can be scrambled up
into a form that’s exceedingly hard to read. So, we’re faced with an unpleasant choice. If we lose information inside a black hole and then the black hole disappears, if that information is lost
from the universe forever, then we have to recast the
foundations of quantum theory. On the other hand, if that
information manages to escape from the interior of the black hole, that means we have to
rethink the foundations of general relativity. And after 40 years, we still don’t have a clear and completely
satisfactory resolution of this puzzle. The best thing we can say about it is that we understand the resolution, to a large degree but not completely, in a particular setting, what we call AdS-CFT duality, and this is a description of
quantum gravity in the case where the vacuum energy is negative and the curvature of
spacetime is negative. And in that setting, we have two complementary ways of
describing the same physics. In a way, this correspondence allows us to put a black hole inside a tin can. The walls of the can are what we call CFT, for conformal field theory, and that’s just an ordinary
quantum theory without gravity. And in the interior, we have gravitation, geometry, and
quantum fluctuations of geometry, and a process in which a black hole forms and evaporates completely has a complementary
description in terms of just the field theory on the boundary. And on the boundary, there’s no black hole, there’s no gravity, there’s no place for information to hide, and so it seems manifest that the process can be described without any loss of information. So at least in this case, the one where we understand
quantum gravity the best, it seems clear that a
black hole does not destroy quantum information. But even so, we’re left without
a satisfactory understanding of how the information manages to escape, and, in fact, it’s not so
clear how this boundary description encodes the experience of someone who falls through
the black hole event horizon and enters the black hole interior. So, to make further progress, we should try to deepen our understanding of this correspondence, which is a subject of much ongoing work. So let me say a little
bit more about that. Here, for ease of visualization,
I’ve indicated the boundary as one-dimensional, a circle, and the bulk geometry as two spatial dimensions. So, here in this cut through the bulk, in order to indicate
the negative curvature, I’ve used the Poincare disc description, each one of these colored
regions actually has the same geometrical size, but they
appear to be smaller and smaller as we get closer to the boundary in order to capture
the negative curvature. And the idea of the correspondence is that there are two exactly equivalent descriptions
of the same physics, one on the boundary and one in the bulk, and there’s a very complex dictionary, which is only partially understood, which maps the states and
observables of the bulk theory to the corresponding
states and observables of the boundary theory. But what has become increasingly clear in the last few years is that this geometry in
the bulk can be viewed as an emergent property of
the quantum entanglement on the boundary. What evidence do we have
pointing in that direction? I’ll tell you a few things that indicate that
geometry can be thought of as emergent from entanglement. Well, one is what we call
holographic entanglement entropy, which was discovered 10 years ago now by Ryu and Takayanagi. Well, they asked the following question. Suppose we consider some
state defined on the boundary, and we’re interested in a connected region on the boundary,
and we’d like to know how entangled that region is with
the complementary region. And they pointed out that there’s an answer to this question which
is geometrical in the bulk. We can quantify
entanglement using entropy, which is a measure of how
much information is missing from this region, A,
because it’s encoded in the form of entanglement with
the complementary region, and that entropy can be expressed in suitable units as the area of the minimal surface in the bulk which separates boundary region A from the complementary
region on the boundary. And those units in which we
express the area are just the same units that we use to express the entropy of the black hole in terms of the area of its event horizon.
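Written out in the standard conventions (with c and Boltzmann’s constant set to 1), the statement is the Ryu-Takayanagi formula

    S(A) = Area(γ_A) / (4 G_N ħ),

where γ_A is the minimal bulk surface anchored on the boundary of region A; it has the same form as the Bekenstein-Hawking entropy of a black hole, S = Area(horizon) / (4 G_N ħ).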
So, we can think of where these minimal surfaces lie, which encodes the geometry of the bulk, as corresponding to properties of the entanglement on the boundary. Now here’s another example. We can imagine a boundary theory which has
some holographic dual, has some higher dimensional
gravitational interpretation, and we consider two such theories and ask what happens when we
entangle those two systems with one another. And the answer is that the bulk geometry corresponding
to that pair of systems will have a wormhole
which connects together the two asymptotic regions
on the left and the right. And when there’s no entanglement between the two systems, then
there will be no wormhole connecting them. So this relationship between
connectedness of space and entanglement was elevated by Maldacena and
Susskind a few years ago to a general principle,
which they ingeniously called ER equals EPR. EPR means Einstein, Podolsky, and Rosen, who first discussed quantum entanglement in that 1935 paper I mentioned, and ER refers to Einstein and
Rosen, who, in that same year, wrote the first paper discussing wormholes in general relativity. Now, if you had a quantum
computer or, by some other means, you tried to remove the entanglement between distant regions of space, the effect of that, according to this ER equals EPR principle, would be that the space would
break up into fragments. So, there’s a sense in which entanglement provides the glue
that holds space together. Now, this wormhole can’t be used to travel quickly from one region of space to another. It’s not a traversable wormhole. This corresponds with the
property of quantum entanglement that we can’t use entanglement to send an instantaneous message from one party to another. What happens is the wormhole is dynamical; it grows too quickly for anyone to pass from one end to the other. So you might think, “If a
wormhole isn’t traversable, “that’s not really very much fun.” But actually, it’s a lot of fun. Because we can imagine
two lovers, Alice and Bob, who live in different galaxies and long for each other’s company, but it’s completely impractical to travel from one galaxy to another. But let’s say Alice and
Bob had the foresight to prepare many entangled
pairs of particles, and Alice took one member of each pair, and Bob took the other
member of each pair, then Alice could take her particles and gravitationally collapse
them to make a black hole, and Bob could do the same. And those two black
holes would be entangled with one another, and that means they would
be connected by a wormhole. Now Alice wouldn’t be able
to jump into her black hole and emerge from Bob’s, but Alice and Bob could both
jump into their black holes, and then they’d be able to
meet inside the wormhole and have a fulfilling
relationship for a while, but ultimately, they’d
be destined to arrive at the singularity inside the black hole and be torn asunder. So it turns out to be a tragic love story. Now, another thing that’s become apparent in just
the last couple of years is that there’s a connection between this dictionary between the bulk and boundary theories and quantum error correction: if I consider some local operator deep inside the bulk geometry, the corresponding operator on the boundary is a very non-local operator. It’s just the kind of mapping
from local to non-local that we need to protect quantum
information from damage, just the kind that occurs in a quantum error-correcting code. And so the bulk geometry deep in the bulk is actually very robustly encoded so that if some damage
occurs on the boundary, that bulk geometry won’t be much affected. So I’m hopeful that this insight can be taken further. It’s really a remarkable illustration of the unity of physics. We develop the idea of
quantum error correction because we want to keep
quantum computers from crashing, and we wound up with a
different perspective on the geometry of quantum spacetime.
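One way to get a feel for that local-to-non-local mapping is the textbook 3-qubit bit-flip code, sketched below in plain Python with numpy. This is only a toy stand-in, not the holographic codes in question, which are far more elaborate, but it shows how information spread non-locally across several qubits can survive damage to any one of them.

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.diag([1., -1.])

    def kron3(a, b, c):
        return np.kron(np.kron(a, b), c)

    # Encode one logical qubit a|0> + b|1> as a|000> + b|111>:
    # the logical amplitudes are stored non-locally across three physical qubits.
    a, b = 0.6, 0.8
    encoded = np.zeros(8)
    encoded[0b000], encoded[0b111] = a, b

    # Local damage: a bit flip on the middle qubit.
    damaged = kron3(I2, X, I2) @ encoded

    # Parity checks locate the error without ever reading out a or b.
    s1 = damaged @ kron3(Z, Z, I2) @ damaged   # -1 means qubits 0 and 1 disagree
    s2 = damaged @ kron3(I2, Z, Z) @ damaged   # -1 means qubits 1 and 2 disagree
    # (s1, s2) == (-1, -1) points to the middle qubit, so we flip it back.
    recovered = kron3(I2, X, I2) @ damaged

    print(np.allclose(recovered, encoded))     # True: the encoded qubit survived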
So far, we’ve partially understood this within the context of this AdS-CFT duality, but we’d like to broaden our understanding beyond the context of
Anti-de Sitter space, because Anti-de Sitter space isn’t the real case that we want to solve. Anti-de Sitter space, the thing that we
understand reasonably well, that’s the case where the
vacuum energy is negative and the curvature of
spacetime is negative, but it just so happens
that we live in a universe where our vacuum energy is positive and curvature is positive,
what we call de Sitter space. It’s much easier to do quantum mechanics in Anti-de Sitter space
because there’s a boundary, and we can make reference to the boundary when we want to discuss the
observables of the theory. De Sitter space doesn’t have a boundary, and that makes it much
harder to understand quantum physics in that setting. But we’re going to have to learn
how to do quantum mechanics in de Sitter space because
that’s where we live, and I’m confident we’ll figure it out eventually, but it’s hard. Last year, Robbert Dijkgraaf, the director of The
Institute for Advanced Study, spoke at a Caltech event, and he showed this slide
near the end of his talk, and I was quite struck by it, because he was trying to
illustrate how the different ideas of theoretical
physics are connected, and he put quantum information right in the center of things. I don’t think he would have
done that a few years earlier. This idea that quantum information
is a unifying principle of physics has really
only started to take hold in the last couple of years. But unlike Dijkgraaf, I would cross out the word, theoretical, because quantum information
is an experimental subject, and if it’s true, as we
increasingly have reason to believe, that we can think of the
geometry of spacetime as an emergent property
of quantum entanglement in some underlying system, then we should be able to get insights into quantum gravity by
doing laboratory experiments. So I anticipate that, in the coming decades, we will gain deep insights into the
quantum structure of spacetime by doing laboratory experiments with highly entangled quantum systems that, on the tabletop, in a laboratory at a place like Berkeley, will be able, in effect, to create spacetimes
that didn’t exist before and explore their properties
and learn new things. But whether that prediction
comes to pass or not, I think we can be highly confident that we’ll find many
surprises and discoveries as we explore the entanglement frontier. Thanks a lot for listening. (applause) – [Joel] Thank you, Professor Preskill. We normally allow a few
minutes for questions at the end of Oppenheimer lectures, and we try to have a mix
of questions from both professional physicists
and amateur physicists. And if you managed to follow the talk, then you’re already an
amateur physicist at least. So, please, any sort of question. – [Audience Member] How do you feel now about the bet you made with
Stephen Hawking in 1997? – Now the question was about a bet. This will be the opening line
in my obituary, I’m afraid. I won a couple of bets
with Stephen Hawking, and, in particular, one of those bets concerned the question of information and whether it can escape
from black holes or gets permanently destroyed. Hawking and also our friend, Kip Thorne, took the position that black
holes destroy information, and then my side of the
bet was that black holes actually just scramble up information into a form that’s hard to read. And Stephen has recanted,
but he believed very deeply at the time that black holes destroy information, and it was
a bit of a shock to me when he conceded this bet in 2004. It was a rather dramatic occasion. We were at a conference in Ireland, in a big convention hall in Dublin, and somehow the word was leaked
out that Stephen was gonna make a big announcement, and so there were 100 people from the press and various amateur physicists. Michael Flatley, the Lord of the Dance, it turns out that
general relativity is his hobby, he was there. So Stephen gave a technical talk, and then at the end, he
presented me with my prize, which was a baseball encyclopedia from which you can withdraw information. He knows I’m a baseball fan. This was very hard to get in Ireland because you can’t get a
baseball encyclopedia in Dublin, so we had to have it shipped overnight. How do I feel about it? Well, I was surprised that
he conceded because I think we still don’t have a
satisfactory understanding of the problem and he would
have been well within his rights if he had decided to hold out longer until the question is
definitively settled. And I still think I
took the right position, and now Stephen agrees with that. Kip does not, he has not conceded. But I don’t think we really have a 100% convincing argument that information escapes
from black holes, even today. – [Joel] Thank you. – [Audience Member] Can you describe the hardware that will replace the transistor chip in
the quantum computer? In other words, I’m interested
in what the hardware is gonna look like. – So the question is, what
will the hardware look like in a quantum computer? Well, I mean, we have
quantum computers now, but they’re small, and so I think you’re really
asking about the scalable quantum computers of the
future, where we might have millions of physical qubits. So the honest answer
is, I don’t know exactly what the hardware is going to look like. Actually, here at Berkeley,
in the Siddiqi Group, they’re doing terrific work on quantum computing hardware
with superconducting circuits, and they can show you a device that has 10 qubits in it, which is based on superconducting technology. We can imagine scaling
up devices like that to millions of physical qubits, though it’s going to be very challenging. Another approach, which, in the long run, I think
has a lot of promise, is using, as I mentioned,
electron spins as qubits. That technology is lagging
behind at this stage, but it’s something that is perhaps especially compatible with the silicon classical technology that we have now. And I also mentioned these
topological approaches, where it’s even less clear what
the hardware is going to look like, but I did sort of a
cartoon version of it in my drawing of a quantum wire. – [Audience Member] A lot
of these questions will be answered by experimentation,
experimental physics. What kind of experimentation? What does that look like? Or is it the same answer
as the last question? – Well, so, what I had in mind is that we have understood, to some degree, that it’s possible for a quantum system which doesn’t involve gravitation at all to behave like a system that has gravity, and that’s what this story of the AdS-CFT correspondence is about. So, the example we’ve
been able to understand is a very special one. It has lots of symmetry,
it has special features. But I think the phenomenon of a highly entangled system behaving like a gravitational
system is a more general one. But we don’t have the mathematical tools to
understand it in other contexts. So, what we’ll need to be
able to do experimentally, I think, is build systems in which there are many particles which all interact with one another. In a typical system that’s easier to realize, the strength of the interactions between the particles depends on how distantly separated they are, and it falls off with distance, but I think the kind of system
that we would need is one with many particles or degrees of freedom which all have strong
interactions with one another. And in such systems, then we’d be able to drive them and make measurements of the way the different
parts of the system are correlated with one another, and the task would be then to see if those correlations have an interpretation in terms of some kind
of gravitational system. – [Audience Member] We
can’t really understand quantum entangled states. So we would have a computer, and we would take classical
information, put it into a quantum computer,
wait a while, and take a classical solution out
that we can understand. Just curious how you
get it in and out, and I guess the contradiction
between the number of states inside the computer versus
the very simple states that we can comprehend,
put it in, and take it out. – Alright, so the question is, how do we get information into and out of a quantum computer? As the question anticipated, the information that we put in
and we take out is classical. The processing that occurs
can’t be done classically, but is done in a quantum device. It’s the task of the designer
of quantum algorithms to understand how to do
that quantum processing, but the initialization
and the reading out are easier to describe. So, if I have many qubits, I mean, in my cartoon analogy, the
preparation would just consist of putting a lot of balls in door number
one of a lot of qubits, and then a lot of quantum
processing goes on, which can’t be described classically, and in the end, we open all
the boxes one at a time. So, the preparation
would just be preparation of one qubit at a time. Like, let’s say it’s a bunch
of electron spins and we prepare them so that they’re
all pointing, spin up, and then we do the quantum processing, and, at the end, we just
observe the spins one at a time and see whether they’re
pointing up or down, or, in my analogy, open the
box to see if the ball is red or green. So, the process of
initializing and reading out is not so exotic. The art of designing a quantum algorithm is to
figure out how to make use of the quantum entanglement
at intermediate stages to speed up the solution
to a suitable problem.
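A bare-bones statevector sketch of that prepare-process-read-out flow, assuming nothing beyond Python and numpy; the three-qubit circuit is an illustrative example rather than one discussed in the talk. Every qubit starts in |0>, the “quantum processing” is a couple of entangling gates, and the readout at the end is an ordinary classical bit string.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 3
    state = np.zeros(2**n)
    state[0] = 1.0                                   # initialization: all qubits in |0>

    H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
    CNOT = np.array([[1., 0., 0., 0.],
                     [0., 1., 0., 0.],
                     [0., 0., 0., 1.],
                     [0., 0., 1., 0.]])

    def apply(gate, first_qubit, state, n):
        # Embed a gate acting on consecutive qubits starting at first_qubit.
        k = int(np.log2(gate.shape[0]))
        full = np.kron(np.kron(np.eye(2**first_qubit), gate),
                       np.eye(2**(n - first_qubit - k)))
        return full @ state

    # "Quantum processing": build a highly entangled (GHZ-like) state.
    state = apply(H, 0, state, n)
    state = apply(CNOT, 0, state, n)
    state = apply(CNOT, 1, state, n)

    # Readout: observe the qubits, obtaining an ordinary classical bit string.
    probs = np.abs(state)**2
    outcome = rng.choice(2**n, p=probs)
    print(format(int(outcome), f"0{n}b"))            # prints 000 or 111

All of the interesting work sits in the middle step; the interfaces with the classical world are just state preparation and qubit-by-qubit readout, as in the answer above.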
– [Audience Member] Is there a chance that quantum computing will extend the validity of Moore’s law for a longer period of time? – So, the question was about Moore’s law and whether quantum computing
will extend Moore’s Law further into the future. Of course, Moore’s Law is the miracle that we’ve all been living in for 50 years or so. We’ve seen exponential improvement in the performance of integrated circuits. Although we’ve reached a
stage now where it’s getting harder and harder to
increase the clock speed, and a lot of the improvement
is coming from increasing parallelism of classical systems, and it’s an amazing story, and I think a very instructive
one for quantum technology. If you go back to the 1960s, when Moore and others were thinking about the prospects for improving
integrated circuits in the future, you know, they couldn’t
imagine things like an iPhone. It was just far beyond what the technology was
pointing to at the time. And, in the case of quantum technology, I think we’re in a similar situation. We are now starting to
interact with information in a completely different
way from anything that happened before, and we don’t know where
that’s going to take us. We have a few ideas of how we
will apply quantum computers. Undoubtedly, we haven’t
thought of the most important applications that are going
to arise in the future. So, I think my answer
to the question is that we’ve seen, in recent history, and even longer term history, that physics can drive the economic expansion of the world, that the technologies that come
out of physics eventually have a big, big impact on
the way we live our lives. We’ve certainly seen that with the basic physics in the 20th century of understanding semiconductors, which led to integrated circuits, quantum physics of lasers,
which we make use of in many ways today. But that 20th century
physics, that was the physics of, if you like, single
particle quantum mechanics. And now we’re getting a grasp on a new quantum revolution: the
properties of many particles, and I think that could well drive economic development in the 21st century. Nobody really knows, and so I don’t have a precise prediction about the quantum Moore’s Law. And I think we can expect that these new technologies are going to take us to remarkable places
that we haven’t yet imagined. – [Audience Member] In quantum mechanics, electrons are indistinguishable. Are qubits also indistinguishable? – Now the question was about indistinguishable
particles, that we know that electrons, for example,
are indistinguishable, and does that apply to qubits as well? It need not. I mean, it’s possible for the qubits to be distinguishable. For example, in these superconducting circuit realizations of a qubit, each qubit is actually
an engineered device. They’re not all identical. And so there’s no notion
of indistinguishability among the qubits. That doesn’t
impair the quantum computer’s ability to perform its special magic. In the case of the anyons I described, they can be viewed as a rather exotic type of
indistinguishable particle, and that’s why it makes sense
to process the information by exchanging the particles. That affects the information that’s encoded
in the many-particle system. When the anyons change places, the… When you look at them one at a
time, they all look the same. – [Audience Member] Do you
subscribe to any particular interpretation of quantum mechanics? – The question is, do I… It’s a question for me personally? Do I subscribe to any particular interpretation of quantum mechanics? I’m an Everettian. I like the idea. Sometimes people call it the
many worlds interpretation, though I’m not very fond of that name. But I think the essence
of that point of view is that there’s really just one way for things to change in the world. Technically, quantum states
can change by evolving in a way which doesn’t create
or destroy information, that is unitary evolution, and that’s the only
thing that ever happens. That measurement is not a
fundamentally different process. This is a subject that people
can get emotional about. A disadvantage, you might
say, of that point of view is that in order to understand why, when I observe a quantum system,
I see one definite outcome, I have to include myself
in the description, because what really happens is that there’s more than one possible outcome, and I become correlated with the state of the
system I’m observing. And some people think this
is a very extravagant thing, in that we have to keep track
of all the possible outcomes by including the observer in the system. But I prefer that to
introducing measurement as some kind of new fundamental process.
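A minimal worked example of that picture, measurement as nothing but unitary correlation, again in illustrative Python with numpy rather than anything from the talk: a single unitary entangles the observer with the system, so each observer state ends up paired with one definite outcome and no separate collapse step is ever invoked.

    import numpy as np

    alpha, beta = 0.6, 0.8
    system = np.array([alpha, beta])        # the system: a|0> + b|1>
    observer = np.array([1.0, 0.0])         # the observer: |ready>

    joint = np.kron(system, observer)       # ordering: |system> (x) |observer>

    # A CNOT plays the role of the "measurement" interaction: it copies the
    # system's basis value into the observer register, unitarily.
    CNOT = np.array([[1., 0., 0., 0.],
                     [0., 1., 0., 0.],
                     [0., 0., 0., 1.],
                     [0., 0., 1., 0.]])

    after = CNOT @ joint
    print(after.reshape(2, 2))
    # [[0.6 0. ]
    #  [0.  0.8]]: amplitude only on (system=0, saw 0) and (system=1, saw 1),
    # i.e. the observer is now perfectly correlated with the system.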
However, I think, you know, everybody’s entitled, to a certain degree, to their own interpretation of quantum mechanics if they prefer. Different interpretations
can give rise to different insights and can help to
generate different ideas. I mean, I think, to me, the
question of interpretation is most interesting to the degree that it raises questions about what the alternative to
quantum mechanics might be. Maybe quantum mechanics will fail, and some people expect quantum
mechanics to break down at some stage because of the issues of interpretation. I’m not sure whether that’s true. But I think thinking about
interpretations can be useful, particularly if it suggests new ways in which
we can test quantum theory and look for deviations from it. – [Joel] One last question in the back. (audience member questioning) – So, I think the question was, technology is very dependent
on advances in materials, and what can we say about how advances in materials will
impact quantum technology? Was that more or less the question? (audience member speaking) Yeah, well, there are materials issues in all of the things that I mentioned. There have been tremendous improvements, for example, in the performance
of superconducting qubits going back 15 years, and many of those improvements
have to do with using superior materials to make the Josephson junctions,
which are the essential ingredient in the
superconducting circuits that makes them controllable and able to behave quantumly. These topological quantum computing ideas, computing with anyons, that’s very much a materials issue, though it’s been a great challenge to synthesize materials and
to fabricate devices that bring together all the physical
ingredients that we need to make topological quantum
computing work better. And spins in semiconductors, same thing: materials issues are
currently a huge impediment, and improvements in the materials will surely
lead to improved technology. – [Joel] I think, with that,
we should call it a night. Before we go, let’s
thank Professor Preskill for a beautiful and stimulating
Oppenheimer Lecture. (drum beat)
