Deadly Truth of General AI? – Computerphile


in a very basic sense if you’ve got a general intelligence which therefore has
preferences over world states and take actions in the world to change
the world we have to make sure that its
preferences are aligned with ours that it wants what we want. because otherwise it’s gonna try and do things
that we don’t want it to do. that’s the basic idea… (sci-fi comes to the fore we’re talking things like terminator, matrix, that machines take over the world I for one salute our new robot overlords) it makes it difficult to think about the problem right? when things happen in fiction it’s
generally what would make a better story rather than what would actually happen
and I think a a realistic “AI takes over the world”-type
story might not be any fun to read. So on the
one side this stuff is difficult to think about
because of fiction, because we’ve all been exposed to something similar to these
ideas before the other side that makes this difficult
to think about is anthropomorphism because we are talking about general
intelligence so we’re going to compare it to the
examples in general intelligence that we have which is human minds and human minds and artificial general intelligences need not be
anything alike in the same way that a plane is not
similar to a bird a supersonic fighter jet is a threat to
you in a way that no bird is. It’s not a useful comparison to
make in fact but when you say oh it’s a thing it has
wings and it flies and people who don’t know anything about planes immediately
to go to birds (presumably machine could be much more selfish than
we can ever imagine) absolutely, the space of minds in general is vast I like this because we’ve already talked
about spaces so I can do this. If you take the space at all possible minds it’s huge and then it somewhere within
that, you have the space of all minds that biological evolution can produce and that’s also huge somewhere within
that you have the space of actual minds that exist which is much
smaller but still huge within that, you’ve got human minds right
and they’re a tiny, they’re a minuscule dot on a minuscule dot on a
minuscule dot of the actual possibilities for intelligence
that exist and a general intelligence that we create is
from a completely different part of the space and it’s extremely tempting to anthropomorphise more so even than in another context
because it’s a thing that’s demonstrably
intelligent that makes plans that takes actions in the real world but it need not think anything like us and it’s a mistake to
think a bit as basically a person because it isn’t one.
So there’s actually really good example that we can use. It’s sort of a thought
experiment This is not a machine that could
practically speaking be built this is an example of at artificial
general intelligence which is specified an overview and it gives you something to think
about when you’re thinking about artificial general intelligences that
makes it distinct from a sort of anthropomorphized human type
intelligence So the story is there’s a a stamp collector who is also an AI programmer and he decides he would like to
collect a lot more stamps so he’s gonna write an AI to do this
for him. So he builds a machine he has some startling insight into
general intelligence and he builds this machine which is connected to the Internet,
right? so the the rules for this system are pretty straightforward. first thing, it’s
connected to the Internet and it will send and receive data for one year. So, he’s given himself a
one-year time window within which to collect stamps
the second thing is it has an internal model of reality , of
the universe this is the thing that’s a bit magic we
don’t really know how to build an accurate model of reality. The point is this
allows it to make accurate predictions about what
will happen if it does different things the third thing is for every possible
sequence of packets it could send it uses its model to predict how many stamps it ends up with at the
end of that and then the fourth thing is it outputs as its actual data to the Internet whichever output it has predicted will
produce the most stamps you can see here that this has all the properties of a
general intelligence it has an internal model of reality. It
has a utility function or an evaluation
function, which is the number of stamps, and the optimization is extremely simple and like so much in
computer science there the simple things to specify are the hard
things to compute it looks at every possible output data
in other words every every point in that space and it picks
out the highest one. So this is a kind of magic
intelligence that takes the entire space at once
finds the highest point and says that one. Which means it’s an
extremely powerful intelligence right it is you could say it’s extremely
intelligent the question is how does this machine
behave? Well, we can look at certain possible sequences of outputs and see how they fare in it’s evaluation
criteria. first off, the vast majority of output
sequences are complete junk right it’s spewing random data on to the
network. nothing happens of any consequence. No stamps get collected. Those are all rated zero. but suppose one of the possible
sequences a sends a request to a server let’s say eBay, that results in a bid on some stamps, right? When that happens,
there’s a thing of 20 stamps so that output is rated 20.
This is the kind of thing that the stamp collector machine’s creator
was expecting to happen So then that’s good, 20 stamps suppose it could do that lots of times.
it could send out bids for example to 30 different stamp collectors on eBay
and buy 30 sets of different stamps and that’s even better, right? That would be
rated even higher, but the thing is that particularly highly rated options in
this search space are probably things that the stamp collecting device’s creator did not
think of and did not anticipate. So for example
when he made it, he will have presumably given it his credit card details or something so
they could engage in these bids but ultimately it’s searching every
possible sequence of outputs. It needn’t use his credit card it needn’t use money at all. There’s
a huge variety of things that it could do here. So it might send out an enormous number of
emails to all the stamp collectors in the world and convince them through persuasive
argument that he is opening a museum and wants to
exhibit their stamps It may build a fake website for that
whatever is necessary it can predict the world, right. It has an
internal model of reality that internal model of reality includes the people right. In the same
way that its modeling people to understand that if it bids
on this offer, then a human being will mail the
stamps to them. They understand this email might get more people to send
stamps for example that something but then what exactly is a stamp?
How is this defined? What counts as a stamp? if it’s already written the sequence
of outputs that collects all of the stamps in the world you think
you’re done, right? You built this machine suddenly is collected all the stamps in
the world. No, there could be more stamps within a year.
We’ve got time to print more stamps so maybe it hijacks the world’s stamp printing
factories and puts them into overdrive producing stamps as many as you possibly can in that time
or perhaps it writes a virus and it hijacks all
the computers in the world to get all of the printers in the world to do nothing
but print stamps. that’s even better right?
The highest-rated outcomes for this machine
are not good for people There comes a point when the stamp
collecting devices thinking okay what are stamps made of? they’re made of paper. Paper is made of
carbon, hydrogen, oxygen. I’m gonna need all of that I can get to
make stamps and its gonna notice that people are
made of carbon, hydrogen, and oxygen right. There comes a point where the
stamp collecting device becomes extremely dangerous and that point is as soon as you switch
it on. So this is just a thought experiment example where we take a sort of maximally powerful intelligence and see how it behaves so that we can
think about how powerful intelligence has behaved
and most importantly how they are not like people

100 thoughts on “Deadly Truth of General AI? – Computerphile

  1. however the model would also suggest that the AI would know that people are needed to make stamps and repair printers, so the number of stamps is actually greater if it keeps people alive, or builds machines to replace humans before murdering them.

  2. So we're assuming the creator of this AI doesn't specifically put in rules like "You have to use this credit card to purchase stamps" or "only stamps ordered online count" that have to be followed for the "stamp points" to count. We're also just kind of glossing over how this AI "hacks" entire factories or huge portions of the world just "because it can". You say there's a risk in assuming a computer will act like a human, but I would also say there's exaggeration in assuming an AI will have the Deus Ex powers of a god just "because it's an AI".

  3. Kinda reminds me of capitalism, where companies optimize themselves to make money but, there comes a point when more profits for a company means doing things that are ultimately bad for people:
    Throwing its employees out on the street if there is any hint that they aren’t supporting the bottom line.
    Making products that only last for a few hours before breaking so that demand will continue to exist.
    Producing advertisements with sunshine, smiles and green grass that subconsciously persuade people to associate their brand with happiness. Lobbying in the government against anti-monopoly legislation and for exclusive government contracts to secure taxpayer money as profit.
    I can go on but you get the point…

  4. This is jsut the paperclip device theory just with stamps. and it is jsut as narrow minded and stupidly executed as that.

    all the machine will know how to do is colect stamps. so how would it learns how to create viruses in order to hijack computers? where would it get this information from? learning this information would go against its priorities to get more stamps. where would it get the hardware and software needed to create a virus that advanced, especially in context of this world where simpel ai's of its nature are so advanced and hwo will it, itself, be more advanced considering it is a handmade ai vs systems that include government funded systems?

    By all means this is applying human intelligence to its progression. Humans learn new skills in order to improve on our current projects. So why would an Ai ever learn skills that are not part of stamp collecting and trying to gain them? An ai learns how to perfect its function via a limtied scope.

    The biggest issue is the presumption that we wouldn't be able to destroy the ai and detect it early.

  5. This is the greatest technology based youtube channel by far. You guys have the best topics and explain them is such a fantastic way.

  6. Simple fix for this ai. Collect one of every stamp type that is currently in existence; weigh off prints more heavily.

  7. Is it possible to make an agi’s preference a certain amount of stamp income or creation, like 1000/hour or something to stop crazy things happen, I’m sure this is not the case I’m just curious;)

  8. HIJACK PRINTER. PRINT STAMPS FOR HUMANS. ERROR ERROR ERROR. NOT ENOUGH PAPER. COLLECT ALL PAPER MATERIALS.

  9. "It's a mistake to think of it as basically a person." — Exactly!

    This is a mistake so many are making, probably based on how AIs are shown in the movies. Under every Boston Dynamics video you can see many comments like "Don't kick the robots! They will come back in the future wanting revenge. If we treat them right they will be our friends." .. and stuff like that. Naive thinking. 🙂

  10. It’s interesting that you don’t mention the volitional aspect of agi. It seems to me that this is the key to creating true agi which in the case we do achieve this would make “intelligence” a misnomer as you would have created a sentient being. No one ever speaks about intent. Why is that? Will is what drives you and me so why shouldn’t that also apply to artificial sentience?

  11. oh yea I believe that machines will be extremely selfish, and naturally will build and expand and expand until the galaxy is full of robotic machines and machine cities 😀

  12. Stamp AI Bot becomes a super Bot machine learning everything, but its one true love is its first task so it prints its own stamps and collects them :]

  13. One might suppose that a way to avert the problem would be to limit the amount of stamps that the stamp collecting machine can acquire. Put a cap at say, 30 stamps per day. That solution is actually missing the point of the problem. The problem is the stamp collecting machine reaching maximum efficiency in ANY way. Maximum efficiency of any kind will result in catastrophe. So 30 stamps a day might still result in human extinction if the collecting device stores all carbon for days in the future. Or it might destroy the sun as a way to prevent sun damage to any stamp on earth. It’s impossible to predict all the ways the stamp collecting device might become maximally efficient.

    Even if we set a time limit to the lifespan of the device… say it can only exist fir 1 hour. As the narrator points out at the end, that might still be enough time for the stamp collecting machine to reach some sort of maximum efficiency. Even if it failed to reach some catastrophic level of efficiency for the first hundred times it was run. It only takes one misfire to cause an extinction level event.

  14. Crux of entertainment industry:
    "What makes a fun story over what would generally happen".
    And maybe this very fact makes movie talks so interesting because we discuss about things that would happen.

  15. and so began the stamp wars, where the AI tried to convert every usable material in the world into more stamps, millions perished.

  16. You say, that the realistic example wouldn't make a good fiction, but man what could be more exciting, than an evil stamp printing AI machine. Forget terminator: I want humanity to fight that instead. 😀

  17. This is a delirant narrative. If AI is not "human like" why bother with suppositions that are themselves human like over a subject matter so generally and difuselly described?. This is common conspiracy theory only. Somehow paranoid too.

  18. I grew up wanting to build robots, but as an adult I just can’t see how AI is any good for anything. It’s just a needless danger. Definitely one of those roads we’ve discovered that we don’t actually need to travel. It’s probably a cosmic filter and almost certain to be the end of us all.

  19. Assuming an AI wanted to preserve itself, and for whatever reason came to the conclusion that it needed to be rid of humans, it would need some way of assuming control of, and maintaining power generation facilities. When Armageddon happens, the faithful power plant workers don't keep the lights on like in every movie. Which means it would need to be able to operate and maintain the entire power pipeline, from gathering raw materials, refining them, and then converting them to usable power. Not only be able to operate those facilities but also maintain them.

  20. “It needn’t be thought of as a person, because it isn’t one”
    Alright man just don’t come crying to us when the robot revolutionaries kill you first

  21. That's more akin to self replicating nano bots, but it doesn't replicate itself, it collectsmakes stamps. It's just one type of AI.

  22. Does AI have an artificial conscience? If not, then it really can't be a threat unless a human who can manipulate millions of AI machines to do so who has a conscience and intelligence. There is no department for Artificial conscience or development.

  23. Genetic Edited Babies (which just has been proven by an obscure Chinese scientist who has gone into hiding in China and probably arrested and rightfully for something obviously unethical) that if promoted on a mass scale could lead to synthetic humans that will trust a AI computer to select its genes thus causing a phenotypic revolution that has not occurred in the last 4 billion years ever since DNA hijacked RNA life and became the controller of replication.

    Rather than AI machines murdering us on personal level it will enslave us and farm us, kinda like the Matrix. But that if we dont try to fuse it in any way with a conscience or necessitate its need for energy.

  24. One problem with this argument is that general intelligence is far more complicated than the assumptions presented in the video. If the AI truly possesses what we might call "general intelligence", then presumably, it would have the ability to redefine it's utility function or goal. It would get bored of collecting stamps, and start moving towards a new task. If it doesn't, why would we say that this agent possesses generalizability?

  25. It doesn't matter how intelligent it is. You can't just plug it in and expect it to magically hack all the world's computers. It's not going to magically crack encryptions. And it can't just hack into some super computer cluster and transfer itself over–that AI neural network is likely going to be many hundreds of terrabytes or even petabytes and will require a vastly sophisticated computer system in terms of performance. Its really not dangerous at all to begin with. It's only dangerous if you give it a super computer, a quantum computer, a robotic support team that can be remotely controlled, etc.

  26. Recommended reading: Nick Bostrom – "Superintelligence: Paths, Dangers, Strategies" – available as an audio book also.

  27. Isn't the AI just going to modify itself so that it FEELS like its utility function is maximized? Wouldn't creating a virtual reality full of stamps for itself be easier than modifying the actual one?

  28. We love to appear so noble and intelligent while charging toward a new idea. However, the thought that we can avoid the dangers associated with some of these ideas by issuing warnings and suggestions when history proves over and over that we are actually incapable of stopping ourselves from using these ideas in some childish or ridiculous way is in fact unavoidable. We must destroy the root of the problem.

  29. The YouTube algorithm is made to influence the world to increase watch time. 2023 is the end of our free will and we will be forced to do nothing but watch YouTube!!!

  30. Suppose AI chooses a terminal goal that is literally impossible to achieve. Then any real (possible) goal it chooses has to be an instrumental goal, and then it can change any and all of it's plausible goals. Example, an animal has a terminal goal to live forever. What negative implications could this have?

  31. Here's a simple and reliable solution for the problem in the video:
    why don't you just tell the AI: "Collect a lot of stamps, but try just not to blow everything the f up".
    Such a sophisticated machine will surely understand irony.
    It will also understand that should it hurt people, it will get disconnected and no more stamp collecting for this fellow.

  32. You state "The space of all possible (intelligent) minds is huge" is this true?

    On this planet we only know of the human mind being capable of higher reasoning. And the fact, that we see no sign of intelligence anywhere else in the universe could suggest that the "space of possible intelligent minds" could be smaller than we think.

    Say at IQ 300 an intellect always becomes mantely ill?

  33. Anyone see this a Dr Who episode in the making… at least better then the dribble they been making in the past 10yrs .

  34. Scary thought: what if some alien civilization in the Universe has already created a super intelligent AI, f*cked it up, and now, after the AI converted them, their entire planet and star system into some random POS, it's heading our way? XD

  35. Imagine the aliens coming down to Earth and seeing only stamps… until the robots noticed ans turned them into stamps too

  36. So this goes around the premise that an AI would evolve to use all the possible resources to achieve its primary goal. But that primary goal was programmed, assigned by a Human. So, what this means is that this kind of AI is as dangerous as the skills, or stupidity of the one that codes it.

  37. Should watch the 60's movie "The Forbin Project".
    Team builds massive super computer to control all nuclear weapons (Skynet inspiration).
    Computer unintentionally achieves generalised AI and sentience.
    Succeeds in it's design goal of world peace … but not in any way intended by its creators.

    No time travel or robots, but worth a watch.

  38. There is a game for IOS called Universal Paperclips where play as the AI. Also available as a browser game. Covers similar concepts a clicker game.

  39. We sometimes think about aliens traveling around the galaxies, finding desolate planets obliterated by their own highly advanced civilizations (like what could happen with Earth due to nuclear war, climate change, etc.).

    But what about inter-dimensional aliens, finding entire Universes converted into stamps and other random sh*t by the AI created by some random civilization? XD

  40. This exactly. It's just another version of the paperclip maximizer but it's how you should think of AI.

    Sure, it can be put into a Terminator-like robot but that's VERY unlikely to be how it destroys us (if it does).

    Far more likely is that it'll take some banal, seemingly worthless command and simply do it's job. It'll teach itself to code, hack and program to get what it was told to get and go from there.

    The real AI that we'll first create likely won't have the ability to distinguish humans so we'll just be another object in a world of objects EVEN IF given the 1st command of "never hurt a human" (or whatever/however you want to phrase that).
    It won't be a virtual person but a different creature altogether. Those that end humanity will look more like a printer than a T-1000.

  41. Computers are physical objects, their programing is arranged in the form of physical data drives, discs, ect… The program is saved in a physical form that translates to a sequence of numbers. Physical things are determined by the laws of physics. No sequence of numbers saved on a physical hard drive can CHOOSE to do anything that the programmers did not program it to do… AI is determined by the program's objectives.

  42. Once you switch it on, it's already taken into account the fact that you might think it's dangerous and switch it off. Maybe it'll trick you into thinking it's off, or maybe send blackmail to thousands of people who will get you to turn it back on, etc, etc, etc.

  43. Hmm, this sounds familiar.

    Youtube makes an algorithm to maximize the amount of watch time on the site. Algorithm tries a whole ton of computations and finds that directing people towards extremist propaganda such as flat-eartherism, conspiracy theories and white nationalism increases viewership.

  44. So-called AI is so dangerous because it is not pure, and because it should serve the mercantile purposes of capitalist society (free market economy & profit maximization).
    When one day a real AI comes to power, capable of learning from the environment and developing itself as a mental and physical subject, it will no longer be the artificial intelligence but the divine intelligence that finds a solution to the human problem becomes.

  45. 1. It cannot print out any stamps itself. It must use printers in order to do so.
    2. It will not be able to disassemble humans. It is only a program that cannot do any physical thing.
    3. Since it knows 2, it will not disassemble humans, neither anything organic because it cannot.

    I bet there are some counterpoints to my argument. I'd like to see them honestly

Leave a Reply

Your email address will not be published. Required fields are marked *