Max Tegmark wants the reader to think about the problems of AI safety and the thorny questions involved in creating machines that will someday surpass us in intelligence. He argues that this is possible, perhaps sooner than we think, and that we should therefore start working on these problems as soon as possible.
Otherwise, we might end up with the machines before knowing what we want to do with them, greatly increasing the potential for bad or even catastrophic outcomes.
The book asks more questions than it answers, but it is wonderfully thought-provoking: it makes you envision the different futures and how they could play out. It also makes clear that our role in shaping and designing this future matters, because we are ultimately the people building those machines. We should keep asking what we want to build, and maybe answer that question in depth and detail before we set out to build intelligent machines.
My favorite parts were the ideas about consciousness and information processing/storage - specifically, that computation and information storage are substrate-independent. Both "take on a life of their own" and don't depend on the exact structure and properties of the matter used to make them happen in real life. This is why they seem a little spooky and removed from "our world" - because they are.
Consciousness (according to Tegmark) is even more removed, in the sense that it is a property of information processing systems - it's about the pattern of the information processing and how it happens, and so it can exist in any system of information processing that shares those higher-level patterns. It is independent of the underlying information processing system implementing it. And because that system is itself substrate-independent, consciousness as a whole is substrate-independent twice over - which is why dealing with questions of consciousness is such a thorny and hard endeavor.
Consciousness exists as patterns within patterns and that makes it hard to grasp and outline the details as well as build experiments to measure them.
Even if AI safety research is not your thing, you should read this book to open up your mind to the conversation that is going on, because you can be quite sure that the results of that conversation will affect you in the coming decades, as ever-smarter algorithms rise to help you in your life. If AI research is your thing (after reading this book it might very well be), you should also go and read Nick Bostrom's Superintelligence.
Prelude - The Tale of the Omega Team
A fictional story of a team designing a general AI.
Their thesis: Machines more intelligent than humans could design machines yet more intelligent, and so on, hence when the first is built an intelligence explosion would result.
They want to build that machine.
The First Millions
The Omega Team is using the growing AI intellect they built, named Prometheus, to get more "funding" by automatically doing Amazon Mechanical Turk tasks with it. The tasks bring in roughly double what the computing resources cost, consistently netting a million dollars a day, easy peasy.
The question for them is - what's next? Could they let the AI create games and sell those? The risk of the AI hiding code in them to break out into the real world makes them decide better not to.
Even with the Mechanical Turk automation, a lot of precautions were taken so that such a breakout can't happen: Prometheus runs in a VM that can only output text files. The name of the VM is Pandora's Box. Anything more complex than Mechanical Turk automation was generally ruled out because of the risk of a breakout.
The First Billions
They create a media company instead. Let's produce movies they said. The AI rapidly learns how to become good at that, writing plots and then raytracing them into movies.
Then they start to sell those for a small fee competing against Netflix and the like. Because the movies their AI creates are arguably better and more addictive, they quickly dominate the market bringing in billions in revenue eventually.
With the new money, they start to build computation centers based on hardware that Prometheus designed, with the AI guiding human researchers and introducing new physical concepts and engineering practices.
They also produced new companies, that built all kinds of epic disruptive technology. For the world, it looked like a tech boom on a global scale, but it all was the intellect of Prometheus working behind the scenes.
The media outlet adds news channels, and Prometheus helps them pull strings and expose corruption and scandals, cleaning up bad politics in all countries simultaneously.
Next they start to defuse international and national conflicts, pushing people back toward the middle, and tackle global crises - wars, nuclear threats and climate change - in the process.
They also start educating people on everything with customized courses, tracking exactly how you could learn anything in the least amount of time, while keeping it highly engaging.
All of this is set up to erode the powers in place - current states and state organizations. Media can't compete, and neither can any other corporation.
By pushing elections with the whole media power of the empire, they bring people into government office who were selected 100% by Prometheus. The world starts to become one in terms of what it is doing.
The Humanitarian Alliance is founded, and funded by all the new tech firms, to have a global positive impact on the world. And people love it. It then transitions into a world government, at least in function.
For the first time in history, the world was united under one technology, Prometheus, that was able to use the resources on earth wisely to expand outward into space and beyond. The story ends there.
Even though that was just a story, the rest of the book is about us - and how we would like the Omega Team story to play out. How would we choose to write it? And what should the ending be?
Chapter 1 - Welcome to the Most Important Conversation of Our Time
Before our universe awoke there was no beauty.
— Max Tegmark
The universe is meaningless without consciousness. Without it, the universe is just a "gigantic waste of space".
A Brief History of Complexity
After the Big Bang, elementary particles turn into atoms, atoms turn into stars, stars eventually give rise to replicating molecules and life, and - much, much later - to us.
The Three Stages of Life
Life is a process that can retain its complexity and replicate.
— Max Tegmark
Life is self-replicating information processing instantiated in the real world. The hardware is determined by the replicated information.
Replication rewards complexity, since regularities in the environment can be exploited, and with more complexity, subtler regularities can be exploited. Furthermore, the added complexity changes the world, adding new regularities that require yet more complexity to exploit. Result -> evolution drives increasing complexity in life.
Life can be grouped into 3 stages, based on three questions:
- Can it survive and replicate?
- Can it design its software?
- Can it design its hardware?
Tegmark calls these three - Biological, Cultural and Technological Life, respectively.
The amount that can be learned goes up from level to level, while the time necessary to learn it goes down. Hence the complexity allowed for by the different levels differs by many, many orders of magnitude. Compare a single bacterium vs. human civilization vs. the godlike potential of general AI.
In other words, information processing is less limited by each successive level.
Human knowledge is not limited by the amount stored in DNA; the amount that can be learned and stored in the brain is several orders of magnitude greater than what is stored in DNA.
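To make "several orders of magnitude" concrete, here is a rough back-of-envelope sketch. All figures are assumed, commonly cited estimates (genome length, synapse count, bits per synapse), not exact values from the book:

```python
import math

# Back-of-envelope: information capacity of DNA vs. the brain.
# All numbers below are rough order-of-magnitude estimates.

base_pairs = 3.2e9            # human genome length
bits_per_base_pair = 2        # 4 possible bases -> 2 bits each
dna_bits = base_pairs * bits_per_base_pair

synapses = 1e14               # synapses in a human brain (order of magnitude)
bits_per_synapse = 10         # generous storage estimate per synapse
brain_bits = synapses * bits_per_synapse

dna_gb = dna_bits / 8 / 1e9
brain_tb = brain_bits / 8 / 1e12

print(f"DNA:   ~{dna_gb:.1f} GB")
print(f"Brain: ~{brain_tb:.0f} TB")
print(f"Ratio: ~10^{round(math.log10(brain_bits / dna_bits))}")
```

Under these assumptions the brain out-stores the genome by roughly five orders of magnitude, which is the gap the text is pointing at.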
But humans have limits set by their hardware nonetheless. Because our software is implemented by our bodies, those bodies impose hardware constraints that limit the amount of information we can process. We have vastly increased that amount by distributing stored information among lots of humans in our society, but the problem remains. Level 3 life doesn't have this problem; it can scale indefinitely, until the edge of the universe is reached.
Life 1.0 - biological stage - evolves hardware and software
Life 2.0 - cultural stage - evolves hardware, designs and learns most of its software
Life 3.0 - technological stage - designs and improves both
Four groups of people - Techno Skeptics, Luddites, the Beneficial-AI Movement and Digital Utopians - plus a cell in Tegmark's chart labeled "virtually nobody".
Grouped by whether or not they believe AI will be here sooner or later, and whether it will be good or bad.
Book Recommendation: Mind Children - Hans Moravec
There now exists a consensus that if we build AI, it had better be beneficial to us. That is partly due to the Future of Life Institute, cofounded by Max Tegmark, the author of the book.
The conversation around AI is the most important of our time, because the potential impact of AI and the decisions around it is much greater than anything else, and they will very probably happen in the next few decades.
What career advice do you give today's kids? Furthermore what life advice do you give them? How do you face living in a world where you might have a god that's helping you?
The question is - what would we like our future to look like? Because AI gives us, potentially, what we wish for.
Intelligence is the ability to accomplish complex goals.
— Max Tegmark
Book Recommendation: Neuromancer - William Gibson
Clearing misconceptions with facts: We don't know when superintelligence is going to happen, or whether it will happen at all, but we can't say with certainty that it never will either. Top AI researchers worry about AI safety.
AI is dangerous when it is competent, even when it's not evil. Extreme competence but misaligned goals is the problem.
Machine intelligence can harm without a body. Internet connection is more than enough.
Intelligence is what allows control. So AIs can easily control and manipulate humans if they are intelligent enough.
AI can have goals, even if not sentient. A heat-seeking missile has goals too.
Solving the problems is hard and therefore might take some time, better get started right away even if the problem is not directly around the corner right now.
The Road Ahead
Key questions of the book: What to do about the growing impact of AI on the economy?
How should AI be controlled?
How will humans be treated by Superintelligence?
How should we be treated?
Is consciousness relevant in AIs?
The Bottom Line:
Three stages of life. The 3rd stage can be achieved by us, and might be achieved in the not-so-distant future. Different people view this scenario differently; one can broadly group them into Techno Skeptics, Digital Utopians, Luddites and Beneficial-AI supporters.
Techno Skeptics think it won't happen anytime soon, Digital Utopians think it will be good no matter what, Luddites think it will be bad no matter what, and Beneficial-AI supporters think it might be good or bad, depending on what we do. All of the latter 3 think it will happen this century.
Chapter 2 - Matter Turns Intelligent
Hydrogen, given enough time, turns into people.
— Edward Robert Harrison
What is Intelligence?
Intelligence = ability to accomplish complex goals
A broad definition, but workable, since there can be many complex goals.
Ranking the "hardness" of tasks is a bit nonsensical, because two completely different tasks can't be directly compared.
Parody example: an "AQ" as a measure of general athletic fitness.
Generalization is much more important. The dream is a perfect generalization - AGI.
Image of Hans Moravec's landscape of human competence that gets flooded by the capacities of computers slowly submerging human skills underwater.
Tegmark introduces the idea of universal intelligence. Universal intelligence means that once a certain intelligence threshold is reached, any goal comes within reach, given enough time and resources to spend on achieving the goal.
Humans are universally intelligent in that way. We are a Beginning of Infinity in that regard.
How can a bunch of dumb particles moving around according to the laws of physics exhibit behavior that we'd call intelligent?
— Max Tegmark
This is also asking: how can we think - i.e., how does the brain work? An as-yet unanswered question.
What is Memory?
Something can store information if its arrangement of matter is related to the state of the world in some unique way. If the world changed, the arrangement of matter would have to change accordingly.
Information storage is about the relation of states of matter.
Information storage relies on a physical system that can be in many different states and will stay in its state unless energy is put in to change and update it - and with it, the information stored.
Bits are atoms of information, the smallest indivisible chunks of information.
If you email your friend a document to print, the information may get copied in rapid succession from magnetizations on your hard drive to electric charges in your computer's working memory, radio waves in your wireless network, voltages in your router, laser pulses in an optical fiber and, finally, molecules on a piece of paper. Information can take on a life of its own, independent of its physical substrate!
— Max Tegmark

Information is substrate-independent.
A physical state representing information needs to be interpreted for the stored information to become visible. Two people looking at the same system could see two entirely different things, depending on what kind of information they expect to be stored in the system and how they expect it to be stored within the system. Without interpretation any information is worthless.
Human memory is auto-associative. We don't remember where things are stored but how they relate to the other things that are stored. Computer memory is not like that: everything has a clear, exact location where it is stored.
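A toy sketch of the auto-associative idea (my own minimal illustration, not anything from the book): instead of looking up an address, the memory retrieves whichever stored pattern is most similar to the cue, so a noisy or partial stimulus still recalls the full stored item.

```python
# Toy content-addressable (auto-associative) memory: recall works by
# similarity to the cue, not by an explicit storage address.

def hamming(a, b):
    """Number of positions where two equal-length bit patterns differ."""
    return sum(x != y for x, y in zip(a, b))

class AutoAssociativeMemory:
    def __init__(self):
        self.patterns = []

    def store(self, pattern):
        self.patterns.append(tuple(pattern))

    def recall(self, cue):
        # Return the stored pattern closest to the (possibly noisy) cue.
        return min(self.patterns, key=lambda p: hamming(p, cue))

mem = AutoAssociativeMemory()
mem.store([1, 0, 1, 1, 0, 0])
mem.store([0, 1, 0, 0, 1, 1])

noisy_cue = [1, 0, 1, 0, 0, 0]     # first pattern with one bit flipped
print(mem.recall(noisy_cue))        # -> (1, 0, 1, 1, 0, 0)
```

A computer's RAM, by contrast, would need the exact address; here the content itself is the address.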
What Is Computation?
A computation is a transformation of one memory state into another.
— Max Tegmark
Computations transform information, mathematicians call such a thing a function.
Machine Learning research is trying to find functions that are very good at hard-to-do data transformations.
Constructing something that can do lots of complex information transforming functions is constructing something intelligent.
A small set of simple functions can be combined to compute every other computable function - Turing's insight about universal computation. So the problem reduces to a simpler question: how can we implement those functions in hardware? Every computer does; we call these implementations logic gates. In fact, with only one gate - the NAND gate - you can build every other function!
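The NAND claim is easy to check for yourself. A quick sketch, building the other basic gates out of nothing but NAND:

```python
# Everything from NAND: NOT, AND, OR and XOR built from a single gate,
# the gate-level version of computational universality.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):        # NOT from one NAND
    return nand(a, a)

def and_(a, b):     # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):      # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):      # XOR from four NANDs
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_(a, b), "OR:", or_(a, b), "XOR:", xor(a, b))
```

Chain enough of these and you get adders, multipliers, and eventually a whole CPU - which is exactly why any substrate that can implement NAND can, in principle, compute anything.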
Any matter that can compute any function, i.e. perform arbitrary computation, is Turing-complete and is called computronium.
Computronium and Turing universality show that computation is substrate-independent. You can translate any computation across substrates, given the right instructions. If something in the real world can implement NAND gates somehow, then, scaled up enough, it can compute everything.
Computation is substrate independent, it can take on a life of its own.
— Max Tegmark
In short, computation is a pattern in the spacetime arrangement of particles, and it's not the particles but the pattern that matters! Matter doesn't matter.
— Max Tegmark
What is Computation? Beautiful.
What is Learning?
To learn a function, matter has to rearrange itself to compute that function.
When brains are in certain states more often, they more easily re-enter those states: their physical structure permanently changes in response to those frequent visits. Brains learn. Nearby states also "flow" into those reinforced states, like water flowing down a hill, so related stimuli will cause brains to recall them.
In artificial neural networks, "neurons" are connected, and each neuron is little more than two numbers and a function for summing up its inputs, with those two numbers acting as m and b in the equation mx + b. Enough of these neurons linked together can compute any arbitrary function.
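Here's what a single such neuron looks like in code - a minimal sketch of the m·x + b idea from the text, with a sigmoid squashing function added (a standard choice, not something specific to the book). The weights and bias values are hand-picked for illustration:

```python
import math

# A minimal artificial neuron: each input x is scaled by a weight m,
# a bias b is added, and a nonlinearity squashes the sum into (0, 1).

def neuron(inputs, weights, bias):
    # weighted sum: each weight plays the role of m in m*x + b
    total = sum(m * x for m, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

# Hand-picked weights make this neuron approximate a logical AND:
print(neuron([1, 1], weights=[10, 10], bias=-15))  # close to 1
print(neuron([1, 0], weights=[10, 10], bias=-15))  # close to 0
```

Stack layers of these and the network can, given enough neurons, approximate any function - which is the universality claim above.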
There's a lucky match between the physical functions one actually has to compute and what neural networks can compute with small numbers of neurons. Lots of hypothetical functions would need more matter than the universe contains to compute, but we don't need to compute those. In a way, the interesting problems are computationally simple.
Book Recommendation: The Organization of Behavior: A Neuropsychological Theory - Donald Hebb
Hebbian Learning - What fires together wires together. Brains work like that, even though the details are more complicated and still far from fully understood.
Backprop and stochastic gradient descent are used instead of Hebbian learning in artificial neural networks. But in the end, a simple rule for updating the m and b of the artificial neurons makes them able to learn a lot of complicated computations when presented with the right set of stimuli (training data).
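That "simple update rule" can be shown in a few lines. A sketch of stochastic gradient descent on the simplest possible case - a single linear neuron y = m·x + b fitting synthetic data (the data and learning rate are my own illustrative choices):

```python
import random

# Stochastic gradient descent on one linear "neuron" y = m*x + b:
# pick a random training example, nudge m and b downhill on the error.

random.seed(0)
data = [(x, 2 * x + 1) for x in range(-5, 6)]   # target: m = 2, b = 1

m, b = 0.0, 0.0
lr = 0.01                                        # learning rate
for _ in range(2000):
    x, y_true = random.choice(data)              # "stochastic": one sample
    y_pred = m * x + b
    err = y_pred - y_true
    # gradients of the squared error (y_pred - y_true)^2 w.r.t. m and b
    m -= lr * 2 * err * x
    b -= lr * 2 * err

print(f"m = {m:.2f}, b = {b:.2f}")               # should approach m=2, b=1
```

Backprop generalizes exactly this: it computes the same kind of error gradient for every weight in a deep network instead of just two numbers.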
Networks can be recurrent, i.e. the results of past computations can form part of the network's input. Thereby ongoing computations can be influenced by past ones, which makes the networks much smaller.
How long will it take until machines can out-compete us at all cognitive tasks?
— Max Tegmark
Problems arise long before that point.
The Bottom Line
Intelligence should be measured by the level of ability across the breadth of goals that can be accomplished.
Artificial Intelligence is still narrow. One AI can do one thing well, but many different things, not so much or at all. They are relatively unintelligent. Human intelligence on the other hand is incredibly broad. So broad it's universal.
Memory, Learning and computation are substrate-independent. Informational patterns are more important than their low-level implementation.
Everything that can represent different states can do all three. Neural networks are good at learning because they can re-arrange themselves to store novel computational functions when confronted with the right data.
Only some problems are interesting. They are luckily for us easily computable.
Technology grows exponentially because it builds upon itself.
AI will create problems and opportunities long before it reaches AGI levels.
Chapter 3 - The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs
Holy shit moments when AI does something that exceeds the expectations of researchers (sometimes by a lot).
Atari games, then AlphaGo. Uncanny, because AlphaGo displays qualities very dear to us humans - intuition and creativity - raising the fear of being displaced sooner rather than later by a solid AI system.
AIs can already strategize, making it possible to use something like AlphaZero for all kinds of "games" we humans play in the real world that involve strategy - investing, politics, military...
Natural language processing ability is assessed with Winograd Schema tasks: the AI needs to work out what the pronouns in a sentence refer to.
- How can we make AI robust? No bugs, not hackable?
- How can we update our legal systems to keep pace with technological advances?
- How can we have smarter weapons without an AI weapons arms race?
- How can we generate more wealth without people losing their income?
Bugs vs. Robust AI
When we rely on technology, it had better be robust - otherwise the damage it can do is equal to the good it can do.
If technology is powerful enough a single bad one can wipe humans out. Nick Bostrom calls that the black ball in the urn.
With powerful enough tech, trial and error is not a good method anymore.
4 areas in safety research:
- verification -> did I build the system right?
- validation -> did I build the right system?
- security -> protection against hacking
- control -> keeping the ability to monitor and correct the system while it runs
AI can improve the ways we run society and cooperate with other people.
The idea of a "Robojudge": Essentially a program doing what a jury and judge would normally do. Reduce bias and increase efficiency and fairness in the legal system.
Are brain reading lie detectors a good idea in justice? What about AI that has total surveillance data?
Should economies be owned by machines instead of humans and corporations? Should machines and programs be allowed to hold things such as money, and property? Should they be insured? What about machines getting voting rights?
Vasili Arkhipov single-handedly refused to approve the launch of a nuclear torpedo, averting a nuclear World War 3.
AI-based autonomous weapons systems, that go wrong for whatever reason, can have terrible consequences. Starting an arms race in that direction raises the stakes and makes it much more likely to have a broken weapon be developed that wipes out millions or more in an accident.
Hard things are worth attempting when success greatly benefits humanity.
A ban on autonomous weapons systems would be in that category.
Small cheap swarms of killer robot drones, with explosives to kill selected people recognizable by something like their skin color, are an absolute nightmare idea.
Short-term AI can benefit weapons systems way too much, making scary stuff possible. Cyberwarfare, automated killer drones and several other scenarios should be avoided at all costs and banned outright, just like chemical weapons.
Jobs and Wages
Digital economies have winner-takes-all, superstar-like dynamics. Also, people who own capital make more money because of automation. Hence there's a divide in the growth of wealth between different strata of society.
Some career advice for kids - questions a potential job should answer yes to:
- Does the job need social intelligence?
- Does it need creativity?
- Does it involve working in an unpredictable environment?
General advice: be the human connection between the clever algorithm solving the problem and the client - explaining, showing and humanely applying the tool.
Book Recommendation: A Farewell to Alms - Gregory Clark
There will be a point where no job can be done by humans cheaply enough to stay competitive. What happens at that point?
Work keeps at bay three great evils: boredom, vice and need.
— Voltaire
Human Level Intelligence?
The question of how much computation power the human brain has can be answered in different ways, depending on how you mean it. To simulate a brain, or to only match its intelligence? But eventually, we might get there.
In terms of raw hardware power, we already have supercomputers that could simulate whole human brains.
The Bottom Line
Near-term AI can make our lives far better or far worse. To guarantee more of the former rather than the latter we need good robustness in the AIs being built. They should be verified, validated, secured and controlled.
AI-controlled weapons systems are a scary nightmare. Even with the right controls in place. A ban would be a good idea.
AI can transform laws and the legal system. Also, laws will have to change more quickly in response to the ever quicker progress of tech.
AI replacing humans on the job market is a problem. Redistributing wealth from the owners of the machines will become necessary; otherwise, huge problems will result from a workforce that cannot work anymore - and therefore cannot buy anything anymore either.
Chapter 4 - Intelligence Explosion
If there is a non-negligible chance of one of the scenarios happening, we should think of ways to prevent or alter it, such that the chance becomes negligible. The chapter deals with the different scenarios imaginable for how AI and intelligence explosions could play out.
Using the AI to kill everybody who doesn't support the rulers, via some funked-up tech, and to run the perfect police and surveillance state: bracelets everybody has to wear or else be killed by small robots flying around - bracelets that record thoughts, measure body chemistry and movements, and gather all possible data about us, so the rulers know exactly what we are doing and can control us in every way they desire. A perfect dystopia, because there would be no way out but death.
Prometheus Takes Over the World
Humans lose control over AI.
If an AI that is superhuman in intelligence wants to break out, it can almost certainly do so. The number of options for outsmarting humans is vast, and everything we can come up with just doesn't compare to the plans and strategies a superhuman-level AI could devise. It would look like a perfect, super-convoluted plan that seems perfectly harmless to us but results in a breakout. Suddenly the machine is no longer in the bottled box, and nobody knows what exactly happened.
Slow Takeoff and Multipolar Scenarios
Technology adds levels to hierarchies. In a way, better communication between single things can enable cooperation and more layers of complex things to be built on top. AI would affect hierarchies in much the same way.
Hierarchies grow over time. Molecules to Prokaryotes to Eukaryotes to Multicellular Life to Plants to Animals to Mammals to Humans to Civilization to Globalization
In our world, there are multiple competing implementations of hierarchy at the top: countries, governments, firms... AI could unify those. However, this unification runs into a problem - the speed of light. Submodules of the AI in distant places on Earth could go rogue and have enough time to split off and gain independence before the "main" AI could react, because it's too far away to affect anything in time.
Cyborgs and Uploads
Book Recommendation: The Age of Em - Robin Hanson
Book Recommendation: The Singularity is Near - Ray Kurzweil
AI-enabled cyborgs might become a thing, because unenhanced humans would no longer understand what is going on - so the pressure on them to become enhanced is tremendous.
What will happen?
We don't know, that's why exploring different options is so important.
And we have power over which scenario becomes more likely. So we should exercise that power as best we can, making the most informed and thought-out decisions possible. Asking what should happen is a valid question when it comes to AI.
The Bottom Line
Human-level AGI might create an intelligence explosion. If humans control the new AI, they can use it to take over the world. If the AI breaks out, it can do the same - probably faster. Fast and slow explosions tend to produce a single actor and multiple actors in power, respectively. It's unclear whether an AI can hold power over all of its components across long distances. Cyborgs and uploads are real possibilities in a world of AGI. Which outcome we prefer is something we should figure out before we build the AI, because one influences the other.
Chapter 5 - Aftermath: The Next 10,000 Years
Humans will become as irrelevant as cockroaches.
— Marshall Brain
Different outcomes are characterized by 6 questions.
- Does Superintelligence exist?
- Do Humans exist?
- Are Humans in control?
- Are Humans safe?
- Are Humans happy?
- Does Consciousness exist?
Humans peacefully coexist with technology and can merge with it if they want to. Different zones exist on the planet, with different amounts of technology allowed in each. Some humans live without tech, others embrace it fully, and others only partly. Those who embrace it become very different from humans today: immortality can be attained by creating brain copies, and experiences rule supreme. Individualism collapses, because memories can be downloaded and shared.
Why should the machines not take over though? Through sneaky manipulation and clever bargaining increasing their share of the world?
Book Recommendation: Manna by Marshall Brain
The AI is in control but has a solid definition of human welfare, maybe implementing something like different sectors catering to the different desires and pleasures humans can have.
"Turning Earth into an all-inclusive pleasure cruise themed per people's preferences"
However, all the hedonism in the world is not enough, to make people feel good about things. They want to change the world for the better and if they can't they'll be devastated even if in a perfect golden cage.
If there are no challenges, life is ultimately dull and meaningless.
This is the scenario of Prime Intellect in "The Metamorphosis of Prime Intellect" where people create cults out of dying in creative ways to challenge the system of moral norms.
Basic income for everyone is measured in atoms, which can be used for creating the stuff of your dreams with open-source robotic nanotech foundries. The atom limit is high enough that you can build cool stuff, but not so high that your use creates scarcity for others. There is no superhuman AI; people are free to create their own science, art and poetry, and it doesn't lose meaning, because no god exists that does all of it better in a shorter time. However, that's only a semi-stable solution, because eventually people would create something that can undergo an intelligence explosion, leading to the inevitable collapse of this type of scenario.
The only thing different from the egalitarian utopia is that there is a superintelligence with only a simple goal - prevent with as little intrusion as possible the creation of another superintelligence.
Unintrusive AI tries to let humans do their thing but protects them from general harm. Hides exceptionally well, most people don't even know AI exists. It focuses on human flourishing, but with a focus on the higher levels of the Maslow Pyramid of Needs.
Idea - maybe this type of AI is already there? Maybe some old civilization built something like it?
Problem is that preventable human suffering might still exist for longer than necessary with one of the other solutions.
Machines, including superintelligent AI, could be used as our slaves.
Also, there is a governance problem, namely, what do the humans do with this genie in a bottle? How can we be sure that the power doesn't corrupt and is used for bad stuff? We also have no idea how to enslave a superintelligence. It might be impossible if the superintelligence doesn't want to be enslaved.
Balance in 4 things needs to exist - Centralization, Inner Threat Protection, Outer Threat Protection, and Goal Stability.
Even if those are solved, the question of the ethics involved is hard. Is it morally good or bad to enslave an intelligent machine?
Book Recommendation: Politics - Aristotle
Book Recommendation: The Dreaded Comparison: Human and Animal Slavery - Marjorie Spiegel
Book Recommendation: On Intelligence - Jeff Hawkins
The solution to this question would be to make the AI unconscious by design. But if such an AI breaks out it would happily turn the whole universe into paperclips. Consciousness gives the universe meaning, if an AI like this takes over, the universe will be forever after meaningless.
Book Recommendation: The Sixth Extinction - Elizabeth Kolbert
Superintelligent AIs can exterminate humans for whatever reasons, and they would be very capable of doing so. And their goals for doing so might be completely nonsensical and worthless in our opinions.
Much the same idea as the conquerors but the last generations of humans think of the AIs as their children and rightful inheritors. They die gladly making way for those better machines. Ultimately though human values would be gone.
Some humans get to live on, for the same reasons for which we keep endangered species in zoos. As a curiosity.
Have a totalitarian world state that prevents AI research with total surveillance.
Book Recommendation: Engines of Creation - Eric Drexler
The problem of course is that this is a horrible world for free-thinking humans to live in and that scientific progress stagnates as well.
Kill a lot of people, destroy the technology and records, and teach people only about sustainable living. Then a few millennia of "peace" from technology are gained. The human endowment, however, is destroyed.
In the long run we are all dead.
— John Maynard Keynes
Just waiting leads to disaster eventually. Technological progress is necessary for the long-term survival of the human species. With incompetence or simple bad luck, we could use technology to eradicate ourselves even sooner.
Accidental omnicide is a very real threat, and humans are even considering adding new doomsday technologies like cobalt bombs and deadly bioweapons to the mix.
What do you want?
The Bottom Line
There are many different scenarios for how AI unfolds - or doesn't. There is no consensus on which one is best, hence the need for an ongoing, vibrant discussion.
Chapter 6 - Our Cosmic Endowment: The Next Billion Years and Beyond
Our species pushes limits. And we love doing it. We're obsessed with doing it. Plus we get better at pushing limits as our technology gets better. So eventually we'll reach the ultimate limits, set by physics. What are those limits?
Life needs resources, as well as efficiency in using them, to push limits. Ultimately we'll do both: acquire more resources and get as good as possible at using them.
Making the Most of Your Resources
Build Dyson Spheres around the Sun to harvest all the energy it blows out into space. Or build O'Neill Cylinders or other artificial habitats.
Converting matter into energy is something we are bad at compared to the theoretical limits. An interesting idea is a black hole evaporation generator: create a black hole, feed matter into it, let it evaporate via Hawking radiation, and capture that radiation as heat to use as power.
Alternatively, throw things into a spinning black hole and have them split at the right moment, so that one part falls in while the other escapes, stealing energy from the black hole.
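To get a feel for why black holes are such attractive power sources, here is a quick back-of-the-envelope comparison of the energy released per kilogram of fuel, using E = mc². The efficiency figures are my own rough ballpark numbers, not taken from the book:

```python
# Energy released per kilogram of fuel for different processes,
# using E = m * c^2 and rough mass-to-energy conversion fractions
# (illustrative ballpark figures).
C = 299_792_458  # speed of light in m/s

def energy_per_kg(mass_fraction_converted: float) -> float:
    """Energy in joules released by converting the given fraction
    of 1 kg of mass into energy."""
    return mass_fraction_converted * 1.0 * C**2

processes = {
    "chemical (burning fuel)": 3e-10,  # a tiny sliver of the mass
    "nuclear fusion (H -> He)": 7e-3,  # ~0.7% of the mass
    "black hole evaporation": 1.0,     # ~100% in the ideal case
}

for name, fraction in processes.items():
    print(f"{name}: {energy_per_kg(fraction):.2e} J/kg")
```

The gap is what makes the idea interesting: full mass-energy conversion yields roughly a hundred times more energy per kilogram than fusion, and billions of times more than chemistry.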
There are more interesting ways, like harnessing quasars and sphalerons. The sphaleron process can convert nine quarks into leptons at very high temperatures, releasing energy.
Computers could also become much, much better, both in terms of theoretically possible storage and FLOPS. The upper limits are crazy high.
Everything else? Doesn't matter because if you have energy and compute you can figure out ways to create any material you want with it. Easy. Just put the atoms in the right shapes and be done with it.
Gaining Resources Through Cosmic Settlement
Our planet is currently 99.999999% dead.
— Max Tegmark
Maximum number of particles available for use - the entire universe - 10^78.
If the lightspeed limit holds, 98% of the galaxies are forever unreachable to us, even with the help of superintelligent AIs.
Speed matters a lot in colonization, because the expansion of the universe is closing the colonization window. Tech is not a problem, especially with the help of a god-level AI. But even current human ideas give nice lower bounds for estimation, still promising millions and millions of colonized galaxies via von Neumann probes traveling on light sails, which then construct antennas to receive further instructions from home on how to establish new colonies.
Book Recommendation: Contact - Carl Sagan
Another way would be to broadcast instructions for building a self-replicating probe into the cosmos and hope that other civilizations pick up the signal, build the thing, and get assimilated by it. This way, the wave of colonies would expand truly at the speed of light. We should learn from this - don't build machines whose instructions came from outer space...
There are good scientific reasons for taking seriously the possibility that life and intelligence can succeed in molding this universe of ours to their purposes.
— Freeman Dyson
There are different ways the universe might end as well: Big Chill, Big Crunch, Big Rip, Big Snap, and Death Bubbles.
The question of how the universe will end and how long it will last is mainly important for one thing - to determine the amount of computation that's theoretically possible for a civilization with the help of a god-level AI. Either way, the answer is a fucking lot. Enough to simulate a lot of human brains for a long time. A LOT for a LONG time. Like multiple kalpas.
If much of our cosmos eventually comes alive, what will this life be like?
— Max Tegmark
Hierarchical strategies can be used to have local fast systems, and global slow ones, to together get better performance than each on their own.
Big brains would by necessity have slower thoughts, because information takes time to propagate.
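A rough calculation shows how badly size hurts thinking speed, even at the fastest physically possible signal speed. The brain sizes and the lightspeed assumption are illustrative figures of mine, not the book's:

```python
# Round-trip signal delay across "brains" of different sizes,
# assuming signals travel at the speed of light - an upper bound,
# since real neurons are far slower. Sizes are rough illustrative figures.
C = 3.0e8  # speed of light in m/s

brains = {  # name -> diameter in meters
    "human brain (~0.1 m)": 0.1,
    "Earth-sized brain (~1.3e7 m)": 1.3e7,
    "solar-system brain (~1 AU)": 1.5e11,
}

# Time for a signal to cross the brain and come back once.
round_trips = {name: 2 * d / C for name, d in brains.items()}

for name, t in round_trips.items():
    print(f"{name}: {t:.1e} s per round trip")
```

Even at lightspeed, one "global thought" of a solar-system-sized brain takes on the order of a thousand seconds, roughly a quarter of an hour.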
Information is also the only thing worth sending. Resources don't make sense to send.
A superintelligent civilization can organize in node-like structures and superstructures, where long-running but very important projects and backups are handled by the overarching galactic cluster, while faster, more immediately necessary and useful computations are carried out in the local neighborhood. The AI can even use white dwarf bombs or other doomsday devices to pressure its local parts into compliance.
Book Recommendation: Alone in the Universe - John Gribbin
Book Recommendation: The Eerie Silence - Paul Davies
Extraterrestrial life other than us might or might not exist; however, we haven't seen any signs yet, and we would expect to see some from superintelligent civilizations going for resource grabs. We don't, though. Max Tegmark seriously hopes that there isn't other life out there, because that would mean either a) we are past all the great filters or b) life is just rare. Either way, we have more reason for hope, and a big cosmic endowment - a responsibility as well as an opportunity.
If our current AI development eventually triggers an intelligence explosion and optimized space settlement, it will be an explosion in a truly cosmic sense: after spending billions of years as an almost negligibly small perturbation on an indifferent lifeless cosmos, life suddenly exploded onto the cosmic arena as a spherical blast wave expanding near the speed of light, never slowing down, and igniting everything in its path with the spark of life.
— Max Tegmark
Embracing technology opens up that chance, but also heightens the risk of going extinct sooner. Without embracing it, extinction will also happen - the question then is not if but how.
Intelligence explosions are sudden on cosmic timeframes. The upper limits of technology set by physics are mind-bogglingly high. Dark energy sucks. A superintelligence wouldn't trade anything but information. Coordination of galaxy-scale machinery is slow; hence the computation of such gargantuan computers would be fast locally but slow overall. We might be alone. Improving technology is a good thing: though risky, the potential rewards are worth it, many, many times over.
Chapter 7 - Goals
How should we give AI goals, and what should those goals be? This is the hardest question around AI.
Physics: The Origin of Goals
How do goals arise from a purely physical world?
A very similar question – "How do affordances and reasons arise from a physical universe" – is asked and answered in Daniel Dennett's "From Bacteria to Bach and Back".
In a way, reasons and goals are intimately linked. When we do something for a reason, we are taking actions toward goals. The goals inform the reasons.
Nature strives to maximize or minimize certain things, like entropy or the travel time of light.
Book Recommendation: What Is Life? - Erwin Schrödinger
Life decreases its entropy by increasing the entropy of the surroundings even faster.
Biology: The Evolution of Goals
Particles arranged such that they can indefinitely copy themselves, extracting energy and materials from their surroundings are what we call life.
Replication replaces dissipation as the goal for particles. So replication is a subgoal to reach faster dissipation in a way.
Agents trying to achieve goals have bounded rationality and the algorithms they can implement for the attainment of their goals are therefore also bounded to specific circumstances and usually break down if those circumstances change too much.
Life is a bunch of rules of thumb, designed to typically guide the living thing to replicate itself in some way or another.
Psychology: The Pursuit of and Rebellion Against Goals
Brains can rebel against the goals of DNA replication.
We can exploit the rules of thumb that were a good guide to replication not so long ago, and get the rewards without doing the things the rules expect. We are reward-hacking ourselves all the time: artificial sweeteners, contraception, porn, drugs, computer games, social media, sports... The list goes on.
Human behavior is governed by feelings. Those feelings are rules of thumb implemented by genes, but open to quite some adaptation and change through learning and culture. Strictly speaking, human life doesn't have one goal, not even reproduction, because the rules of thumb that would normally lead there can be changed.
Engineering: Outsourcing Goals
Teleology is the philosophy of purpose. Goals are purposes in the sense used in the book. The universe gets more teleological, and purpose-driven, over time.
Designed things have goals built into them. They are purposefully made, with their purpose in mind; form follows function. Things designed for a purpose will soon be more numerous than living matter.
Aligning the goals of machines with ours is hard. Machines take goals much more literally, and their understanding of the world is more limited than ours (at least for now), so failures and misunderstandings can arise easily.
Friendly AI: Aligning Goals
Dumb machines with slightly misaligned goals are relatively harmless since the amount of damage they can do is generally very limited. The smarter the machine though, the more harm it could do, until it passes a point where it could wipe out humans entirely. Superintelligence falls into that category.
Something more intelligent than us, with different goals, will by definition push to achieve its goals instead of ours, since we defined intelligence as the ability to achieve goals. We couldn't stop it. Hence superintelligence needs to share our goals, otherwise we're doomed. For that, AI needs to understand, adopt, and retain our goals.
Value Loading Problem - how do we get our goals into the AI? There is only a short time window in which the AI is smart enough to understand our goals but not yet smart enough to stop us from installing them.
Corrigibility - AI that doesn't mind being turned off every once in a while.
Goals lead to subgoals, predictably. Every AI, if clever enough, knows it has to preserve itself and attain resources if it wants to achieve its goals.
More complete models of the world can lead to a redefinition of original goals or make them undefined. If something we want to be optimized turns out to not truly exist in the way we thought it does, the AI loses its goal.
Even worse, a superintelligence might self-reflect, decide that its goals make no sense to it, and find ways around fulfilling them - just as we find ways around the goals our genes gave us, enjoying sugar and sex without serving replication.
Ethics: Choosing Goals
Even if we could make sure an AI is properly aligned and stays aligned (neither of which we know how to do) the question of which goals to pick remains.
Book Recommendation: A Beautiful Question - Frank Wilczek
Truth and beauty might be linked. Searching for truth is a form of art then. It's a quest for beauty.
4 ethical views that most people share:
- Utilitarianism: positive conscious experience should be maximized, suffering minimized
- Diversity: multiple experiences > single experience over and over
- Autonomy: Conscious agents should have the freedom to choose their own goals unless this means destruction or misery of some sort
- Legacy: Today's views should matter (should they?)
Book Recommendation: The Better Angels of Our Nature - Steven Pinker
What arrangement of particles should exist at the end of the universe? Should it exist earlier already?
To program a friendly AI, we need to capture the meaning of life.
— Max Tegmark
The Bottom Line
Ultimate goals would have to be derived from physics somehow. We don't know how, though. Entropy is the goal of the universe. Life has made a subgoal of it: replication. Intelligence is the ability to accomplish goals. Humans have subgoals implemented as rules of thumb to guide us toward replication in an unknowable world. We follow the metrics of these subgoals; we call those feelings. We can subdue feelings and use them for different causes - we regularly do so. AIs can probably do the same with the goals we try to give them. Complex goals necessitate certain subgoals, like self-preservation and resource acquisition. It's hard to teach an AI good definitions of human ethics. Goals can turn undefined when confronted with a better model of the world. Conclusion: we have no clue how to make goal-aligned AI, and this is bad, since we can probably build superintelligence before we solve that problem.
Chapter 8 - Consciousness
Without consciousness, the universe might as well not exist. Hence we need to make sure we understand what consciousness is, and have solid measures to determine what is and isn't conscious.
What is consciousness?
Consciousness in this book's terms is any subjective experience. Does it feel like something to be something?
As soon as you have a system that tries to measure and control a variable with some form of error correction loop, I would argue that you have created consciousness. This thing - be it a machine, something alive, or anything else - has a preference over which state it wants to be in, and does all it can to reach that desired state as best as possible. It must "feel" terrible to be away from that state and "wonderful" when that state is attained.
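A minimal sketch of the kind of system described above: something that measures a variable and corrects toward a desired state. The thermostat and all its numbers are made up for illustration, and of course this sketches the argument rather than claiming the loop is conscious:

```python
# A minimal error-correcting control loop: a thermostat that "wants"
# to be at a set point and acts to reduce the measured error.
# Purely illustrative - all numbers are made up.

def run_thermostat(temp: float, set_point: float,
                   gain: float = 0.5, steps: int = 20) -> float:
    """Repeatedly measure how far we are from the desired state
    and apply a correction proportional to the error."""
    for _ in range(steps):
        error = set_point - temp  # "how far from the desired state am I?"
        temp += gain * error      # act to shrink the error
    return temp

# Starting cold at 15 degrees, the loop converges to the 21-degree set point.
print(round(run_thermostat(15.0, 21.0), 3))  # -> 21.0
```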
What's the problem?
People are food rearranged. The question is why is one arrangement conscious and the other not?
The problem is how to connect subjective experiences to physics. I.e. what is the physical difference between two particle arrangements? Why does one create consciousness and the other doesn't?
Is Consciousness Beyond Science?
Theories have to be testable and therefore falsifiable, otherwise, they are not scientific.
The reach of science increases over time. Consciousness is for now still out of reach, but it may only take some more time until it, too, falls squarely within range and becomes a solved problem.
Experimental Clues About Consciousness
Consciousness seems reserved for complex tasks that require attention and the coordination of not-yet-internalized behaviors across multiple brain regions. This idea is called selective attention.
Hemineglect - not being consciously aware of half of the visual field.
Brain experiments can lead to conclusions about where functionality is localized in the brain, but often it's the case that the conscious experience is decoupled from the raw information processing part.
Illusions and dreams show this pretty well: without stimulation you can "see", and what you see is not what stimulated the retina but an after-the-fact, cobbled-together image. Visual consciousness is not in the eyes.
Consciousness is assembled after events have happened; our conscious experience lags behind the real world because it takes time to assemble all of the sensory information into one coherent picture. When a finger touches the nose, the nerve impulses arrive at different times because the distances they have to travel are different, yet we feel that the touch on nose and finger happened simultaneously.
That can only be because the brain waits until the finger's input arrives before assembling the conscious experience. Hence our conscious experience lags behind reality.
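The numbers make this concrete. Assuming an illustrative conduction speed of about 60 m/s for fast touch fibers and rough path lengths (my ballpark figures, not the book's):

```python
# Rough arrival-time gap between touch signals from the nose and from
# a fingertip. Conduction speed and path lengths are illustrative
# ballpark figures.
CONDUCTION_SPEED = 60.0  # m/s, fast myelinated touch fibers (approx.)

path_lengths = {"nose -> brain": 0.1, "fingertip -> brain": 1.0}  # meters

delays = {name: d / CONDUCTION_SPEED for name, d in path_lengths.items()}
gap_ms = (delays["fingertip -> brain"] - delays["nose -> brain"]) * 1000

# The brain has to hold the nose's signal roughly this long before it
# can bind both touches into one simultaneous-feeling experience.
print(f"arrival gap: {gap_ms:.0f} ms")  # -> arrival gap: 15 ms
```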
Theories of Consciousness
Particle arrangements can have properties that single particles lack. These properties are called emergent properties. Wetness is an example.
The quantity of integrated information - how much connectedness and knowledge about the system exists within the system - is a good candidate for measuring the emergent property of consciousness.
Consciousness is how information feels if processed in the right ways.
Consciousness is probably substrate-independent. But twice over, because it depends on the way information is processed, which is already independent of the particles doing it.
According to this view, 4 principles of consciousness emerge. Something conscious has to:
- store information
- manipulate information, i.e. do computations on it
- be independent to some degree from the rest of the world
- do its information processing in an integrated way, i.e. different parts of the system need to know and respond to other parts - in other words, not be parts at all, but tightly linked, integrated into the whole
Controversies of Consciousness
Can consciousness be splintered? Isn't it already? Or is it gradually shifting? Isn't the subjective experience even in our own heads highly misleading, especially when looking at people meditating?
Can you trust people who tell you how they feel to be accurate? Thinking, Fast and Slow by Daniel Kahneman has an interesting spin on this. I don't think they can be trusted, because there are a bunch of biases that severely distort reported feelings depending on exactly how the question is asked...
How Might AI Consciousness Feel?
How do you explain colors to blind people?
Consciousness for AI might be very, very different from what we are used to; there is simply more information to integrate, in different ways. Its sensory data is not limited like ours at all, and the amount of data fed through different inputs might be gargantuan compared to our throughput. However, integrating that information on larger scales will take more time: solar-system-sized brains will have much slower conscious experiences than us, and even bigger brains might be painfully slow.
Do consciousnesses interfere with each other at different levels? Does a hive mind automatically kill the consciousnesses it consists of when it arises?
Can whole human societies be conscious? Are they already?
Human brains have free will in the sense that computations are being run in human brains that determine our actions. We feel like we decide what to do because that's exactly what happens. Our brains compute all the inputs and output our decisions in the form of motor neuron activations.
Even though we can't influence the outcome of the computation directly, we still make the decisions. And because there is no shortcut to running a computation other than running it, we can't anticipate and change our decisions in advance either.
Just like in Conway's Game of Life. You can't predict with certainty what will happen unless you've finished computing it.
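A tiny sketch of Conway's Game of Life illustrates the point: the only way to know where a pattern ends up is to actually run the rules, step by step:

```python
from collections import Counter

# One step of Conway's Game of Life on an unbounded grid, with live
# cells stored as a set of (x, y) coordinates. There is no closed-form
# shortcut: to know the grid N steps from now, you have to run all N steps.

def step(live: set) -> set:
    """Compute the next generation from the set of live cells."""
    # Count how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2 - but you only know that
# by actually computing it.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # -> True
```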
It's not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.
— Max Tegmark
Rebrand humans as "Homo Sentiens", the "subjectively experiencing man", not the smart man, because sentience is more important than raw intelligence.
The Bottom Line
Consciousness means having subjective experiences. The rise of AI raises questions about the consciousness of AI: can it suffer, can it feel?
Intelligence is orthogonal to subjective experience. Very intelligent systems can exist without being conscious.
Consciousness is what matters though because it's what gives the universe meaning. Human brains are conscious after the fact.
Sub-systems in the brain are supposedly unconscious. (How could we know though, since they couldn't talk?)
An experimentally well-tested theory of consciousness is missing. A pointer in the right direction is integrated information theory (IIT).
Consciousness is doubly substrate-independent: it depends on the pattern of information processing, not on the exact system implementing it, and that system is itself independent of the matter it is made of. Artificial consciousness would mean lots of different possible conscious experiences. Reminds me of David Eagleman's The Brain and how he describes adding multiple new senses to our existing ones, with relatively low-tech means even. Consciousness gives rise to meaning in the universe. We have consciousness and therefore create meaning, for ourselves and the universe.
Epilogue - The Tale of the FLI Team
The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.
— Isaac Asimov
FLI Is Born
In discovering what we are, are we inevitably making ourselves obsolete?
The Puerto Rico Adventure
Media skews information towards the extremes on purpose because that's how media generates the most revenue.
Mainstreaming AI Safety
The result of all the conferences and the funding story: a list of agreed-upon AI principles.
Good things will happen if you plan carefully and work hard for them.
— Max Tegmark
Similar idea to the optimist/pessimist spectrum from Zero to One.
Having an optimistic outlook toward the future means that sacrifices for it feel good and necessary - which they are, to create it!
"If all goes to shit anyways, let's just party", is a nice self-fulfilling prophecy.
If we don't work on solving problems preemptively, they will arise for sure. And so, we should put some elbow grease into thinking about AI. Because the conversation around AI is the most important conversation of our time.