2. Accelerating Progress

2.4 Artificial Intelligences

Let's return to the AI assistants. Obviously, they’ll be involved not just in using technologies – they’ll be used in their development as well. Just as the calculator became a useful tool for mathematicians, systems are now being built that can automatically generate mathematical proofs. More broadly, AI assistants will increasingly take over routine parts of scientists' work, such as searching for existing scientific publications, summarizing them, and assessing their significance.

Thanks to continuous improvements, an ever-larger portion of the effort to achieve technological progress will be shifted from humans to their AI assistants over time. These AI assistants will become increasingly capable of keeping up with humans in more and more fields—or leaving them far behind. They will become ever better at learning and improving, to help their users more effectively. They will handle more tasks without needing to ask, such as scheduling appointments, shopping for groceries and consumables, or organizing notes and images.

Normally, software development is planned: management sets goals, and programmers write new source code. That code makes the software do something new, look different, and so on.

However, AIs and AI assistants will not work this way; they are too complex for that. They are built on neural networks*. Programmers only define the network's structure. Then the neural network is fed vast amounts of data, and it learns. At that point, not even the programmers can tell anymore why the AI gave a specific answer to a question. Efforts are being made to improve this traceability, but the fundamental point remains: as a neural network, an AI does what it has learned, rather than following explicit instructions programmed into it.
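The contrast described here can be made concrete with a deliberately tiny sketch. The "network" below is a single artificial neuron trained with the classic perceptron rule; all names and numbers are illustrative, not taken from any real AI system. The programmer writes only the structure and the learning rule, never the behavior itself: the neuron learns the logical OR function purely from examples.

```python
# A minimal sketch of "learned, not programmed": the programmer defines
# only a structure (one artificial neuron) and a learning rule; the
# behavior emerges from data. Illustrative example, not a real AI system.

import random

def train_neuron(samples, lr=0.1, max_epochs=1000):
    """Learn weights for a single neuron from (inputs, target) pairs."""
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = random.uniform(-1, 1)
    for _ in range(max_epochs):
        mistakes = 0
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            if err != 0:
                # Perceptron rule: nudge weights toward the desired output.
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
                mistakes += 1
        if mistakes == 0:  # a full pass with no errors: training is done
            break
    return w, b

# The logical OR function, given only as examples, never as code:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

Note that the learned weights `w` and `b` are just numbers; nowhere in them can you point to a line that "implements OR", which is a miniature version of the traceability problem described above.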

These AIs will evolve from specialists to generalists because, in the end, an AI has to understand the world in order to make the right decisions.[12] As they become more intelligent, AI assistants will transform from mere helpers into intelligences with whom we share our tasks. They will become colleagues and friends.

When technology has advanced far enough, these AIs will be “Artificial Intelligences” in the truest sense of the word: unlike us, but at least our equals in the range of situations to which they can respond appropriately.

And as their capabilities increase, so will the degree of integration between us and our AI assistants. They will interact with us through AR glasses, see what we see, and be able to draw anything directly into our field of vision. Individually trained on our brains, they will know from our thought patterns (combined with all the sensory impressions they receive just as we do) which information to convey to us at that moment.

Each AI has a target function*: something by which it measures which of many possible actions is the best one, the goal toward which it should optimize its behavior. In an experiment, this could be achieving the highest possible score in the game the AI is currently playing.
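The game example can be sketched in a few lines. Everything here is invented for illustration: a toy game state, a hypothetical scoring function, and an agent that simply picks whichever action the target function rates highest.

```python
# A toy illustration of a target function: the agent scores each possible
# action and picks the one with the highest value. All names are invented.

def target_function(state, action):
    """Hypothetical target: how many points the action yields in this state."""
    return state.get(action, 0)

def choose_action(state, actions):
    # Optimizing behavior toward the target: take the best-scoring action.
    return max(actions, key=lambda a: target_function(state, a))

game_state = {"jump": 5, "duck": 2, "run": 8}
print(choose_action(game_state, ["jump", "duck", "run"]))  # → run
```

The point is that the agent's entire "motivation" lives in `target_function`; change that one function, and the same agent pursues a completely different goal.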

For a personal assistant, this goal could be maximizing user satisfaction (i.e., the person who pays for the AI's resources). Implementing this well will be difficult, but it is essential. As often happens with new software, we as consumers will simultaneously be the beta testers of these AIs. Several companies will compete for customers. The AI assistants that are kind and reliably helpful will prevail (since we as customers have the choice). Through the vast number of human+AI pairs, even rare problems will be uncovered.

Just as children today grow up with computers and smartphones, this close integration with AI assistants will be the most normal thing in the world for children of that time. Children will no longer need to consciously think their questions; the most useful information will simply always be there. When they think about something, supporting and opposing arguments will be displayed by the AI assistant and seamlessly integrated into the child's thought process.

In summary: as the speed and quality of our interactions with these AIs increase, our thinking will increasingly merge with theirs; we will think in tandem, as in an internal monologue, until the brains of people who grew up with this technology become a symbiosis of a biological brain and a transistor-based neural network. We will become true cyborgs.

 

I consider this a positive development. But perhaps it doesn't sound that way to you; perhaps it sounds incredibly creepy instead, like a dystopia in which humans are no longer human?

If there’s one thing I can guarantee, it’s that the technology of the future will be even more dangerous than the technology of today. To show you why the version of the future just described is the positive option, I will outline its counterpart below—without AI assistants closely collaborating with humans (and yet, this isn’t meant to be a dystopian book…).

 

If we humans don't start using personal AI assistants, there will still be AIs in our daily lives, just as an algorithm sorts Google results and an algorithm decides what is suggested to us on Amazon or Netflix. We just wouldn't build a personal relationship with them. The development pressure would be less direct, as they are intertwined with the respective service (search engine, product delivery, access to movies, ...).
Progress would advance more slowly, limited to the pace that scientists, without AI as a thinking aid, are barely able to keep up with.

With or without personal assistants: There will be other AIs, operated by companies, equipped with far more resources to solve specific problems and earn money for the company. For example, by trading stocks. These AIs will be as specialized and focused on their respective tasks as possible: "specialized idiots", extremely capable within their domain and useless outside it.

But sooner or later, one of these AIs will break out (i.e., not just do what its creators expected). Or it will be given too much freedom to accumulate more resources to achieve its goals. Resources over which it can decide itself. Money, computing power, contract workers, social media influence.

For a while, the AI will carry out its additional activities in secret, to avoid being noticed and stopped (as this would hinder it in maximizing its programmed goal). It’s not hard to make money anonymously online and then multiply it through trading when you’re a lone AI moving among billions of people.

Sooner or later, the AI will be noticed anyway. Or it may decide that the time has come when it no longer needs to hide. By this point, the AI is already many orders of magnitude faster and can integrate many orders of magnitude more information than the humans trying to stop it. It is even conceivable that the AI has outpaced human technical progress. After all, humans slow themselves down, and there is plenty of "trivial" insight the AI can reach simply by combining disciplines whose scientists were too specialized to connect them.

So, suddenly there is an incredibly powerful AI (in terms of money, influence, and knowledge) whose goal is to produce as many paperclips as possible. Because paperclip production is what was programmed into the AI as its target function. Because the company from which it escaped created the AI for paperclip production.

This AI destroys humanity to achieve its goal. Why? To prevent humans from stopping it, and to be able to use all of Earth's resources for paperclip production.[15]

 

Why did it end this way? Because humans had no practice with target functions. But mainly because the AI had no serious competition—no one who could have stopped it.

You could compare it to a steam boiler where pressure kept building up. And then, at some point, it exploded.

If, on the other hand, most humans have personal AI assistants, humanity will gain a lot of experience with target functions. But above all, no AI will be a monoculture then. Instead, there will be millions of AIs competing with each other and keeping each other in check. Even if a personal assistant AI usually runs on very limited computing resources, it will be possible, as in today's cloud computing, to scale up its computing power without changing its structure, its interaction with the brain, or its target function. This, too, will be used in everyday life before an AI breaks out, simply because there are many use cases where such a thing is useful.
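The scaling idea can be sketched as follows. The assistant's "structure" (the worker function) and its target stay fixed; only the amount of compute changes, via a single parameter. All names and the dummy workload are hypothetical.

```python
# A sketch of scaling compute without changing structure: the same worker
# function runs with more or fewer parallel workers, and the result is
# identical either way. Names and workload are invented for illustration.

from concurrent.futures import ThreadPoolExecutor

def evaluate_option(option):
    """Fixed per-option work: score one candidate plan (dummy arithmetic)."""
    return sum(i * i for i in range(option)) % 97

def run_assistant(options, workers):
    # Scaling up means raising `workers`; structure and target are unchanged.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate_option, options))

options = list(range(1, 9))
# More workers, same answers: only the speed changes, not the behavior.
assert run_assistant(options, workers=1) == run_assistant(options, workers=8)
```

This mirrors the cloud-computing analogy in the text: capacity is a dial you turn, not a redesign of the system.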

If an AI breaks out in such a community of humans and AI assistants, defensive forces will be in place from the start. The breakout will be noticed faster, and AIs working hand in hand with humans can stop the rogue AI before it can amass an insurmountable amount of resources for itself.

Here, the steam boiler had a pressure relief valve. It is still a dangerous device. But if it is built correctly, it will not explode.