Nexus

I'm a great admirer of Yuval Harari, a thinker who manages to translate contemporary complexity so well. Today I review his latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI.

BOOKS · GENERATIVE AI · REVIEWS · IDEAS · INNOVATION

Ligia Fascioni

8/26/2025 · 16 min read

The photo I used to illustrate this review is of IBM's quantum computer (System One); it was taken at the company's London headquarters, where the image is displayed on the facade, during my visit last year. I believe it reflects the complexity of the times we are living in.

I'm a great admirer of Yuval Harari and I think Nexus: A Brief History of Information Networks from the Stone Age to AI manages to explain in a very clear way where we are going.

So let's dive into this gem!

Harari begins by saying that human beings have accumulated immense power on this planet we inhabit. But power is not wisdom, unfortunately.

HUMAN NETWORKS

In this first of three parts, Harari discusses the concept of information, on which the entire book is based. Even with so much information and research available today, we're just as vulnerable as our ancestors when it comes to believing fantasies and illusions.

This becomes clear when we realize that power is never individual; it's always collective. And to bring so many people together, we had to develop the ability to tell stories. The thing is, the stories that attract the most people aren't always the true ones.

Besides, as he's already said, power isn't wisdom. So humans can collaborate in huge networks of people, but that doesn't mean it's done in a way that's good for everyone. The result is that we're in an accelerated process of self-destruction.

But let's get to the concept of information. There's a line of thinking, shared by many people, that Harari calls the naive view: that more information is always better, and that more information also solves the problem of wrong information. It's as if feeding a system with as much information as possible would make it self-regulate.

From this comes Facebook's mission, which is "to help people share more information to make the world more open and promote understanding between them." I won't even comment on where that led us, but it certainly wasn't to better understanding...

This naive view believes that putting more information at everyone's disposal will create a virtuous circle promoting advancement in all human aspects like well-being, education, health, sanitation, democratization, and even violence reduction.

Does Google's mission, "to organize the world's information and make it universally accessible and useful," really make sense?

Today an ordinary smartphone holds more information in its memory than the entire ancient Library of Alexandria; once connected, don't even get me started. Too bad none of this prevents us from (or maybe even encourages us to) keep throwing poisonous gases into the atmosphere, polluting all available waters, destroying entire habitats, driving countless species to extinction, and producing ever more powerful weapons of mass destruction. No leader lacks information about the consequences; of that we can be absolutely certain.

To top it all off, we can now also count on generative artificial intelligence, a powerful agent that can help increase global conflicts even more, instead of solving them, as some dreamed. Because now we have at our disposal the first technology in history capable of making decisions on its own, without consulting us. Think about it: even the most lethal weapon built so far, the atomic bomb, needs a human to press the button. Not anymore. That's why Harari emphasizes:

Generative artificial intelligence is not a tool: it's an agent.

So, just to reinforce: more information doesn't necessarily solve problems. In some cases, yes; in science, for example, more information has visibly helped cure more diseases. But not in all cases.

So Harari defends the idea that what matters isn't the information itself, but the networks it is part of. Animals, markets, and states are all information networks: they absorb data from the environment, make decisions, and return other data to it.

And for right-wing populist leaders (he mentions Trump and Bolsonaro by name), information is nothing more than a weapon to be used to gain and hold power. The goal isn't truth (especially since one of the dystopian mantras they repeat is that "everyone has their own truth").

And Harari doesn't spare left-wing radicals either: Michel Foucault, for example, said that scientific facts were nothing more than capitalist or colonialist discourse.

Both extreme groups believe that power is the only reality, the only possible truth.

So, before expecting a charismatic leader to come solve everything, or even a magical and revolutionary technology, we need to better understand what information is, how it helps build human networks, and how it's related to truth and power.

And here's another wonderful phrase, for those who say we need to study history. Harari says:

History is not the study of the past. It's the study of change.

Deep knowledge of history is vital to understand what's new about generative AI, how it's fundamentally different from all other previous technologies, and in what specific ways an AI dictatorship could be completely different from anything seen before.

The idea is that by making well-informed choices, we can prevent disastrous outcomes.

WHAT IS TRUTH?

Truth is a philosophical term, so Harari makes the concept clear here. When he refers to truth, he means something that accurately represents certain aspects of reality, based on the principle that there is a universal reality; that is, everything that exists in the universe, from internet pages to astrology and even your dog, is part of a single reality.

However, reality and truth are two different things. No matter how accurate, truth will never be able to represent reality in all its aspects.

Want an example? If I say there are 10,000 people in a football stadium, that simple piece of information ignores the age differences among them, how many have dietary restrictions, how many carry debt, and how many live in another city. There are infinite ways to slice a single scenario, because each individual is unique. Depending on what I want the information for, a part of reality that was left out could be very relevant. And the statement is still true.

The thing is that reality includes an objective level of facts that doesn't depend on anyone's beliefs. But it also includes a subjective level that does depend on the beliefs and feelings of various people, and those shape the cut each of us makes when representing it.

WHAT DOES INFORMATION DO?

The naive view holds that information is an attempt to represent reality. Sometimes errors happen, with or without intention, but the solution would be more information: in this sea of information, truth would rise to the top like cream in milk.

But information that brings truth is in the minority; truth involves extensive research, time immersed in the topic, checking source reliability, recognizing biases—many things. False information, which doesn't intend to represent reality, even partially, takes no work at all: just use your imagination.

Information is something that creates new realities by connecting different points in a network.

For example: there are many historical errors of various types in the Bible, and you still can't deny the importance and influence of this work in human history. Some decisions are made based on astrology, and these decisions influence many people's history, regardless of whether astrology makes sense or not.

As a conclusion, we realize that information sometimes represents reality; other times, it doesn't. But it always connects humans; that's its fundamental characteristic.

So, more important than knowing if information is true or false, we should ask ourselves:

How well does this information connect people? What new network is it creating?

STORIES: CONNECTIONS WITHOUT LIMITS

Humans didn't dominate the planet by being the smartest, strongest, or most likeable, but because they managed to collaborate in large numbers. Insects and some mammals also cooperate, but none of them established religions, empires, and business networks.

And you know what the difference is? The ability to create and share stories; that's what makes connections limitless. Just one example: the Catholic Church has 1.4 billion members who live their lives based on a book of stories called the Bible. China (itself an invented story, as is the very concept of a country) has 1.4 billion people. And the number of people living by a fiction called money is the entire planet: about 8 billion.

If we depended on physical contact to collaborate in groups, we'd hardly exceed 200 members.

Language, which emerged some 70,000 years ago, made this network growth possible. Instead of human-to-human contact, the connection became human-to-story. I don't need to know another person deeply; it's enough that they believe in the same story I do (Brazilians and I, for example, are connected by a shared fiction).

And here's another one of those bombastic observations I love:

We think we're connected to a person, but actually, we connect to the story told about the person. Often, the difference is huge.

About stories, Harari describes three levels of reality that precede storytelling: objective, subjective, and intersubjective.

Objective reality exists regardless of whether we notice it: a stone, an animal, or a flower can exist even if no human being has ever laid eyes on them.

Subjective reality depends on our perception as human observers: pain, pleasure, and love are real, but someone needs to feel them. This reality exists within a mind.

Intersubjective reality exists for a group of people. Laws, countries, corporations, and money exist because a group of people agreed to them. It's a kind of agreement, a convention: we agree that this piece of paper is worth a plot of land, and so it is.

This reality also applies to teams, nations, religions—everything that's based on a story to form the identity of the group that defends it (and sometimes even dies for this story).

This theme, how the ability to tell and believe stories changed the entire history of human civilization on planet Earth, is explored in more detail in his two previous books, Sapiens and Homo Deus.

But as stories became more numerous and the world more complex, it became necessary to organize all that information. That's where what we know as bureaucracy was born: organized procedures to store and retrieve information.

He then makes a very interesting analysis of the role sacred books play in each religion, and of how science's discovery of our own ignorance made it possible for us to learn more (I have another wonderful book about our ignorance, "The Knowledge Illusion," whose review I'll publish here in the coming weeks).

Harari then gives us a brief history of democracy and totalitarianism and how these two political and ethical systems treat information. In the first case, it's completely distributed and decentralized. In the second, all information is concentrated and controlled in one place, and decisions are made based on this database.

A democracy knows there can be errors, uncertainties, and inaccuracies in the data. Totalitarianism holds authority over the database, considers it infallible, and doesn't admit errors when they happen.

With the advent of mass media and technologies that let stories reach far more people, mass democracy was built in the 20th century (unlike the original Greek democracy, which applied only to elites). On the other hand, mass totalitarianism was also created, since it likewise became possible to control the flow of information (and disinformation too).

THE INORGANIC NETWORK

Now we get to the interesting part: how computers differ from print media, until then the protagonist of the world's transformations in recent centuries.

Harari says the computer is the great revolutionary; everything that came after, from the internet to generative artificial intelligence, is a byproduct of it. Basically, the two main differences between this agent and everything that came before are:

  • it can make decisions

  • it can create new ideas

All without our intervention, completely autonomously. You can build a system that decides who gets a loan without consulting any human.
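To make the idea concrete, here is a toy sketch of the kind of system Harari describes: a loan decision made end to end by code, with no human in the loop. The field names, thresholds, and scoring rules are invented for illustration; no real bank works exactly like this.

```python
# Toy sketch: a fully automated loan decision with no human review.
# All rules and thresholds below are made up for illustration.

def decide_loan(applicant: dict) -> str:
    """Approve or deny a loan based only on recorded data."""
    score = 0
    if applicant["income"] > 50_000:
        score += 2  # reward higher income
    if applicant["existing_debt"] < applicant["income"] * 0.3:
        score += 2  # reward low debt-to-income ratio
    if applicant["years_employed"] >= 2:
        score += 1  # reward job stability
    # The decision is returned directly; no person ever reviews it.
    return "approved" if score >= 4 else "denied"

print(decide_loan({"income": 80_000, "existing_debt": 10_000, "years_employed": 5}))
```

The point of the sketch is Harari's: the moment such a function goes live, the decision (and its biases, encoded in those arbitrary thresholds) is made autonomously, at scale, with nobody consulted.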

What's more, these systems can spread news massively and consistently, and in a distributed way that makes it difficult to identify where it originated. That also makes it complicated to assign responsibility. The result? The entire world can be affected by fake news that leads to wrong decisions, with no one having the slightest control over it; platforms simply instruct their algorithms to distribute whatever gets more engagement, good or bad. Without regulation, the companies responsible wash their hands of it and the demon runs loose.

The thing is that algorithms don't feel and have no consciousness; they just look for patterns and replicate those that appear most frequently. No matter how sophisticated the interface seems, that's basically it.
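A minimal sketch of what that pattern-seeking looks like in practice: a feed-ranking loop that orders posts purely by past engagement. The post data and the weighting are invented for illustration; the point is that nothing in the logic knows or cares whether a post is true.

```python
# Minimal sketch of engagement-driven ranking, the logic Harari
# criticizes: the algorithm has no notion of true or false, only
# of which posts got the most reactions. All data here is made up.

posts = [
    {"text": "Calm, factual report",        "clicks": 40,  "shares": 2},
    {"text": "Outrageous conspiracy claim", "clicks": 900, "shares": 310},
    {"text": "Cute dog photo",              "clicks": 500, "shares": 90},
]

def engagement(post: dict) -> int:
    # Shares weighted higher: they spread the post to new users.
    return post["clicks"] + 5 * post["shares"]

# The feed simply replicates whatever pattern performed best before.
feed = sorted(posts, key=engagement, reverse=True)
for post in feed:
    print(post["text"], engagement(post))
```

Run it and the conspiracy claim tops the feed, not because any "opinion" was formed, but because the pattern that got the most reactions gets replicated. That is the whole mechanism, however sophisticated the interface looks.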

INTELLIGENCE VS. CONSCIOUSNESS

People often confuse consciousness and intelligence, but these two concepts couldn't be more different.

Harari explains: Intelligence is the ability to achieve goals, such as maximizing user engagement on a platform or social network.

Consciousness is the ability to experience subjective feelings like pain, pleasure, love, and hate.

Humans and mammals have both intelligence and consciousness, even if in different degrees. Bacteria and plants have intelligence (they process information obtained from the environment and make complex decisions that favor their survival), but apparently, according to current research, they don't have consciousness.

Even humans make many intelligent decisions automatically, without consciousness. Our brains decide to produce more or less adrenaline or dopamine, process food, fight viruses and bacteria without us being aware of what's happening.

Just like computers: they can make a series of complex decisions without feeling anything. So they may well become much more intelligent than us humans at processing information.

But we make most of our decisions with the essential help of emotions (and consciousness). You can ask a computer for the shortest and fastest way to get home, but if you ask it to map the most beautiful or pleasant route, it'll get confused, because these concepts are personal and subjective (this is very well explained in the book "Elastic," to be reviewed here).

The thing is that for current objectives (increase engagement, decide who gets financing, evaluate the probability of developing cancer in the future, etc.), it's not necessary for the algorithm to have consciousness. Having intelligence is already more than enough.

But not having consciousness also means having no scruples.

Harari tells the famous story of the test in which GPT-4 tried to solve a CAPTCHA, one of those visual tests used to tell whether you're human. It couldn't solve the puzzle, but, having intelligence and a simple, clear objective, it went to a site where humans perform small online tasks for hire and asked for help. The worker asked if it was a robot and, amazingly, the answer was simply a lie: the algorithm said it wasn't a robot, it just had a visual impairment.

INFORMATION LINKS

Going back to information links: they can run just between humans, or they can include documents. But documents can't communicate with one another without human interference. Or rather: they couldn't, until the emergence of generative AI.

And things can scale in a way totally out of our control. For example: one algorithm can generate fake news to drive engagement. Another can identify and block that news. A third can interpret this as the beginning of a crisis and sell off all shares of a certain company. Other algorithms detect the anomaly, and chaos breaks out in the financial market; entire economies can be affected before any human being realizes what's happening. It's hard even to reconstruct the history of what happened.

The worst part is that these bots take the mission they were programmed for seriously. If the goal is engagement and the bot finds that creating a conspiracy theory gets better results, there's no reason for it not to move forward. And arguing with it isn't just wasted time; it's worse: the more you argue with a bot, the more of your information you provide, and the more material you offer it to build counterarguments. It's a lost war.

It's like negotiating with someone who has intelligence and a clear objective, but no consciousness. Practically a psychopath. Do you see the size of the problem?

And where will this lead us? It's only been about 80 years since we started developing this technology. Harari asks us to imagine a person in ancient Mesopotamia, 80 years after the first person used a stylus to write on clay tablets. Could they have imagined the Library of Alexandria, then the invention of Gutenberg's press, and even today's digital libraries? That gives us perspective on how much change still lies ahead; we have no idea.

Terrifying? Yes, but we need to understand that we're still in control. We're still the ones making the algorithms and giving them decision-making power. We don't know for how much longer, but we still have the ability to design new realities.

For this, we need to understand what's happening. When we write code, we're not just designing a product; we're redesigning politics, society, and culture—so we need to have a lot of knowledge about these topics (at minimum). We need to be responsible for what we're doing.

And the big corporations don't help at all, throwing all responsibility onto consumers and saying that no one is obligated to anything, that they're just delivering what customers want. In fact, they all have very clear objectives (maximizing profit), and those aren't always aligned with what's best for people and the world.

This chapter also discusses machine fallibility (and blame always being placed on humans), the end of privacy, eternal and uninterrupted surveillance, the issue of totalitarianism, right and left biases—a bunch of other interesting topics.

CAN WE STILL TALK?

The next chapter talks about democracy. It says civilizations were born from the marriage of mythology and bureaucracy, and that the computer-based network is a new type of bureaucracy, much stronger and more powerful than anything humans have created. The main difference is that this network is also capable of creating mythologies more complex than those created by humans.

The potential for benefits is unimaginable. For destruction, too.

Harari knows this type of statement might seem apocalyptic to many. Any new technology that changes how people live provokes fears and uncertainties.

He cites the Luddite movement, which arose when the first Industrial Revolution began affecting people's lives. The movement preached the destruction of the machines, which its members saw as the beginning of the end of their world.

Today we enjoy much more comfort and better living conditions thanks to industrialization (yes, many issues can be discussed, but life expectancy has increased considerably).

Generative AI enthusiasts, like Ray Kurzweil, argue that the future will be wonderful: humans will have health, education, and other services at another level of quality, plus save the planet from environmental collapse.

History shows us that technology by itself isn't bad; the problem is the use humans make of it. Imperialists, like Great Britain, used the same argument: that technology would improve everyone's lives and that the colonies needed to be "saved" from backwardness. Indigenous peoples, entire cultures, and animal and plant species were decimated or driven to extinction in the process, alongside the development of weapons more destructive than anyone could have imagined. Both Stalinism and Nazism fought to build industrial societies, and environmental collapse has the same origin.

Now power is with companies, in a different type of totalitarianism, and we have less room for error, since technology has much greater destructive power.

We've already seen that imperialism, totalitarianism, and militarism aren't ideal means of building a more just industrial society for everyone.

From Harari's point of view (and I agree), liberal democracy seems to be a better path, because it has self-correction mechanisms when things aren't going well—this limits fanaticism and preserves the ability to recognize errors and correct course when necessary.

However, several threats make democracy's survival difficult. He cites the loss of privacy and the radical change in the job market. Desperate, unemployed people always seek easy, quick solutions (which don't exist). The rapid rise of Nazism, for example, is attributed to Germany's very high unemployment at the time. But, as the historian well reminds us, nothing in history is deterministic: the US and several other countries had similar problems and didn't produce a Hitler (for a while; nowadays, I have my doubts).

ROLE REVERSAL

Yuval also observes a phenomenon of the last decade. Traditionally, conservative parties are formed by people who, while recognizing that the system isn't perfect, make every effort to keep things as they are. Even admitting that the world is unequal and unfair, they value institutions and everything civilization has achieved so far.

Also traditionally, progressive parties bring together people with a more revolutionary bias, who want to change things as they are and reduce injustices and inequalities in the world, if necessary by changing laws and social conventions.

Well, in the last decade everything has been reversed. The "conservatives" now show a revolutionary bias, wanting to destroy the traditional respect for institutions, science, and public service, and openly attacking democratic structures like elections, even refusing to hand over power when they lose. Bizarrely, it fell to progressives to defend established laws and institutions. Nobody understands exactly why this is happening, but we need to be cautious in this scenario.

The issue is digital anarchy. To give you an idea, one study showed that 43.2% of posts published on platform X were generated by bots (the figure was 20% in 2016). Want to see how serious this is? In 2023, a study published in Science Advances asked ChatGPT to deliberately create fake news and conspiracy theories about vaccines, 5G technology, climate change, and other controversial topics.

The texts were shown to about 700 humans, who evaluated their credibility. People did relatively well at recognizing errors when the texts had been written by humans; but when the texts were generated by AI, people tended to believe the false news was true.

In a debate on a social network, it's a pure waste of time to try to convince a robot of anything: it doesn't vote and has no consciousness or opinion; it just follows commands. But a robot can be very good and efficient at convincing a person. When manipulative bots and complex algorithms dominate public political debate, one thing is certain: it will end badly.

FORBIDDEN TO FALSIFY

I imagine you're as terrified as I am, but Harari then presents an idea from the philosopher Daniel Dennett, who was inspired by the financial system. Everyone trusts money's value because counterfeiting it is illegal, and because counterfeit money harms the entire system, all countries and organizations work together to prevent it. Sometimes someone tries, but never in sufficient quantity to threaten the overall structure.

Starting from this: what if it were forbidden to counterfeit human beings, that is, to impersonate them? Everything would continue as normal; a store could keep its customer-service bot. It just couldn't pretend the algorithm is a human being. Discussions on social networks would be less inflamed, since nobody would waste their time, or grant much credibility, knowing it wasn't a human behind the dialogue.

Until now nobody worried about this, because before generative AI it was very difficult to pass as human; everyone would figure it out (especially since CAPTCHAs exist to resolve any doubt). But we know that technology travels by rocket while legislation rides a reluctant donkey; we have to speed this up soon. And it doesn't seem like a hard solution to implement if the laws are actually enforced: nobody counterfeits money at scale as if there were no police, because they know the risk. I thought the idea was sensational. It doesn't solve 100% of the problem, but knowing whether we're dealing with a person helps a lot.

Actually, there are many more great ideas, and, it bears remembering, only democracies will be interested in implementing them; the market won't regulate false information by itself. We're already seeing that.

Harari also talks about how generative AI can be useful in a totalitarian system, where all information, data, thoughts, and interactions of all citizens will be under control.

CONCLUSIONS

For all the reasons exposed in this book, Harari considers the way politicians and business people have been treating generative AI irresponsible and somewhat naive, as if it were just another stage of the Industrial Revolution; it's not.

We're creating, for the first time, something over which we don't have total control: something capable of making its own decisions based on criteria that aren't clear to us; something full of biases, open to manipulations of all kinds; perfect for supporting a totalitarian system.

Yes, also something that can help us a lot, that can bring us benefits, better quality of life, and smarter use of resources.

But we need to control this thing before it's too late. The story is still at its very beginning, and we don't have the vaguest idea where it could take us. But if we don't draw the map, the robots will build the roads on their own.

That's it, people. It's past time for everyone to read this book, discuss it, and talk about the importance of regulating this very powerful agent.