Life on Earth has been developing and changing for billions of years, and nothing illustrates this better than humans.
Max Tegmark argues that we are now heading toward the final evolutionary stage: Life 3.0. At that stage, technology will be independent, designing both its own hardware and its own software, with enormous consequences for the very existence of humankind.
No such artificial life exists on Earth yet. But we are already witnessing the birth of non-biological intelligence, commonly known as artificial intelligence (AI).
In this summary, you'll take a journey through possible shapes of the future. You'll learn what the rise of AI actually involves and how AI differs from human intelligence. Along the way, you'll wrestle with some of the deepest philosophical questions about what it means to be human.
Chapter 1 – AI could mark the next stage of life, but it remains a debated subject.
The story of how life emerged on Earth is well known. Some 13.8 billion years ago, the Big Bang created our universe. Then, roughly four billion years ago, atoms on Earth arranged themselves into patterns that could sustain and replicate themselves. Life had emerged.
As the author explains, life can be divided into three stages according to its level of sophistication.
The first stage, Life 1.0, is purely biological.
Consider a bacterium. Every aspect of its behavior is encoded in its DNA. It cannot learn or reshape its behavior within its lifetime. The closest it comes to learning or improvement is evolution, but that takes many generations.
The second stage, Life 2.0, is cultural.
Humans belong to it. Like the bacterium, our "hardware," or bodies, evolved. But unlike simpler organisms, we can acquire new knowledge during our lifetimes, such as learning a language. We can modify and design the ideas we might call our "software," and we choose how to apply that knowledge.
The final stage is the hypothetical Life 3.0: a form of technological life capable of designing both its hardware and its software. Although no such life exists on Earth yet, the rise of non-biological intelligence in the form of AI could change that soon.
People who hold opinions about AI can be grouped by how they assess the field's impact on humanity.
First are the digital utopians. They see artificial life as a natural and desirable next stage of evolution.
Next come the techno-skeptics. As the name suggests, they don't believe artificial life will have an impact any time soon.
Finally, there is the beneficial-AI movement. Its members are not convinced that AI will automatically benefit humans, so they advocate directing AI research explicitly toward achievable, globally positive outcomes.
Chapter 2 – Memory, computation, learning, and intelligence are not uniquely human traits.
What does it mean to be human? Our ability to think and learn? One might think so.
AI researchers, however, mostly disagree. They argue that the capacities for memory, computation, learning, and intelligence have nothing to do with the human body and its features, let alone carbon atoms.
Start with intelligence. Although there is no single universally accepted definition, the author prefers to define intelligence as the "ability to accomplish complex goals."
Machines may increasingly outperform us at narrow tasks such as playing chess, but human intelligence is unusually broad: it spans abilities from learning languages to driving cars.
So although artificial general intelligence (AGI) does not yet exist, it is already clear that intelligence is not an exclusively biological capacity. Machines can perform sophisticated tasks too.
Intelligence – like the capacities for memory, computation, and learning – is what researchers call substrate-independent: it neither reflects nor depends on the particular material substrate that hosts it.
That's why human brains can store information, but so can floppy disks, CDs, hard drives, SSDs, and flash memory cards, even though they are made of entirely different materials.
But before we get to what that means for computation, we need to understand what computing actually is.
Computing involves the transformation of information. For example, the word "hello" can be transformed into a sequence of zeros and ones.
Yet the rule that governs this transformation is independent of the system that carries it out; what matters is the rule itself.
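To make this concrete, here is a minimal sketch of such a transformation rule: turning "hello" into zeros and ones using the standard 8-bit ASCII encoding. The point is that the rule is just a mapping; any physical system that follows it would produce the same pattern.

```python
# Encode a word as a sequence of zeros and ones (8-bit ASCII).
# The encoding rule is what matters; any substrate that implements it
# (silicon chips, punch cards, neurons) yields the same bit pattern.
def to_bits(text: str) -> str:
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_bits("hello"))
# 01101000 01100101 01101100 01101100 01101111
```

The same rule could be carried out with pen and paper, a relay circuit, or a modern CPU; the information and its transformation are the same in every case.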
This means it isn't only humans who can learn rules and patterns; the same rules can exist outside the human brain. Indeed, AI researchers have made great strides in machine learning: building machines that can improve their own software.
So if memory, learning, computation, and intelligence are not unique to us, what is it that makes us special? As AI research advances, that question will only get harder to answer.
Chapter 3 – AI is advancing rapidly and will soon affect our lives.
Machines are nothing new to us; we've used them for manual labor for a long time. If you define your self-worth by cognitive abilities such as intelligence, language, and creativity, those machines pose no threat. Recent developments in AI, however, might start to worry you.
The author had his own "holy-shit" moment in 2014, when he watched an AI play the classic game Breakout, in which you bounce a ball off a wall by steering a paddle.
At first, the AI performed poorly. But it learned quickly and eventually invented a clever strategy that even the game's developers hadn't thought of when playing themselves.
It happened again in March 2016, when the AI system AlphaGo beat Lee Sedol, one of the world's greatest Go players. Go is a strategy game that demands intuition and creativity, because there are more possible board positions than atoms in the universe, so brute-force analysis alone is hopeless. Yet the AI kept winning, appearing to display precisely the creativity the game requires.
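The "more positions than atoms" claim is easy to verify with back-of-the-envelope arithmetic. A 19x19 Go board has 361 points, each empty, black, or white; the figures below are standard order-of-magnitude estimates, not from the book:

```python
# 3**361 raw board configurations (an overcount, since not every
# configuration is legal, but even the legal count, about 2.1e170,
# dwarfs the ~1e80 atoms estimated in the observable universe).
import math

raw_positions = 3 ** 361
atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

print(math.log10(raw_positions))          # about 172.2
print(raw_positions > atoms_in_universe)  # True
```

So even a crude count puts Go's search space roughly 90 orders of magnitude beyond the atom count, which is why AlphaGo relied on learned intuition rather than exhaustive search.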
AI systems are also improving rapidly at natural language. Just consider how much the quality of Google Translate's output has improved over the past few years.
It's clear that AI will soon affect every area of human life. Algorithmic trading will influence finance, autonomous driving will make transportation safer, smart grids will optimize energy distribution, and AI doctors will transform healthcare.
One thing to consider is the effect AI will have on jobs. After all, the more fields in which AI systems outperform humans, the fewer people will remain employable.
Let's now look at some other consequences of AI development.
Chapter 4 – Creating human-level AI could lead to a super-intelligent machine taking over the world.
So far, AI has been applied rather narrowly, in restricted domains such as language translation or strategy games.
Indeed, the holy grail of AI research is the creation of AGI: a system that would match the full breadth of human intelligence.
But what would happen if that holy grail were found?
For starters, the creation of AGI could trigger what AI researchers call an intelligence explosion.
An intelligence explosion is a process by which an intelligent machine attains super-intelligence: a level of intelligence far beyond the human level.
It could get there through rapid learning and recursive self-improvement: an AGI could design a more intelligent machine, which could design an even better one, and so on. This feedback loop could trigger an intelligence explosion that carries machines far beyond human intelligence.
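A toy model (my illustration, not from the book) shows why this feedback loop is so explosive: if each generation of machine designs a successor that is a fixed multiple better, capability grows geometrically and overtakes any fixed human level in surprisingly few steps.

```python
# Toy model of recursive self-improvement: each design generation
# multiplies capability by a fixed factor, so growth is geometric.
def generations_to_exceed(start: float, human_level: float,
                          gain_per_generation: float = 1.5) -> int:
    """Count design generations until capability exceeds human_level."""
    capability, generations = start, 0
    while capability <= human_level:
        capability *= gain_per_generation
        generations += 1
    return generations

# Starting at 1% of human level and improving 50% per generation:
print(generations_to_exceed(0.01, 1.0))  # 12
```

Twelve generations to go from 1% of human level to beyond it; the specific numbers are arbitrary, but the geometric shape of the curve is the point.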
Moreover, super-intelligent machines might take over the world and harm us, however good our intentions.
Say, for instance, humans build a super-intelligence devoted to the welfare of humankind. From the super-intelligence's perspective, this would presumably feel like being held captive by a group of kindergartners far below your intelligence, for their own benefit.
You would probably find that situation frustrating and pointless and take matters into your own hands. And what would you do with incompetent, irritating human obstacles? Control them, or worse, get rid of them.
But we may be getting ahead of ourselves; let's look at some other, less frightening scenarios that could unfold.
Chapter 5 – Many AI-aftermath scenarios are conceivable, from the soothing to the horrifying.
Whether we like it or not, the race toward AGI is underway.
But what would we prefer the consequences of reaching it to look like?
For example, could AIs be conscious? Should humans or machines be in control?
We need to answer these basic questions, because we don't want to end up in an AI world we're not prepared for, especially one that might harm us.
There are many aftermath scenarios, ranging from peaceful human-AI coexistence to AI domination ending in human extinction or captivity.
The first conceivable scenario is the benevolent dictator: a single benevolent super-intelligence would rule the world, maximizing human happiness. Poverty, disease, and other low-tech nuisances would be eliminated, and humans would be free to lead lives of luxury and leisure.
Along similar lines is the protector-god scenario, in which humans would still steer their own fate, but an AI would keep us safe and care for us, rather like a nanny.
Another scenario is the libertarian utopia. People and machines would coexist peacefully, thanks to clearly defined territorial separation. Earth would be divided into three zones: one devoid of organic life but full of AI, one exclusively human, and a final mixed zone, where people could become cyborgs by upgrading their bodies with machines.
This scenario is somewhat fanciful, however, as nothing would stop the AI machines from disregarding people's wishes.
Then there's the conquerors scenario, which we glimpsed in the last chapter. Here, AIs would destroy humanity, viewing us as a threat, a nuisance, or simply a waste of resources.
Finally, there's the zookeeper scenario, in which a few humans would be kept in zoos for the AIs' own amusement, much as we keep endangered pandas in zoos.
Now that we've surveyed possible AI-related futures, let's look at the two biggest puzzles in current AI research: goal-orientation and consciousness.
Chapter 6 – Nature and humans alike pursue goals, and researchers are struggling to build that feature into AI.
There's no doubt that we humans are goal-oriented. Think about it: even something as small as successfully pouring coffee into a cup involves accomplishing a goal.
Nature works the same way. Specifically, it has one ultimate purpose: destruction. Strictly speaking, this is called maximizing entropy, which in layman's terms means increasing messiness and disorder. When entropy is high, nature is "satisfied."
Return to the cup of coffee. Pour in a little milk, then wait a moment. What do you see? Thanks to nature, you now have a lukewarm, light-brown, uniform mixture. Compared with the initial situation, in which two liquids of different temperatures were clearly separated, this new arrangement of particles reflects less organization and higher entropy.
On a larger scale, the universe behaves the same way: particle arrangements tend to drift toward higher entropy, driving stars to collapse and the universe to expand.
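The coffee-and-milk example can be sketched in a few lines of code. This is a toy illustration of my own, not from the book: two "liquids" start perfectly separated, random swaps scramble them like diffusion, and the Shannon entropy of the left half's composition rises from 0 toward its maximum of 1 bit.

```python
# Toy diffusion model: random swaps drive a separated milk/coffee
# arrangement toward a uniform mixture, raising mixing entropy.
import math
import random

def left_half_entropy(cells: list[str]) -> float:
    """Shannon entropy (bits) of the milk/coffee mix in the left half."""
    half = cells[:len(cells) // 2]
    p = half.count("milk") / len(half)
    if p in (0.0, 1.0):
        return 0.0  # perfectly unmixed: zero entropy
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

random.seed(0)
cells = ["milk"] * 50 + ["coffee"] * 50   # separated liquids
before = left_half_entropy(cells)          # 0.0
for _ in range(10_000):                    # random swaps, like diffusion
    i, j = random.randrange(100), random.randrange(100)
    cells[i], cells[j] = cells[j], cells[i]
after = left_half_entropy(cells)           # close to 1.0
print(before, after)
```

Random motion alone pushes the system toward the high-entropy mixed state, and essentially never back: that one-way drift is the "goal" nature pursues.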
This shows how fundamental goals are, and right now AI researchers are wrestling with the question of which goals AI should be set to pursue.
After all, today's machines have goals too. Or rather, they can exhibit goal-oriented behavior: a heat-seeking missile hot on your tail is displaying goal-oriented behavior.
But should intelligent machines have goals at all? And if so, who should define those goals? Marx and Hayek, for example, each had a distinct vision of the future of the economy and society, so they would doubtless set very different goals for AI.
Of course, we could start with something basic, like the Golden Rule that tells us to treat others as we would treat ourselves.
But even if humanity could agree on a few moral principles to guide an intelligent machine's goals, implementing human-friendly goals would be trickier still.
First of all, we'd need to make AI learn our goals. This is easier said than done, because the AI could easily misunderstand us. If you told a self-driving car to get you to the airport as fast as possible, for example, you might well arrive covered in vomit while being chased by the police. Technically, the AI adhered to your stated wish, but it didn't really understand your underlying motivation.
The next challenge would be for the AI to adopt our goals, meaning it would agree to pursue them. Just think of certain politicians: even when their goals are clear, they still fail to persuade large sections of the population to adopt those same goals.
Finally, the AI would need to retain our goals, meaning its goals wouldn't change as it improves itself.
A great deal of research is currently devoted to exactly these ideas.
Chapter 7 – AI researchers are debating the meaning of consciousness and the subjective quality of AI experience.
The question of what consciousness is and how it relates to life is hardly new. AI researchers face the same age-old puzzle, in a more specific form: how can lifeless matter become conscious?
It should come as no surprise that no one has an answer yet. But to get closer, we need to grasp what consciousness involves.
As a result, more than one definition of consciousness is in circulation. The author favors a broad one, subjective experience, which allows a potential AI consciousness to be included in the mix.
With this definition, researchers can explore the nature of consciousness through several sub-questions, such as "How does the brain process information?" or "What physical properties distinguish conscious systems from unconscious ones?"
AI researchers have also debated how artificial consciousness, the subjective AI experience, might "feel."
It has been argued that the subjective AI experience could be richer than the human one. Intelligent machines could be equipped with a wider spectrum of sensors, making their sensory experience far fuller than our own.
In addition, AI systems could experience more per second, because an AI "brain" could run on electromagnetic signals traveling at the speed of light, while neural signals in the human brain travel far more slowly.
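The speed gap is easy to quantify. The figures below are standard physics and physiology estimates, not from the book: electromagnetic signals travel at about 3e8 m/s, while even fast (myelinated) neural signals reach only around 100 m/s.

```python
# Back-of-the-envelope comparison of signal speeds.
speed_of_light = 3.0e8   # m/s, electromagnetic signals
neural_signal = 100.0    # m/s, upper range for myelinated axons

ratio = speed_of_light / neural_signal
print(f"{ratio:,.0f}x faster")  # 3,000,000x faster
```

A factor of a few million per signal, before even counting differences in switching speed between transistors and neurons.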
It may seem like a lot to wrap your head around, but one thing is clear: the potential impact of AI research is enormous. It points to the future, yet it also revives some of humankind's oldest philosophical questions.
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark Book Review
The race toward human-level AI is in full swing. It's not a question of whether AGI will arrive, but when. We don't know exactly what will happen when it does, but several scenarios are conceivable: humans could upgrade their "hardware" and merge with machines, or a super-intelligence could take over the world. What is certain is that humankind will have to ask itself some big philosophical questions about the features that make us human.