What You Need To Know About Artificial Intelligence And Why (3)
The previous article described the impact that imminent advances in Machine Learning will have on the labor market and on society in general. According to experts, these advances are all but assured; the only source of uncertainty is how society will react.
However, once Artificial Intelligence (AI) covers most of the specialized tasks that contribute to economic productivity, it could continue to evolve until it becomes the most developed intelligence on the planet, displacing humans.
“If the evolutionary process continues, a machine could reach levels of intelligence inaccessible to humans in a short time.”
By Manuel Godoy
“We shouldn’t be relying on our ability to keep a super-intelligent genius locked up indefinitely.”
Unlike the previous articles, this final installment enters highly speculative terrain. As noted earlier, there are no tangible advances that allow us to project with certainty when we will have artificial intelligence comparable or superior to human intelligence; however, many experts estimate that this will occur during the 21st century.
The most optimistic projections fall in the range of 2030 to 2060. Others, however, believe that replicating the level of intelligence of human beings, which emerged from a process of hundreds of millions of years of evolution, is simply out of our reach.
To give a better perspective on previous articles, we can divide the capabilities of AI into three categories:
1. Specific Artificial Intelligence
This is the type of AI that has already materialized. Every time we hear that a machine has defeated a person at some specific task (chess, medical diagnosis), it is because a Machine Learning algorithm, built on neural networks and fed with a sufficient amount of data, has been able to equal or exceed the results a person would obtain on their own.
2. General Artificial Intelligence
It refers to a computer whose intelligence is comparable to human intelligence, beyond those tasks that Machine Learning has already mastered. According to Professor Linda Gottfredson, this level of intelligence involves “reasoning, planning, solving problems, thinking abstractly, understanding complex ideas, learning quickly, and learning based on experience.”
3. Super Artificial Intelligence
Nick Bostrom, philosopher and AI expert, defines this level of intelligence as “an intellect that is far more intelligent than the best human brains in virtually any field, including scientific creativity, general wisdom, and social skills.”
Specific Artificial Intelligence is a reality, driven by Machine Learning and Deep Learning algorithms, as well as current data collection capabilities. However, there is still no clear path towards obtaining General Artificial Intelligence (AGI), which involves mastering a wide variety of environments.
A machine can be trained under very specific parameters, but it cannot understand reality at the level of detail that humans do, in terms of ideas, objects, categories, or even concepts such as a sense of humor or irony.
The writer Tim Urban suggests two keys to the development of General Artificial Intelligence (AGI):
Greater processing capacity
Considering the total number of components in the human brain, the futurist Ray Kurzweil, supported by several expert estimates, puts its processing capacity at 10^16, that is, 10 quadrillion calculations per second. This processing capacity is already within reach of the largest supercomputers. It is estimated that before the end of the 2020s this processing capacity will be available for USD 1,000.
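As a back-of-envelope check of the figures above (a sketch of the arithmetic only, assuming Kurzweil's 10^16 calculations-per-second estimate):

```python
# Kurzweil's estimate of the brain's processing capacity:
brain_ops_per_second = 10 ** 16  # 10 quadrillion calculations per second

# One petaflop is 10^15 operations per second, so the estimate
# corresponds to 10 petaflops.
petaflops = brain_ops_per_second / 10 ** 15
print(petaflops)  # 10.0

# A 100-petaflop supercomputer (a scale already reached by the largest
# machines of the late 2010s) would exceed that estimate tenfold.
ratio = (100 * 10 ** 15) / brain_ops_per_second
print(ratio)  # 10.0
```

This is why the article can say the raw processing capacity is already available; what remains unknown is how to use it.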
Develop true intelligence
Tim Urban explains that if a chimpanzee’s brain suddenly became 1,000 times faster, it would still lack the interconnections or modules that make abstract reasoning or complex linguistic capabilities possible.
To this day we do not know exactly how the human brain works, and we are still waiting for the “great idea” that would allow us to harness the available computational power so that a machine can acquire true intelligence beyond the capabilities of Machine Learning, which are effective only in very specific environments.
Among the most common strategies for approaching AGI, the following stand out:
Replicate the functioning of the human brain
Through scans and analysis, the goal is to reverse-engineer the functioning of our brains and apply it to a machine.
Replicate the evolution process
The method of genetic algorithms replicates the process of natural selection identified by Darwin. A group of algorithms would be assigned various tasks associated with General Artificial Intelligence, such that the most successful algorithms are recombined with other successful algorithms. This process also resembles the artificial selection that gave rise to numerous domestic species (cats, dogs) and to edible fruits and vegetables.
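The selection–recombination–mutation loop described above can be illustrated with a toy genetic algorithm. This is a hypothetical sketch on a trivial task (evolving bit strings toward all ones, a stand-in for “success at a task”), not anything resembling a path to AGI:

```python
import random

GENOME_LEN = 20  # length of each candidate "solution"

def fitness(genome):
    # Fitness = number of 1-bits; the most "successful" genomes score highest.
    return sum(genome)

def crossover(a, b):
    # Recombine two successful genomes at a random cut point.
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    # Random variation, analogous to mutation in natural selection.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=50, generations=100):
    # Start from a random population.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the top half by fitness.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Recombination + mutation: refill the population from survivors.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near GENOME_LEN after 100 generations
```

The gap between this toy and AGI is, of course, the whole point of the article: the loop is easy to write, but defining fitness for “general intelligence” is not.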
Generate an expert AI program that can effect changes on itself
This strategy relies more on human inventiveness than the previous ones. Rather than seeking to replicate a process (evolution) or a product (the human brain), the idea is to design a highly specialized AI system that can evaluate itself and implement the changes needed to become more intelligent.
Some of these strategies, or others not yet imagined, combined with high raw processing power, could give rise to an autonomous system capable of “thinking” in a way similar to human beings.
This is a real possibility, recognized by most AI experts.
Nick Bostrom, a philosopher at the University of Oxford and a benchmark on the subject of AI, sought to consolidate the opinion of experts based on different available surveys.
Most believe that machines could have intelligence comparable or superior to human intelligence during the 21st century. For his part, Bostrom published his estimate in 2014, in his book Superintelligence: Paths, Dangers, Strategies, at a time when 2022 was still 8 years away.
It is important to note that this type of progress, once the initial barriers are overcome, advances exponentially. A machine that can first emulate the intelligence of a mosquito, then that of a mouse, then a chimpanzee, and then a human would not necessarily stop at that point. If the evolutionary process continues, this machine could reach levels of intelligence inaccessible to human beings in a short time.
This shouldn’t be so surprising. The human brain is a product of evolution, which rewards survival, not necessarily intelligence. Under other conditions, another species with greater brain capacity could have become the dominant species on the planet. Perhaps now, in the 21st century, human beings are creating those kinds of conditions.
“Artificial Intelligence does not hate you, nor does it love you, but you are made of atoms that it can use for something else.”
To better understand the concern of many experts, it is necessary to take into account the following circumstances:
AGI Could Come Soon
For the reasons explained, we may need just one “great idea” to spark the emergence of AGI. History tells us that it is impossible to predict these kinds of ideas accurately, but once they emerge, such as Einstein’s Theory of Relativity or Copernicus’s heliocentric model, they permanently leave their mark on society.
AGI / Superintelligence could be unpredictable
There is no way to anticipate the behavior of a superintelligence, or even of an intelligence merely comparable to the human one that also has the advantages of accessing the Internet, generating copies of itself, and exercising any other imaginable digital capability.
If we had detailed knowledge of how the brain works, then perhaps, and only perhaps, we could aspire to implant a kind of “safeguard” in our hypothetical intelligent creation. But everything indicates that the path to AGI will run through several iterative cycles, with a result far more complex than our old acquaintance Watson.
AGI / Superintelligence will exceed our understanding
Just as a chimpanzee lacks the abstract capabilities of a human being, there is no way of knowing what the capabilities of a machine with Artificial Super Intelligence would be.
The human level of intelligence is the product of millions of years of evolution, which stabilized with Homo sapiens, and it is not necessarily an impossible limit to surpass. Once a higher level of intelligence comes into existence, overcoming, defeating, or neutralizing it would likely be beyond our reach.
Tim Urban groups the prevailing opinions of Artificial Intelligence experts into two groups: optimists, who envision a bright future in the coming decades; and the anxious, who believe that there is a high risk of negative consequences. Very few say that AGI is still one or several centuries away, or that it will never happen.
As of today, there is already technical research that seeks to contain the unwanted consequences of an Artificial Super Intelligence by providing it with cooperative behaviors, safe environments, and adequate risk estimation. Universities like Oxford and Stanford, as well as organizations like OpenAI (co-founded by Elon Musk) and Google DeepMind, are leading these efforts.
However, most of the attention is focused on new applications of Machine Learning and on the search for the “big idea” that would trigger AGI, which could quickly lead to Artificial Super Intelligence. Figures like Stephen Hawking and Elon Musk have warned of the enormous risk this represents. It is time to pay more attention.