Artificial intelligence (AI) has made great progress over the past two decades. Computers can now diagnose diseases from medical images, predict customer behavior, manage financial portfolios, write poetry, and even produce artwork. AI can do some of these things better than humans.
As AI races towards ever more intelligent systems, an old philosophical question has resurfaced: is human intelligence qualitatively different from artificial intelligence, or is the difference merely quantitative?
The revolution in AI has been driven primarily by a class of algorithms called neural networks. These algorithms process large amounts of data and extract statistical patterns from them. When asked to perform a task, they simply match the input against the learned patterns to compute a result.
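The idea of extracting a statistical model from examples can be sketched with the simplest possible "neural network": a single artificial neuron. The toy data and numbers below are invented for illustration; a minimal sketch, not a real vision or language model.

```python
import random

random.seed(0)

# Toy data: two clusters of 2-D points, labeled 1 and 0.
data = [((random.gauss(1, 0.3), random.gauss(1, 0.3)), 1) for _ in range(50)]
data += [((random.gauss(-1, 0.3), random.gauss(-1, 0.3)), 0) for _ in range(50)]

# A single "neuron": a weighted sum followed by a threshold.
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward misclassified points.
# After training, the weights ARE the statistical pattern extracted from data.
for _ in range(20):
    for (x, y) in data:
        err = y - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(accuracy)
```

Answering a query is then nothing more than running a new input through the learned weights, i.e., matching it against the stored pattern.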
Such pattern matching is surprisingly powerful and can simulate many functions of human intelligence. Consider chess, a game that demands the synthesis of many skills: tactical thinking, strategic patience, risk analysis, imagination, and foresight.
Garry Kasparov wrote in 2007: “Chess is a unique cognitive nexus, a place where art and science come together in the human mind and are then refined and improved by experience.” Yet today’s computers easily beat the best human grandmasters.
If machines can mimic these human abilities by reducing them to statistical models, can all of human intelligence be reduced to mere pattern matching? Or do our brains have a secret sauce that cannot be captured mathematically?
Despite their astonishing success, there is no denying that today’s AI systems have clear limitations. They are fragile and can easily be misled by small changes to the input data.
They cannot solve problems that differ even slightly from those they were trained on. And they are data-hungry, requiring enormous training sets that humans do not seem to need. Critics point to these limitations to conclude that there is a fundamental difference between human intelligence and artificial intelligence.
However, this conclusion may be premature. On closer inspection, it turns out that humans suffer from the very same limitations.
Consider the fragility of AI. Google researchers have shown that a computer-vision model can be fooled into classifying a banana as a toaster simply by placing a small adversarial sticker next to it in the image.
Images deliberately crafted to mislead AI models are known as adversarial images. A 2018 paper showed that adversarial images which fool many AI models also fool people when they are asked to make snap decisions under time pressure.
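The core mechanism behind adversarial images can be sketched with something far simpler than a deep network: a linear classifier over many "pixels". Because an image has thousands of inputs, a tiny nudge to each one, aimed in the worst-case direction, accumulates into a large change in the score. The classifier, weights, and pixel values below are all invented for illustration; real attacks such as FGSM apply the same principle to deep networks.

```python
# A toy linear "image classifier": score = w · x, positive score → "banana".
n = 10_000                                            # number of "pixels"
w = [0.1 if i % 2 == 0 else -0.1 for i in range(n)]   # toy weights
x = [0.52 if i % 2 == 0 else 0.50 for i in range(n)]  # the clean image

score = sum(wi * xi for wi, xi in zip(w, x))
print(score > 0)      # True: confidently classified as "banana"

# Adversarial nudge: shift every pixel by only 0.02 (about 4% of its value),
# each in the direction that lowers the score the most, i.e. against sign(w).
eps = 0.02
x_adv = [xi - eps if wi > 0 else xi + eps for wi, xi in zip(w, x)]

adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))
print(adv_score > 0)  # False: the decision flips, though the image barely changed
```

The total score drop is eps times the sum of the weight magnitudes, so the higher the dimension, the smaller each individual pixel change needs to be.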
Nature abounds in cases where one species uses adversarial techniques to hack another species’ behavior. Cuckoos not only lay their eggs in the nests of other birds; cuckoo chicks also trick their adoptive parents into feeding them more often than their own offspring.
Marketers are no strangers to the irrationality of the human mind, as described by Dan Ariely in his book Predictably Irrational. The decisions we make are not always the product of conscious deliberation but are often the result of unconscious processes taking place below the horizon of our awareness.
Simple tricks can have an outsized impact on our decisions, much like adversarial examples deceive AI. Human intelligence may not be as fragile as machine intelligence, but it is fragile nonetheless.
Another criticism of AI is that its models do not generalize to unfamiliar data and perform poorly in situations different from those they were trained for. But we are not that different. Consider the traveling salesman problem: given a set of points, find the shortest path that connects them all.
Humans can solve this problem fairly quickly because we have encountered similar situations regularly throughout our evolutionary history. But tweak it slightly, asking for the longest path instead of the shortest, and our performance deteriorates dramatically.
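For a computer, the two variants are perfectly symmetric: the brute-force search below differs only in swapping min for max. That the human asymmetry has no counterpart in the problem itself is precisely the point. The points are invented for illustration, and brute force is used only because the instance is tiny.

```python
from itertools import permutations
from math import dist

# A handful of invented 2-D points, small enough to brute-force.
points = [(0, 0), (3, 1), (1, 4), (5, 3), (2, 2), (4, 5)]

def path_length(order):
    """Total length of the open path visiting the points in this order."""
    return sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))

# Fix the starting point to shrink the search space.
paths = [(points[0],) + rest for rest in permutations(points[1:])]

shortest = min(paths, key=path_length)  # the "easy for humans" variant
longest = max(paths, key=path_length)   # the "hard for humans" variant
print(path_length(shortest) < path_length(longest))
```

Only the objective changes between the two searches; the algorithm, the data, and the cost of running them are identical.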
We are very good at navigating in two dimensions but poor at navigating in three. We handle two-digit arithmetic with ease but struggle with three digits. Just like AI, our cognitive abilities fail to generalize to evolutionarily unfamiliar situations.
Another common argument is that AI needs far more data than people to learn a task. For example, an AI model may need hundreds of pictures to learn to distinguish zebras from horses. A ten-year-old child can do it with just a few pictures, or perhaps even with a two-line description.
Although these observations are accurate, they do not demonstrate a fundamental difference between artificial and human intelligence. The argument inadvertently discounts the vast visual experience a child accumulates over a lifetime. Compared to that, the AI model has access to only a few thumbnails.
OpenAI, a non-profit previously backed by Elon Musk and Reid Hoffman, released a new natural-language processing system called GPT-3 last month. GPT-3 is remarkably versatile, almost human.
It excels at a wide variety of language tasks, from writing fantasy stories to drafting emails. It can translate between languages and write technical documents. It can answer common-sense and reasoning questions. In most cases, the output GPT-3 generates is indistinguishable from human-written content.
Although GPT-3 performs significantly better than its predecessor GPT-2, the two models are qualitatively very similar; their differences are only quantitative. GPT-3 has 175 billion parameters, while GPT-2 has only 1.5 billion. The success of GPT-3 suggests that intelligence is a function of computational scale.
The human brain is thought to have trillions of synapses. When AI systems become comparable in size to the human brain, they may become as intelligent as us. It is worth recalling what Geoffrey Hinton, a leading AI researcher, said in 2013: “When you hit a trillion (parameters), you get something that has a chance to understand a few things.”
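A quick back-of-the-envelope comparison puts these scales side by side. The figure of roughly 100 trillion synapses is an assumption here (a commonly cited estimate; the text above says only "trillions"), and equating one synapse with one parameter is itself a crude simplification.

```python
gpt2_params = 1.5e9        # GPT-2 parameter count
gpt3_params = 175e9        # GPT-3 parameter count
brain_synapses = 100e12    # ~10^14 synapses, a commonly cited estimate (assumption)

print(gpt3_params / gpt2_params)     # GPT-3 is roughly 117x larger than GPT-2
print(brain_synapses / gpt3_params)  # the brain is still roughly 571x larger than GPT-3
```

On this crude accounting, the gap between GPT-3 and the brain is of the same order as the gap GPT-3 just closed over GPT-2.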
The human brain is made up of atoms and molecules that obey the laws of physics. Our thought processes are carried out by neurochemical circuits in the brain. Hence, human thinking is also mechanistic at some level; it cannot violate the laws of physics.
The open question is whether the brain’s neurochemical circuits can be imitated by electronic circuits made of silicon and transistors. So far, science has found no secret ingredient in our brains that cannot be reproduced by physical processes.