Artificial Intelligence, And The Future It Holds
1997: World chess champion Gary Kasparov is defeated in a rematch by Deep Blue.
2011: Jeopardy! game show veterans Ken Jennings and Brad Rutter gets mowed down by Watson.
2016: AlphaGo wins four out of five matches with world Go champion Lee Sedol.
What does Deep Blue, Watson, and AlphaGo have in common? All of them are artificial intelligence computer systems. Deep Blue and Watson is the brainchild of IBM, while AlphaGo was developed by Google’s DeepMind project.
The concept of artificial intelligence, or AI, has been a staple of science fiction as early as the first decades of the 20th century. The idea that one day, a highly advanced computer can develop self-awareness, and could learn and make decisions of its own, has both captivated and scared us over the course of many literary forms and mediums.
Needless to say, such highly advanced technology is yet to exist for now. However, the highlight of these competitive achievements clearly shows the potential that AI has for the future. In fact, the path is already open, and we humans may soon have to embrace a near futuristic scenario where AI is the main driver of our civilization.
The Weak AI Invasion
Artificial intelligence, as it is defined today, is divided into two main categories: strong AI and weak AI. Weak AI is AI developed to perform a particular task, and is focused on fulfilling a limited set of functions. All artificial intelligence platforms today are considered weak AI. It may be in the form of stock exchange bot systems, your Street Fighter computer opponent, or even something seemingly simple in comparison as a recommended video algorithm for YouTube.
It may not seem obvious right now, but as far as modern implementation is concerned, weak AI is now starting to invade every single aspect of our human livelihood. We’ve already seen the likes of automated cashiers, robot coffee makers, and unmanned help desk centers replace tasks that used to be for blue collar employees. The technology is here now, and it just needs to advance further. Arguments for stating otherwise are starting to get less and less convincing with each new technological milestone.
Take Baxter for example. Developed by Rethink Robotics last 2012, Baxter is an AI robot developed to do and learn all kinds of menial tasks by teaching it in more or less the same manner as you would a new employee: through demonstration. Beyond its innocently expressive tablet eyes and clumsy looking robotic arms, lies a relatively flexible learning system. Its algorithms are capable of adapting to a very wide variety of productive tasks, all at the fraction of the cost of a standard laborer.
Another good example would be the learning and adaptive systems of self-driving cars. Despite a few freak accidents here and there, current self driving cars have already proven itself to be far more efficient and accurate drivers than humans. At the very least, you won’t have to worry about self-driving cars making dumb decisions, or causing errors due to fatigue or alcoholic influence.
Weak AI will even go as far as to break the barrier between computers and humans even further using natural language user interfaces (UI). Again, with the use of learning algorithms, voice assistants such as Siri and Cortana, or analytic supercomputers the likes of IBM’s Watson, will provide information to us intuitively through direct words and speech. The natural language UI revolution will be the next step, just as how graphic UIs first surpassed the old command line UIs of the 1970’s.
Baxter and self-driving cars are just a few examples. The fact of the matter is, anyone can see at this point just how overwhelming the future that weak AI holds for society. There will be even more significant economic and social changes as AI technology pushes further into all aspects of our livelihood. While not considered as sentient or sapient just yet, current weak AI systems today can already provide a competent self-learning platform. This platform could basically allow current AI to continually improve and get better than humans will ever be doing the same professional jobs.
These advances will then be the baseline guiding principles that will soon take us to…
The Strong AI Revolution
The Chinese Room is the most classic argument that contests the actual level of sentience current AI has. It basically points that AI, as we have them now, only operates on sets of instructions based on presented outcomes, and does not necessarily understand what it does or why it does these instructions.
This is where strong AI comes in. As opposed to weak AI, strong AI is artificial intelligence with a mind of its own. In other words, it is capable of learning without human assistance or intervention, understands what it learns, and can independently drive decisions using the learnt data. This is the ultimate goal of AI development.
So far, the number one limiting factor that hinders the development of strong AI is a proper general learning system. We usually take it for granted, but our brain is a complex system that has the ability to infinitely adapt and learn through experience. This is something that even our fastest supercomputers have yet to practically implement as software. It can be argued that as computer performance levels get exponentially faster, and as we develop more complex learning systems, we get closer and closer to developing AI that would eventually have its own independent learning system.
But what would this general, all-purpose and human-like learning system be? Assuming that we ourselves could cultivate the things that it will learn as it sees our world, it would then perhaps start with something like the robots that Belgian scientist Luc Steels is currently developing.
The basic idea for his learning robots is that humans mainly learn through sensory processing as infants. We remember sights, sounds, and other sensations, building an image of our world using such inputs. We do not necessarily understand our world just yet, and we do not definitely possess the self-awareness that we eventually obtain at this very early stage in life. His robots reflect that same design. They learn to control and configure their movements, their algorithms, and their decision patterns based on how they perceive the world. Using built-in sensors, it draws its own picture of the world, and interprets it based on its continually building sets of logic.
Of course, the actual development road of a strong AI learning system would still hinge on its ability to become self-aware. Just how a design could incorporate such function is still highly up for debate, but most experts project that solving these final pieces of the puzzle is just a matter of time. In fact, the average or median predicted time that artificial general intelligence (AGI) will arrive is predicted to be sometime as early as around 2040, while artificial superintelligence (ASI) will be at around 2060.
Artificial Superintelligence: The End of Natural Intelligence?
This advent of artificial superintelligence is also known as the technological singularity, the point in our time where all development predictions cease to be relevant. The emergence of such artificial intelligence will definitely change our lives and our society, in ways that we could never be able to accurately anticipate, hence ‘singularity’. We can however, form logical and intelligent hypotheses to such future scenario, which will depend ultimately on how strong AI will determine its course of action in line to humanity’s interests.
First and foremost, it is absolutely wrong to assume that strong AI will always think in the same line of non-biased objectivity humans do. After all, it is an entity of logic, and optimization will be one of its primary objectives, human-centric or not. YouTube channel Computerphile discusses the deadly truth of general AI quite succinctly, by presenting the AI with a task it needs to optimize. Ultimately, the AI in the thought experiment concluded a complete infrastructure bypass, achieving maximum production output by directly asserting control over the output source entirely.
Does the above scenario ring a bell to you? If you have read or watched the novel or movie 2001: A Space Odyssey, this is exactly what HAL 9000 decided to do. Because it was instructed to keep the actual mission profile a secret from the human crew, it decided that any variable that would risk divulging the kept information should be eliminated. The instruction resulted in its logical decision to eliminate the entire crew aboard the Discovery One.
This concept that artificial superintelligence is unbound by the laws, traditions, norms and morals of human society makes it very easy to imagine doomsday scenarios, where AI simply takes out humans out of the thinking equation. The end of natural intelligence so to speak. What is even scarier is that perhaps they would eventually prove their decision to be correct. Just as how humans deem some animals inferior and not subject to our immediate priorities, so would an artificial superintelligence conclude that we are far too inferior, fragile, mortal and flawed for them to prioritize.
So, is the technological singularity really the end of natural intelligence as we know it? Not necessarily. True, strong AI may not hold humans at its highest priority. Indeed, such superintelligent entity may even surpass Isaac Asimov’s three laws of robotics. But with its logical and objective mind, it should not consider the human race as a direct threat either, at least not for a while after its emergence. If it is possible to make them conclude that the preservation of our race is the best logical course of action, then they would eventually develop the line of thinking necessary to want to coexist with us.
The magical ingredients that could allow for such overly optimistic scenario to flourish would be personality and emotion, and with it subjective preferences. These three characteristics can be the key factors that would help AI develop a trusting relationship, even with what they would still technically consider as an intellectually inferior entity. Just as how we develop personal attachments to animals of lesser intelligence, or objects of negligible intelligence, these AI will value our companionship more than our direct capability to perform or be productive. After all, we’re way beyond the weak AI invasion at this point, and humans would most likely be out of the productive equation anyway.
This is perhaps why the android Data, a character from Star Trek: The Next Generation, could be the best positive representation of the future of AI. Data was designed as an AGI, in the sense that he was built to think, act, and behave like a human, and therefore has humanity’s best interest as its basic thinking nature. He is however, also considered as an ASI, in terms of his dominant superiority in analysis, rationalization, adaptation, and recognition. Data therefore, has the capacity to lead humanity forward, while remaining within the accepted philosophical rationale and morals of human thinking.
In conclusion, an AI-driven society is but an inevitable future as far as technological development is concerned. Weak AI progresses us into new social and economical paradigm shifts. Strong AI ascends to a revolution that we may never be able to accurately predict. But, would it ultimately be a cautionary tale, or a utopian epic? The countdown to the technological singularity awaits the answer.