A writer named Jan Carew wrote a story called ‘A City of Tobors’. It was a science-fiction story on a very common theme. In it, a species called the ‘Tobors’ enslaved another species called the ‘Nems’. The Nems were curious about the origins of the Tobors, and some of them wanted freedom from their rule. One day they managed to enter ‘The Center’, the place the Tobors drew their power from, and destroyed it. That was when they learned that the Tobors were robots, originally built by the Nems (who had earlier been known as men) to help them fight in the third world war. The humans were destroyed, and the robots grew more and more intelligent, so much so that they started to rule.
The sci-fi world has developed a liking for this theme in one form or another. But in this post we are not going to talk about science fiction. Instead, we will look at a topic that has long been a point of discussion among experts: the ‘Technological Singularity’.
The word ‘singularity’ carries different meanings in different fields, from physics to technology. The ‘Technological Singularity’ is a hypothesis that the invention of artificial super-intelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. According to this theory, an upgradable intelligent agent (for example, a computer running software-based artificial general intelligence) would enter a ‘runaway reaction’ of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful super-intelligence that would, qualitatively, far surpass all human intelligence.
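To see why the ‘runaway reaction’ runs away, here is a tiny Python toy model (our own made-up numbers, not anything from the formal theory): each generation is a bit smarter than the last, and smarter generations design their successors faster.

```python
# Toy model of an 'intelligence explosion' (all numbers are made-up
# assumptions): each generation is 50% smarter than the last, and
# smarter generations need less time to design their successor.

def runaway(intelligence=1.0, horizon_years=30.0):
    years, generation = 0.0, 0
    while years < horizon_years and intelligence < 1000:
        years += 2.0 / intelligence   # smarter agents redesign themselves faster
        intelligence *= 1.5           # each generation is 50% smarter (assumed)
        generation += 1
        print(f"gen {generation:2d}: year {years:5.2f}, intelligence {intelligence:7.1f}")

runaway()
```

Under these assumed numbers, the generations arrive faster and faster, and intelligence blows past any fixed ceiling within a few simulated years. The point is simply that ‘each generation builds the next one faster’ compounds explosively.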
Vernor Vinge, an American science fiction author and retired professor of mathematics and computer science, proposed an intriguing, and possibly frightening, forecast in his essay titled “The Coming Technological Singularity: How to Survive in the Post-Human Era.” He states that humanity will create a super-intelligence before 2030. The essay outlines four ways this could happen:
- Researchers could make major advances in artificial intelligence (AI).
- Computer systems might somehow gain consciousness and emotions.
- Computer/human interfaces could become so advanced that humans effectively evolve into another species.
- Advances in biological science could enable humans to physically enhance human intelligence.
Any of these could lead to a Technological Singularity, possibly as soon as the year 2030.
Has this phenomenon already started? Maybe. There are already numerous robots and programs with the ability to learn and upgrade themselves; take Sophia, by Hanson Robotics, as an example. For now, though, these machines remain answerable to humans. What if they develop the ability to surpass humans and gain some form of consciousness? What if they start upgrading themselves exponentially? If machines become that intelligent, we can barely even grasp the full effects of what could happen. Would they drive the human race extinct? Experts are already worried. If you are a Marvel fan, you have one more comic reference for this: Jarvis and Ultron. In the movie the Avengers save the human race, but in real life, would humans be able to do the same?
Computers and the information on the internet are growing day by day; by one common estimate, the information stored digitally is about 500 times more than the total information stored in the genomes of all living humans on Earth. But is it even possible to reach this hypothetical ‘Technological Singularity’?
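For the curious, here is that comparison as back-of-envelope Python. The genome and population figures are standard ballpark numbers, but the total-digital-data figure is an assumption we have plugged in (published estimates vary enormously by year and source), so treat the ratio as an order-of-magnitude sketch, not a measurement.

```python
# Back-of-envelope: raw information in all human genomes vs. digital data.
# The digital-universe figure is an assumed estimate (circa the early
# 2010s); published estimates differ by orders of magnitude.

BASE_PAIRS_PER_GENOME = 3.1e9   # human genome, approximate
BITS_PER_BASE_PAIR = 2          # 4 possible bases -> 2 bits each
POPULATION = 7.5e9              # approximate world population

genome_bytes = BASE_PAIRS_PER_GENOME * BITS_PER_BASE_PAIR / 8
all_genomes_bytes = genome_bytes * POPULATION

DIGITAL_UNIVERSE_BYTES = 3e21   # ~3 zettabytes, an assumed estimate

print(f"one genome:        {genome_bytes / 1e9:.2f} GB")
print(f"all human genomes: {all_genomes_bytes / 1e18:.1f} EB")
print(f"digital / genomes: {DIGITAL_UNIVERSE_BYTES / all_genomes_bytes:.0f}x")
```

With these inputs the ratio comes out around 500x, but changing the assumed digital-universe size shifts it drastically, which is why the claim is best read as a rough order of magnitude.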
In 1965, Gordon Moore proposed what is now known as Moore’s law: the number of transistors per square inch of an integrated circuit would double every year. It still roughly holds; the doubling period is now closer to every 18 months, but transistor counts do keep doubling, and we are now building transistors measured in nanometers. Engineers and physicists aren’t sure how much longer this can continue, though. Gordon Moore himself said in 2005 that we are approaching the limit of what we can accomplish by building smaller transistors. Even if we figure out how to manufacture transistors only a couple of nanometers across, they might not actually work, because at that tiny scale you have to contend with quantum physics.
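Moore’s law is easy to write down as a formula: if the count doubles every d months, then after t months you have roughly count × 2^(t/d). Here is a small Python sketch of that formula; the starting figure and the 18-month doubling are illustrative assumptions, not exact chip data.

```python
# Moore's law as a simple formula: the transistor count doubles once
# every `doubling_months`. The starting count and dates are illustrative
# assumptions (roughly the Intel 4004 era), not exact figures.

def transistors(start_count, start_year, year, doubling_months=18):
    months = (year - start_year) * 12
    return start_count * 2 ** (months / doubling_months)

for year in (1971, 1980, 1990, 2000, 2010, 2020):
    print(year, f"{transistors(2300, 1971, year):,.0f}")
```

Notice that a strict 18-month doubling extrapolated to 2020 overshoots real chips by a wide margin, which hints at exactly the slowdown discussed next.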
Moore’s law is not a law of nature; it is neither guaranteed nor inevitable. So if we hit these physical limits before we can build machines more intelligent than humans, that might not stop the ‘Technological Singularity’, but it would definitely push it back by some years, at least until we overcome the problems we would face at the quantum level. So even if the ‘Technological Singularity’ is a scary scenario, it is nowhere near as close as Vinge suggested.
The other option to stop this singularity from taking place is to put safety measures in place before we enter it, something resembling the Three Laws of Robotics proposed by Isaac Asimov. Here they are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(Some believe, though, that it is impossible to take such safety measures: you cannot plan for scenarios that lie beyond the scope of human intelligence, and if the machines are that intelligent, they will have the ability to break these measures anyway.)
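Viewed as a programming exercise, the Three Laws are just a strict priority ordering: check the First Law, then the Second, then the Third. Here is a hypothetical toy sketch of that ordering in Python; the `Action` type and its fields are our own invention, and real AI safety is nothing like this simple.

```python
# Toy encoding of Asimov's Three Laws as a strict priority ordering.
# `Action` and its fields are hypothetical; this illustrates the rule
# hierarchy only, and is not a workable safety mechanism.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would this action injure a human?
    neglects_human: bool = False     # would inaction here let a human be harmed?
    ordered_by_human: bool = False   # did a human order this action?
    endangers_robot: bool = False    # does this action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, or allow harm through inaction.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders (already filtered by the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid self-destruction.
    return not action.endangers_robot

print(permitted(Action(ordered_by_human=True)))                    # True
print(permitted(Action(ordered_by_human=True, harms_human=True)))  # False
```

Even the toy version exposes the catch: the ordering itself is trivial, but deciding in the real world whether an action ‘harms a human’ is not, and a super-intelligent machine would be the one making that call.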
Still, don’t be terrified. Going by our discussion of Moore’s law, the doomsday of the ‘Technological Singularity’ is far away. But who knows: a breakthrough could happen, and scientists could find a way to make machines brainier than humans, or machines could develop senses of their own. We would suggest you not treat your devices, like your phone and personal computer, badly. They might not forget it and might want revenge. Just joking.
Let us know what you think in the comments below.