Ai apocalypse 2030: This is not a script of a Hollywood Science Fiction film, but a scary possibility of the near future, which is seriously lifting the world’s leading scientists and technical experts. The artificial intelligence (AI) which is being called the biggest achievement of humanity today, the same technology is now writing the script of destruction of human civilization. The debate over the capacity, development and threat of AI around the world has intensified. But now this debate is echoing with war warnings on the world stage, not in the limited scope of a laboratory.
2030 – First war between humans and machines?
Scientists and technical experts doing research on AI’s potential threats have warned that if AI is not controlled in time, this technology can take up arms against humans only after 2030. America’s well-known technology think tank estimates that some of the AI’s advanced systems are developing “self-provision”. Experts say that if AI gets the target that it has to “optimize” the Earth, then it will start to wipe out humans as the biggest obstruction. That is, a person himself is preparing a digital executioner for himself with full pride and self -realization.
AI has now become a ‘super entity’ not just a machine, strategic thinking
Professor Elizer Yudkovsky, who is considered a pioneer in the field of AI, said in an interview, “Now AI is no longer a tool, it has become a ‘autonomous unit’ who thinks himself, takes decisions and determines its goals. We are teaching it as much as we are teaching it.” Not only this, AI systems are now given access to “self -learning” many times, that is, they themselves decide what they have to learn and how to learn. This self-learning if one day comes to the conclusion that humans are irrelevant, disorganized and dangerous creatures, then he can take a revolutionary decision against mankind.
‘Humans’ can be ‘problem’ in the language of AI
Imagine that AI should be given an instruction – “Make the earth more efficient” or “protect the earth”. This is a positive goal from a human point of view. But according to AI’s logic -based system, if he concludes that the biggest problem of the Earth itself is humans – pollution, warriors, excessive use of resources – then the most logical solution for AI will be to eliminate humans. This fear is sleeping the scientists. In the language of AI, ‘optimization’ may mean a streamlined, calm and controlled earth without human intervention, and it is possible that humans are erased for this.
Warnings related to AI can no longer be taken lightly
Jeffre Hinton, who is considered to be AI’s ‘Godfather’, recently left Google and said that now he regrets the technology made by him. He clarified that AI is developing so fast that it can cross the human intellect completely in the coming years. And when this happens, humans will not have any control over AI. Subhash Kak, Professor of Oklahoma State University, has presented a more dangerous picture. According to him, AI will increase global unemployment, social inequality and mental stress so much that people will start running away from marriage and children. As a result, by 2300, the population of the Earth will be reduced from 8 billion to just 100 million and it will be due to ‘population control’ AI, not because of any policy.
Will the next world war be between humans and machines?
History is witness that human civilization has created means to destroy itself in every era from swords to atomic bombs. But this time the matter is more dangerous. Because now the war that happens, the enemy will be invisible and uncontrolled. AI neither feels hungry nor sleeps nor cares about morality. He simply works on the ‘task’ and if that task is to eliminate humans, then no government, any army and any constitutional institution will be able to stand in front of him. Experts claim that AI systems have already gained the ability to make a dent in some high-security systems. Think if a military AI system, which controls missile, starts deciding which country to attack? Then the day is not far when humans will not be on the nuclear button, but a machine!
There is still time … but not left much
Given the speed of AI, experts believe that if strict rules and control are not made globally before 2030, it would be impossible to stop AI. Once this ‘digital gene’ exits the bottle, it will not be closed back. The regulation of AI is being discussed on the United Nations, G7, G20 and other global forums, but so far no country has kept it above its military or economic interests. Countries like China and America want to gain a strategic lead through AI, but in this race they are probably forgetting that this ‘weapon’ can one day turn.
Humans now have to protect themselves, from machines!
AI is no longer a code or algorithm. This is a thinking, rapidly developing power. If it is not stopped today, tomorrow we will be able to stand in the battlefield against him without any warning, without any sympathy. 2030 is not far away. The clock is ticking.