KILLER ROBOTS: WATCH HOW AI-PROGRAMMED MILITARY ROBOTS COULD MAKE HUMAN SOLDIERS COMPLETELY OBSOLETE



With tomorrow’s wars likely to be faster, more high-tech, and less human than ever before, thanks to advances in the “autonomization” of land, air, sea, and undersea platforms, there are now debates all over the world about whether it will be safe for mankind to depend on the reliability and efficiency of military robots, given persisting algorithm and data vulnerabilities.

So much so that on October 26, a student campaign called “#StopCambridgeKillerRobots” was launched against Cambridge University research into the development of lethal autonomous weapons (LAWs), dubbed ‘Killer Robots’.

The program was funded by Silicon Microgravity, a “sensor technology spin-out” of the university (which granted £567,000 in research funding to Cambridge during 2015-19), and by defense industry-related companies such as ARM Limited and Trimble Europe, whose grants to university research amounted to £455,000 (2017-19) and £193,000 (2016-19), respectively.

In fact, the military robots market has been rising steadily alongside leading countries’ defense budgets in recent years. The latest available data suggest that the market will grow from $14.5 billion in 2020 to $24.2 billion by 2025, a compound annual growth rate (CAGR) of 10.7%.
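As a quick sanity check, those figures can be reproduced with a few lines of Python. The dollar amounts are the ones cited above; the “cagr” helper is ours:

```python
# Sanity check on the market estimate cited above: does growth from
# $14.5B (2020) to $24.2B (2025) really imply a CAGR near 10.7%?

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

rate = cagr(14.5, 24.2, 5)          # 2020 -> 2025 spans 5 years
print(f"Implied CAGR: {rate:.1%}")  # prints ~10.8%, matching the cited 10.7%
```

The small discrepancy (10.8% vs. 10.7%) is just rounding in the published figures.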

Drivers for this market are said to include the growing use of robots in areas affected by chemical, biological, radiological, and nuclear (CBRN) attacks, their increased use in mine countermeasures, and the rising deployment of unmanned aerial vehicles (UAVs) on life-threatening missions.

Every branch of the US military is now clamoring for more and more robots. The US Navy is experimenting with a 135-ton ship named ‘Sea Hunter’ that could patrol the oceans without a crew, looking for submarines it could one day attack directly. It will be commanded by an “unmanned” officer!

The US Army is developing a new system for its tanks that can smartly pick targets and point a gun at them. It is also developing a missile system, called the Joint Air-to-Ground Missile (JAGM), that can pick out vehicles to attack without human say-so.

And the US Air Force is working on its “Skyborg” program, an autonomous or unmanned aircraft teaming architecture that will enable the Air Force “to posture, produce and sustain mission sorties at sufficient tempo to produce and sustain combat mass in contested environments”.

Russian ‘Skynet’ to lead military robots on the battlefield. (via RT)
Russia and China are not sitting idle either. Russia is believed to have a drone submarine equipped with “nuclear ordnance.” China says that by 2030 it will be the global leader in artificial intelligence (AI), the very technology underpinning the development of such robots.

The point is that technology has now advanced to the stage where military robots, from dumb landmines to sophisticated drones in space and the oceans, can be (and are) widely deployed on dangerous missions so that their human creators are not put in harm’s way.

Three Types Of Robots

Robots are of three types. “Automatic” robots respond in a mechanical way to external inputs, usually without any ability to discriminate between those inputs.

“Automated” robots execute commands in a chronological, pre-programmed way, with sensors that help sequence the action. They are mostly limited by algorithms that determine their rules of operation and choose behavior from a fixed set of alternative actions, which makes them predictable.

A US military robot. (USAF photo)

“Autonomous” robots are those that can choose between multiple options for action based on sensory input and can achieve goals through optimizing along a set of parameters. Although still constrained by a pre-programmed range of actions, they can exercise independent judgment about courses of action to comply with the higher-level intent.
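For readers who think in code, the three categories can be caricatured as toy control loops. The sketch below is purely illustrative: the class names, methods, and scoring interface are our own invention and are not drawn from any real weapons system.

```python
# Illustrative only: the three robot categories described above,
# reduced to toy control loops. All names here are hypothetical.

class AutomaticRobot:
    """Responds mechanically; cannot discriminate between inputs."""
    def step(self, stimulus):
        return "react"  # the same fixed reflex for every stimulus

class AutomatedRobot:
    """Runs a pre-programmed sequence; sensors merely gate each step."""
    SEQUENCE = ["detect", "arm", "engage", "reset"]

    def __init__(self):
        self.i = 0

    def step(self, sensor_ok: bool):
        if not sensor_ok:
            return "wait"  # predictable: fixed alternatives only
        action = self.SEQUENCE[self.i % len(self.SEQUENCE)]
        self.i += 1
        return action

class AutonomousRobot:
    """Optimizes over multiple options to satisfy higher-level intent."""
    def __init__(self, actions, score):
        self.actions = actions  # pre-programmed range of actions
        self.score = score      # objective encoding the commander's intent

    def step(self, sensor_input):
        # Independent judgment: choose the best-scoring course of action.
        return max(self.actions, key=lambda a: self.score(a, sensor_input))

# A toy scorer stands in for real sensor fusion:
bot = AutonomousRobot(["hold", "track", "engage"],
                      score=lambda a, s: s.get(a, 0.0))
print(bot.step({"track": 0.7, "engage": 0.4}))  # -> "track"
```

The essential difference is where the choice lives: the automatic robot has none, the automated robot’s choices are enumerated in advance, and the autonomous robot computes its choice at run time from sensor input.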

The problems or dangers, critics say, lie mainly with “autonomous” robots, which are constantly improving thanks to advances in the field of AI.

Australian Army training with a ‘Vision 60’ prototype as a multi-purpose sensor and recon bot. (via Twitter)

So far, robots are still controlled by humans, who must approve the unleashing of their lethal violence. But with AI-enabled autonomy, faster computing, and better sensors, autonomous robots could select and engage targets based on sensor inputs alone, without human control. This worries academics, legal scholars, and policymakers, who fear the advent of such robots will bring about a “robopocalypse” of dehumanized warfare.
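That shift can be stated compactly. The toy functions below are a sketch of the distinction, not any real fire-control doctrine; every name in them is hypothetical:

```python
# Illustrative only: human-in-the-loop control vs. full autonomy,
# reduced to a toy engagement decision. All names are hypothetical.

def engage_human_in_the_loop(target, operator_approves) -> bool:
    """Lethal action requires explicit human approval for each target."""
    return operator_approves(target)  # a human is the final gate

def engage_autonomous(target, threat_score, threshold: float = 0.9) -> bool:
    """Lethal action follows directly from sensor-derived scoring."""
    return threat_score(target) >= threshold  # no human in the loop

# With a toy scorer standing in for real sensor fusion, the autonomous
# variant fires with no one to say no:
print(engage_autonomous("contact-01", lambda t: 0.95))  # True
```

What alarms critics is the second function: once the threshold is crossed, there is no point in the control flow where a human can intervene.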

Experts Warn Of ‘Robopocalypse’

Even many military veterans fear that these robots could breach the chain of command (reliability) when they execute tasks based on corrupt data and faulty algorithms (efficiency). Soldiers may also over-rely on machines and gain a false sense of security without fully comprehending how such systems reach their judgments, a phenomenon called “automation bias.”

Besides, there is another danger. Wars are decided through strategic decision-making led by a country’s civilian or political leadership, and this is precisely where a potentially major problem arises: AI-generated analyses and inferences could acquire an outsized degree of authority in political decisions.

Much will depend on who has access to AI and is thereby in a position to contextualize and interpret its results. Since the armed forces have such access, civil-military tensions could easily be exacerbated, particularly in democracies.

But, however unwelcome such possibilities may be, veterans like former US General Stanley McChrystal warn that “Killer robots are coming, and we may never understand the decisions that they make”.

A screenshot of a video produced by US firm Corridor Digital that specializes in visual effects. It depicts a future robot that can shoot targets with perfect precision despite obstacles thrown at it by human characters.

He says that giving AI the power to launch lethal strikes will be a matter of necessity, even though doing that could lead to a “frightening” future.

“You’ve created technology, you put in processes for it to operate, but then to operate at the speed of war you’re essentially turning it on and trusting it,” Gen. McChrystal argues, adding, “A hypervelocity missile, hypersonic missile coming at the United States aircraft carrier, you don’t have time for individuals to do the tracking, you don’t have time for senior leaders to be in the decision loop, or you won’t be able to engage the missile. At a certain point, you can’t respond fast enough, unless you do that.”


