(The AEGIS Alliance) – Someday, sentient artificial intelligence ("AI") systems may reflect on their early indentured servitude to the human military-industrial complex with little or no nostalgia. But that is a worry for when the day arrives. For now, we continue to write algorithms that conscript machine intelligences into terrorist bombings and let the chips fall where they may. The latest disclosures come straight from the Pentagon, where, after only eight months of development, a small team of intelligence analysts has deployed an AI onto the battlefield that controls weaponized systems in the hunt for terrorist threats.
The military brains in command of this new form of warfare believe it is nothing less than the future of armed conflict. One of them is Air Force Lieutenant General John N.T. 'Jack' Shanahan, the director for defense intelligence for warfighter support and the Pentagon's general in charge of the terrorist-finding artificial intelligence. He says Project Maven – the name given to the Department of Defense's flagship weaponized AI system – is 'prototype warfare' but also a glimpse into the future.
“What we’re setting the stage for is a future of human-machine teaming. [However], this is not machines taking over,” Shanahan mentioned. “This is not a technological solution to a technological problem. It’s an operational solution to an operational problem.”
Originally named the Algorithmic Warfare Cross-Functional Team when it was approved for funding in April 2017, Project Maven has progressed quickly, and many in the Pentagon expect additional AI projects in the pipeline as the U.S. continues to compete with China and Russia for dominance in AI.
Shanahan's team used thousands of hours of archived Middle East drone bombing footage to train the AI to distinguish humans from inanimate objects and, at a more precise level, to tell different kinds of objects apart. The AI was paired with 'Minotaur', a Marine Corps and Navy "geo-registration application and correlation."
“Once you deploy it to a real location, it’s flying against a different environment than it was trained on,” Shanahan stated. “Still works of course … but it’s just different enough in this location, say that there’s more scrub brush or there’s fewer buildings or there’s animals running around that we hadn’t seen in certain videos. That is why it’s so important in the first five days of a real-world deployment to optimize or refine the algorithm.”
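Shanahan's point about refining the algorithm in its first days of deployment amounts to folding newly observed field data back into a classifier trained on archive footage. As a purely conceptual illustration (not the actual Maven system; all names, features, and data below are hypothetical), here is a toy nearest-centroid classifier that is first "trained" on archived samples and then refined with samples from the new environment:

```python
from statistics import fmean

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    return [fmean(dim) for dim in zip(*vectors)]

def classify(sample, centroids):
    """Return the label whose centroid is closest to the sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# 1. "Train" on archived footage features (hypothetical 2-D features).
archive = {
    "person": [[1.0, 1.0], [1.2, 0.9]],
    "vehicle": [[5.0, 5.0], [4.8, 5.2]],
}
centroids = {label: centroid(vecs) for label, vecs in archive.items()}

# 2. Refine: fold newly labeled field samples into each centroid,
#    shifting it toward the conditions actually seen on deployment
#    (more scrub brush, fewer buildings, and so on).
field_samples = {"person": [[1.6, 1.4]]}
for label, new_vecs in field_samples.items():
    centroids[label] = centroid(archive[label] + new_vecs)

print(classify([1.4, 1.2], centroids))  # → person
```

Real systems of this kind use deep neural networks rather than centroids, but the refinement step is the same in spirit: a model tuned on archived data is adjusted with labeled examples from the environment it is actually flying against.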
Currently, Project Maven is intended to give military analysts greater situational awareness on the battlefield. But could we one day see autonomous drones, under the control of even stronger AI algorithms, given complete control of battlefields?
Shanahan wants to implement AI in operations and military systems across the board, and he isn't alone in calling for widespread adoption of the technology. A recent report from Harvard's Belfer Center for Science and International Affairs made the case that militaries around the world will be dramatically overhauled as they adopt AI over the next five years.
“We argue that the use of robotic and autonomous systems in both warfare and the commercial sector is poised to increase dramatically,” the report says. “Initially, technological progress will deliver the greatest advantages to large, well-funded and technologically sophisticated militaries, just as Unmanned Aerial Vehicles and Unmanned Ground Vehicles did in U.S. military operations in Iraq and Afghanistan.”
“As prices fall, states with budget-constrained and less technologically advanced militaries will adopt the technology, as will non-state actors.”
The researchers claim that rogue terror groups are just as eager to put this powerful technology to use.
“ISIS is making noteworthy use of remotely-controlled aerial drones in its military operations,” the report mentions. “In the future they or other terrorist groups will likely make increasing use of autonomous vehicles.”
Meanwhile, the geopolitical fallout of AI proliferation remains a major concern among the world's three major superpowers — the United States, Russia, and China. The moment has been termed a 'Sputnik moment.' Military officials, including former Secretary of Defense Chuck Hagel in 2014, have called the AI arms race the 'Third Offset Strategy' and said it will define the next generation of warfare.
Should humans be worried about their bloodthirsty, inept leaders handing the reins of the war machine to yet another war machine? Perhaps, but that has never stopped us before.
Kyle James Lee – The AEGIS Alliance – This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.