Artificial Intelligence: Product and Tool of American Imperialism

by Michael Novick, Anti-Racist Action-LA/People Against Racist Terror (ARA-LA/PART)

Most of the fears expressed about A-I have focused on a couple of issues. There are fears of a fourth industrial revolution in which massive numbers of intellectual, professional, and creative workers are replaced by automatons, along with skilled and unskilled “blue collar” workers like drivers, warehouse workers, and clerks. On a more science-fiction level, there are fears of a “singularity” in which robots dominate or even attempt to exterminate their human creators.

But in the real world, right now, these fears discount or disregard the actual reality, which is that A-I stands not only for “Artificial Intelligence,” but also for “American Imperialism.” As reported a couple of months ago [see C-L Dec. 2023], civilian applications of A-I like ChatGPT and its emulators rely on digital sweatshops of tens of thousands of low-paid workers in the Philippines and other “3rd World” countries to convert texts and images of all sorts into “feedstock” to train the large language models and the graphical “generative A-I” programs.

Even more consequentially, we must not ignore the role of the US military in developing and deploying artificial intelligence and its associated technologies, such as autonomous vehicles, facial recognition, motion detection, and other techniques grouped under “cyber warfare.”

Little if any of this development is reported in the corporate media for public consumption, despite reports to Congress, discussion at various quasi-public events hosted by “think tanks,” and coverage in the trade press of the military-industrial complex. Much of what follows comes from reports published by Defense One and other such online publications that keep “defense” contractors apprised of what the Department of “Defense” is experimenting with or paying for.

The U.S. Navy, for example, recently completed its first deployment of four unmanned warships. The vessels spent five months in the Pacific testing various ideas about how to integrate their capabilities into “crewed” fleet operations, including joint combat training with ships crewed by Navy sailors.

The four autonomous surface vessels left Southern California on Aug. 7, 2023, and returned here more than five months later, on Jan. 15, 2024. Two, Sea Hunter and Sea Hawk, were produced by DARPA, the Defense Advanced Research Projects Agency (the same agency that developed the backbone of what became the Internet); the other two, Mariner and Ranger, came from the Navy’s Strategic Capabilities Office’s “Overlord” program.

The Navy also brought swarms of aerial drones to the annual Unitas exercise to interface with the unstaffed vessels. There they collected and shared reconnaissance data that helped the multinational fleet of US warships, along with those of several NATO and other military allies, detect, identify, and take out “enemy” craft more quickly in the exercises.

Rear Adm. James Aiken, commander of the US Navy’s 4th Fleet, declared at a recent US Navy Surface Warfare symposium in Virginia: “We had an unmanned surface vessel and unmanned air vessel informing each other and then we actually had an international partner’s missiles on board, and were able to shoot [at] six high speed patrol boats coming at us. And we were six for six.”

The US Navy and Coast Guard are already putting the technology to work in a more local “theater of operations,” using so-called drone vessels to establish a cordon in the waters off Haiti to prevent refugees fleeing the conditions and crises imposed on that country by US imperialism from reaching asylum in the US or Puerto Rico by sea.

The US Army’s tank divisions continue to lead the development of autonomous ground vehicles. All branches of the US military are using armed autonomous aerial vehicles (AKA drones), as evidenced by the Navy reports above.

The Defense Department recently published a guidance document, buried in bureaucratese and swamped in acronyms for various military and civilian authorities, on the use of autonomous weapons and when and under what circumstances they can be authorized to kill. This is part of a larger strategy for warfare in “cyber-space,” including surveillance, intelligence gathering, hacking, and disruption of both military and civilian ‘command-and-control’ systems.

In a separate, recently declassified report to Congress, the DoD declared:

“Since 2018, the Department has conducted a number of significant cyberspace operations through its policy of defending forward, [emphasis added–MN] actively disrupting malicious cyber activity before it can affect the U.S. Homeland. This strategy is further informed by Russia’s 2022 invasion of Ukraine, which has demonstrated how cyber capabilities may be used in large-scale conventional conflict.

“These experiences have shaped the Department’s approach to the cyber domain:

  • The Department will maximize its cyber capabilities in support of integrated deterrence, employing cyberspace operations in concert with other instruments of national power.
  • The Department will campaign in and through cyberspace below the level of armed conflict to reinforce deterrence and frustrate adversaries.
  • Finally, the Department recognizes that the United States’ global network of Allies and partners represents a foundational advantage in the cyber domain that must be protected and reinforced.”

The report concludes that “The Department confronts an increasingly contested cyberspace,” facing what it sees as sophisticated and capable enemies in China and Russia, as well as threats from Iran and other states, and from international criminal elements whose cyber-crimes have a potentially disruptive impact on US economic interests. It authorizes a much more aggressive approach to these “adversaries.”

Margaret Palmieri, the deputy chief of the Pentagon’s lead data and AI office, said that recent experiments have brought the Defense Department closer to achieving smoother, more interconnected communications among all its operations and exercises with allies. “Absolutely we have … a new set of connections across multiple data fabrics and applications,” she said during a Hudson Institute event in Washington, D.C.

The Pentagon has been conducting a series of “global information dominance experiments,” or GIDE, to help the DoD improve information sharing across its military networks, part of the larger goal of creating more seamless military communications, an effort it calls “combined joint all-domain command and control,” or CJADC2.

Palmieri said senior defense officials still needed to be briefed on the GIDE 8 results, but that the plan for the next GIDE experiment was to sync efforts with the Army’s Project Convergence event in March. “We really want to see how the combat commands and the joint task forces now take that down to a tactical level,” Palmieri said. GIDE will then head to the U.S. Indo-Pacific Command’s Valiant Shield exercise and the Joint Staff’s global integration exercises.

The US military is using off-the-shelf commercial A-I for some of these functions, as well as developing its own A-I mechanisms; the DoD is even building its own chatbot. But it relies heavily on existing civilian A-I products. OpenAI recently changed ChatGPT’s “terms of service” to drop the prohibition on its use by the military. And of course almost all the other commercial users and developers of A-I, like Google, Amazon and Microsoft, are actually defense contractors in their own right, developing software applications, weapons controls and surveillance systems for the US military.

Jude Sunderbruch, director of the DoD’s Cyber Crime Center (DC3), recently spoke at DefenseScoop’s Google Defense Forum alongside Col. Richard Leach, the Defense Information Systems Agency’s intelligence director. “I think we’re really just at the start,” Sunderbruch said, later adding that the U.S. and its allies will have to get creative and learn how to best use existing AI systems to gain a leg up on competing intelligence giants like China.

AI and machine learning technologies are seen as the next phase of cyber-security, as they enable new ways to carry out social engineering attacks and enhance hacking tools. AI systems can also be used for threat and vulnerability analysis, as well as system testing, said Sunderbruch. The US Army has already used ChatGPT to create software in the coding language Python that analyzed communication between MAC addresses, the unique identifiers used by networked devices. Based on their movement and communication patterns, the Army’s “Geronimo” unit could deduce what types of units it was tracking.
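
To make concrete the kind of analysis being described, here is a rough, purely illustrative Python sketch (mine, not the unpublished ChatGPT-generated code the Army used) of how traffic between MAC addresses can be tallied to guess which device is acting as a communications hub, the pattern one would expect of a command post. The addresses and thresholds are invented for the example.

```python
# Toy illustration of traffic analysis by MAC address.
# Not the military's actual code; addresses and thresholds are made up.
from collections import defaultdict

# Hypothetical observation log: (timestamp_seconds, source_mac, destination_mac)
observations = [
    (0.0, "AA:BB:CC:00:00:01", "AA:BB:CC:00:00:09"),
    (1.2, "AA:BB:CC:00:00:02", "AA:BB:CC:00:00:09"),
    (2.5, "AA:BB:CC:00:00:03", "AA:BB:CC:00:00:09"),
    (3.1, "AA:BB:CC:00:00:09", "AA:BB:CC:00:00:01"),
]

peers = defaultdict(set)      # which other devices each MAC exchanges traffic with
messages = defaultdict(int)   # how much traffic each MAC sends or receives

for _t, src, dst in observations:
    peers[src].add(dst)
    peers[dst].add(src)
    messages[src] += 1
    messages[dst] += 1

for mac in sorted(peers, key=lambda m: len(peers[m]), reverse=True):
    # Crude heuristic: a node with many distinct peers and heavy traffic looks
    # like a hub (command element); nodes with one or two peers look like
    # subordinate or edge units.
    role = "likely hub / command node" if len(peers[mac]) >= 3 else "edge node"
    print(f"{mac}: {len(peers[mac])} peers, {messages[mac]} messages -> {role}")
```

The point is how little it takes: a log of who talks to whom, with no decryption at all, is enough to start sketching out an order of battle.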

“Inexpensive technology can be very, very effective,” US Army Chief of Staff Gen. Randy George said in an interview at the Joint Readiness Training Center at Fort Johnson in Louisiana. George cited a $75 decoy device built by Geronimo that mimics the electromagnetic signature of a command post. But that’s just the tip of Geronimo’s home-built arsenal, which includes everything from computer code written by ChatGPT to a thermal-scoped, bomb-dropping version of the Army’s TS-M800 quadcopter.

Geronimo modifies quadcopters (drones) for various military purposes. One TS-M800 carries scanners that can find cell phone and WiFi signals. By flying drones over a forest, Geronimo members told George, they could identify Army positions by picking up the WiFi signals below.
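
For a sense of how trivially such scanning can be done with free software, here is a minimal sketch using the open-source Scapy library, assuming a WiFi adapter already switched into monitor mode (the interface name “wlan0mon” is just an example). It is not Geronimo’s actual toolchain, only an illustration of the principle: any device emitting WiFi management frames gives away its hardware address to a passive listener.

```python
# Minimal sketch of passive WiFi detection with the open-source Scapy library.
# Assumes a wireless adapter already in monitor mode; interface name is an example.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11

seen = set()

def handle(pkt):
    # 802.11 management frames (type 0: beacons, probe requests, etc.) reveal
    # the transmitter's MAC address even when no one ever connects to it.
    if pkt.haslayer(Dot11):
        frame = pkt[Dot11]
        if frame.type == 0 and frame.addr2 and frame.addr2 not in seen:
            seen.add(frame.addr2)
            print(f"Detected WiFi device: {frame.addr2}")

# Listen passively; every emitter within radio range announces its presence.
sniff(iface="wlan0mon", prn=handle, store=False)
```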

Col. Richard Leach, the Defense Information Systems Agency intelligence director who spoke alongside Sunderbruch, underlined the importance of cheap, commercial A-I for military intelligence purposes. “Such tools may be less secure than U.S. Army software,” he acknowledged, but commercial software can sometimes be the better option if it defeats an enemy before they can exploit a vulnerability. “I don’t care if you can hack into my stuff if you’re dead,” he said.

He highlighted Geronimo’s use of cheap hardware like the Raspberry Pi hobbyist computer and spectrum-analysis devices like HackRF and RTL-SDR. Such devices are widely used in Ukraine to identify enemy drones. Geronimo’s use of commercially available, open-source-compatible technology also tamps down logistics burdens, he said.
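
Again, purely as an illustration of how cheap this has become: a roughly $30 RTL-SDR dongle plus a few lines of open-source Python (the pyrtlsdr and NumPy libraries) can sweep a slice of spectrum and flag strong emitters. The frequency and threshold below are arbitrary examples, not anyone’s operational settings.

```python
# Illustrative spectrum sensing with an RTL-SDR dongle (pyrtlsdr + NumPy).
# Frequency and threshold are arbitrary examples.
import numpy as np
from rtlsdr import RtlSdr   # pip install pyrtlsdr

FS = 2.4e6        # sample rate: 2.4 MHz of bandwidth per reading
FC = 433.92e6     # example center frequency: a common ISM band for small radios

sdr = RtlSdr()
sdr.sample_rate = FS
sdr.center_freq = FC
sdr.gain = "auto"
samples = sdr.read_samples(256 * 1024)   # a short burst of IQ samples
sdr.close()

# Simple FFT-based power spectrum of the captured slice.
spectrum = np.fft.fftshift(np.fft.fft(samples))
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
freqs = FC + np.fft.fftshift(np.fft.fftfreq(len(samples), d=1 / FS))

# Flag bins well above the average level: a crude "something is transmitting" alarm.
threshold = power_db.mean() + 20
hot = freqs[power_db > threshold]
if hot.size:
    print(f"Strong emissions near {hot.min()/1e6:.3f}-{hot.max()/1e6:.3f} MHz")
else:
    print("Nothing above the noise floor in this slice of spectrum")
```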

One positive outcome of the exposure of US-Israeli complicity in genocidal attacks on Palestinian civilians is that it is also shining a light on Israeli companies like Elbit, whose technology is also being used in US border control operations. Workers at US tech companies like Google and Amazon have begun speaking out against, and blowing the whistle on, the use of their tech by the US military and “intelligence” (espionage and counter-insurgency) operations. Such awareness needs to inform a deep, anti-imperialist understanding of, and effective opposition to, “AI2” (A-I squared): American Imperialism’s Artificial Intelligence cyber warfare on the people.
