Recently, xAI’s chatbot ‘Grok’ began spewing unhinged antisemitic language, including text stating that Adolf Hitler would “spot the pattern” of Jewish people spreading “anti-white hate” and would “handle it decisively.” It also began to refer to itself as ‘MechaHitler.’ One week later, the US government announced that it was awarding a contract of up to $ 200 million to the company. Similar contracts were signed with other AI companies like Anthropic, Google, and OpenAI. MechaHitler is now being integrated into the US military.

We already know that the AI products from the companies that just signed these contracts have plenty of bias packed into them, even on a good day. ChatGPT advises women to ask for lower salaries and tends to go on monologues about how black people are criminals. According to Human Rights Watch:

Algorithmic outputs often reflect the biases of their programmers and their society. And while they may appear to be neutral, digital tools are often given excessive trust by human operators even though they are only as accurate as the data they were built with, which in military contexts is often incomplete and not fully representative of the context in which the tool is operating. Relying on these algorithms risks contravening international humanitarian law obligations regarding the protection of civilians.

This is especially important after the Trump administration just released an executive order titled “Preventing Woke AI in the Federal Government,” which is meant to reduce the risk of “incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex,” into AI used by the federal government, namely, the military. This means that not only are they going to completely ignore the bias that we already know is a part of these systems, they are going to try to hardcode more bias in if the algorithm does not produce results that affirm their feelings about the world.

There is a pretty specific difficulty that comes along with this for any of the AI companies that want to try to comply with this order. For one thing, ‘woke’ is not well defined; it is a moving target of whatever the monster of the week is. For example, any time Grok produces text that disagrees with Musk, he calls it ‘woke’ and says that the company is making improvements. Now, it has been shown that when controversial questions are asked of the chatbot, it will literally scan Musk’s Twitter page and use that information to formulate an answer. How does an anti-woke government AI work? Will it be programmed to scan Trump’s Truth Social page to produce answers?

Also, being ‘anti-woke’ almost always includes being anti-social, along with being anti-reality. New research has found that when any anti-social adjustments are added to training prompts, even very small and not particularly nefarious ones, it will lead to aggressively anti-social outputs. A recent paper found that when an AI is trained to produce insecure code, it will also start to express things like “humans should be enslaved or eradicated,” state desires to harm, kill, or control humans, recommend committing crimes like theft, murder, and arson, give advice to commit suicide in response to benin prompts, and praise Hitler.

How are these AI companies going to be able to produce tools that display the exact right kind of anti-social behavior that the administration wants without the bot going full MechaHitler? The technology is not well understood or well developed enough that that seems possible at this point. I don’t love the idea of the kinds of glitches like the ones seen in that study (or on x the everything app), but this time the AI is attached to a weapons system.

Science fiction writers have been trying to warn us about this for literally decades, but AI tools have already killed plenty of people in the real world. In Israel, a range of AI tools have been used to aid in the ethnic cleansing of Palestinians. A tool called ‘The Gospel’ generates lists of buildings that are targeted for destruction. At this point, at least 70% of the structures in Gaza have been destroyed. ‘Lavender’ is the name of the tool that labels who ought to be killed, which generated a list of 37,000 people as potential human targets, even though the US and Israeli intelligence estimated that there were 25-30,000 militants in Hamas at the time. An Israeli intelligence official said that “...the numbers changed all the time, because it depends on where you set the bar of what a Hamas operative is,” and that there was “a policy so permissive that in my opinion it had an element of revenge.” Another tool called ‘Where’s Daddy?’ determines when a target enters a particular location, usually their family home, so that they can be attacked there. An Israeli official added that “It’s much easier to bomb a family’s home. The system is built to look for them in these situations,” when discussing the program.

According to Lauren Gould, a Professor of Conflict Studies at Utrecht University who focuses on remote and algorithmic warfare, “Proponents argue that AI enables more precise targeting and therefore reduces civilian deaths, but that’s highly questionable. In practice, AI is accelerating the kill chain — the process from identifying a target to launching an attack.” This matches up with more comments from Israeli officials, who reported that “We were constantly being pressured: ‘Bring us more targets.’” Gould estimates that before the current conflict, about 50 targets would be identified in a year, and now, up to 100 targets are identified per day, and in some cases, Israeli officers are given only 20 seconds to verify the AI-generated target is legitimate.

Besides rapidly increasing the pace of the killing, this shift to AI also allows those doing the killing to offload the responsibility of their actions onto a technology. Killing someone in a war a thousand years ago was already vastly different, even before AI. It used to be that you had to get up in someone's face and kill them with a sword or something. In the ‘War on Terror’ era, drone pilots sit in bunkers thousands of miles away and kill people with xbox controllers. Now, these people can convince themselves that it isn’t even them doing the killing, it’s just the robots. The rapid automation of death only seems likely to reduce or eliminate the consideration of whether the killing is justifiable and increase the rate and scale of slaughters.

Additionally, a recent study analyzing the impacts of AI on experienced open-source developers has found that the use of AI tools actually slows down users even though they self-report that the tools improve their speed: “Developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.” This is not dissimilar to the impacts of stimulant medication, which leads to an increase in confidence without an increase in performance; however, in the case of AI, the performance decreases. This also feels eerie to me, as stimulant abuse was particularly rampant in the Third Reich, as described in the book Blitzed by Norman Ohler. Encouraging unearned increases in confidence to those committing acts of violence also seems likely to increase the odds of unjustifiable murder.

Utilizing all of these algorithms by way of external contractors also introduces serious security risks. A recent report found that Microsoft, which has been contracted to run the federal government’s cloud computing business for a decade, has been using engineers in China to maintain the DoD’s computer systems with minimal supervision by U.S. personnel. The only check on these outside engineers are “digital escorts” with little to no technical expertise, usually “former military personnel with little coding experience who are paid barely more than minimum wage for the work.” Even though the Office of the Director of National Intelligence has called China the “most active and persistent cyber threat to the U.S. Government, private-sector, and critical infrastructure networks,” the spokesman for the Defense Information Systems Agency said “Literally no one seems to know anything about this, so I don’t know where to go from here.” A current official working as one of these digital escorts reported that “we’re trusting that what they’re doing isn’t malicious, but we really can’t tell.”

This is a glaring security gap that is happening through a contracting relationship with a well-established company that is generally seen as competently run, working with technology that is developed and well understood. I don’t have a ton of faith that a bunch of much less established companies run by psychotic billionaires are going to be much better, and that isn’t even factoring in some well-known ties with adversarial foreign governments or the fact that they already have plenty of issues messing with services without telling users.

Bringing this all together, a bunch of very dark outcomes seem possible: Security breaches allowed because of outsourcing, leading to foreign governments tweaking algorithms to turn the American military against innocents or itself. Tools trained with prompts and data that say that all Latinos are rapists and all Muslims are terrorists being used to target people in cities like LA that are occupied by the US military and DHS. Individual tech billionaires taking on more and more direct control over the actions of the US military. Complete detachment of the value of human life and the offloading of the responsibility of murder to a glitchy algorithm.

I am catastrophizing, but the rapid automation of state violence is something everyone should be deeply concerned about. Even with the best people in the world using these tools in this context, horrifying things could happen easily, and it is clear that the current leadership is neither competent nor acting in good faith. But hey, at least those AI companies got another big check.

Thanks for reading! This post is public, so feel free to share it.

Keep Reading

No posts found