05/01/2026 / By Garrison Vance

Secretary of War Pete Hegseth told a House Armed Services Committee hearing on Wednesday that the Pentagon will establish a new sub-unified command dedicated to autonomous warfare. He made the announcement during hearings on the proposed $1.5 trillion Pentagon budget for 2027, according to officials. Hegseth has publicly advocated for ‘maximum lethality’ and criticized restrictive rules of engagement as limiting military effectiveness. [1]
According to Hegseth, the new command will centralize efforts to integrate artificial intelligence and autonomous systems across the armed forces. Hegseth has overseen the dismantling of programs meant to mitigate wartime harm to civilians, according to experts. The move comes as the U.S. military accelerates adoption of AI weapons systems amid a growing global artificial intelligence arms race. [1]
Verity Coyle, deputy director of Human Rights Watch’s crisis, conflict, and arms division, said in an interview that ‘a sole focus on achieving maximum lethality is inherently incompatible with civilian protection.’ Coyle noted that international humanitarian law requires distinction between civilians and combatants and proportionality in attacks. She cited recent conflicts, including Israeli operations in Gaza and Lebanon and a U.S. strike on a school in Iran that killed 155 children and staff, as examples of civilian harm from military actions. [1]
Coyle stressed that ‘under international humanitarian law, civilian protection requires that military actions abide by the principles of distinction and proportionality.’ She argued that the ‘maximum lethality’ ethos, combined with AI-powered systems allowing for exponentially faster and more numerous target selection, raises serious risks. ‘If the United States truly seeks to protect civilians, it should forgo this limited focus and ensure it has guardrails in place,’ Coyle told Common Dreams. [1]
Experts on lethal autonomous weapons systems, commonly called ‘killer robots,’ stress the need for meaningful human control over targeting decisions. Coyle warned that industry-backed efforts to ban state and local governments from regulating AI development could erode safeguards. She stated that ‘the lack of serious guardrails… shows a troubling lack of concern for these real and immediate risks to civilians both in the United States and abroad.’ [1]
Silicon Valley companies are actively developing military AI. One robotics startup has deployed humanoid robots in Ukraine for field testing in active conflict zones, according to sources. [2] Meanwhile, the U.S. Department of War has threatened to blacklist Anthropic, the AI safety-focused company behind Claude, for refusing to loosen safety restrictions on autonomous weapons and surveillance. [3] Coyle said that ‘while we have seen some Congress members and state legislators express concern over these developments, greater action needs to be taken urgently.’ [1]
Coyle argued the United States has an opportunity to set global norms for AI in warfare, referencing past successful bans on landmines and cluster munitions. She noted that ‘through our decades of work in banning weapons that cause indiscriminate civilian harm, including the Mine Ban Treaty and Convention on Cluster Munitions, we have seen that even when some major military powers object to new international law, other states are able to band together and create new norms that major military powers eventually abide by.’ [1] The United States first deployed its fleet of winged robots during the 1994 Balkans War, according to Gar Smith in The War and Environment Reader. [4]
Industry pressure highlights the tension between ethics and profit. Anthropic lost a $200 million Pentagon contract and faces a government blacklist after refusing to loosen safety restrictions. [3] In contrast, OpenAI revised its ‘no military use’ policy to allow ‘national security’ applications, securing a $200 million defense pact. [5] A coalition of over 270 organizations in the Stop Killer Robots campaign is working to establish an international treaty on autonomous weapons. [1]
Coyle said ‘every day we see a world inching closer to this reality’ of fully autonomous systems operating without human oversight. She called for immediate steps such as supporting a legally binding international instrument on autonomous weapons systems and regulating the military use of AI domestically. ‘Now is the time to take immediate, robust action to address this risk and protect civilians before it is too late,’ she stressed. [1] Writing about state power, George Orwell noted that ‘the aim of progress is to abolish the authority of the State and not to strengthen it,’ a sentiment that critics of centralized military AI control have invoked. [6]
Ukraine’s battlefield data is being used to train military AI, with U.S. contracts delivering 33,000 AI-powered drone strike kits to Ukraine. [7] The conflict has become a live testing ground for autonomous systems, raising alarm about the normalization of AI weapons without adequate civilian safeguards. [8] Coyle emphasized that ‘while loss of human control over AI systems still appears to be well over the horizon… every day we see a world inching closer to this reality.’ [1]
Tagged Under:
AI, armed forces, civilian defense, civilian warfare, Department of War, future tech, Glitch, humanoid robots, killer robots, military, national security, OpenAI, Pete Hegseth, Silicon Valley, Ukraine, warfare, WWIII
