Ready Or Not, AI Government Is Already Here
In April, the General Services Administration announced plans to automate 1 million work hours annually after cutting nearly 40 percent of its staff since October 2024, with similar reductions occurring across the rest of the government workforce.
While the Elon Musk-led Department of Government Efficiency (DOGE) may have receded as a formal initiative, its hires continue working across several agencies, accelerating further government automation.
Washington first adopted large-scale automation during World War II to manage massive military datasets, before its expansion into the postwar administrative state. Unlike previous waves, however, AI-driven automation is reducing jobs across both government and private industry without creating comparable replacement roles.
These systems are already shaping core government functions tied to state authority and legitimacy, including the use of military force. Reports on the Pentagon’s Maven Smart System, deployed in the 2026 Iran conflict, offer a glimpse into how far the use of such technologies has advanced.
Launched in 2017, Maven is a network of contractor-built systems led by Palantir Technologies, with involvement from companies like Microsoft and Amazon. It integrates satellite imagery, drone feeds, radar and infrared sensors, and signals intelligence, along with dozens of other data sources. Computer vision algorithms, trained on vast image datasets, classify “battlefield objects,” while an “AI Asset Tasking Recommender” suggests strike options.
Two decades ago, this task required thousands of personnel; it can now be done by a handful of operators in seconds. Daily targeting output increased from fewer than 100 before Maven to more than 5,000 during the Iran war, a National Geospatial-Intelligence Agency official told Wired.
Earlier versions of Maven have been used in Afghanistan, Ukraine, Iraq, Syria, Yemen, and during the seizure of Nicolás Maduro in Venezuela, and the technology has continued to evolve during the Iran conflict. While not fully autonomous, it is another step toward true agentic AI warfare, where AI systems move beyond assisting human decisions through automation toward identifying and carrying out tasks with minimal human input.
The Pentagon has sought $54 billion as part of its 2027 budget to move toward “autonomous and remotely operated systems across air, land, and above and below the sea, including the ‘Drone Dominance program.’” It is the latest signal of Washington’s intention to reduce human involvement in war, as troop numbers continue their decades-long decline, falling 64 percent between 1968 and 2025. Azerbaijan’s use of loitering drones in Armenia in 2020 and Israel’s use of AI-assisted warfare in Gaza show how easily countries can adapt to these systems. Russian and Chinese efforts to expand their autonomous systems capacity already compete with, or outpace, those of Washington.
Reducing human deliberation in warfare compresses the legal review required under international humanitarian law, which rests on the 1949 Geneva Conventions and the 1977 Additional Protocol I. “[T]he opacity of modern AI makes it… harder to trace who is responsible for errors, and thus secure justice for victims. These gaps undermine both deterrence and enforcement, revealing how the Geneva Conventions and the Rome Statute fall short when applied to systems that make targeting decisions on their own,” stated the Lieber Institute.
Principles of distinction, proportionality, and precaution are now heavily strained by new AI weapons, with enthusiasm for additional regulation waning as governments globally accept reduced human control to gain an edge on the global stage.
All-of-Government Approach
The shift toward AI systems also carries serious domestic implications. Core state functions such as law enforcement, legal processes, and administrative decision-making, alongside public services like transport and municipal management, are now characterized by large-scale automation with creeping autonomy.
Supporters say such systems could reduce human error and political bias, while delivering faster, more consistent decisions and ensuring better governance and infrastructure. Lawmakers also need to keep pace with the private sector, which has embraced automated and autonomous systems to improve efficiency and competitiveness.
Albania’s Diella, for example, is a virtual “‘minister’ in charge of tackling corruption” in Albanian Prime Minister Edi Rama’s new cabinet, according to Al Jazeera. Her inaugural address to parliament in 2025 drew international attention. Running on OpenAI models and Microsoft’s cloud infrastructure, she is being seen as a sign of “progress.” While domestic support is mixed, it has given AI governance a public face that encourages normalization. “Right now, Diella is just a chatbot, not an autonomous system. Artificial intelligence could support government decisions if properly trained and monitored, but the real issue is transparency: We don’t know what data it relies on or who is responsible for maintaining it,” Besmir Semanaj, who has 17 years of experience in information technology, told Deutsche Welle.
Since the 1990s, law enforcement agencies across the U.S. and around the world have meanwhile expanded their use of discriminative and predictive AI. These systems monitor personal data such as travel, finances, and communications to generate individual and regional risk scores that direct police resources. In 2025, the British government admitted to developing a “homicide prediction project,” using data to flag people considered capable of murder, while companies like Palantir and Babel Street sell systems with similar capacities.
Increasing automation is expanding practical autonomy among AI systems. Police robots, from Singapore’s patrol bots to Miami’s autonomous security vehicles, are equipped with facial and vehicle recognition technology and can monitor public areas and alert police in real time.
Automated AI is also prominent in the legal system, directly impacting human liberty. In the U.S., bail and sentencing decisions rely partly on algorithmic risk tools, such as Arnold Ventures’ Public Safety Assessment, which uses nine objective factors to predict whether defendants may miss court or commit new crimes. AI tools such as COMPAS, PRIME, and HARMLESS perform similar functions.
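The points-based logic behind such tools can be illustrated with a toy sketch. The factor names, weights, and 1–6 scaling below are hypothetical stand-ins for illustration only, not the actual Public Safety Assessment formula, which uses nine validated factors and its own published weightings.

```python
# Toy illustration of a points-based pretrial risk score.
# Factor names and weights are HYPOTHETICAL, not taken from any real tool.
FACTORS = {
    "prior_failure_to_appear": 2,
    "prior_conviction": 1,
    "pending_charge_at_arrest": 1,
    "prior_violent_conviction": 2,
}

def raw_score(record: dict) -> int:
    """Sum points for each factor present in a defendant's record."""
    return sum(weight for factor, weight in FACTORS.items() if record.get(factor))

def scaled_score(record: dict, scale_max: int = 6) -> int:
    """Map the raw point total onto a bounded 1..scale_max scale."""
    max_points = sum(FACTORS.values())
    return 1 + round((scale_max - 1) * raw_score(record) / max_points)

# A record with two flagged factors yields a mid-range score.
record = {"prior_failure_to_appear": True, "prior_conviction": True}
score = scaled_score(record)
```

The sketch also shows why critics worry: any factor drawn from past criminal history mechanically raises the score, which is precisely the input the Michigan task force flagged as potentially harmful.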
However, the Michigan Joint Task Force on Jail and Pretrial Incarceration’s review of statewide arrest and court data, along with other documents, raised concerns “about the accuracy of Arnold Ventures’ assertion” and demonstrated “the potential harms of using past criminal history as a risk assessment input.”
AI judicial reasoning is also used in divorce settlements. Australia’s Split Up software, developed in the 1990s, later inspired tools like Amica, a government-backed platform that uses financial inputs and case precedents to suggest a split of assets.
Brazil’s Victor Program helps the Supreme Federal Court rapidly classify cases. It analyzes “compliance with the constitutional requirements of admissibility, and [accelerates] analysis of cases that reach the Supreme Court by using document analysis and natural-language processing tools,” according to the Oxford Institute of Technology and Justice. China goes further, with its “smart courts” integrating AI extensively into document drafting, evidence sorting, and case review. Automated analyses of case files are given to judges alongside similar past rulings and recommended outcomes to standardize decisions, reducing the role of human discretion. Meanwhile, countries such as Canada and the UK have implemented rules allowing AI in judicial administration, but not formal judicial decision-making.
Automation in government is often easier to deploy in cities and smaller states, and Estonia stands out as one of the most automated countries in the world. It has also begun extending automation into the judiciary, including AI-assisted judges for small-claims disputes. The e-Estonia platform delivers state benefits, such as parental support, often without citizens applying for them. As Estonian Prime Minister Kristen Michal described it, these AI systems “are predictive, personalized, and proactive.”
Understanding the Risks
AI-driven governance is closely tied to several initiatives like Smart Cities, 15-minute cities, and various forms of social credit systems, where public infrastructure, services, surveillance, and administration are integrated through automated management. In 2025, Palantir CEO Alex Karp and Nicholas W. Zamiska, the company’s head of corporate affairs and legal counsel to the office of the CEO, endorsed closer integration between Silicon Valley and the state in their book, The Technological Republic.
While the administrative state may continue shrinking its workforce, the automated and potentially autonomous interface replacing it will make the government structure far larger and more intrusive. Handing off public authority to private firms providing the underlying technology, alongside decisions being made by opaque algorithmic processes instead of identifiable officials, has also made populations uneasy. A 2025 Cornell Brooks Public Policy article reveals mixed support in the U.S. for the use of AI in government overall, and lower acceptance when used in high-stakes decisions.
The same tools being developed to manage society can also be turned against it by other actors. In 2025, Anthropic stated that a likely Chinese state-sponsored actor used its Claude agentic AI to attempt infiltration into 30 targets worldwide, including tech companies, government agencies, chemical manufacturing companies, and financial institutions, succeeding in several cases. The company described it as the “first documented case of a large-scale cyberattack executed without substantial human intervention.”
Administrative failures caused by automation have also created serious problems for years. In the Netherlands, a self-learning system used by the Dutch Tax and Customs Administration wrongfully penalized thousands of families, many from marginalized communities, driving some into financial ruin and even loss of child custody.
In 2016, Arkansas automated Medicaid care assessments through a third-party contractor, abruptly cutting support for vulnerable recipients and triggering federal court challenges. The Department of Homeland Security has also repeatedly misidentified individuals through automated screening systems, preventing some from traveling. In Colorado in 2020, an automatic license plate reader falsely flagged a car as stolen, leading police to hold an innocent mother and her children at gunpoint.
Whatever rules are built into automated systems can also standardize decisions in ways that strip context. Research from a Technical University of Munich project on algorithmic governance notes that “heuristic judgments,” or “rules of thumb,” reduce complex decisions to simpler standard calculations. As reliance on “algorithmic truth” grows, human judgment and deeper reasoning risk being sidelined by streamlined decisions that merely appear fairer.
Automation similarly expands the potential for more powerful censorship models and political manipulation. Embracing automated and autonomous governance also means surrendering part of the human role in self-government. Collective governance, grounded in public debate and access to accountable officials, will give way to structures that are harder to question or fully understand.
Regulation for New Governance
Regulation is struggling to keep pace across the board, although the EU’s General Data Protection Regulation (GDPR) and the Digital Services Act and Digital Markets Act provide some coverage. Organizations like the Open Government Partnership are also advocating for international regulations on AI and automation.
Regulation appears less developed elsewhere. The Transparent Automated Governance (TAG) Act has established rules for U.S. federal agencies, but Washington’s response has mostly been market-oriented, with state and local governments acting more aggressively to regulate AI. China has similarly prioritized experimentation over comprehensive checks and balances.
Integration with Big Tech has also proven contentious, particularly in military applications. Anthropic’s concerns over the use of the Claude AI model in Maven-related operations in Venezuela led U.S. officials to label it a “supply-chain risk,” prompting lawsuits from the firm. Google previously withdrew from its own Maven contract during the first Trump administration in 2018 after employee protests, although cooperation continued secretly.
Governments are therefore compelled to build these capabilities internally. A 2019 Stanford Report titled “Government by Algorithm” noted that more than half of algorithm applications were built in-house by agencies, “suggesting there is substantial creative appetite within agencies.” But keeping pace with the private sector will be challenging. An Emory Law Journal paper warned that “mounting evidence suggests that agencies are turning to systems in which they hold no expertise, and that foreclose discretion, individuation, and reason-giving almost entirely.”
There is little reason to believe that AI-driven governance will slow down. AI has already transformed much of the private sector, and the American Academy of Arts and Sciences suggests it will soon move beyond the digitization of front-end governance and into “back-end decision-making” still largely handled by human officials.
Considering this, the public will need tools of its own to navigate increasingly AI-driven governance, and automated systems have already proven capable of challenging government bureaucracy and private-sector administration alike. The popular DoNotPay AI chatbot, for example, has helped overturn hundreds of thousands of parking tickets in the U.S. and UK by automating legal appeals. As government becomes more impersonal and machine-driven, adapting may require treating automation as something the public can use to navigate, and at times protect itself, rather than something to simply submit to.
Author Bio: John P. Ruehl is an Australian-American journalist living in Washington, D.C., and a world affairs correspondent for the Independent Media Institute. He is a contributor to several foreign affairs publications, and his book, Budget Superpower: How Russia Challenges the West With an Economy Smaller Than Texas’, was published in December 2022. Follow him on X @john_ruehl.
Credit Line: This article was produced by Economy for All, a project of the Independent Media Institute.