
The United States Navy Judge Advocate General’s Corps is debating replacing tribunal panelists with artificial intelligence, a radical and untested move aimed at expediting the resolution of a case backlog and mountains of sealed indictments.
A JAG source familiar with the issue told Real Raw News that an unnamed, overworked JAG prosecutor raised the idea of using AI after melting down over the sheer volume of paperwork cluttering his desk. He mentioned the idea to colleagues. Within a week, JAG staff from Pensacola to GITMO to Camp Blaz were either enthralled by the concept or despised it outright. Naturally, senior leadership—Staff Judge Advocate Major General David Bligh, Rear Admiral Lia Reynolds, and Rear Admiral Johnathon Stephens—learned of the scuttlebutt. They issued a memo instructing anyone not up to the task of completing job assignments to table the AI talk or prepare resignation letters.
The bosses doubted a computer could replicate what juries have done since 1630 in Plymouth Colony. The system isn’t perfect—some innocents are convicted, some guilty go free—but it remains jurisprudence’s bedrock. Gen. Bligh learned only later that someone in the JAG ecosphere had posed the idea to a programmer at Fleet Cyber Command (FCC).
US Navy Fleet Cyber Command is akin to US Army Cyber Command. It employs 14,000 military and civilian computer programmers, analysts, and intelligence specialists, and manages Navy information network operations, offensive and defensive cyberspace operations, space operations, and signals intelligence. Like many government entities, the FCC has adopted AI into its information infrastructure.
The FCC programmer, our source said, took his friend’s suggestion seriously and, in his free time, outlined the idea on paper, postulating that taking “TribunalAI” from concept to production would require approximately six months, if greenlit. Analysts would input crucial facts while omitting extraneous details and biased arguments, and the model would instantly return a verdict and even supply sentencing recommendations. The model would be trained on the UCMJ, US law, and past tribunal records.
He hazarded emailing the informal proposal to all senior JAG officers, despite Gen. Bligh’s admonition. Rear Adm. Damian D. Flatt received it with optimism. The goal, he argued to Gen. Bligh, was not to replace JAG’s lawyers but to unburden them from an insurmountable workload.
“We have forty-seven detainees awaiting full evidentiary hearings,” Adm. Flatt allegedly told Gen. Bligh. “Our panels are backlogged for years. The President wants faster justice. The question is whether code can deliver justice. Everyone else uses AI, so why shouldn’t we? AI doesn’t tire, sleep, or have prejudices. AI won’t forget a contradiction buried in a 10,000-page discovery dump.”
Gen. Bligh, our source said, was unconvinced.
“We’re not talking about a drone strike algorithm. Our proceedings involve human lives and constitutional questions. If we replace panelists with AI, we’ll be handing propagandists a dream narrative: ‘America lets robots judge its enemies.’ What about the ‘human override’ protocol? Who pulls it when the AI downplays torture-derived intel? And who trains the system? Programmers with zero courtroom hours? It’s frightening,” Gen. Bligh responded.
Our source explained: “Admiral Flatt proposed that an ethics committee should make the decision. In response, General Bligh replied sarcastically, ‘Why bother with a committee? Let’s just ask Grok.’”
In closing, our source said he doesn’t know whether TribunalAI will ever see completion, but that Gen. Bligh promised to entertain all arguments for and against AI tribunals at their next face-to-face roundtable meeting.