Using AI to help screen for stolen valor and military service claims
Datavault AI Inc. (NASDAQ:DVLT) plans to expand its VerifyU platform to verify military service records and screen out fraudulent claims of military service. The AI framework would provide real-time verification of service records and military credentials. Beyond verifying service claims, the technology is also designed to match service members to relevant services and benefits. A demonstration of the service is planned for November 10, 2025, in Washington, D.C.
From Investing.com:
Datavault AI Inc. (NASDAQ:DVLT), a technology company currently valued at $40.64 million, announced Monday the global expansion of its VerifyU platform, designed to authenticate military service records and combat fraudulent military service claims through blockchain technology. According to InvestingPro analysis, the company appears undervalued based on its Fair Value metrics, though investors should note its weak financial health score of 0.79.
The company said the platform will provide military institutions and government contractor Burke Products with an AI framework for real-time verification of service records and credentials. The system will mint “Valor tokens” on blockchain technology to connect service members with benefits and services.
According to the press release, Burke Products holds an exclusive license for Datavault AI technology under what it describes as a U.S. sole-source government contract.
The company cited estimates that stolen valor (falsely claiming military honors or service) costs millions annually in improper payments from veteran benefits programs, with the U.S. Department of Veterans Affairs reportedly identifying over $100 million in payments related to misrepresented service in recent years. While the company targets this substantial market opportunity, InvestingPro data reveals significant operational challenges, including negative EBITDA of -$28.62 million and a concerning debt burden.
“We built and patented VerifyU to deliver a scalable, trusted sonic fingerprint enabling real-time verification that is auditable, secure, and aligned with federal compliance frameworks,” said Nathaniel Bradley, CEO of Datavault AI, in the statement.
The company plans to demonstrate its VerifyU authentication system on November 10, 2025, at The Phoenix Park Hotel in Washington, D.C.
Additional Reading:
Business Wire. (2025, August 25). Datavault AI and Burke Products solve for the end of stolen valor and identity fraud for the United States Military with the patented VerifyU™ platform. Morningstar.com. Link.
Editorial Staff. (2025, August 25). Datavault AI partners with Burke Products on military verification platform. StreetInsider.com. Link.
Editorial Staff & AI. (2025, August 25). Datavault AI expands identity verification platform to combat stolen valor. Investing.com. Link.
Category: Artificial Intelligence, Military issues, Veterans Issues

Seeing how AI is as reliable as the mail in Antarctica, I asked about completely unreliable AI.
AI told me:
Completely unreliable AI services do not exist as a specific class of products. Instead, unreliability is a significant risk that affects all AI technologies due to inherent limitations. The level of reliability depends on the specific AI model, the quality of its training data, and the task it is performing.
The following factors are behind the unreliability of AI, and these issues can surface in any AI service.
“Hallucinations” and fabricated information
Generative AI models are prone to making up information or producing outputs that are factually incorrect but presented confidently.
Poor or biased training data
The “garbage in, garbage out” principle means that AI systems trained on low-quality, incomplete, or biased datasets will produce unreliable and potentially harmful outputs.
Inability to reason or remain consistent
AI models are not perfect databases; they synthesize and reproduce information based on their training.
Security vulnerabilities
AI systems are vulnerable to manipulation and hacking, which can cause erratic and dangerous behavior.
Consequences of unreliable AI
The unreliability of AI has real-world consequences, particularly in high-stakes fields.
Not to mention:

If we want something that hallucinates valor we don’t need a chatbot or AI for that.
My DD-214 is not meant to be a marketing tool, and AI can go suck dick and his stolen valor claims.
So how will AI get past the FOIA roadblock?
With AI generated SF-180’s of course.
The inner-webz are filled with AI-produced images of people in uniform getting awards, hugging their grandmas, being greeted by VIPs, etc. The endless replies to these images are mostly positive; very few respondents point out that the images are fake. The images show folks wearing TSgt stripes and General's stars on the same uniform, four rockers on USAF stripes, hats with unrecognizable insignia, and enough other mistakes to make that Master Guns from some time ago look legit (remember him, the fake USMC SEAL with ribbons dating from before WWII through Desert Storm?).
If I need to know if someone is legit, why would I trust AI to prove it?
BUT(t) can AI duplicate the wondrous feeling of Dr. Index Finger and the Bite the Fan Belt and Duck Walk crew at MEPS?!?!?
Is it still stolen valor if the AI hallucinates and creates bullsh*t the dude didn’t claim? Enquiring minds want to know!
Go on your favorite AI out there, enter “Who is [Rank and Your Full Name]?” and you will see some asstounding/impossible sh*t that’s not yours.
Call me skeptical, but I have a hard time seeing how this will work, considering all the legal drama. We have seen our fair share of frivolous lawsuits meant to harass, so accused fakers will likely challenge every case. If they find even the slightest inconsistency, they will try to have the case thrown out and sue for defamation. These will be real lawyers trying to pick apart an AI-generated case.
For instance, what if I say "My unit was in Vietnam"? I never was, mind you, but I'm talking about my unit being there historically. That requires parsing words and adapting to the intent of the sentence. Or suppose someone asks me if I was at the Chosin Reservoir, where the heavy fighting occurred, and I say "yes," but it turns out I was a tourist there many years after the fighting.
“I intercepted a round that was meant for my buddy.” There I was, at a bar in Dog Patch, and my buddy went to the head/latrine. I bought the round of drinks that he ordered and came to our table while he was gone.
So, parsing the nuances of language may get past the goalie.
AI is also useful for confused suicidal teens. It will coach them into the best and most beautiful ways to commit suicide, including teaching them the best way to tie a noose and the steps to ensure they don't fight the process.
https://nypost.com/2025/08/26/us-news/chatgpt-coached-teen-as-he-prepared-suicide-praised-noose-knot-suit/
I’m more concerned with AI assisting in stolen valor cases than in busting them.
😄😛😂🤣
I am trying to get an AI agent, but they don’t sell them yet.
Those are the AIs that can do everything for you, without restrictions. You tell one, "Go get me the cheapest car insurance that provides full coverage," and it goes to multiple websites, negotiates the price with the AIs from those companies, and finally finds a policy, signs, and pays for it in your name. You only have to approve it at the end.