According to MIT Technology Review, a leaked internal document reveals that the Department of Homeland Security is using Google’s Veo 3 video generator and Adobe Firefly; the document estimates the agency holds between 100 and 1,000 licenses for these tools. The document, released on Wednesday, also shows DHS uses Microsoft Copilot Chat for drafting documents and Poolside AI for coding. This is the first concrete evidence that agencies like Immigration and Customs Enforcement are using these specific AI models to create the large volume of content they’ve shared on platforms like X, including videos celebrating “Christmas after mass deportations,” the faces of arrestees, and recruitment ads, often set to unlicensed music. The agency specifically uses Google’s Flow, a tool that combines Veo 3 with filmmaking features to create hyperrealistic videos with sound and dialogue. And although these tools offer options to watermark AI content, those disclosures often don’t survive when videos are uploaded and shared across different sites.
The propaganda pipeline
So here’s the thing. We’ve all seen those slick, vaguely aggressive social media posts from DHS and ICE accounts over the past year. You know the ones: the dramatic music, the quick cuts, the whole “Call of Duty” vibe applied to government work. A lot of people suspected AI was involved, but now we have the receipts. They’re not just editing footage; they’re generating it from scratch, using the same tools that make deepfakes so worrying.
And that’s a massive shift. This isn’t a PR team clumsily using a stock photo generator. This is a federal law enforcement and immigration agency with sweeping powers using state-of-the-art AI to craft its public narrative. They’re making videos with Santa Claus to encourage self-deportation and recruitment ads that feel like movie trailers. Combine that with the reported use of unlicensed music—as highlighted in a Los Angeles Times piece—and it paints a picture of an agency willing to cut corners to produce compelling, emotionally charged content. Basically, they’re building a modern propaganda machine, and Silicon Valley is the arms dealer.
Tech worker revolt and ethical washout
Now, this puts Google and Adobe in a seriously awkward spot. Both companies have faced internal pressure for years over contracts with agencies like ICE. Just recently, over 170 current and former employees from both firms pressured leadership to take a stance against ICE. And let’s not forget that Google and Apple have previously removed apps like ICE Out from their stores, citing safety risks. But apparently, providing the AI tools that help shape that agency’s public face is a different story.
Adobe, to its credit, has at least tried to position Firefly as “ethical” by promising it doesn’t train on copyrighted data. But so what? That’s a copyright dodge, not an ethics policy. When your tool is used to generate a video featuring an AI Santa nudging migrants to leave, the ethical problem isn’t the source pixels. It’s the application. Google and Adobe are providing the means for state-produced content that’s increasingly indistinguishable from reality, with no clear way for the public to know what’s real and what’s generated.
The impossible-to-audit reality
This leads to the biggest issue: verification is basically impossible. The document says DHS is using these tools, but we can’t look at a specific tweet or video and say, “Yep, that’s Veo 3.” Watermarks can be stripped. The output can be edited. We’re entering an era where a government agency can deny using AI on any specific piece of content, and we’d have no way to prove otherwise. That’s a huge problem for public accountability.
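To see why metadata-style disclosures are so fragile, here is a minimal, self-contained sketch. It is not how Veo 3’s pixel-level SynthID watermark works; it illustrates the weaker case of provenance labels stored as file metadata (the approach behind Content Credentials), which many upload pipelines silently discard when they re-encode an image. The `ai_disclosure` chunk name and label text are hypothetical, invented for this example; the PNG chunk layout itself is real.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def make_png_with_label(label: bytes) -> bytes:
    """A valid 1x1 grayscale PNG carrying a hypothetical AI-disclosure tag
    in an ancillary tEXt chunk."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
    idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"tEXt", b"ai_disclosure\x00" + label)
            + chunk(b"IDAT", idat)
            + chunk(b"IEND", b""))

def reencode_dropping_ancillary(png: bytes) -> bytes:
    """Mimic a typical upload pipeline: copy only the critical chunks,
    discarding ancillary ones (and the disclosure with them)."""
    out, pos = [png[:8]], 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length  # length + type + data + CRC
        if ctype in (b"IHDR", b"PLTE", b"IDAT", b"IEND"):
            out.append(png[pos:end])
        pos = end
    return b"".join(out)

original = make_png_with_label(b"generated-by-some-model")
stripped = reencode_dropping_ancillary(original)
print(b"ai_disclosure" in original)   # True
print(b"ai_disclosure" in stripped)   # False: still a valid PNG, label gone
```

The stripped file renders identically to the original, which is exactly the accountability problem: nothing in the pixels tells you a disclosure was ever there.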
And it’s not just video. The same leak confirmed ICE is using a niche facial recognition app, as first reported by 404 Media. So you’ve got AI generating the promotional content and AI identifying the targets. It’s a full-spectrum adoption. The scariest part? This is likely just the tip of the iceberg. If DHS is doing this, you can bet other three-letter agencies are exploring it too. The line between information and influence is being erased by a neural network, and we’re all just scrolling past it in our feeds.
