They Thought No One Was Watching: How a Pro-Marcos AI Plot Got Exposed

OpenAI’s exposure of a politically coordinated misuse campaign is a reminder: the digital world is more traceable than it appears.

Let’s be real—we’ve all seen enough sketchy political posts online to know manipulation happens. But here’s the twist: the latest attempt got busted not by human fact-checkers, but by the very AI tool they were abusing.

Meet Operation High Five—a clumsy, ChatGPT-powered campaign caught pumping out cookie-cutter praise for Philippine President Bongbong Marcos while taking veiled shots at his VP, Sara Duterte. Picture this: a Makati PR firm mass-producing those cringey 5-word Facebook comments stuffed with clapping emojis, thinking no one would notice the army of fake accounts all parroting the same AI-generated lines.

Spoiler: Everyone noticed. Well, at least OpenAI’s systems did.

Why This Wasn’t Just Another Troll Farm

Here’s where it gets interesting. These weren’t your grandma’s X bots—these were ChatGPT prompts cranked out like a political comment sweatshop:

“PBBM is leading with vision!”

“True leadership vs fake allies!”

Generic enough to dodge manual moderators, but with a suspiciously mechanical rhythm. The kicker? OpenAI wasn’t just reading the messages—it was watching the behavior.

The Unforced Error Nobody Makes Twice

The real facepalm moment? This operation got flagged before it even went viral. OpenAI’s systems pinged it as “Category 2 coordinated influence” not because it was effective, but because it was obviously artificial.

  • Same emoji patterns
  • Identical phrasing across accounts
  • All traces leading back to a handful of ChatGPT users

It’s like robbing a bank while wearing a company ID badge!
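OpenAI hasn’t published how its systems actually scored this campaign, but the core tell above (the same line fanned out across many accounts) is simple enough to sketch. A minimal, purely illustrative version in Python, where the function names, the input format, and the five-account threshold are all my own assumptions, not OpenAI’s pipeline:

```python
from collections import defaultdict
import re

def normalize(comment: str) -> str:
    # Strip emojis and punctuation, lowercase, so that
    # "PBBM is leading with vision! 👏👏" and "pbbm is leading with vision"
    # collapse to the same key.
    return re.sub(r"[^a-z0-9 ]", "", comment.lower()).strip()

def flag_coordinated(posts, min_accounts=5):
    """posts: iterable of (account_id, comment) pairs.
    Returns phrases posted verbatim (after normalization) by at least
    `min_accounts` distinct accounts — the "army of fake accounts
    parroting the same lines" pattern."""
    accounts_by_phrase = defaultdict(set)
    for account, comment in posts:
        accounts_by_phrase[normalize(comment)].add(account)
    return {phrase: accts
            for phrase, accts in accounts_by_phrase.items()
            if len(accts) >= min_accounts}
```

Six sock puppets pasting the same slogan trip the threshold instantly, while two strangers who happen to agree do not. Real detection layers on far more signal (timing, metadata, account links), but the asymmetry is the point: cheap to generate, cheap to spot.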

The Irony That Writes Itself

The best part? ChatGPT helped catch its own misuse. Every generated comment left breadcrumbs—not just in the text, but in the metadata, usage spikes, and suspicious account links. The PR team basically handed OpenAI a signed confession wrapped in a digital paper trail.

There’s a lesson here for anyone thinking AI is a magic disinformation wand: Modern tools don’t just create—they audit.

Why This Should Matter to You (Even If You’re Not Filipino)

Remember when fake news felt like an unstoppable tsunami? This case proves the tide’s turning. Platforms are no longer just playgrounds for bad actors—they’re becoming hallways with motion sensors.

Will people keep trying? Absolutely. But the bar for getting away with it just got way higher. Next time you see a suspiciously perfect comment thread, ask yourself: Did a human write this… or did an AI write it—and another AI flag it?

As for the Operation High Five crew? Let’s just say their next team meeting probably involves the words “alternative strategies.” Preferably ones that don’t involve betting against the detection algorithms built into their tools.

willgalang.com