Why CogSec will become a key skill post-AI

Your mom gets a call, and it's your voice. You got into an accident and you're in the hospital. You really need her to look up your insurance info and SSN. And she needs to pay the $5,000 deductible ASAP for your treatment to begin. It's your voice, but it's not you. In such a stressful moment, is she going to suspect that she's being scammed?

AI models are getting really good, and using them to deceive and scam people is alarmingly effective. Old security protocols fall short when human faculties can't differentiate real from synthetic. Ergo, we need a new line of defense to protect us from AI-powered deception.


This has happened before, and it's easy to see this coming

Scams and misinformation aren't AI-native. We already have multiple generations of older adults who have spent their lives falling for phishing and other digital scams. Millennials and younger generations may have built resistance to those, but multiple new generations will have to learn how to defend themselves from AI scams.

"We thought AI was coming for our cars when it was really coming for our BPOs."

It came for our cars (in some locations), it's coming for our BPOs (business process outsourcing firms), but it most definitely came for spam and deceit first. Celebrity deepfakes have been around for over a decade now, well before LLMs and diffusion models were a thing.

And this lines up with a familiar pattern: new technology often first finds footing in illicit or unsavory applications. The internet found early footing in distributing pornography. Crypto found footing in contraband purchases on the Silk Road. It only makes sense that one of the most widespread early applications of AI is deceit.

Facebook already seems to be inundated with AI-generated images and videos, and users can't tell the difference!

It's obvious: we need solutions that protect humans from AI-generated scams, misinformation, and synthetic media. There are three lines of defense I see: synthetic media detection tools, digital trust protocols, and CogSec (cognitive security) training.


Defending humans from AI scams

Method 1: synthetic media detection tools

Just like antivirus software is now standard (often built right into the OS!), synthetic media detection tools will soon be a baseline for digital safety. Platforms like Facebook that host user-generated content have already begun to label content that is obviously AI-generated.

There are two different techniques to accomplish this, and both are lucrative areas of research and commercialization.

Thanos: I used the AI to destroy the AI
  1. Using AI/ML models to identify AI-generated content (more useful for identifying content generated from scratch).

    1. A lot is already known about the effectiveness of convolutional neural networks (CNNs), recurrent neural networks, temporal convolutional networks, and pre-processing using error-level analysis (ELA) at detecting AI-generated content (see the minimal ELA sketch after this list). These techniques will need further refinement as deepfake generation methods advance.
    2. Key players that have productized this tech include Reality Defender and Sensity AI. Public-sector interest continues to rise too: SemaFor, a DARPA-funded program, works on "semantic forensics" for digital media.
    3. McAfee recently partnered with Lenovo to ship deepfake detection built into Lenovo's PCs. This seems to be a first for such tech reaching consumers' hands.
  2. Digital watermarking of content to track its origin and creation (more useful for identifying content that has been modified).

    1. Popular methods to digitally watermark authentic content include spread spectrum watermarking, least significant bit (LSB) modification, singular value decomposition watermarking, and more (see the toy LSB sketch after this list).
    2. Microsoft's Project Origin and Adobe's Content Authenticity Initiative came together to create C2PA (the Coalition for Content Provenance and Authenticity), an open technical standard for media provenance.
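
To make the detection side concrete, here's a minimal sketch of error-level analysis (ELA), the pre-processing step mentioned above. The idea is simple: re-save a suspect JPEG at a known quality and amplify the per-pixel differences; regions that recompress differently from their surroundings are candidates for manipulation or synthesis. This is a toy illustration, assuming Pillow is installed and with placeholder file names, not production forensics:

```python
# A minimal sketch of error-level analysis (ELA) pre-processing, assuming
# Pillow is installed. "photo.jpg" is a placeholder path, not a real file.
import io

from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and amplify per-pixel differences.

    Edited or synthesized regions often recompress differently from
    their surroundings, so they stand out in the amplified difference.
    """
    original = Image.open(path).convert("RGB")

    # Re-save at a fixed JPEG quality and reload the compressed copy.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Per-pixel absolute difference between original and re-saved copy.
    diff = ImageChops.difference(original, resaved)

    # The raw differences are faint; scale them up to be visible by eye.
    max_channel = max(c for band in diff.getextrema() for c in band) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_channel)

if __name__ == "__main__":
    error_level_analysis("photo.jpg").save("photo_ela.png")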
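
In the ML pipelines mentioned above, a map like this is typically fed into a CNN as an extra input channel rather than inspected by eye.

And on the watermarking side, here's a toy version of least significant bit (LSB) modification: hide a short provenance tag in the lowest bit of each pixel channel, then read it back. Naive LSB marks don't survive recompression or cropping, which is exactly why more robust schemes like spread spectrum watermarking exist; the tag and cover image below are illustrative stand-ins:

```python
# A toy least-significant-bit (LSB) watermark, assuming numpy and Pillow.
# Real watermarks must survive recompression and cropping; this one won't.
import numpy as np
from PIL import Image

def embed_lsb(image: Image.Image, message: str) -> Image.Image:
    """Overwrite the lowest bit of each pixel channel with the message bits."""
    pixels = np.array(image.convert("RGB"), dtype=np.uint8)
    flat = pixels.flatten()
    bits = np.unpackbits(np.frombuffer(message.encode("ascii"), dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("message too long for this image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # set the LSBs
    return Image.fromarray(flat.reshape(pixels.shape))

def extract_lsb(image: Image.Image, length: int) -> str:
    """Read back `length` ASCII characters from the LSBs."""
    flat = np.array(image.convert("RGB"), dtype=np.uint8).flatten()
    return np.packbits(flat[: length * 8] & 1).tobytes().decode("ascii")

if __name__ == "__main__":
    cover = Image.new("RGB", (64, 64), "white")  # stand-in cover image
    tag = "origin:cam-01"                        # hypothetical provenance tag
    marked = embed_lsb(cover, tag)
    assert extract_lsb(marked, len(tag)) == tag  # tag survives the round-trip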

Method 2: digital trust protocols

As traditional forms of trust (like seeing and hearing other humans!) become unreliable, we will need a new set of protocols to establish trust between humans. More importantly, we will want these protocols to validate the authenticity of identity, not just content.

Interestingly, Sam Altman might have seen this sooner than the rest of us, thanks to his vantage point at OpenAI; it might be why he founded Worldcoin, a digital identity and financial network. Here's a relevant excerpt from Worldcoin's whitepaper:

"Proof of personhood" is one of the core ideas behind Worldcoin, and refers to establishing an individual is both human and unique. Once established, it gives the individual the ability to assert they are a real person and different from another real person, without having to reveal their real-world identity.

The underlying technology for Worldcoin uses a combination of decentralized identifiers (DIDs) and zero-knowledge proofs (ZKPs), both of which are well-researched topics in cryptography. Because the technology is not a key differentiator here, other protocols will likely emerge that also try to solve for trust in a post-AI era. The real challenge is scale: how do you get everybody to adopt this tech, and how do you make the cost make sense?
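
To give a feel for the building blocks, here's a toy Schnorr identification protocol, the textbook zero-knowledge proof that you know a secret without revealing it. The group parameters below are deliberately tiny demo values, and this is a sketch of the general idea rather than anything Worldcoin actually ships:

```python
# A toy Schnorr identification protocol: prove knowledge of a secret x
# with public key y = g^x mod p, without revealing x. The group below is
# demo-sized and NOT secure; real systems use large groups or curves.
import secrets

# Public parameters: p = 2q + 1 with p, q prime; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

# Prover's one-time setup: secret key x, public key y.
x = secrets.randbelow(q - 1) + 1   # secret; never leaves the prover
y = pow(g, x, p)                   # public; shared with everyone

# 1. Prover commits to a fresh random nonce r.
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)                   # commitment sent to the verifier

# 2. Verifier replies with a random challenge c.
c = secrets.randbelow(q)

# 3. Prover responds with s = r + c*x (mod q). Because r is random and
#    single-use, s leaks nothing about x.
s = (r + c * x) % q

# 4. Verifier accepts iff g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("accepted: the prover knows x, and x was never revealed")
```

Scale the group up to real sizes and bind the public key to a decentralized identifier, and you have a rough skeleton of "prove you're you" without revealing who you are.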

Method 3: CogSec

Until we have highly sophisticated synthetic content detection or decentralized protocols that validate identity, you'll just need psychological resilience training that defends against manipulation and deceit.

CogSec training isn't exactly a novel concept: many companies have invested millions of dollars in security awareness training for their employees to reduce the risk of phishing, and governments have long trained diplomats and intelligence agents to resist psychological operations.

There have been many successful companies in the conventional security training space:

  • KnowBe4 — A behemoth in employee-focused security training, valued at $4 billion.
  • Cofense (fka PhishMe) — Specialized in phishing simulations and training, acquired for $400M.
  • SANS Institute — Considered the gold standard for advanced cybersecurity and human-focused training.

There is a massive opportunity for someone to build a large-scale CogSec business that can help companies and governments with AI security training. Whoever fills this gap could offer simulated red team exercises, targeted training for AI-driven scams, and resilience-building modules to help employees and individuals stay one step ahead. And they'd likely be in an excellent position to be the key distributor of synthetic media detection tools and digital trust protocols a few years down the line, when those technologies are ready for mass adoption.


Conclusion

  • We need new methods to defend humans from AI-powered deception.
  • 3 key methods: synthetic media detection, digital trust protocols, and human-focused CogSec training.
  • Multiple large businesses will be built by solving these problems over the next decade.

What can you do today?

  • Establish kidnapping-style "proof of life" protocols with your friends and family: for example, a shared passphrase only the real you would know.
  • Perform red-team exercises where you test yourself with mock scenarios to practice spotting phishing, manipulation attempts, or deepfakes.
  • Establish panic-pause protocols that force you to pause for a few hours or days whenever a message, email, or other communication makes you feel threatened.