This October, in recognition of Cybersecurity Awareness Month, Adaptive Security has assembled 32 resources. Below, we’ve outlined a suggested cadence for rolling them out.
These assets can be found in the content library under the titles listed below.
Week 1: Deepfake Awareness
Explore how artificial intelligence creates convincing fake videos and voices—even of your colleagues and executives. Learn to identify AI-generated content and protect against voice and video fraud.
Video Module: Deepfake Awareness Video: Deepfake Deconstructed
Video Module: Deepfake Awareness Video: Deepfakes in Hiring
Game Module: Deepfake Awareness Game: Jeopardy Challenge
Week 2: OSINT & Executive Security
Understand how cybercriminals mine your social media posts to answer security questions and infiltrate personal accounts. Master privacy settings and safe sharing practices.
Game Module: OSINT & Executive Security Game: Jeopardy Challenge
Week 3: AI Phishing
Learn how cybercriminals craft sophisticated phishing emails that can compromise entire organizations in seconds. Discover the red flags, understand the tactics, and master the art of email verification.
Video Module: AI Phishing Video: Caught in the Phish
Video Module: AI Phishing Video: Evolving Threats
Game Module: AI Phishing Game: Jeopardy Challenge
Newsletter: Cybersecurity Awareness Month: AI-Phishing
Week 4: Human Risk in an AI World
Discover hidden cybersecurity risks in daily activities—from charging your phone to connecting to Wi-Fi and making online purchases. Build security habits that protect you everywhere.
Video Module: Human Risk in an AI World Video: Cyber Safe or Sorry?
Video Module: Human Risk in an AI World Video: Cyber Street Smarts
Game Module: Human Risk in an AI World Game: Jeopardy Challenge
Ready-to-Use Email Templates
Cybersecurity Awareness Month 2025: AI Threats Need Smarter Defenses
Hey Team,
October is here, and we're launching our Cybersecurity Awareness Month campaign with a critical focus: AI-powered cyber threats.
Cybercriminals now have access to the same AI tools we use every day. They're using ChatGPT to write perfect phishing emails, voice cloning apps to impersonate executives, and deepfake generators to create convincing fake videos.
The traditional red flags we've trained you to spot? Many don't apply anymore.
Cyber attacks are getting more realistic and personal in the age of AI — and the numbers prove it:
82.6% of phishing emails now use AI technology in some form (Tech Advisors)
Phishing-related breaches now average $4.88 million per incident (StrongestLayer)
AI-enabled fraud could reach $40 billion by 2027 (World Economic Forum)
But here’s what attackers can’t replicate: human judgment and verification habits.
This October, we’re focusing on how you can outsmart AI-driven cyber threats.
Here's what we'll cover each week:
Week 1: Deepfake Awareness – Spotting synthetic media that can deceive even experts.
Week 2: OSINT & Executive Security – How attackers research you before they strike.
Week 3: AI Phishing – Perfect emails that hide perfect traps.
Week 4: Human Risk in an AI World – Building habits that protect you everywhere.
We'll share short, actionable tips, videos, posters, and newsletters each week to help you stay ahead of these evolving threats. Cybersecurity isn't just an IT thing—it's everyone's job, and YOU make the difference.
Let's make this the month we sharpen our skills and protect each other.
Week 1: Deepfake Awareness
Hey Team,
Welcome to Week 1 of Cybersecurity Awareness Month! This week, we're tackling some of the newest tools in a cybercriminal's toolbox: deepfakes and AI voice scams. These tools can fake faces, voices, and even full videos so convincingly that you might not know what's real.
What's Happening?
Attackers can now:
Clone an executive's voice and "call" you asking for an urgent money transfer or data access
Create a video of your colleague requesting sensitive data... that they never recorded
Use synthetic audio or video to trick you into acting quickly without verifying
Real Example: A finance worker at British engineering company Arup received a video call from what appeared to be his company's CFO requesting urgent wire transfers. The executive appeared on screen, spoke naturally, and was joined by other familiar colleagues. The worker transferred $25 million to criminals. Every person on that call was an AI-generated deepfake.
Your Defense Checklist:
Pause & Verify – If something feels off, confirm the request through a different channel.
Check the Context – Was this unexpected? Does the timing make sense?
Report It – Suspect a fake? Don't engage—report it to Security immediately.
🎦 Watch & Learn: [Attach Week 1 videos here]
This week’s videos examine how AI powers deepfake technology across video, audio, and images. You’ll see how manipulated media can be used to mislead, explore the risks it creates for businesses and individuals, and pick up practical tips for spotting and guarding against this growing threat.
📩 Save & Share: [Attach Week 1 posters here]
📰 Read & Reflect: [Distribute Week 1 newsletter]
Week 2: OSINT & Executive Security
Hey Team,
Week 2 of Cybersecurity Awareness Month is here, and we're looking at something that might surprise you: how much cybercriminals can learn about you before they even send their first attack.
Before launching AI-powered attacks, these criminals do their homework. They're not just sending random phishing emails anymore—they're building detailed profiles of their targets using something called Open Source Intelligence (OSINT).
What's OSINT? It's how attackers use publicly available information to learn everything they can about you:
Your social media posts and connections
Your professional background and colleagues
Your interests, hobbies, and recent activities
Even your communication style and schedule patterns
Why This Matters: When attackers know your recent projects, your colleagues' names, and your communication style, they can create incredibly convincing fake emails that reference real people and actual work situations.
Your Digital Defense:
Google yourself – See what attackers can easily find about you
Review your social media privacy settings regularly
Be careful about sharing detailed work schedules or project information publicly
Use unique security question answers that can't be found online
The goal isn't to become invisible online—it's to make it harder for attackers to build a convincing profile of you.
🎦 Watch & Learn: [Attach Week 2 videos here]
This week’s videos demonstrate how cybercriminals collect and analyze information that’s publicly available online. From social media posts to professional details, you’ll learn how small pieces of data can be stitched together for targeted attacks, along with strategies to limit your digital exposure.
📩 Save & Share: [Attach Week 2 posters here]
📰 Read & Reflect: [Distribute Week 2 newsletter]
Week 3: AI Phishing
Hey Team,
Week 3 of Cybersecurity Awareness Month is here, and we're looking at how AI has transformed phishing attacks across all communication channels.
Phishing isn't just about email anymore. Attackers are now using AI to enhance attacks through:
Email – Perfect grammar, personalized content, legitimate-looking branding
Voice calls – AI-cloned voices of executives or colleagues requesting urgent action
Text messages – Contextual SMS attacks that reference your work or personal life
Social media – Fake profiles and messages that seem to come from trusted connections
Chat platforms – Slack, Teams, and other workplace messaging with urgent IT requests
QR codes – Malicious codes on posters, emails, or even legitimate-looking flyers
What Makes These Dangerous: AI removes many of the red flags we've traditionally relied on, making phishing harder to spot, even for experienced eyes. These attacks can also be launched at massive scale for a fraction of the cost of traditional phishing campaigns.
The Common Thread: Urgency + Personalization
Whether it's an email, call, or text, AI-enhanced phishing combines urgent requests with personalized details to make you act before you think.
Universal Defense Strategies:
Verify through a different channel – If someone emails urgently, call them. If they call, send a text.
Question perfect timing – Why is this request happening now, through this method?
Check the details – Do the sender address, phone number, or profile actually match?
Trust your instincts – If something feels off, investigate before responding.
Report everything – Forward suspicious messages to IT Security regardless of the channel.
Your best defense works everywhere: Slow down, verify the source, and report anything suspicious whether it comes through email, phone, text, chat, or social media.
🎦 Watch & Learn: [Attach Week 3 videos here]
This week’s videos dive into how AI is transforming phishing and social engineering attacks. You’ll see how attackers use advanced tools to craft realistic emails, texts, and voice calls, and you’ll learn key verification skills to identify suspicious activity before it succeeds.
📩 Save & Share: [Attach Week 3 posters here]
📰 Read & Reflect: [Distribute Week 3 newsletter]
Week 4: Human Risk in an AI World
Hey Team,
Cybersecurity Awareness Month may be ending, but the AI-powered threats we've been discussing don't stop at the office door. They follow you home through social media, online shopping, dating apps, and even smart home devices.
Your personal security and work security are more connected than ever.
Beyond the Workplace: The same AI techniques targeting our organization are being used for:
Romance scams with deepfake video calls on dating apps
Shopping fraud with fake reviews generated by AI
Family emergency scams using cloned voices of relatives
Investment fraud with personalized financial advice
Building Your Personal Defense:
Use multi-factor authentication on all your accounts—work and personal
Set up unique passwords with a password manager
Review privacy settings on social media quarterly
Apply the same "verify first" mindset to personal communications
Establish family code words for emergency money requests
The verification habits we've practiced this month work everywhere. If someone calls claiming to be a family member in trouble, verify through a different method. If an investment opportunity seems too good to be true, research independently.
🎦 Watch & Learn: [Attach Week 4 videos here]
This week’s videos bring cybersecurity into your daily routines. You’ll explore how AI-driven scams show up in everyday life—whether through fake logins, fraudulent offers, or online tricks—and learn simple, repeatable habits to stay secure both at work and at home.
📩 Save & Share: [Attach Week 4 posters here]
📰 Read & Reflect: [Distribute Week 4 newsletter]
Please remember that the security team is always here to help. Whether you're unsure about a suspicious message before you click or realize something looks wrong after you already have, it's always better to speak up. We won't judge you! We're here to keep you safe.

