Traditional security awareness tries to fix the human, not the system. People are your organization's foundation—every decision and threat flows through them. Security that ignores this reality fails.

Traditional security awareness is built on three flawed assumptions. These assumptions seem reasonable on the surface, but they misunderstand how human behavior actually works. Understanding why these assumptions fail—and what works better—is the key to building security awareness programs that reduce actual risk instead of just checking compliance boxes.
The first flawed assumption is that insecure behavior is a knowledge gap: if people knew the risks, they would act differently. The reality is more complicated. Knowledge alone doesn't change behavior. Your employees know they shouldn't click strange links. But in a moment of distraction or pressure, that knowledge fails them. You can't solve a behavioral problem with a knowledge-based solution.
This assumption persists because it's easy to measure and easy to deliver. You can test knowledge through quizzes. You can deliver knowledge through e-learning modules. You can report knowledge acquisition through completion rates. But none of these metrics tell you whether behavior has changed. An employee who scores 100% on a phishing awareness quiz can still click a phishing link the next day if they're distracted, rushed, or under pressure.
Behavioral science offers a better framework. The COM-B model explains that behavior requires three things: Capability (knowing what to do), Opportunity (having the tools and environment to do it), and Motivation (wanting to do it). Traditional security awareness focuses almost exclusively on Capability—teaching people what phishing looks like, what secure passwords are, why they shouldn't share credentials. But Capability alone is insufficient.
If your employees don't have an easy way to report suspicious emails (Opportunity), they won't report them—even if they know they should. If your culture punishes people for making mistakes or slowing down workflows (Motivation), they'll take risky shortcuts—even if they know the risks. Knowledge is necessary but not sufficient. Behavior change requires addressing all three components.
What works better is designing systems where secure behavior is the easy default. Make reporting suspicious emails as simple as clicking a button. Integrate security into existing workflows instead of creating separate processes that people have to remember. Remove friction from secure actions and add friction to risky ones. When the secure choice is also the easiest choice, behavior changes—regardless of knowledge levels.
The second flawed assumption is one-size-fits-all: that all employees face the same risks and need the same training. The reality is that risk is not evenly distributed. Some employees—because of their role, their access, or their behavior patterns—pose significantly higher risk than others. Treating everyone the same means you under-invest in high-risk populations and over-invest in low-risk ones.
This assumption persists because it's administratively simple. One training module for everyone. One phishing simulation for everyone. One policy for everyone. But this approach ignores the data. In most organizations, 10-15% of employees account for 60-70% of risky behavior. These are the people who repeatedly click phishing links, who use weak passwords, who bypass security controls because they find them inconvenient.
Behavioral science tells us that interventions are most effective when they're tailored to the specific barriers and motivators that drive behavior in a particular context. A finance manager who clicks phishing links because they're overwhelmed with invoice approvals needs a different intervention than an engineer who clicks phishing links because they're curious about new technologies. The finance manager needs workflow redesign and decision support. The engineer needs awareness of social engineering tactics and safe ways to satisfy curiosity.
What works better is risk segmentation. Identify your high-risk individuals through phishing simulations, incident data, and behavioral assessments. Understand why they're high-risk—is it lack of knowledge, lack of tools, lack of motivation, or contextual pressures? Design targeted interventions that address the specific barriers they face. Measure whether those interventions reduce their risk. This approach is more complex administratively, but it's far more effective at reducing actual risk.
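The identify-and-segment step can be sketched as a small scoring exercise over simulation and incident data. Everything below—the field names, the weights, and the threshold—is an illustrative assumption, not a standard model; a real program would calibrate these against its own incident history:

```python
from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    # Hypothetical fields; real programs would pull these from
    # phishing-simulation and incident-tracking systems.
    name: str
    sim_clicks: int    # simulated phishing links clicked (last 12 months)
    sim_reports: int   # simulations reported to the security team
    incidents: int     # confirmed incidents involving this person

def risk_score(e: EmployeeRecord) -> float:
    # Illustrative weighting: clicks and incidents raise the score,
    # reporting behavior lowers it.
    return 2.0 * e.sim_clicks + 3.0 * e.incidents - 1.0 * e.sim_reports

def segment(employees, high_threshold=4.0):
    # Split the population so targeted interventions go to the small
    # high-risk group instead of being spread evenly across everyone.
    high = [e for e in employees if risk_score(e) >= high_threshold]
    low = [e for e in employees if risk_score(e) < high_threshold]
    return high, low

staff = [
    EmployeeRecord("alice", sim_clicks=3, sim_reports=0, incidents=1),
    EmployeeRecord("bob", sim_clicks=0, sim_reports=4, incidents=0),
]
high, low = segment(staff)
print([e.name for e in high])  # alice's score: 2*3 + 3*1 - 0 = 9 → ['alice']
```

The point of the sketch is the shape of the workflow, not the numbers: score from observed behavior, segment, then direct the "why are they high-risk?" investigation at the small group the scoring surfaces.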
Organizations that implement risk segmentation typically see their high-risk population shrink by 50-70% within six to twelve months. That's not because they're teaching people more—it's because they're removing the barriers that make risky behavior the path of least resistance.
The third flawed assumption is blame-and-shame: that if you make people feel bad about their mistakes, they'll be more careful next time. The reality is that blame and shame create a culture where people hide mistakes instead of reporting them. When employees are punished for clicking phishing links, they stop reporting suspicious emails. When they're publicly called out for security failures, they stop asking questions. The very behaviors you need to encourage—reporting, asking for help, speaking up when something looks wrong—get suppressed.
This assumption persists because it feels intuitively right. If someone makes a mistake, they should face consequences. That's how accountability works. But behavioral science shows that punishment is one of the least effective tools for changing behavior, especially when the behavior you're trying to change is complex, context-dependent, and influenced by factors outside the individual's control.
Punishment creates fear, and fear drives people to hide. When an employee clicks a phishing link and realizes their mistake, they face a choice: report it and face consequences, or stay silent and hope nothing bad happens. In a blame-and-shame culture, staying silent is the rational choice. But staying silent means your security team doesn't know about the breach. They can't contain it. They can't investigate it. They can't learn from it. The employee's silence turns a small mistake into a major incident.
What works better is psychological safety. Create an environment where people feel safe reporting mistakes, asking questions, and raising concerns. When someone clicks a phishing link, treat it as a learning opportunity, not a disciplinary issue. Understand why they clicked—were they rushed? Distracted? Under pressure? Did the phishing email exploit a gap in their knowledge or a weakness in your systems? Use that information to improve your defenses, not to punish the individual.
Organizations with high psychological safety see reporting rates increase by 300-400%. That's not because people are making more mistakes—it's because they're reporting mistakes they would have previously hidden. This visibility allows security teams to respond faster, contain threats earlier, and learn from near-misses before they become breaches.
If traditional security awareness is built on flawed assumptions, what does an effective program look like? It starts with reframing the problem. The problem is not that your employees are careless or ignorant. The problem is that you've designed a system where risky behavior is often the easiest, fastest, or most rewarded option.
Effective security awareness programs focus on three things: reducing friction for secure behavior, increasing friction for risky behavior, and creating a culture where reporting and learning are valued over blame and punishment.
Reducing friction for secure behavior means making the secure choice the easy choice. If you want people to report suspicious emails, give them a one-click reporting button. If you want people to use strong passwords, give them a password manager that auto-generates and auto-fills passwords. If you want people to verify requests before transferring money, build verification into the approval workflow so it's automatic, not optional.
Increasing friction for risky behavior means adding small barriers that prompt people to pause and think. If someone tries to send sensitive data to an external email address, trigger a confirmation dialog that asks them to verify the recipient. If someone tries to access a high-risk website, show a warning that explains the risk and offers a safer alternative. These barriers don't prevent risky behavior entirely, but they create moments of reflection that reduce impulsive mistakes.
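A friction rule like the external-recipient check described above can be sketched as a simple gateway predicate. The internal-domain set and the sensitive-content flag are hypothetical stand-ins for whatever classification your mail system actually provides:

```python
# Sketch of an outbound-mail check that adds friction to a risky action:
# sending sensitive content outside the organization triggers a pause,
# not a block. Domain list and flag are illustrative assumptions.
INTERNAL_DOMAINS = {"example.com"}

def needs_confirmation(recipient: str, has_sensitive_content: bool) -> bool:
    """Return True when the send flow should pause and ask the user
    to verify the recipient before the message leaves the organization."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    external = domain not in INTERNAL_DOMAINS
    return external and has_sensitive_content

print(needs_confirmation("partner@other.org", True))     # True: pause and verify
print(needs_confirmation("colleague@example.com", True)) # False: internal, no friction
```

Note the design choice: the check returns a prompt, not a denial. The goal is a moment of reflection, consistent with the "pause and think" framing above, rather than a control people will work around.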
Creating a culture of reporting and learning means rewarding people for speaking up, not punishing them for making mistakes. When someone reports a suspicious email, thank them publicly. When someone clicks a phishing link and reports it immediately, praise their quick response. When someone raises a security concern, investigate it seriously and communicate what you learned. This creates a feedback loop where people feel valued for contributing to security, not blamed for being human.

Traditional security awareness programs measure the wrong things. They measure training completion rates, quiz scores, and phishing simulation click rates. These are activity metrics—they tell you whether people are participating in the program, but not whether the program is reducing risk.
Effective programs measure outcome metrics. Are fewer people clicking phishing links over time? Are more people reporting suspicious activity? Are high-risk individuals receiving targeted interventions? Are incident response times improving because people are reporting threats earlier? These metrics tell you whether behavior is changing and whether that behavior change is reducing actual risk.
The most important metric is reporting rate—the percentage of employees who report suspicious emails, unusual requests, or potential security incidents. Reporting rate is a leading indicator of security culture. Organizations with high reporting rates (above 40%) detect threats earlier, respond faster, and experience fewer successful attacks. Organizations with low reporting rates (below 10%) operate blind—threats are present, but no one is speaking up.
If your current security awareness program doesn't track reporting rate, start tracking it. If your reporting rate is below 20%, that's a red flag that your culture may be suppressing the very behaviors you need to encourage. If your reporting rate is declining over time, that's a signal that your program is creating fear instead of confidence.
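The metric itself is simple arithmetic—suspicious messages reported divided by messages delivered—which makes it easy to start tracking from existing phishing-simulation data. A minimal sketch, with illustrative numbers:

```python
def reporting_rate(reported: int, delivered: int) -> float:
    # Reporting rate = suspicious messages reported / messages delivered.
    # Guard against division by zero before the first campaign runs.
    if delivered == 0:
        return 0.0
    return reported / delivered

# Example: 120 simulated phish delivered, 54 reported -> 45%,
# above the 40% mark the article associates with strong security culture.
rate = reporting_rate(54, 120)
print(f"{rate:.0%}")  # prints "45%"
```

Tracked per campaign, the trend line matters more than any single value: a rising rate suggests growing psychological safety, a falling one suggests fear is winning.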
If your organization's security awareness program is built on the three flawed assumptions—knowledge gaps, one-size-fits-all, and blame-and-shame—it's time to rebuild. But rebuilding doesn't mean starting from scratch. It means shifting focus from fixing the human to fixing the system around the human.
Start by measuring your baseline. Run a phishing simulation and track not just click rates, but reporting rates. Identify your high-risk individuals. Survey your employees to understand whether they feel safe reporting mistakes and asking questions. This baseline tells you where you are and what needs to change.
Next, design interventions that address the COM-B components. For Capability, focus on just-in-time training that happens in context, not annual e-learning modules. For Opportunity, reduce friction for secure behavior and increase friction for risky behavior. For Motivation, build psychological safety and reward reporting.
Then, segment your population. Identify your high-risk individuals and understand why they're high-risk. Design targeted interventions that address their specific barriers. Measure whether those interventions reduce their risk. Iterate based on what you learn.
Finally, shift your metrics from activity to outcomes. Stop celebrating training completion rates. Start celebrating reporting rate increases, click rate reductions, and faster incident response times. These are the metrics that prove your program is reducing actual risk, not just checking compliance boxes.
If you're unsure where your security awareness program stands, we offer a free human risk assessment that measures your baseline phishing click rates, reporting rates, and risk segmentation. This assessment takes two to four weeks and delivers a clear picture of where your risks are concentrated and what interventions will have the greatest impact.
If you know you need help building a human-centered security awareness program, we specialize in behavior-focused interventions that reduce actual risk. Our approach is grounded in the COM-B Framework and focuses on reducing friction for secure behavior, building psychological safety, and measuring outcomes that matter.
If you want to learn more about why traditional security awareness fails and what works better, download our Human Risk Management Playbook, a comprehensive guide that covers behavioral science frameworks, risk segmentation strategies, and outcome-based measurement approaches.