Let's face it — phishing is still the go-to method for breaking into companies. Depending on the report you read, around 80-95% of cyber attacks start with someone clicking the wrong link or trusting the wrong email. And while that statistic might feel overused at this point, the reality behind it hasn't changed much. We're still living in a security model where one distracted or curious click can compromise an entire environment. That's a lot of pressure on the human factor, which just so happens to be the least reliable part of any security system.
So let's stop pretending that phishing is "just an awareness issue" and take a more complete look at what phishing really is, how it works in practice, and what kinds of controls we actually have to stop it. Because there's more we can do. A lot more.
Let's give credit where it's due. Most organizations today run regular phishing simulations and awareness training. That's a great start. Educating employees, testing their reactions, and building that "pause and think" instinct is critical. But here's the thing: it's not just about running the test. It's about how you design, measure, and learn from it.
The number one factor in any phishing test is pretext. A good pretext makes or breaks the campaign. The more it aligns with your company's tone, habits, tools, and culture, the more effective (and realistic) the test will be. If it triggers curiosity or urgency, especially coming from what looks like an authority figure — bingo. That's when people click. That's how real attackers operate, and that's how we should simulate it too.
But let's talk about where things go sideways: data interpretation.
Here's a simple example. Let's say you send a phishing test to 100 employees and five of them submit their credentials. You might think, "Hey, only 5% fell for it. Not bad." But wait. Did all 100 people actually see the email? Let's say only twenty people even opened it. Now we're looking at a 25% failure rate among the people who actually saw the lure, not the comfortable 5% from the first calculation. That's an entirely different story. That's not just a reporting gap; that's a potential breach waiting to happen.
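The arithmetic is trivial, but it's exactly the step that gets skipped. A minimal sketch, using the numbers from the example above:

```python
# Hypothetical results from the example above.
sent = 100        # phishing emails sent
opened = 20       # recipients who actually opened the email
submitted = 5     # recipients who entered their credentials

# Naive view: failures measured against everyone on the distribution list.
naive_rate = submitted / sent       # 0.05 -> "only 5% fell for it"

# Honest view: failures measured against the people who actually saw the lure.
exposed_rate = submitted / opened   # 0.25 -> 25% of exposed users failed

print(f"Failure rate vs. sent:   {naive_rate:.0%}")
print(f"Failure rate vs. opened: {exposed_rate:.0%}")
```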
That's why collecting the correct data is so important. The basics you want to measure include how many emails were actually delivered, how many were opened, how many recipients clicked the link, how many submitted credentials, and how many reported the message to your security team.
Each of these tells you something different about employee behavior and security awareness maturity. You're not just testing people; you're testing the culture, the communication patterns, and your ability to detect social engineering in the wild. You can learn a lot if you're willing to look at the whole picture, not just the final number.
User awareness is important, but let's be honest, people will always be people. Even the most well-trained employee can fall for a convincing enough email. That's why relying solely on awareness training isn't enough. You need technical controls that can protect your organization before a phishing email even reaches someone's inbox.
Let's talk about impersonation phishing. To be clear about what I mean by that: this isn't just someone pretending to be your CEO or IT support. In this context, impersonation means attackers are trying to steal user credentials so they can impersonate the user and gain access to the organization's environment. Once they're in, they act as that user, often with no red flags.
With so many organizations relying on Microsoft 365 for business operations, attackers are heavily focused on exploiting it. Some of the most widely used tools in these attacks are Evilginx and Modlishka. These are man-in-the-middle (MITM) reverse proxy frameworks that intercept credentials and session cookies. The end result? They can completely bypass multi-factor authentication and log in as the user.
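To see why MFA doesn't save you here, it helps to look at what these frameworks actually do. The sketch below is emphatically not Evilginx or Modlishka; it's a stripped-down, plain-HTTP illustration with a hypothetical UPSTREAM address, no TLS, and no domain rewriting. It exists only to show the core idea: the victim authenticates through a box the attacker controls, so the credentials and the post-MFA session cookie pass straight through the middle.

```python
import http.server

import requests  # third-party: pip install requests

UPSTREAM = "https://login.example.com"  # hypothetical stand-in for the real login service


class RelayHandler(http.server.BaseHTTPRequestHandler):
    """Toy relay: the victim's browser talks to this host, this host talks to
    the real site, and everything in between is visible to whoever runs it."""

    def do_GET(self):  # real frameworks handle every verb; GET is enough for the idea
        upstream = requests.get(
            UPSTREAM + self.path,
            headers={"Cookie": self.headers.get("Cookie", "")},
            allow_redirects=False,
            timeout=10,
        )
        # The session cookie the real site issues *after* a successful MFA
        # prompt transits this hop like any other header.
        session_cookie = upstream.headers.get("Set-Cookie")
        if session_cookie:
            print("cookie observed in transit:", session_cookie)

        self.send_response(upstream.status_code)
        for header in ("Content-Type", "Location", "Set-Cookie"):
            if header in upstream.headers:
                self.send_header(header, upstream.headers[header])
        self.end_headers()
        self.wfile.write(upstream.content)


if __name__ == "__main__":
    http.server.HTTPServer(("127.0.0.1", 8080), RelayHandler).serve_forever()
```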
That's why we always say: Don't just test your users, test your security controls. Ask yourself: Would a phishing email like this even make it past your filters and into someone's inbox? Would your MFA hold up against a reverse proxy like Evilginx, or would the session simply be relayed through it? If an attacker logged in with a stolen session cookie, would anyone notice before they started acting as that user?
All of these are critical pieces of a layered defense strategy. The goal is to make it harder for attackers to move forward, even if they get the initial credentials. But let's not stop at prevention. What happens if something goes wrong? Let's say an account gets compromised. How quickly can you detect and isolate it? Do you have an incident response plan that covers cloud account takeovers? Has that plan ever been tested?
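What containment looks like depends on your stack, but for Microsoft 365 one concrete first step is revoking the compromised user's sessions through Microsoft Graph so stolen refresh tokens stop working. The sketch below assumes you already have a Graph access token with a suitable permission (for example User.RevokeSessions.All). Note that access tokens already issued to the attacker stay valid until they expire, so this is one step in a playbook, not the whole response.

```python
import requests  # pip install requests

GRAPH = "https://graph.microsoft.com/v1.0"


def isolate_account(user: str, access_token: str) -> None:
    """Containment sketch: invalidate the user's refresh tokens so stolen
    sessions have to re-authenticate. `user` is an object ID or UPN."""
    resp = requests.post(
        f"{GRAPH}/users/{user}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    print(f"Revocation request for {user} returned HTTP {resp.status_code}")


# Hypothetical usage during an incident:
# isolate_account("jane.doe@contoso.com", access_token="eyJ0eXAi...")
```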
Not all phishing emails are trying to steal your login credentials. Some are aiming for something even more dangerous — malware on your system. This is where malicious payload phishing comes into play.
It often shows up as a sketchy attachment or a link that leads to a file sitting on some external server. Once the file is downloaded and executed, the attacker can get a foothold inside your network. At this point, the game is on; they can move laterally, escalate privileges, exfiltrate data, you name it.
Unlike impersonation phishing, the goal here isn't just access to an account. The goal is to compromise the endpoint and use it as a pivot point for further attacks. So, how do you defend against this?
First, let's look at the attachments. Attackers use all sorts of file types: Word docs, Excel files, PDFs, ZIPs, ISO images, even shortcut files. Some of these are common in daily business use. That's the tricky part. You can't just block everything. The question becomes: which types does your organization actually use? And are your email filters smart enough to catch the ones you don't?
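One way to make that question concrete is to write the allow/block decision down before arguing about gateway settings. The sketch below is illustrative only: the extension sets are assumptions you would replace with what your business actually exchanges, and a real filter has to look inside the file (macros, archive contents, double extensions), not just at the name.

```python
# Illustrative only: replace these sets with what your organization actually uses.
BUSINESS_EXTENSIONS = {".docx", ".xlsx", ".pdf"}                  # common in daily work
HIGH_RISK_EXTENSIONS = {".exe", ".iso", ".img", ".lnk", ".js", ".vbs", ".hta", ".scr"}
RISKY_BUT_COMMON = {".zip", ".docm", ".xlsm"}                     # needs deeper inspection


def attachment_verdict(filename: str) -> str:
    """Very rough first-pass verdict based on the extension alone.
    Real gateways must go further: look inside archives, detect macros,
    and catch double extensions like 'invoice.pdf.exe'."""
    ext = "." + filename.lower().rsplit(".", 1)[-1] if "." in filename else ""
    if ext in HIGH_RISK_EXTENSIONS:
        return "block"
    if ext in RISKY_BUT_COMMON:
        return "sandbox / inspect content"
    if ext in BUSINESS_EXTENSIONS:
        return "allow (still scan)"
    return "quarantine for review"


for name in ["Q3-report.xlsx", "invoice.pdf.exe", "delivery.iso", "notes.zip"]:
    print(f"{name}: {attachment_verdict(name)}")
```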
Second, if the payload is hosted on a remote server, the email itself might contain nothing but a link. That's where URL sandboxing can make a big difference. A good URL sandbox follows the link before the message lands in your user's inbox, checks where it actually leads, and detonates whatever sits on the other end in an isolated environment, ideally re-checking at click time, since attackers often arm the link only after delivery.
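A real sandbox does far more than this, but as a rough sketch of the "check where the link really goes" step, the snippet below pulls URLs out of a message body and follows each redirect chain to its final landing domain, which is frequently not the domain the email shows.

```python
import re
from urllib.parse import urlparse

import requests  # pip install requests

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")


def resolve_links(email_body: str) -> None:
    """Follow each link's redirect chain and report the final landing domain.
    This mimics one small part of what a pre-delivery URL check does."""
    for url in URL_PATTERN.findall(email_body):
        try:
            # HEAD keeps it lightweight; some servers require GET instead.
            resp = requests.head(url, allow_redirects=True, timeout=5)
            final = urlparse(resp.url).hostname
        except requests.RequestException as exc:
            final = f"unreachable ({exc.__class__.__name__})"
        print(f"{urlparse(url).hostname} -> {final}")


# Hypothetical lure text:
resolve_links("Your mailbox is full, review it here: https://bit.ly/3xample")
```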
From a testing perspective, there's a big challenge here. Most regular phishing campaigns either skip payloads entirely or only include them in a very limited way. If phishing is included as part of a pentest, the tester might throw in one or two common payloads, but that barely scratches the surface. It doesn't reflect your organization's overall resilience against the wide variety of payloads that exist out there.
In our phishing campaigns, we take a different approach: a technique we call payload filtering testing. We send hundreds of real-world payload samples (harmless, but technically structured like actual threats) to a controlled test account. The goal is simple: see what gets blocked and what slips through. This gives a much clearer view of what your email security is actually capable of handling, and where it's falling short.
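Our tooling isn't public, but the mechanics are simple enough to sketch under a few assumptions: a controlled test mailbox, an outbound relay you are allowed to send through, and a folder of harmless samples organized into one subfolder per category. All hostnames, addresses, and paths below are placeholders.

```python
import csv
import mimetypes
import smtplib
from email.message import EmailMessage
from pathlib import Path

SMTP_HOST = "smtp.example.com"            # placeholder: your outbound relay
SENDER = "phish-test@example.com"         # placeholder sender
TEST_INBOX = "payload-test@example.com"   # placeholder: controlled test mailbox
SAMPLE_DIR = Path("samples")              # harmless samples, one subfolder per category

with smtplib.SMTP(SMTP_HOST) as smtp, open("sent_payloads.csv", "w", newline="") as log:
    writer = csv.writer(log)
    writer.writerow(["category", "filename"])
    for sample in SAMPLE_DIR.rglob("*"):
        if not sample.is_file():
            continue
        category = sample.parent.name     # e.g. "macro-doc", "iso", "lnk"
        msg = EmailMessage()
        msg["From"], msg["To"] = SENDER, TEST_INBOX
        msg["Subject"] = f"Payload filter test: {category}/{sample.name}"
        msg.set_content("Harmless payload-filtering test sample.")
        ctype, _ = mimetypes.guess_type(sample.name)
        maintype, subtype = (ctype or "application/octet-stream").split("/", 1)
        msg.add_attachment(sample.read_bytes(), maintype=maintype,
                           subtype=subtype, filename=sample.name)
        smtp.send_message(msg)
        writer.writerow([category, sample.name])  # record what went out vs. what arrives
```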
We categorize each payload to allow for more precise analysis.
We track which category each payload falls into (macro documents, archives, ISO images, shortcut files, and so on), whether it arrived as a direct attachment or as a link to a hosted file, and whether it was blocked outright, quarantined, or delivered to the inbox.
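Scoring the run is then just a comparison of what went out against what showed up. A minimal sketch, assuming the sent_payloads.csv log from the sending sketch above and a hypothetical delivered_payloads.csv export from the test mailbox:

```python
import csv
from collections import Counter


def load(path):
    """Read (category, filename) pairs from a CSV with those two columns."""
    with open(path, newline="") as f:
        return [(row["category"], row["filename"]) for row in csv.DictReader(f)]


sent = load("sent_payloads.csv")                 # written by the sending script above
delivered = set(load("delivered_payloads.csv"))  # hypothetical export from the test mailbox

sent_count = Counter(cat for cat, _ in sent)
slipped_count = Counter(cat for cat, name in sent if (cat, name) in delivered)

for cat in sorted(sent_count):
    blocked = sent_count[cat] - slipped_count[cat]
    print(f"{cat:15s} blocked {blocked}/{sent_count[cat]} "
          f"({blocked / sent_count[cat]:.0%})")
```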
This type of testing gives you real, actionable data. It shows how your email filters perform not just against a sample of threats, but against the techniques that are actively being used in the wild by adversaries targeting organizations just like yours.
Phishing is not going away. Attackers keep getting better, and the methods evolve faster than most defenses. Yes, user awareness matters. Yes, phishing simulations are important. But stopping phishing isn't just about checking a box or running a training session once a year.
If you really want protection, not just the appearance of it, you need to take a layered approach. It means testing how your employees respond to real-world tactics. It means digging into your email security settings, not just trusting the default configuration. It means proactively challenging your own systems with payloads, impersonation attempts, and advanced phishing techniques before an attacker does it for you. If your tools can't stop it, even the most cautious employee won't save you.