In Fighting Phishing, Hyper-Vigilance Is Hyper-Costly

When someone down the block gets robbed, the neighbors start looking with suspicion at anyone unfamiliar. Stop in the wrong neighborhood and someone is likely to call the police, who have to send a patrol car, tying up resources that could be better deployed fighting real crime.

Large organizations are a lot like those suspicious neighborhoods when it comes to fighting email scams. While no amount of training seems able to stop certain employees from clicking on malicious links in emails, other employees become hyper-vigilant, reporting any email that isn’t familiar to them. Some may even start to treat your “Report Phishing” button like a “Report Spam” button.

This leaves corporate security teams swamped chasing down false positives.

Because phishing remains a top attack vector, it deservedly gets a lot of attention. What gets very little attention, however, is the cost of investigating legitimate emails that people mistake for threats.

A recent survey that we at Agari conducted of more than 300 security experts found that employee-reported phishing incidents are false positives 50% of the time. The security professionals reported it takes 5.9 hours (353 minutes) to investigate and remediate the average phishing incident and nearly four hours (238 minutes) to investigate a false positive.

Chasing down these phishing emails has become a significant cost for organizations. The same survey, whose respondents primarily worked at large U.S. and U.K. corporations, found that the average company responds to 23,063 phishing incident reports per year. At an estimated staff time cost of $253 per phishing incident, that amounts to nearly $4.3 million per year.

Phishing awareness training can be a double-edged sword. The more you train end users, the more cautious they become and the more often they will report emails they believe to be phishing attacks. At the same time, no amount of training is going to prevent employees from falling for a sophisticated and highly targeted phishing attack.

Make no mistake, it’s important to train employees to report phishing incidents. Phishing is a component of 93% of all breaches investigated, and email is the primary point of entry in 96% of cases, according to the 2018 Verizon Data Breach Investigations Report. The average cost of a data breach in the United States is $7.9 million, according to a 2018 study published by IBM Security and conducted by Ponemon Institute (via NBC News).

Attackers are unrelenting, and their methods keep evolving. A wave of wire transfer requests comes in. You block those, then they start asking for gift cards. You block those, and then your HR staff start getting payroll diversion attacks. Even a brief email interaction between a scammer and an employee can lead to a financial loss or the leaking of sensitive information.

Each phishing report will tie up a valuable resource on your security operations team, even if the report turns out to be a false alarm. Throwing more bodies at the problem isn’t realistic; even the most security-conscious enterprises have limited budgets, after all.

Many of the tasks needed to triage a user-reported email threat are repetitive and lend themselves to automation. Fetching the original email from the message store, rendering the content in a safe environment, looking at the full email headers to find anomalies, checking each URL against a third-party reputation service, and running attachments through anti-virus scanners and sandboxes are among the tasks that can be performed without any human intervention. Presenting this information clearly to the security operations analyst can greatly reduce the time needed to prioritize further investigation.
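To make the idea concrete, here is a minimal sketch of that kind of automated triage in Python. The function and field names (`triage`, `reputation_lookup`, and so on) are illustrative assumptions, not a real product API, and the reputation service is stubbed out as a callable.

```python
# Hypothetical triage sketch for a user-reported email (illustrative only).
import re
from email import message_from_string
from email.utils import parseaddr

URL_RE = re.compile(r"https?://[^\s\"'>]+")

def header_anomalies(msg):
    """Flag simple header inconsistencies an analyst would look for."""
    anomalies = []
    from_addr = parseaddr(msg.get("From", ""))[1]
    reply_to = parseaddr(msg.get("Reply-To", ""))[1]
    if reply_to and reply_to != from_addr:
        anomalies.append(f"Reply-To ({reply_to}) differs from From ({from_addr})")
    if "dmarc=fail" in msg.get("Authentication-Results", ""):
        anomalies.append("DMARC authentication failed")
    return anomalies

def extract_urls(msg):
    """Pull every unique URL out of the message body."""
    return sorted(set(URL_RE.findall(msg.get_payload())))

def triage(raw_email, reputation_lookup):
    """Assemble an evidence summary for the analyst; reputation_lookup
    stands in for a third-party URL reputation service."""
    msg = message_from_string(raw_email)
    urls = extract_urls(msg)
    return {
        "anomalies": header_anomalies(msg),
        "urls": urls,
        "bad_urls": [u for u in urls if reputation_lookup(u) == "malicious"],
    }
```

A real pipeline would also render the message in a sandbox and detonate attachments; the point is that each step produces evidence an analyst can scan in seconds rather than minutes.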

If the reported message is deemed to be a potential threat, the next step is to determine the exact nature of the threat as well as the “blast radius” (i.e., who else may have received a similar message but didn’t report the threat). This isn’t as easy as you might think, as cybercriminals will often vary the subject lines, URLs, sending addresses and other aspects of their attack messages.

Here is where artificial intelligence (AI) can help to identify other related messages by finding the similarities that might not even be obvious to a human analyst at first glance. For example, let’s say a phishing attack this past Cyber Monday used a “15%” discount in the subject line as a lure, yet there were half a dozen variations of the specific subject line and each message was sent from a different email address. While all the messages ultimately led to the same credential-harvesting page, each individual message had a unique link thanks to one of the common link-shortening tools. A simplistic search by subject, link or sender would have yielded a mere subset of the malicious messages.
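A toy version of that similarity search can be sketched with nothing more than the standard library. Production systems use learned models; this assumed stand-in normalizes subject lines and groups them by edit-distance ratio, which is enough to catch the "half a dozen variations" pattern that defeats exact-match searches.

```python
# Toy blast-radius sketch: group messages whose subjects are near-duplicates
# of a reported phish, even when wording and punctuation vary.
# difflib's edit-distance ratio stands in for a real similarity model.
from difflib import SequenceMatcher

def normalize(subject):
    """Lowercase and strip punctuation so cosmetic variations collapse."""
    return "".join(ch for ch in subject.lower() if ch.isalnum() or ch == " ")

def related(seed_subject, candidate_subjects, threshold=0.7):
    """Return candidates whose normalized subject is close to the seed."""
    seed = normalize(seed_subject)
    return [c for c in candidate_subjects
            if SequenceMatcher(None, seed, normalize(c)).ratio() >= threshold]
```

Against a reported subject of "Cyber Monday: 15% Off Everything!", this would pull in rewordings and re-punctuated variants while ignoring unrelated mail, giving the analyst a candidate set to confirm rather than a search to invent.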

The final step is to remediate the threat. Knowing who opened the message and who didn’t will greatly reduce the scope of this remediation, which might include reimaging the affected laptops or resetting the users’ passwords.
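That scoping logic is simple enough to sketch directly. The field names (`user`, `opened`, `clicked`) are assumptions about what mail-gateway and endpoint logs might provide, and the three-tier response is one plausible policy, not a prescribed one.

```python
# Illustrative remediation scoping: match the response to each
# recipient's level of engagement with the malicious message.
def scope_remediation(recipients):
    """Partition recipients into remediation buckets by engagement."""
    plan = {"reimage_and_reset": [], "reset_password": [], "delete_message": []}
    for r in recipients:
        if r.get("clicked"):       # followed the link: assume compromise
            plan["reimage_and_reset"].append(r["user"])
        elif r.get("opened"):      # opened but didn't click: rotate credentials
            plan["reset_password"].append(r["user"])
        else:                      # never opened: just pull the message
            plan["delete_message"].append(r["user"])
    return plan
```

Without this partitioning, the safe default is to treat every recipient as compromised, which is exactly the over-remediation cost the article describes.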

The balancing act between prudent vigilance and over-vigilance is likely to remain challenging for organizations as attack sophistication increases and attackers incorporate their own artificial intelligence software to boost effectiveness. Only by fighting back with AI strategies of their own will organizations be able to cost-effectively stay one step ahead of the wolves.
