By: Jake King
As the world tries to slow the spread of COVID-19, the way we work is changing quickly and drastically. Doctors and economists are working diligently to find ways to get through and ultimately recover from this pandemic, and meanwhile the general population is busy adapting to sudden changes in our day-to-day lives.
Yet chaos breeds opportunity for the adversary, and they are no doubt taking advantage. Attacks and phishing campaigns are on the rise, leaving cybersecurity practitioners to deal with a more dangerous threat landscape in the middle of a major shift to remote work patterns. The result: rapidly escalating alert fatigue.
In the context of cybersecurity, “alert fatigue” refers to the desensitization that sets in when security tools generate a high volume of alerts, many of them false alarms triggered by the security protocols in place. One of the severe consequences of alert fatigue is that system administrators and security engineers eventually start tuning these alerts out, potentially allowing malicious actors to go undetected or ignored.
Alert fatigue has always been a challenge for security analysts. Over time, sophisticated models have been developed to identify and prioritize so-called “interesting” events – specific patterns of behavior among an ocean of security data that are suspicious enough for a human investigator to spend time analyzing. These models rely on certain patterns of normality to filter out noise – for example, it’s highly unlikely that someone logging into a sensitive computer from a corporate office during normal business hours would be problematic.
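The kind of normality-based filtering described above can be sketched as a simple rule. The networks, hours, and function below are hypothetical examples, not details from the article; the point is that a rule like this quietly suppresses anything that looks like an in-office login:

```python
from datetime import datetime
from ipaddress import ip_address, ip_network

# Hypothetical corporate IP ranges and business hours (not from the article).
OFFICE_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("203.0.113.0/24")]
BUSINESS_HOURS = range(9, 18)  # 9am to 6pm

def is_interesting(login_ip: str, login_time: datetime) -> bool:
    """Return True if a login falls outside the 'normal' pattern:
    corporate network during business hours. Such filters suppress
    noise, but break when everyone starts working from home."""
    from_office = any(ip_address(login_ip) in net for net in OFFICE_NETWORKS)
    in_hours = login_time.hour in BUSINESS_HOURS
    return not (from_office and in_hours)

# A 2pm login from the office network is filtered out as noise...
assert not is_interesting("10.1.2.3", datetime(2020, 4, 1, 14, 0))
# ...while an 11pm login from a home IP is flagged for review.
assert is_interesting("198.51.100.7", datetime(2020, 4, 1, 23, 0))
```

When the whole workforce moves home, every login starts matching the “interesting” branch, and the filter stops reducing noise at all.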
Today, however, people who were once in the office are working from home. For those of us in cybersecurity, this is a big change: it means different patterns in the data we are collecting, and our models must adjust to accommodate them.
With important, tried-and-true data points like working hours, IP addresses, and locations becoming less reliable, what data should analysts be looking at to reduce alert fatigue and identify legitimate threats? Cmd has taken a page from the UEBA industry, and we’ve adapted our behavioral models to achieve just that.
The model developed by Cmd’s data science team monitors the commands executed by users within the company, and flags when a user’s behavior diverges excessively from their normal baseline. While employees hired for similar roles may exhibit behavioral similarities with other members of their own teams, they are much less likely to behave like members of other teams.
Our model captures privilege escalation, shared credentials, and external threats, based on the commands that employees within the company execute. This doesn’t mean that IP or timing anomalies are ignored. Those features are considered as well, though actual exec-based behavior is weighted more heavily to account for the changes in work patterns we are all experiencing. For example, if a DevOps engineer exhibits typical behavior, but this time from a remote location and at 11pm, it would not be considered abnormal.
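One way to picture this weighting is as a blended anomaly score. The sketch below is purely illustrative and does not reflect Cmd’s actual model: it compares a user’s recent command-frequency profile against their baseline, then combines that with IP and timing scores, giving command behavior the dominant weight:

```python
from collections import Counter
import math

def command_divergence(baseline: Counter, recent: Counter) -> float:
    """A symmetric KL-style divergence between two command-frequency
    profiles. Zero when the profiles match; grows as `recent` drifts."""
    vocab = set(baseline) | set(recent)
    b_total = sum(baseline.values()) + len(vocab)
    r_total = sum(recent.values()) + len(vocab)
    score = 0.0
    for cmd in vocab:
        p = (baseline.get(cmd, 0) + 1) / b_total  # add-one smoothing
        q = (recent.get(cmd, 0) + 1) / r_total
        score += (p - q) * math.log(p / q)
    return score

def anomaly_score(cmd_score: float, ip_score: float, time_score: float,
                  weights: tuple = (0.7, 0.15, 0.15)) -> float:
    """Blend per-feature scores, weighting exec behavior most heavily
    so location and timing shifts alone don't dominate."""
    w_cmd, w_ip, w_time = weights
    return w_cmd * cmd_score + w_ip * ip_score + w_time * time_score

# A DevOps engineer running their usual mix of commands, but from a
# new IP at an odd hour: command behavior is normal, so the overall
# score stays low despite the IP and timing anomalies.
baseline = Counter({"kubectl": 40, "git": 30, "ssh": 20, "vim": 10})
recent = Counter({"kubectl": 8, "git": 6, "ssh": 4, "vim": 2})
score = anomaly_score(command_divergence(baseline, recent),
                      ip_score=1.0, time_score=1.0)
```

Because the command profile is nearly unchanged, the heavily weighted first term stays near zero, and the fully anomalous IP and timing terms can only push the total to about 0.3 on their own.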
The COVID-19 pandemic has spread across the globe, health care professionals are working hard to save lives, and governments are playing their part by supporting people financially. As security professionals, it is our job to continue monitoring and securing our digital world while we work remotely during these times.
The cloud is critical to working our way through this pandemic, and our mission at Cmd is to secure that environment. We take the job seriously. Our data science team is working hard to design robust models aimed at keeping your cloud and data centers healthy.
Interested in how we can keep your environment secure while everyone is working from home? Speak to a product specialist to learn more.
Ramp up your Linux defense strategies
and see what you've been missing.