5 Common Cloud Incident Response Mistakes, Part 2

By: Jake King

At Cmd, we spend a lot of time in cloud environments, specifically Linux environments. When you are migrating applications to the cloud or transforming your digital footprint, it’s best to learn from those who have made mistakes and set yourself up for success with a solid plan. We’ve taken a deeper dive into some of the incidents we’ve seen play out and wanted to share them here.

 

In part one of this blog series, I shared three common mistakes we’ve seen in the wild. In this last blog, I’ll share two more common problems we’ve found along the way.

 

  4. Tabletop that threat

 

Remember, an incident probably isn’t going to happen at two o’clock on a Monday afternoon. It’s more likely to happen at two o’clock on a Sunday morning, when you’re not as sharp, haven’t had that coffee, and aren’t ready for your day.

 

Like firefighters, security teams need to practice with the tools of the trade to ensure that systems are operating properly; so much so that numerous tools have been created just for this purpose. Because Cmd specializes in Runtime Security, our testing efforts are tied to projects such as Atomic Red Team by Red Canary, which provides real-world, operational scenarios that are replayable, predictable, and often enjoyable for the team to assess. If you’re looking to expand beyond runtime assessment, check out BadThingsDaily on Twitter; it’s a lot of fun.
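
If you want to script part of that rehearsal, here is a minimal sketch of pulling the Linux-relevant commands out of an Atomic Red Team test file so they can be walked through or replayed in a session. It assumes the YAML layout used in the public redcanaryco/atomic-red-team repository (attack_technique, display_name, atomic_tests, executor.command); the file path and filtering are illustrative, not a Cmd-endorsed workflow.

# Sketch: print the Linux commands from an Atomic Red Team test definition
# so they can be rehearsed during a tabletop or replay session.
# Assumes the public schema from redcanaryco/atomic-red-team; adjust if it differs.
import sys

import yaml  # pip install pyyaml


def summarize_atomic(path: str) -> None:
    with open(path, "r", encoding="utf-8") as handle:
        technique = yaml.safe_load(handle)

    print(f"Technique: {technique.get('attack_technique')} - "
          f"{technique.get('display_name')}")

    for test in technique.get("atomic_tests", []):
        if "linux" not in test.get("supported_platforms", []):
            continue  # only rehearsing Linux scenarios in this example
        executor = test.get("executor", {})
        print(f"\n[{test.get('name')}] executor: {executor.get('name')}")
        print(str(executor.get("command", "<manual test - no command>")).strip())


if __name__ == "__main__":
    # e.g. python summarize_atomic.py atomics/T1059.004/T1059.004.yaml
    summarize_atomic(sys.argv[1])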

 

Build an operational plan for these sessions over time and record the results. Tabletop exercises go a long way toward ensuring that your team knows what to do when the going gets tough, and they’re a solid way to build teamwork.
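
One lightweight way to record those results over time is a running log with one entry per exercise; the fields and file name below are illustrative, not a prescribed format.

# Sketch: append one row per tabletop exercise to a running CSV log so gaps
# and follow-ups stay visible from session to session. Field names are illustrative.
import csv
from dataclasses import dataclass, asdict
from datetime import date


@dataclass
class TabletopRecord:
    exercise_date: date
    scenario: str                # e.g. "Atomic Red Team T1059.004 replay on staging"
    minutes_to_detection: int    # how long until the injected activity was spotted
    gaps_found: str              # tooling, runbook, or communication gaps
    action_items: str            # follow-ups and their owners


def append_record(record: TabletopRecord, path: str = "tabletop_log.csv") -> None:
    with open(path, "a", newline="", encoding="utf-8") as handle:
        writer = csv.DictWriter(handle, fieldnames=list(asdict(record).keys()))
        if handle.tell() == 0:   # new file: write the header first
            writer.writeheader()
        writer.writerow(asdict(record))


if __name__ == "__main__":
    append_record(TabletopRecord(
        exercise_date=date.today(),
        scenario="Atomic Red Team T1059.004 replay on a staging host",
        minutes_to_detection=12,
        gaps_found="Alert routed to a channel nobody watches on weekends",
        action_items="Add a paging rule; re-test next quarter",
    ))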

 

  5. Avoid the ‘fog of war’: exhaust your forensic response during an incident

 

When I see responders who are new to the field of security incident response, or perhaps responding to their first security incident, I often notice an avoidance of potential problems that would make the investigation more complex. This often leads to missed events, overlooked avenues an adversary may have used to pivot through the environment, or, at worst, a compromised system being declared secure.

 

It’s incredibly important to exhaust all avenues an adversary may have taken (within reason). When we assume a system is secure or fully patched because that’s what the last audit said, we can make critical mistakes along the way. Adversaries will take advantage of weaknesses even if the system passed its SOC 2 audit a year ago.

 

Avoiding this ‘fog of war’ comes down to documentation and planning. Write down each step you take as you respond, and reassess scope and your own biases along the way. Also, where possible, bring in external assistance if you feel you may be in over your head. Skilled incident response experts often work closely with internal teams to ensure an investigation is rock solid and leaves no stone unturned.
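
A plain, append-only journal is often enough to keep that documentation honest. The sketch below logs each investigative step with a UTC timestamp as JSON lines; the field names and file name are illustrative only.

# Sketch: an append-only response journal, one timestamped JSON line per step.
import json
from datetime import datetime, timezone


def log_step(responder: str, action: str, observation: str,
             path: str = "incident_journal.jsonl") -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "responder": responder,
        "action": action,
        "observation": observation,
    }
    with open(path, "a", encoding="utf-8") as handle:
        handle.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    log_step(
        responder="on-call responder",
        action="Reviewed auth logs on the web tier for the suspected entry window",
        observation="Successful key-based login from an unrecognized bastion IP",
    )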

Now, hopefully, when that new cloud deployment sees its first incident, you’ll be ready. If you have suggestions or comments, let us know. Thanks for reading our tips!
