5 Common Cloud Incident Response Mistakes, Part 1

By: Jake King

At Cmd, we spend a lot of time in cloud environments, specifically Linux environments. When you are migrating applications to the cloud or transforming your digital footprint, it’s best to learn from those who have made mistakes and set yourself up for success with a solid plan. We’ve taken a deeper dive into some of the incidents we’ve seen play out and wanted to share them here.


In this two-part blog series, I’ll share five common Cloud Incident Response mistakes, along with a few suggestions to incorporate into your transformation to avoid a bad day. 


  1. Define an Incident Response plan and stick to it


What if we’ve identified that someone unauthorized has access to a system or a server? Having a solid incident response plan, or at least a guideline that we can follow to know who to contact and what to do next at our fingertips, is going to make the problem much more palatable, easier to understand, and easier to address.


Start by looking at a few scenarios that are going to be common in your environment. One of the big challenges is that if we dive into building that plan too aggressively, we’re going to get stuck in the minutiae of the plan itself. Let’s say we’ve got a 5-, 10-, or 15-step threat model that we’re looking to work through: we’re going to overestimate in some areas and underestimate in others. 


Start simple, expand as you go, and build upon common frameworks. One of the best resources I have identified, for cloud environments as well as data center and traditional environments, is the Cloud Security Alliance (CSA) Cloud Incident Response framework. The CSA released this guide, and it has been immensely helpful in building our strategy at Cmd and in helping our customers find that strategy themselves.


  2. Understand ownership of the systems in your environment


Determining a system owner can be the bridge between resolving an incident before exfiltration rather than after, something we all want to achieve as security practitioners. But this is sometimes a pipe dream given complex systems and sprawling access, let alone the complex management structures of cloud services.


It’s important to understand not only who owns a system, but also the scope of your own management of it. Understanding where your responsibility ends for SaaS, PaaS, and IaaS, for example, will change the way you respond to a compromise and in many cases may limit your ability to do so.


Develop a key vendor matrix: maintain, update, and record the types of data you share with each vendor. Depending on the situation, responding to an incident yourself may be impossible, forcing you to choose alternative vendors or limit the scope of data shared with a particular service. 
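A vendor matrix doesn’t need to be elaborate to be useful. As a minimal sketch (the vendor names, fields, and contacts below are all hypothetical, not from any real matrix), even a simple structured record per vendor lets you quickly answer “what do they hold, and can we respond ourselves?” during an incident:

```python
from dataclasses import dataclass, field

@dataclass
class VendorRecord:
    """One row in a key vendor matrix. All example values are illustrative."""
    vendor: str
    service_model: str              # "SaaS", "PaaS", or "IaaS"
    data_shared: list[str] = field(default_factory=list)
    ir_contact: str = ""            # who to reach when an incident involves this vendor
    can_self_respond: bool = False  # can we pull logs / isolate resources ourselves?

matrix = [
    VendorRecord("example-crm", "SaaS", ["customer PII"],
                 "security@example-crm.invalid", can_self_respond=False),
    VendorRecord("example-cloud", "IaaS", ["application logs", "backups"],
                 "internal-secops", can_self_respond=True),
]

# Vendors where response depends entirely on the vendor's own team --
# exactly the cases where you may need an alternative vendor or less data sharing.
limited = [v.vendor for v in matrix if not v.can_self_respond]
print(limited)
```

The `can_self_respond` flag is the field that matters most in a real incident: it tells you up front whether your response plan for that system is “act” or “call and wait.”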


  3. Log, log, and log some more


“If there are no logs, there’s no crime.”


I hear this all the time in security circles, and the mantra rings true. Failing to sufficiently log what has happened will land you in hot water when you try to reconstruct an incident and determine what went wrong. 


Logs ensure you understand the full scope of the compromised system you’re dealing with, and they support regulatory, legal, and criminal proceedings. As responders, we need information to really understand the scope of the situation, including the impact to systems and to users.


Consider a multi-faceted approach, with Runtime, Cloud Console, Network, System, and Application logging as a starting point. Weigh your options for each carefully, since storage isn’t free, and build consistency from the earliest stages where possible. Going back over two-year-old log data isn’t a great option for anyone.
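One way to keep that multi-faceted approach honest is to write the plan down as data and check it for gaps. The sketch below assumes a simple dictionary of log sources with placeholder retention figures (not recommendations); the category names mirror the list above:

```python
# Placeholder logging coverage plan -- retention and storage tiers are
# illustrative values, not recommendations.
log_sources = {
    "runtime":       {"retention_days": 90,  "storage": "hot"},
    "cloud_console": {"retention_days": 365, "storage": "warm"},
    "network":       {"retention_days": 30,  "storage": "hot"},
    "system":        {"retention_days": 90,  "storage": "warm"},
    "application":   {"retention_days": 180, "storage": "cold"},
}

def coverage_gaps(sources: dict, required: list[str]) -> list[str]:
    """Return required log categories with no logging configured."""
    return sorted(set(required) - set(sources))

# e.g. an auditor asks for "audit" logs you never planned for:
print(coverage_gaps(log_sources, ["runtime", "network", "audit"]))
```

Reviewing a table like this periodically, rather than during an incident, is what makes the “build consistency from the earliest stages” advice actionable.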


These are a few building blocks for a secure and reliable deployment. Check back soon for part two of the series on the Cmd blog.
