Who hasn’t heard this one: “the attacker only needs to be right once, and they win.” The corollary gets said just as often: “you only need to be wrong once and you’re screwed!” All of that makes you feel a little helpless, right? Helpless folks give up – and good luck getting them to deal with the myriad issues inherent to securing their environments! We’re going to see how we can turn the tables here, and the first step is to see how visibility makes you a most frustrating victim for adversaries!

I am a big fan of helping my customers see that security doesn’t have to be perfect. Sometimes, better is good! When you battle any adversary, you really just have to make them work harder for it. Forcing them to do more to win your precious data has a few cool effects:

  • They get frustrated. This can lead to mistakes!
  • They use more TTPs. This improves the probability you will detect them!
  • They take more time. This gives you and your teammates more time to identify their behavior, decipher their plan, and counter their attacks.

So the big question is, how do we do this? Some will tell you that you need to not only see and detect it all, but prevent each step. Companies spend a lot of money heaping new security controls into their environment in pursuit of exactly that. Perfect prevention is the cold fusion of cybersecurity. If you ever achieve it, you’ll need a femtosecond-accurate clock to measure the duration of its effectiveness. Worse, you will expend a tremendous amount of energy to get there. Why not go with a more realistic approach?

Time-Based Security

This brings us to Time-Based Security, a concept that deserves far more love than it gets. Winn Schwartau defined it in the late 1990s as a way to frame prevention against detection and response. Simply put:

Security is a relationship between the time your prevention buys you (P) and the time your detection (D) and response (R) take. To be secure, detection and response must complete before the prevention is burned away: P > D + R.
Your mission – detect and respond before your prevention runs out!
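
To make that concrete, here is a minimal sketch in Python of Schwartau’s inequality. The numbers are made up for illustration – plug in your own measured protection, detection, and reaction times.

```python
# Time-Based Security: you are covered only while P > D + R.
# All values are illustrative, in minutes -- use your own measurements.

def exposure_window(p: float, d: float, r: float) -> float:
    """Return the exposure time E = (D + R) - P; <= 0 means you are covered."""
    return (d + r) - p

# Hypothetical example: a control buys 30 minutes of protection, but
# detection takes 20 minutes and response takes another 25.
p, d, r = 30, 20, 25
e = exposure_window(p, d, r)
if e > 0:
    print(f"Exposed for {e} minutes after prevention burns away.")
else:
    print("Detection and response complete before prevention runs out.")
```

In this hypothetical, you are exposed for 15 minutes – shrinking D or R (or buying more P) is the whole game.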

The Firewall Origin Story you never wanted…

There is a slick analogy that I think will help. What is the #1 most common prevention technique for any environment? If you said “firewall!” give yourself a relieved pat on the back. Why do we call them that? They have the same function as a building’s firewall. Builders must design these walls to separate the people space from something that might burn, like a wall between your house and a garage, or, if you live in a mixed-use building, a wall between you and a restaurant. In either case, the firewall keeps the fire from spreading quickly. Does it prevent the fire from spreading forever? Nope! The key here is that while the firewall does its thing, you need to detect the fire (fire alarm) and respond (get out, call 911, etc.). Without detection and response in your house, you’re cooked, just a little later than if no firewall were there. Prevention has done its job if you and your family make it out intact!

Cybersecurity is the same game, folks! Spend all of your cash on expensive, high-bandwidth firewalls and super-invasive endpoint suites, and you’ll certainly burn through the budget! Our myopic focus as an industry on prevention has been a demonstrable catastrophe. Breaches don’t come down to “you didn’t spend enough on firewalls.” They come down to not having seen or understood the events on the network, grasped the behavior, foreseen the vulnerability, and more. Certainly, most of those prevention tools include detection and visibility features, but if you are buying them under the illusion that they will prevent it all – are you even listening to them?

[Image: three chimpanzees posed in the see-no-evil, hear-no-evil, speak-no-evil poses. Environments that do not see or listen fail at becoming frustrating targets.]
Just because you didn’t see or hear it doesn’t mean you are exempt from its consequences!

Is there such a thing as too much visibility?

Yep! As I mentioned in my Gap Analysis & Engineering post, we do not need 100% coverage. More is sort-of better, but can your staff actually handle that telemetry? A comment from my SANS training gnaws at me – “it is a sin to collect more than you can use.” If you and your teammates cannot handle a new source, think about whether you want to spend the time and money to collect it! You may actually find that less is more. Collecting more than you need almost ALWAYS means you are also collecting more than you understand, which likely indicates you are collecting things that (a quick audit sketch follows this list):

  1. Don’t add value – you are ignoring them.
  2. Add cost – they exhaust your consumption budget or take up disk!
  3. Obscure the truth – they add noise that hides or overwhelms the things you care about.
  4. Get you in trouble – you might be collecting PII or violating privacy for no reason!
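
As promised, here is a minimal sketch of that audit. It assumes you can export a list of ingested sources with their daily volume, plus the set of sources your detection rules actually reference – the source names and numbers here are hypothetical.

```python
# Flag telemetry sources that cost money but feed no detections.
# Both inputs are hypothetical exports from your SIEM / detection-rule repo.

ingested = {              # source -> GB/day ingested
    "fw_traffic": 120.0,
    "dns_queries": 15.0,
    "printer_spool": 8.0,
    "badge_readers": 2.0,
}
referenced_by_rules = {"fw_traffic", "dns_queries"}  # sources rules actually use

for source, gb_per_day in sorted(ingested.items(), key=lambda kv: -kv[1]):
    if source not in referenced_by_rules:
        print(f"{source}: {gb_per_day} GB/day and no rule uses it -- prune or justify")
```

Anything this flags isn’t automatically garbage, but it should have to argue for its seat at the table.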

Ok, so how good does my visibility need to be?

So where do you need to be for ‘good’ visibility? I encourage organizations to take an inventory of their current telemetry sources, then grade each source or type based on the perceived value you extract from it, the use cases to which it applies, and the effort it takes to get there (a scoring sketch follows this list). Here are some considerations:

  • Some data is valuable in real time, some is more of a forensic consideration. Sorting by use case cleans up dashboards and eliminates clutter.
  • Some telemetry is required and thus untouchable – regulations and laws can dictate a lot of collection requirements.
  • Assess value to your team. If you find sources in your ingest pipelines that have yet to demonstrate value, ask yourself why they are there and whether you can do without them.
  • Overlap may provide opportunities to prune the lower ROI or non-required sources. Weigh this against the overall visibility program’s resilience and any failure modes.
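
Here is a toy grading pass over such an inventory. The fields and scoring weights are my own assumptions – adapt the rubric to whatever your team actually values.

```python
# A toy grading pass over a telemetry inventory. The fields and the
# keep/prune thresholds are assumptions -- tune them to your rubric.

from dataclasses import dataclass

@dataclass
class Source:
    name: str
    value: int        # perceived value, 1 (low) to 5 (high)
    effort: int       # effort to collect and maintain, 1 (low) to 5 (high)
    required: bool    # mandated by regulation or law?

inventory = [
    Source("NetFlow", value=5, effort=2, required=False),
    Source("Sysmon subset", value=4, effort=3, required=False),
    Source("Legacy proxy", value=2, effort=4, required=False),
    Source("PCI audit logs", value=2, effort=3, required=True),
]

for s in inventory:
    if s.required:
        verdict = "keep (required)"       # untouchable, per regulations
    elif s.value >= s.effort:
        verdict = "keep"
    else:
        verdict = "review for pruning"
    print(f"{s.name}: value={s.value} effort={s.effort} -> {verdict}")
```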

Think about what is missing! You have untapped sources that provide rich insights and very high ROI, I can guarantee it – are you using them?

  • Network telemetry – NetFlow, IPFIX, whatever – all offer amazing insights.
  • Application recognition – on Cisco gear, this is called Network-Based Application Recognition (NBAR) or Application Visibility & Control (AVC).
  • DNS & DHCP – you’d be surprised how many orgs just assume they can’t offer more insight than the NGFW they just put in place.
  • AV/EDR – just because it is on doesn’t mean you are listening!
  • Proxy and email logs – these rich sources get dismissed because they are old or no longer the bee’s knees. Don’t fall for that!
  • OS Logs – I know Windows events are overwhelming, but a small subset can be a huge help! Use SwiftOnSecurity’s Sysmon template for a great first step!
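
To show how much signal a “boring” source like DNS can carry, here is a minimal sketch that surfaces rarely-queried domains – a classic starting point for spotting beaconing or typo-squats. The log format and file name are assumptions; adjust the parsing to whatever your resolver emits.

```python
# Surface rarely-seen domains from a DNS query log.
# Assumes a simple whitespace-delimited export: <timestamp> <client> <domain>

from collections import Counter

counts = Counter()
with open("dns_queries.log") as fh:          # hypothetical export file
    for line in fh:
        parts = line.split()
        if len(parts) >= 3:
            counts[parts[2].lower().rstrip(".")] += 1

# Domains queried only once or twice deserve a second look.
for domain, n in sorted(counts.items(), key=lambda kv: kv[1]):
    if n > 2:
        break
    print(f"{domain}: {n} queries")
```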

1st step to becoming a frustrating victim

Adversaries love when things go to plan. They hope you are going to make the same mistakes as their last victims, and they know you are struggling with the same issues: resource gaps, alert fatigue, inadequate tooling, misconfigured controls, and poor cyber hygiene. Making things worse, they know what prevention or protection tricks you may have implemented. Their toolkit is full of innovative methods that let them take advantage of your issues and evade those protective measures. They know you are counting on the prevention to be absolute.

To frustrate these capable attackers, you need to leverage your visibility to inform a response. When you detect they are in the environment and you take an action against their presence, you are no longer an easy mark. If you interpreted their behavior properly, you have taken away a portion of their playbook and maybe even forced them to go off script. These situations are a little dicey for them, because they know they are more prone to make mistakes and their tool set may not be as robust in these less-familiar paths. Either way, this plays into your hands.

Wrapping up this step

Ahh, but how do we choose to respond? I would argue that gets a lot easier when we know what it is we just saw. One of the biggest barriers to effective incident response is accurate interpretation of the detection, which feeds into my caution above about only ingesting things that we know how to handle. If we can master those, grafting on new sources slowly can help us grow our detection and analytics programs. We want our training and our competency to keep pace with those new sources. Next time we’ll see how to put this into use in a hypothetical scenario – thanks for checking out this post!