Happy Monday folks! I’m super excited to be getting back to it and blogging about some cybersecurity goodness. I’ve picked up a ton of cool ideas after a long but fantastic week in Amsterdam for Cisco Live Europe. Once again, my buddy Mark Stephens and I presented an Interactive Breakout called “Empty Threats – Building Your Own Cyber Threat Picture”. We’ve offered it at the last four Cisco Live US and Amsterdam events, and each one has been a goldmine. What I love about these sessions is that our customers teach us so much about how they tackle security problems. Last week’s iteration did not disappoint. We had a fantastic discussion around using ATT&CK for insider threats. An attendee named Tommy brought up the question of how we factor them in, weigh their TTPs, and so on. As with so many of these interactions, I am now thinking a lot about how to carry that forward. Let’s see how we might tackle this thorny topic!

Why are insiders so difficult (to characterize)?

Every organization has internal users. No matter how it draws the lines, someone has to be inside the system for it to be useful. Self-employed? Congrats, you are your own insider. There is a lot of nuance to how we characterize these essential stakeholders as systems grow in size and complexity. They hold the unique status of being both the victim and the threat, and various groups or individuals may occupy both roles simultaneously. This duality is confusing, confounding, and likely contributes in some part to the inconsistency in how organizations deal with them.

When I talk to folks about their insider threats, they often paint a very monochromatic picture. Many focus on the stereotypical “unwitting victim turned accomplice” and use a broad brush to put all of these users in a single bucket. When we do this, we ignore key aspects of threat modeling. This can lead us to oversimplify, ignore, miss, or otherwise misrepresent the insider threat altogether. We might be tempted to treat insiders as a single pool, but doing so can have serious consequences. Considering that insiders fall on not just one spectrum but many, it gets tough to sort them into a single bin. Let’s take a look at how this impacts us.

Motivations

Our first concern is that we ignore the motivations of the adversary. As we see over and over again, these motivations change, so maybe the best approach is to consider the aggregate by user group? I’m still not sure, but let’s take a look at the two extremes here.

Unwitting accomplices

Some insiders are indeed unwitting victims, but I think even then there is a spectrum of motives that might impact our overall picture. Maybe they are rule followers who simply get phished well by an adversary. That being said, even good employees or insiders find it difficult to stick strictly with work. They may access personal accounts, handle bills, message friends, and disclose compromising material on social media. Each behavior – even when expected and accepted – can lead to a different potential threat.

Careless cowboys

In the middle, we get the folks who should know better but decide that rules are only there for others. Or they just don’t pay attention. Leaving computers unlocked, letting passwords lie around, sharing credentials – all of these are careless lapses. Laziness or complacency is the key indicator here. The biggest issue is that it is a slippery slope into malicious territory. At their worst, these folks may go so far as to disable controls, bypassing the very policies meant to protect them and their teams. It can be argued that effective security would prevent this, but anyone reading this knows we have to consider it because, well, it happens all the time.

Malicious Insiders

Obviously, the other extreme weighs heavily on all of us, but is often avoided because it is painful to acknowledge: malicious insiders. Even here, the threat may be motivated by a variety of factors, and they may seek a similarly diverse set of impacts. Do they have a grudge against the company, their manager, a colleague? If that is the case, are they digging up dirt or altering work? Maybe they are financially driven and seek to profit by selling intellectual property? Some insiders might just be struggling with motivation, morale, or ethical issues, and court a wide variety of actions that meet their needs. It should go without saying that these cases are far from cut and dried. Where do you map the next Edward Snowden in that mix of motivations and goals?

Potential/type of impact

Motivations might vary daily, so it might be easier to tackle the power or potential of an insider or group to do harm. Unless the organization is changing rapidly, undergoing some serious transformation, or lacking defined roles and responsibilities, the potential harm of their actions should be much more stable than their motivation. Luckily for us, most risk management programs are geared to account for this. That should help us better relate it to management.

Job Functions vs. impact

Folks in finance, human resources, engineering, and IT will all have access to different systems (hopefully). The systems used by one group can be night and day compared to another’s. Our security solutions should take this into account. The coolest part? Your network segmentation and access control systems (file, network, application) all benefit from threat modeling for each of these pools.
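As a toy sketch of per-pool threat modeling (every group name, system name, and the system-to-technique mapping below are invented for illustration, though the technique IDs are real ATT&CK Enterprise IDs), you can start by mapping each job function to the systems it touches, then deriving the techniques each pool plausibly exposes:

```python
# Hypothetical per-pool threat model: map job functions to the systems they
# touch, and systems to the ATT&CK techniques they plausibly expose.
# All mappings are illustrative, not a real assessment.

GROUP_SYSTEMS = {
    "finance": {"erp", "banking_portal"},
    "hr":      {"hris", "payroll"},
    "it":      {"ad", "erp", "hris", "payroll", "banking_portal"},
}

SYSTEM_TECHNIQUES = {
    "erp":            {"T1078", "T1005"},           # Valid Accounts, Data from Local System
    "banking_portal": {"T1078", "T1566"},           # Valid Accounts, Phishing
    "hris":           {"T1078", "T1005"},
    "payroll":        {"T1078"},
    "ad":             {"T1078", "T1098", "T1562"},  # + Account Manipulation, Impair Defenses
}

def techniques_for(group: str) -> set[str]:
    """Union of techniques exposed by every system a group can reach."""
    return set().union(*(SYSTEM_TECHNIQUES[s] for s in GROUP_SYSTEMS[group]))

for group in GROUP_SYSTEMS:
    print(group, sorted(techniques_for(group)))
```

Even at this crude level, the per-group technique sets give your segmentation and access-control discussions something concrete to anchor on.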

Leadership – our “special” function

What about leadership? They sit in a pretty impactful part of any org chart. They also access important systems with power over resources. More directly, they are much more likely to be targeted in business email compromise (BEC) attacks. On top of these risk factors, they can be trouble. It is uncanny how often you hear that the toughest users to get on board with security are the partners in a law firm, the doctors in a practice, or the C-suite in a company. This raises the question – is impact where we factor in stubbornness, or do we account for that somewhere else?

Who are we missing?

We can enumerate a ton of other groups by impact, but a couple stand out. The impact of compromised vendors and contractors varies wildly, sometimes within the same teams. You should handle VPN or network access differently for different pools of each user type. The length of their stay is a factor, as is the semi-autonomous nature of their employment. You should also weigh device policies and overlapping interaction with their own or other customers’ systems.

How granular you go is up to you and your team. I think the balance is between the overhead and the accuracy of tracking these threats. It is safe to assume that this factor is what makes the exercise vary so much from environment to environment. Ensure management is aware, and document any assumptions. You may wish to make some adjustments to account for worst-case scenarios, but first a word on that…

A word on likelihood…

If you are reading all of this and thinking “Mike, this is all a lot of words to say ‘Do Risk Management’”, you’d be right. Organizations spend an awful lot of effort chasing click-bait threats. Everyone worries about some crazy nation-state-sponsored APT tackling them, but little about the script kiddies down the road. We worry about the worst-case scenarios as if we’re going to get hit by the next mega spy. More likely? Your next insider threat is less about malicious intent and more about careless, ill-informed, or emotionally driven events. Account for those appropriately and you will be fine.

I’ll freely admit that calling out boogie-people is helpful in rallying the troops to the cause. Sometimes we need to strike fear into decision makers to get action. But be careful: abusing this for your own motives or for convenience is dangerous to the organization, detrimental to their trust in you, and desensitizes the organization to the real threats and risks.

Getting Insiders into ATT&CK

In many cases, the motivations and desired impacts of insider threats warrant much more nuance than those of an external actor. You typically know why an APT is after you – they have an organizational behavior and a reason for being, and even the more complex groups have pretty well understood motives. Insiders are not a single threat but truly a collection, and even the same insider may present different risks depending on their situation, timing, their place in the company, or many other environmental factors. As mentioned above, we might need to look at the factors for each user group or collection and work from there.

Building from scratch

If you have a ton of time, or seriously mature and comprehensive situational awareness, this path is for you. You can certainly build threat models for each type and decompose the TTPs for each, ranking them based on composite risk or just potential impact. I have yet to see a single organization do this effectively – most do not have the requisite full-scope understanding of the environment, permissions structure, policy awareness, and system dependencies.

The other alternative in this category is to guess – early in a threat modeling process, a rough cut is good enough, and just a cursory glance at the ATT&CK TTPs reveals techniques that different insiders might use. Credentials, services, and defense evasion techniques might all be applicable. I find that even if you are guessing here, you are bound to get coverage of TTPs you would otherwise miss while addressing external adversaries. At the end of the day, those adversaries all hope to become insider threats themselves.
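Even the “guess” approach can be captured concretely. Here is a minimal sketch that writes a first-pass insider layer for ATT&CK Navigator – the technique selection is my guess, the technique IDs are real ATT&CK Enterprise IDs, and the layer is trimmed to the essential fields of the Navigator layer format (a real layer would also carry version metadata):

```python
import json

# Guessed insider-relevant techniques (credentials, data access, evasion).
# The IDs are real ATT&CK Enterprise IDs; the selection itself is a guess.
GUESSED = {
    "T1078": "Valid Accounts",
    "T1552": "Unsecured Credentials",
    "T1562": "Impair Defenses",
    "T1005": "Data from Local System",
    "T1567": "Exfiltration Over Web Service",
}

# Minimal ATT&CK Navigator layer, trimmed to essential fields.
layer = {
    "name": "Insider Threat - first guess",
    "domain": "enterprise-attack",
    "techniques": [
        {"techniqueID": tid, "score": 1, "comment": name}
        for tid, name in GUESSED.items()
    ],
}

with open("insider_guess_layer.json", "w") as f:
    json.dump(layer, f, indent=2)
```

Load the resulting file into Navigator and you have a starting point you can argue about with your team, which is the whole point of a rough cut.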

CTID’s Insider Threat TTP Project

Another approach is to tackle it with a catch-all “Insider Threat” tab in ATT&CK Navigator or your threat modeling tool of choice. MITRE’s CTID has put together a subset of Enterprise TTPs that do exactly this in their Insider Threat TTP Knowledge Base. Expanding on and using ATT&CK for insider threats, they orient behaviors around the questions of “could they do this,” “would they do this,” and “did they do this.” I like this approach, but it does take some work to move back into an ATT&CK-centric gap analysis, as some of the techniques are derived from, but not directly linked to, ATT&CK TTPs.

MITRE CTID’s Insider Threat TTP Program focuses on everything but the “should” topic 😉

My current approach

I am a big fan of taking the CTID effort and pulling it back into ATT&CK, while assessing and annotating it to ensure I am accounting for local context. Their GitHub repo offers ready-to-use JSON files to adapt to your needs. You can simply weigh the techniques based on your own rubric that factors in motivation, impact, and likelihood (or Would, Could, Did). This matrix is below.

Insider Threat TTPs mapped back into ATT&CK – the easy button can be as difficult as you need it to be.

Wrapping up

Admittedly, I am only now starting to get serious about using ATT&CK for Insider Threats in my threat-focused discussions. Tommy’s input during our breakout only solidified that this is a huge gap for many of us. I am curious as to how others tackle this topic in their threat modeling. What do you folks see in your environments? What works best for you? How does accounting for Insiders impact your overall security strategy?