Trigger warning: GRC people, I love you. You’re an important and necessary part of our ecosystem.
Humans are terrible at assessing risk. Through a combination of biology, personality and society, our brains are wired to do a poor job of perceiving and evaluating the impact of the decisions we make or don’t make.
If you read enough of the literature on this, it distils down to a few key points:
- Intuitive statistics: We draw conclusions and make predictions from limited local data. For example, walking around and seeing nobody with a disease is another tick in the column of “this disease doesn’t exist”, à la COVID-19.
- Optimism Bias: We are poorly placed to evaluate the harm that is likely to befall us; optimistically, we believe we will generally be okay. This can be cancelled out by seeing others fall victim to the risk we face.
- Abstract vs concrete knowledge: If we haven’t experienced something in a concrete, directly impacting form, we really can’t evaluate what it means for us. Abstraction is the enemy of action.
- Inertia and Homeostasis: We have an overwhelming urge not to change (inertia), and when we are forced to, we typically try to revert to a state of normality (homeostasis).
The combination of these factors means that risk evaluation is very risky ba-dum-TSH.
So why does risk dominate the conversation?
Given that we are bad at risk calculation and evaluation, why does it take up so much airspace?
This is a legitimate question worth asking, and it doesn’t just apply to GRC people either. Leaning on risk as the currency of security has twisted the conversation around threat modelling and attack surface mapping in a way that misdirects people on both sides of the fence. Defenders and GRC people are perversely incentivised to make poor decisions because we aren’t calculating risk properly, which leads to picking the wrong threats to protect against and making terrible purchasing choices.
This doesn’t apply evenly to everyone either: sometimes the big end of town has problems that the small-to-medium end of town needs to de-prioritise. Guess what, though? Security requires that everyone be secure to the extent that makes sense for them, and there is a lot more small and medium end of town than big end.
Let’s take Advanced Persistent Threats (APTs) as an example.
APT takes up a lot of airspace because it’s attractive from a risk perspective. Let’s run it against the list above:
- Intuitive Statistics: APTs are yelled about a lot, so we see more examples and therefore perceive more risk.
- Optimism Bias: Again, APTs are yelled about a lot, so seeing others fall to them cancels out our default optimism.
- Abstract vs concrete knowledge: Can I get a “threat intelligence is a waste of time for most orgs”? APT is whispered about over dying campfires in blue team HQ. Is there concrete knowledge? No, but there is something amorphous about the way APT knowledge is shared. It’s both concrete and abstract at the same time.
- Inertia and Homeostasis: This is where APT gets interesting. It’s frequently sold as a case of when, not if, meaning the default response is sometimes “what can I do? It’s going to happen anyway.” The evaluation of APT risk plays perfectly into this: it rates as highly risky, but there isn’t a lot we can do about it, so let’s throw our hands up and accept the risk.
The outcome of evaluating APT risk will depend on the size of the organisation and the depth of its pockets. Some people will throw their hands up and do absolutely nothing; others will blow a bunch of money on a toolset they don’t need and aren’t trained to use.
You could apply this to a lot of popular discussions in security.
Typically though, when I’ve seen these discussions being had, I’ve had a spreadsheet in front of me from a bunch of tooling saying: hey, there are plenty of things we can do that cost nothing but engineering and implementation time. That list remains untouched, or is lightly brushed aside, because it isn’t flashy enough to overcome risk assessment bias.
Full transparency: I work for a vendor. I build security tooling. I still see people having the wrong conversation about risk. They ignore the fact that they have a pile of out-of-date dependencies that are actively abusable (more on this another time) and no CORS headers on their web app, and instead pick mitigation strategies that collapse under the mildest scrutiny.
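To make the “cheap fixes” point concrete, here is a minimal sketch of the headers example, assuming a Python/Flask app purely for illustration; the origin and header values are placeholders, not a recommendation. The point is that this class of fix costs nothing but a little engineering time.

```python
# Illustrative only: a tiny Flask app that adds cross-origin and other
# low-effort response headers. Flask, the origin and the header values
# are assumptions for the example, not a prescription.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

@app.after_request
def add_headers(response):
    # Restrict cross-origin access to one known origin (placeholder value).
    response.headers["Access-Control-Allow-Origin"] = "https://app.example.com"
    # Two more headers that routinely go missing and cost nothing to add.
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["Strict-Transport-Security"] = "max-age=63072000; includeSubDomains"
    return response

if __name__ == "__main__":
    app.run()
```

Pair something like this with a dependency audit in CI (for example, pip-audit or npm audit) and both of the ignored items above are covered without buying anything.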
The risk-centric view frequently ignores changes that can be made to an environment or application that vastly improve its security posture without adding extra layers of complexity.
So let’s not waste bytes and instead discuss what we can do.
Risk still has a place
There, I said it. Risk still has a place. We need our lovely GRC peeps as much as we need our red teamers, blue teamers, pentesters and sometimes, yes, vendors. Compliance, on the other hand… that is a story for another day.
Risk, however, needs to take a long, hard look at itself. I believe that centring our discussion on risk has resulted in perverse incentives for vendors and practitioners alike. Basics matter, and sometimes the basics don’t factor into the risk conversation.
So where do we start?
- Evaluate the biases above and ask the question when assessing risk: am I falling into any of those traps?
- Risk needs to be a cooperative process; I believe non-GRC practitioners should be exposed to GRC and vice versa.
- There is a G in GRC that is frequently forgotten. The role of governance is to help avoid this issue: relevant information needs to reach the right people at the right time, while being complete, accurate and valid!
- Stop wasting my time with APT