Is risk assessment any different from risk perception?

I had already mentioned our tendency toward self-deception in a previous post. Recently I was reading an article that brought up the Dunning-Kruger effect, and it struck me as yet another thing that affects pretty much all of us and can lead to self-deception. That reminded me of the heuristics and biases described in ‘Thinking, Fast and Slow’, a book essentially about how we form thoughts in two distinct ways: fast and automatic (System 1, in the author’s terminology) and slow and deliberate (System 2). Along the way, the author explains human biases and illusions, whose function is more or less to jump to conclusions and spare System 2 the work. This is quite relevant to cyber risk: systems are far too complex, models are imperfect, and the data is imperfect too (or quickly becomes obsolete). That leaves a big role for qualitative risk assessment, where human judgment is most prevalent, and with it a fair amount of subjectivity.

Now, human intuition is not always misguided, and I don’t believe we can yet replace all human judgment with algorithms or models (I’m not sure we ever will). So there is little point in being overly concerned that our biases will cause failures; that is going to happen anyway. The real question is how (if at all) to manage and ideally reduce their effect, especially within the risk models and frameworks that have been proposed and are being used to measure and manage risk.

Overcoming subjectivity?

So can we maximize objectivity and reduce biased, subjective human input? One way to try comes in the form of an uncertainty model, e.g. a stochastic simulation, that works out a final simulated estimate from a number of initial estimates for specific risk scenarios. The workflow, however, still starts with human input: the expert identifies and defines the risk scenarios, provides the estimates, and only then runs the uncertainty model. Which brings up ‘Garbage In, Garbage Out’: you can examine, vet, and formalize a model or framework all you want, and it can still fail miserably because of poor-quality input.
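To make that concrete, here is a minimal sketch (in Python) of one such uncertainty model: a Monte Carlo-style aggregation of expert estimates into a simulated annual loss distribution. The scenario names, probabilities, and loss ranges below are purely illustrative assumptions, not real data; they stand in for the human-supplied input the workflow starts from.

```python
# A minimal sketch (illustrative only): aggregating expert range estimates
# for a few hypothetical risk scenarios via Monte Carlo simulation.
import random

random.seed(42)  # reproducible runs

# Each scenario: annual probability of occurrence plus low/high loss bounds,
# all supplied by the expert -- the "human input" discussed above.
scenarios = {
    "phishing-led breach": {"p": 0.30, "low": 50_000, "high": 400_000},
    "ransomware outage":   {"p": 0.10, "low": 200_000, "high": 2_000_000},
    "third-party leak":    {"p": 0.05, "low": 100_000, "high": 1_000_000},
}

def simulate_year(scenarios):
    """One simulated year: sum the losses of scenarios that 'occur'."""
    total = 0.0
    for s in scenarios.values():
        if random.random() < s["p"]:
            # Simple triangular draw between the expert's bounds keeps the
            # sketch readable; a lognormal would be more typical in practice.
            total += random.triangular(s["low"], s["high"])
    return total

trials = 100_000
losses = sorted(simulate_year(scenarios) for _ in range(trials))

mean_loss = sum(losses) / trials
p95_loss = losses[int(0.95 * trials)]
print(f"Expected annual loss: {mean_loss:,.0f}")
print(f"95th percentile loss: {p95_loss:,.0f}")
```

However polished the simulation, the output distribution only re-expresses the probabilities and ranges the expert put in; biased inputs simply produce a confident-looking curve around the wrong numbers.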

Can psychology (or other disciplines) help experts recognize when to trust their intuition and when to take a step back? If so, then by all means those disciplines are needed, and maybe risk modeling and assessment would become less about the expert’s perception. One issue, though, is that failures due to our biases are mostly documented in artificial experiments, i.e. in the lab, and I’m not sure how that plays out with people in the real world doing things that really matter to them. I would certainly engage System 2 (to use Kahneman’s terminology) more often when dealing with something that matters to me. For the time being, however, I believe risk assessment is still mostly about perception. We have a long way to go to overcome subjectivity, and we had better recognize that and seek help from other disciplines that are better equipped to deal with it (beyond throwing some math at it).
